Tagged: Supercomputing

  • richardmitnick 11:53 am on January 1, 2020
    Tags: "Theta and the Future of Accelerator Programming at Argonne", Supercomputing

    From Argonne ALCF via insideHPC: “Theta and the Future of Accelerator Programming at Argonne”

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    From insideHPC

    January 1, 2020
    Rich Brueckner


    In this video from the Argonne Training Program on Extreme-Scale Computing 2019, Scott Parker from Argonne presents: Theta and the Future of Accelerator Programming.

    ANL ALCF Theta Cray XC40 supercomputer

    Designed in collaboration with Intel and Cray, Theta is a 6.92-petaflops (Linpack) supercomputer based on the second-generation Intel Xeon Phi processor and Cray’s high-performance computing software stack. Capable of nearly 10 quadrillion calculations per second, Theta enables researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.

    “Theta’s unique architectural features represent a new and exciting era in simulation science capabilities,” said ALCF Director of Science Katherine Riley. “These same capabilities will also support data-driven and machine-learning problems, which are increasingly becoming significant drivers of large-scale scientific computing.”

    Scott Parker is the Lead for Performance Tools and Programming Models at the ALCF. He received his B.S. in Mechanical Engineering from Lehigh University, and a Ph.D. in Mechanical Engineering from the University of Illinois at Urbana-Champaign. Prior to joining Argonne, he worked at the National Center for Supercomputing Applications, where he focused on high-performance computing and scientific applications. At Argonne since 2008, he works on performance tools, performance optimization, and spectral element computational fluid dynamics solvers.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 1:12 pm on December 20, 2019
    Tags: "Using Satellites and Supercomputers to Track Arctic Volcanoes", ArcticDEM project, NASA Terra MODIS, NASA Terra satellite, Supercomputing

    From Eos: “Using Satellites and Supercomputers to Track Arctic Volcanoes” 

    From AGU
    Eos news bloc

    From Eos

    New data sets from the ArcticDEM project help scientists track elevation changes from natural hazards like volcanoes and landslides before, during, and long after the events.

    The 2008 Okmok eruption resulted in a new volcanic cone, as well as consistent erosion of that cone’s flanks over subsequent years. Credit: NASA image courtesy of Jeff Schmaltz, MODIS Rapid Response Team, NASA-Goddard Space Flight Center

    NASA Terra MODIS schematic


    NASA Terra satellite

    Conical clues of volcanic activity speckle the Aleutian Islands, a chain that spans the meeting place of the Pacific Ring of Fire and the edge of the Arctic. (The chain also spans the U.S. state of Alaska and the Far Eastern Federal District of Russia.) Scientists are now turning to advanced satellite imagery and supercomputing to measure the scale of natural hazards like volcanic eruptions and landslides in the Aleutians and across the Arctic surface over time.

    When Mount Okmok, Alaska, unexpectedly erupted in July 2008, satellite images informed scientists that a new, 200-meter cone had grown beneath the ashy plume. But scientists suspected that topographic changes didn’t stop with the eruption and its immediate aftermath.

    For long-term monitoring of the eruption, Chunli Dai, a geoscientist and senior research associate at The Ohio State University, accessed an extensive collection of digital elevation models (DEMs) recently released by ArcticDEM, a joint initiative of the National Geospatial-Intelligence Agency and National Science Foundation. With ArcticDEM, satellite images from multiple angles are processed by the Blue Waters petascale supercomputer to provide elevation measures, producing high-resolution models of the Arctic surface.

    NCSA U Illinois Urbana-Champaign Blue Waters Cray Linux XE/XK hybrid machine supercomputer

    In this map of ArcticDEM coverage, warmer colors indicate more overlapping data sets available for time series construction, and symbols indicate different natural events such as landslides (rectangles) and volcanoes (triangles). Credit: Chunli Dai

    Dai first utilized these models to measure variations in lava thickness and estimate the volume that erupted from Tolbachik volcano in Kamchatka, Russia, in work published in Geophysical Research Letters in 2017. The success of that research guided her current applications of ArcticDEM for terrain mapping.

    Monitoring long-term changes in a volcanic landscape is important, said Dai. “Ashes easily can flow away by water and by rain and then cause dramatic changes after the eruption,” she said. “Using this data, we can even see these changes…so that’s pretty new.”

    Using time series algorithms built on the ArcticDEM data set, Dai tracks elevation changes from natural events and demonstrates their potential for monitoring the Arctic region. Her work has already shown that erosion continues years after a volcanic event, providing first-of-their-kind measurements of posteruption changes to the landscape. Dai presented this research at AGU’s Fall Meeting.
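
    At its core, this kind of elevation time series is built from differences between co-registered DEMs acquired on different dates. The sketch below is a minimal illustration of that differencing step, assuming two NumPy arrays already aligned on the same grid; it is not the ArcticDEM processing chain itself.

    import numpy as np

    def elevation_change(dem_before, dem_after, cell_size_m, noise_floor_m=1.0):
        """Difference two co-registered DEMs and summarize the change.

        dem_before, dem_after: 2D elevation grids in meters (NaN = no data).
        cell_size_m: ground resolution of one pixel, in meters.
        noise_floor_m: ignore |dz| below this vertical-uncertainty threshold.
        Illustrative sketch only, not the ArcticDEM pipeline.
        """
        dz = dem_after - dem_before                          # elevation change per pixel (m)
        valid = ~np.isnan(dz) & (np.abs(dz) > noise_floor_m)
        cell_area = cell_size_m ** 2                         # square meters per pixel
        volume_gained = np.sum(np.where(valid & (dz > 0), dz, 0.0)) * cell_area
        volume_lost = np.sum(np.where(valid & (dz < 0), dz, 0.0)) * cell_area
        return dz, volume_gained, volume_lost                # m, m^3, m^3

    # Hypothetical usage with 2-meter grids bracketing an eruption:
    # dz, gain, loss = elevation_change(dem_2007, dem_2009, cell_size_m=2.0)

    Stacking such difference maps across many ArcticDEM acquisition dates is what turns individual before-and-after pairs into a time series of cone growth and subsequent erosion.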

    Elevating Measurement Methods

    “This is absolutely the best resolution DEM data we have,” said Hannah Dietterich, a research geophysicist at the U.S. Geological Survey’s Alaska Volcano Observatory not involved in the study. “Certainly, for volcanoes in Alaska, we are excited about this.”

    Volcanic events have traditionally been measured by aerial surveys or drones, which are expensive and time-consuming methods for long-term study. Once a hazardous event occurs, Dietterich explained, the “before” shots in before-and-after image sets are often missing. Now, ArcticDEM measurements spanning over a decade can be utilized to better understand and monitor changes to the Arctic surface shortly following such events, as well as years later.

    For example, the volcanic eruption at Okmok resulted in a sudden 200-meter elevation gain from the new cone’s formation but also showed continuing erosion rates along the cone flanks of up to 15 meters each year.

    Landslides and Climate

    For Dai, landslides provide an even more exciting application of ArcticDEM technology. Landslides are generally unmapped, she explained, whereas “we know the locations of volcanoes, so a lot of studies have been done.”

    Mass redistribution maps for both the Karrat Fjord landslide in Greenland in 2017 and the Taan Fiord landslide in Alaska in 2015 show significant mass wasting captured by DEMs before and after the events.

    “We’re hoping that our project with this new data program [will] provide a mass wasting inventory that’s really new to the community,” said Dai, “and people can use it, especially for seeing the connection to global warming.”

    Climate change is associated with many landslides studied by Dai and her team, who focus on mass wasting caused by thawing permafrost. ArcticDEM is not currently intended for predictive modeling, but as more data are collected over time, patterns may emerge that could help inform future permafrost loss or coastal retreat in the Arctic, according to Dietterich. “It is the best available archive of data for when crises happen.”

    Global climate trends indicate that Arctic environments will continue to change in the coming years. “If we can measure that, then we can get the linkage between global warming and its impact on the Arctic land,” said Dai.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

     
  • richardmitnick 3:38 pm on December 19, 2019
    Tags: Simulations on Summit, Supercomputing

    From Oak Ridge National Laboratory: “With ADIOS, Summit processes celestial data at scale of massive future telescope” 


    From Oak Ridge National Laboratory

    December 19, 2019
    Scott S Jones
    jonesg@ornl.gov
    865.241.6491

    Researchers
    Scott A Klasky
    klasky@ornl.gov
    865.241.9980

    Ruonan Wang
    wangr1@ornl.gov
    865.574.8984

    Norbert Podhorszki
    pnb@ornl.gov
    865.574.7159

    For nearly three decades, scientists and engineers across the globe have worked on the Square Kilometre Array (SKA), a project focused on designing and building the world’s largest radio telescope.

    SKA Square Kilometer Array

    Although the SKA will collect enormous amounts of precise astronomical data in record time, scientific breakthroughs will only be possible with systems able to efficiently process that data.

    Because construction of the SKA is not scheduled to begin until 2021, researchers cannot collect enough observational data to practice analyzing the huge quantities experts anticipate the telescope will produce. Instead, a team from the International Centre for Radio Astronomy Research (ICRAR) in Australia, the Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) in the United States, and the Shanghai Astronomical Observatory (SHAO) in China recently used Summit, the world’s most powerful supercomputer, to simulate the SKA’s expected output. Summit is located at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at ORNL.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    An artist rendering of the SKA’s low-frequency, cone-shaped antennas in Western Australia. Credit: SKA Project Office.

    “The Summit supercomputer provided a unique opportunity to test a simple SKA dataflow at the scale we are expecting from the telescope array,” said Andreas Wicenec, director of Data Intensive Astronomy at ICRAR.

    To process the simulated data, the team relied on the ORNL-developed Adaptable IO System (ADIOS), an open-source input/output (I/O) framework led by ORNL’s Scott Klasky, who also leads the laboratory’s scientific data group. ADIOS is designed to speed up simulations by increasing the efficiency of I/O operations and to facilitate data transfers between high-performance computing systems and other facilities, which would otherwise be a complex and time-consuming task.

    The SKA simulation on Summit marks the first time radio astronomy data have been processed at such a large scale and proves that scientists have the expertise, software tools, and computing resources that will be necessary to process and understand real data from the SKA.

    “The scientific data group is dedicated to researching next-generation technology that can be developed and deployed for the most scientifically demanding applications on the world’s fastest computers,” Klasky said. “I am proud of all the hard work the ADIOS team and the SKA scientists have done with ICRAR, ORNL, and SHAO.”

    Using two types of radio receivers, the telescope will detect radio light waves emanating from galaxies, the surroundings of black holes, and other objects of interest in outer space to help astronomers answer fundamental questions about the universe. Studying these weak, elusive waves requires an army of antennas.

    The first phase of the SKA will have more than 130,000 low-frequency, cone-shaped antennas located in Western Australia and about 200 higher frequency, dish-shaped antennas located in South Africa. The international project team will eventually manage close to a million antennas to conduct unprecedented studies of astronomical phenomena.

    To emulate the Western Australian portion of the SKA, the researchers ran two models on Summit—one of the antenna array and one of the early universe—through a software simulator designed by scientists from the University of Oxford that mimics the SKA’s data collection. The simulations generated 2.6 petabytes of data at 247 gigabytes per second.
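
    For a rough sense of what those figures imply (a back-of-the-envelope estimate assuming decimal units, 1 PB = 1,000,000 GB, rather than a reported runtime), sustaining 247 gigabytes per second means the 2.6 petabytes took on the order of three hours to produce:

    # Rough duration implied by the reported data volume and rate (decimal units assumed).
    total_bytes = 2.6e15           # 2.6 petabytes
    rate_bytes_per_s = 247e9       # 247 gigabytes per second

    seconds = total_bytes / rate_bytes_per_s
    print(f"{seconds:,.0f} s  ~= {seconds / 3600:.1f} hours")   # ~10,500 s, about 2.9 hours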

    “Generating such a vast amount of data with the antenna array simulator requires a lot of power and thousands of graphics processing units to work properly,” said ORNL software engineer Ruonan Wang. “Summit is probably the only computer in the world that can do this.”

    Although the simulator typically runs on a single computer, the team used a specialized workflow management tool Wang helped ICRAR develop called the Data Activated Flow Graph Engine (DALiuGE) to efficiently scale the modeling capability up to 4,560 compute nodes on Summit. DALiuGE has built-in fault tolerance, ensuring that minor errors do not impede the workflow.

    “The problem with traditional resources is that one problem can make the entire job fall apart,” Wang said. Wang earned his doctorate at the University of Western Australia, which manages ICRAR along with Curtin University.
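
    The design choice Wang describes, in which one misbehaving task should not bring down the whole run, can be illustrated with a generic retry-and-continue pattern. The sketch below is only an illustration of that idea; it is not DALiuGE’s actual API or architecture.

    import logging

    def run_workflow(tasks, max_retries=2):
        """Run independent tasks, retrying failures instead of aborting the whole job.

        tasks: dict mapping a task name to a zero-argument callable.
        Returns results for tasks that succeeded and names of tasks that did not.
        Generic illustration of fault-tolerant workflow execution, not DALiuGE itself.
        """
        results, failed = {}, []
        for name, task in tasks.items():
            for attempt in range(1 + max_retries):
                try:
                    results[name] = task()
                    break
                except Exception as err:        # broad catch keeps the rest of the workflow alive
                    logging.warning("task %s failed (attempt %d): %s", name, attempt + 1, err)
            else:
                failed.append(name)             # record the failure, keep processing other tasks
        return results, failed

    # Hypothetical usage:
    # results, failed = run_workflow({"station_0001": simulate_station_0001})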

    The intense influx of data from the array simulations resulted in a performance bottleneck, which the team solved by reducing, processing, and storing the data using ADIOS. Researchers usually plug ADIOS straight into the I/O subsystem of a given application, but the simulator’s unusually complicated software meant the team had to customize a plug-in module to make the two resources compatible.

    “This was far more complex than a normal application,” Wang said.

    Wang began working on ADIOS1, the first iteration of the tool, 6 years ago during his time at ICRAR. Now, he serves as one of the main developers of the latest version, ADIOS2. His team aims to position ADIOS as a superior storage resource for the next generation of astronomy data and the default I/O solution for future telescopes beyond even the SKA’s gargantuan scope.

    “The faster we can process data, the better we can understand the universe,” he said.

    Funding for this work comes from DOE’s Office of Science.

    The International Centre for Radio Astronomy Research (ICRAR) is a joint venture between Curtin University and The University of Western Australia with support and funding from the State Government of Western Australia. ICRAR is helping to design and build the world’s largest radio telescope, the Square Kilometre Array.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 11:46 am on December 19, 2019
    Tags: Mississippi State University, Orion Dell-EMC supercomputer, Supercomputing

    From insideHPC: “Orion Supercomputer comes to Mississippi State University” 

    From insideHPC

    December 18, 2019
    Rich Brueckner

    Orion Dell EMC supercomputer

    Today Mississippi State University and NOAA celebrated one of the country’s most powerful supercomputers with a ribbon-cutting ceremony for the Orion supercomputer, the fourth-fastest computer system in U.S. academia. Funded by NOAA and managed by MSU’s High Performance Computing Collaboratory, the Orion system is powering research and development advancements in weather and climate modeling, autonomous systems, materials, cybersecurity, computational modeling and more.


    With 3.66 petaflops of performance on the Linpack benchmark, Orion is the 60th most powerful supercomputer in the world according to Top500.org, which ranks the world’s most powerful non-distributed computer systems. It is housed in the Malcolm A. Portera High Performance Computing Center, located in MSU’s Thad Cochran Research, Technology and Economic Development Park.

    “Mississippi State has a long history of using advanced computing power to drive innovative research, making an impact in Mississippi and around the world,” said MSU President Mark E. Keenum. “We also have had many successful collaborations with NOAA in support of the agency’s vital work. I am grateful that NOAA has partnered with us to help meet its computing needs, and I look forward to seeing the many scientific advancements that will take place because of this world-class supercomputer.”

    NOAA has provided MSU with $22 million in grants to purchase, install and run Orion. The Dell EMC system consists of 28 computer cabinets (each approximately the size of an industrial refrigerator), 72,000 processing cores, and 350 terabytes of random access memory.
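
    For a rough sense of scale, those figures work out to roughly 50 gigaflops and about 5 GB of memory per core. This is a back-of-the-envelope division of the numbers quoted above, not an official specification:

    # Back-of-the-envelope per-core figures from the numbers quoted above.
    linpack_flops = 3.66e15        # 3.66 petaflops (Linpack)
    cores = 72_000
    ram_bytes = 350e12             # 350 terabytes

    print(f"{linpack_flops / cores / 1e9:.1f} GF/s per core")   # ~50.8 gigaflops per core
    print(f"{ram_bytes / cores / 1e9:.1f} GB of RAM per core")  # ~4.9 GB per core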

    “We’re excited to support this powerhouse of computing capacity at Mississippi State,” said Craig McLean, NOAA assistant administrator for Oceanic and Atmospheric Research. “Orion joins NOAA’s network of computer centers around the country, and boosts NOAA’s ability to conduct innovative research to advance weather, climate and ocean forecasting products vital to protecting American lives and property.”

    MSU’s partnerships with NOAA include the university’s leadership of the Northern Gulf Institute, a consortium of six academic institutions that works with NOAA to address national strategic research and education goals in the Gulf of Mexico region. Additionally, MSU’s High Performance Computing Collaboratory provides the computing infrastructure for NOAA’s Exploration Command Center at the NASA Stennis Space Center. The state-of-the-art communications hub enables research scientists at sea and colleagues on shore to communicate in real time and view live video streams of undersea life.

    “NOAA has been an incredible partner in research with MSU, and this is the latest in a clear demonstration of the benefits of this partnership for both the university and the agency,” said MSU Provost and Executive Vice President David Shaw.

    Orion supports research operations for several MSU centers and institutes, such as the Center for Computational Sciences, Center for Cyber Innovation, Geosystems Research Institute, Center for Advanced Vehicular Systems, Institute for Genomics, Biocomputing and Biogeotechnology, the Northern Gulf Institute and the FAA Alliance for System Safety of UAS through Research Excellence (ASSURE). These centers use high-performance computing to model and simulate real-world phenomena, generating insights that would be impossible or prohibitively expensive to obtain otherwise.

    “With our faculty expertise and our computing capabilities, MSU is able to remain at the forefront of cutting-edge research areas,” said MSU Interim Vice President for Research and Economic Development Julie Jordan. “The Orion supercomputer is a great asset for the state of Mississippi as we work with state, federal and industry partners to solve complex problems and spur new innovations.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 1:07 pm on December 14, 2019
    Tags: "Simulations Attempt to Reconstruct One of the Most Explosive Events in the Universe: A Neutron Star Merger", NERSC at LBNL, Supercomputing

    From Lawrence Berkeley National Lab: “Simulations Attempt to Reconstruct One of the Most Explosive Events in the Universe: A Neutron Star Merger” 

    Berkeley Logo

    From Lawrence Berkeley National Lab

    December 12, 2019
    Glenn Roberts Jr.
    geroberts@lbl.gov
    (510) 486-5582

    Artist’s now iconic illustration of two merging neutron stars. The rippling space-time grid represents gravitational waves that travel out from the collision, while the narrow beams show the bursts of gamma rays that are shot out just seconds after the gravitational waves. Swirling clouds of material ejected from the merging stars are also depicted. The clouds glow with visible and other wavelengths of light. (Credit: NSF/LIGO/Sonoma State University/A. Simonnet)

    Scientists are getting better at modeling the complex tangle of physics properties at play in one of the most powerful events in the known universe: the merger of two neutron stars.

    Neutron stars are the fast-spinning, ultradense husks of larger stars that exploded as supernovae. They measure about 12 miles across, and a single teaspoon of neutron star matter weighs as much as 1,125 Golden Gate Bridges, or 2,735 Empire State Buildings.

    A 2D vertical slice of a 3D GRMHD (general relativistic magnetohydrodynamic) simulation of a neutron star merger initialized with a toroidal (doughnut-shaped) magnetic field, showing mass density (red is high density, light blue is low density). The black lines reveal features of the magnetic field lines. Energetic jets (dark blue) form in the aftermath of the merger. (Credit: Monthly Notices of the Royal Astronomical Society, DOI: 10.1093/mnras/stz2552)

    On Aug. 17, 2017, scientists observed a signature of gravitational waves – ripples in the fabric of space-time – and also an associated explosive burst, known as a kilonova, that were best explained by the merger of two neutron stars. On April 25, 2019, they detected another likely neutron-star-merger event, this time based solely on a gravitational wave measurement.

    MIT/Caltech Advanced LIGO

    Scientists working in the LIGO Hanford control room. Image: Caltech/MIT/LIGO Lab/C. Gray

    LIGO and Virgo Detect Neutron Star Smash-Ups
    News Release • May 2, 2019

    While these events can help to compare and validate the physics models that researchers develop to understand what’s at work in these mergers, researchers must still essentially start from scratch to build the right physics into these models.

    In a study published in the Monthly Notices of the Royal Astronomical Society journal, a team led by scientists at Northwestern University simulated the formation of a disc of matter, a giant burst of ejected matter, and the startup of energetic jets around the remaining object – either a larger neutron star or a black hole – in the aftermath of this merger.

    The team included researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), UC Berkeley, the University of Alberta, and the University of New Hampshire.

    To make the model more realistic than in previous efforts, the team built three separate simulations that tested different geometries for the powerful magnetic fields encircling the merger.

    “We’re starting from a set of physical principles, carrying out a calculation that nobody has done at this level before, and then asking, ‘Are we reasonably close to observations or are we missing something important?’” said Rodrigo Fernández, a co-author of the latest study and a researcher at the University of Alberta.

    The 3D simulations they carried out, which included computing time at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), involved more than 6 million hours of CPU (central processing unit) time.

    NERSC at LBNL

    NERSC Cray Cori II supercomputer, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    NERSC Hopper Cray XE6 supercomputer, named after Grace Hopper, one of the first programmers of the Harvard Mark I computer

    NERSC Cray XC30 Edison supercomputer

    NERSC GPFS for Life Sciences


    The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

    NERSC PDSF computer cluster in 2003.

    PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

    Future:

    Cray Shasta Perlmutter SC18 AMD Epyc Nvidia pre-exascale supercomputer

    NERSC is a DOE Office of Science User Facility.

    The simulations account for GRMHD (general relativistic magnetohydrodynamics) effects, which include properties associated with magnetic fields and fluid-like matter, as well as the properties of matter and energy traveling at nearly the speed of light. Researchers noted that the simulations could also prove useful in modeling the merger of a black hole with a neutron star.

    To simulate the kilonova outburst – an element-creating event that scientists believe is responsible for seeding space with heavy elements – the team produced estimates of its total ejected mass, its average velocity, and its composition.

    “With these three quantities one can estimate whether the light curve would have the right luminosity, color, and evolution time,” Fernández said.
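
    To see how ejected mass, velocity, and composition set an evolution time, a textbook order-of-magnitude scaling (not the radiative-transfer calculation used in the study) is the photon diffusion time through the expanding ejecta, t_peak ≈ sqrt(3 κ M_ej / (4 π c v_ej)), where the opacity κ is controlled by the composition. The parameter values below are illustrative assumptions, not numbers from the paper:

    import math

    def kilonova_peak_days(m_ej_msun, v_ej_c, kappa_cm2_g):
        """Order-of-magnitude kilonova peak time in days.

        Uses the diffusion-time scaling t ~ sqrt(3*kappa*M / (4*pi*c*v));
        an illustrative estimate, not the paper's radiative-transfer result.
        """
        M_SUN_G = 1.989e33          # grams per solar mass
        C_CM_S = 2.998e10           # speed of light in cm/s
        m = m_ej_msun * M_SUN_G
        v = v_ej_c * C_CM_S
        t_s = math.sqrt(3.0 * kappa_cm2_g * m / (4.0 * math.pi * C_CM_S * v))
        return t_s / 86400.0

    # Assumed parameters: ejecta mass in solar masses, velocity in units of c, opacity in cm^2/g.
    print(f"low-opacity ejecta:  ~{kilonova_peak_days(0.02, 0.2, 0.5):.1f} days")   # prints about 1.9
    print(f"high-opacity ejecta: ~{kilonova_peak_days(0.05, 0.1, 10.0):.0f} days")  # prints about 19

    With lanthanide-poor (low-opacity) ejecta the peak arrives within days, while lanthanide-rich (high-opacity) ejecta peaks weeks later, which is the physical basis of the blue and red components described below.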

    This simulation, sampled on a sphere with a 6,200-mile radius that is centered at a black hole, shows an explosive event known as a kilonova that is associated with a neutron-star merger. One component, which lasts for days, has an associated signature of blue-frequency light (blue), and another component that lasts for weeks has an associated color peak of near-infrared light (red). The green shows the signature of associated energetic jets that are created in the merger. (Credit: Monthly Notices of the Royal Astronomical Society, DOI: 10.1093/mnras/stz2552)

    There are two generalized components of these kilonova outbursts – one evolves over the course of days and is characterized by the signature blue-frequency light it gives off at its peak, and the other lasts for weeks and has an associated color peak of near-infrared light.

    The latest simulations are designed to model these blue and red components of kilonovae.

    The simulations also help to explain the launch of powerful energy jets that emanate outward in the merger aftermath, including a “striped” character of the jets due to the effects of powerful, alternating magnetic fields. These jets can be observed as a burst of gamma rays, as with the 2017 event.

    Daniel Kasen, a scientist in the Nuclear Science Division at Berkeley Lab and an associate professor of physics and astronomy at UC Berkeley, said, “Magnetic fields provide a way to tap the energy of a spinning black hole and use it to shoot jets of gas moving at near the speed of light. Such jets can produce bursts of gamma-rays, as well as extended radio and x-ray emission, all of which were seen in the 2017 event.”

    Fernández acknowledged that the simulations don’t precisely mirror observations yet – the simulations showed a lower mass for the blue kilonova contribution compared to the red – and that better models of the hypermassive neutron star resulting from the merger and of the abundant neutrinos – ghostly particles that travel through most types of matter unaffected – associated with the merger event are needed to improve the models.

    The model did benefit from models of the discs of matter (accretion discs) circling black holes, as well as models of neutrino-cooling properties, the volume of neutrons and protons associated with the merger event, and the matter-creating process associated with the kilonova.

    Kasen noted that computing resources at Berkeley Lab “let us peer into the most extreme environments – like this turbulent whirlpool sloshing outside a newly born black hole – and watch and learn how the heavy elements were made.”

    The simulations suggest that the neutron-star merger observed in August 2017 likely did not form a black hole in its immediate aftermath, and that the strongest magnetic fields were donut-shaped. Also, the simulations largely agreed with some long-standing models for fluid behavior.


    This study was supported by the Natural Sciences and Engineering Research Council of Canada, the University of Alberta, the Simons Foundation, the Gordon and Betty Moore Foundation, and NASA.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    LBNL campus

    LBNL Molecular Foundry

    Bringing Science Solutions to the World
    In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

    Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

    Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

    A U.S. Department of Energy National Laboratory Operated by the University of California.

    University of California Seal

     
  • richardmitnick 5:13 pm on December 6, 2019
    Tags: "IBM Powers AiMos Supercomputer at Rensselaer Polytechnic Institute", Supercomputing

    From insideHPC: “IBM Powers AiMos Supercomputer at Rensselaer Polytechnic Institute” 

    From insideHPC


    AiMOS, the newest supercomputer at Rensselaer Polytechnic Institute
    In this video, Christopher Carothers, director of the Center for Computational Innovations, discusses AiMOS, the newest supercomputer at Rensselaer Polytechnic Institute.

    The most powerful supercomputer to debut on the November 2019 Top500 ranking will be unveiled today at the Rensselaer Polytechnic Institute Center for Computational Innovations (CCI). Part of a collaboration between IBM, Empire State Development (ESD), and NY CREATES, the eight-petaflop IBM POWER9-equipped AI supercomputer is configured to enable users to explore new AI applications and accelerate economic development from New York’s smallest startups to its largest enterprises.


    Named AiMOS (short for Artificial Intelligence Multiprocessing Optimized System) in honor of Rensselaer co-founder Amos Eaton, the machine will serve as a test bed for the IBM Research AI Hardware Center, which opened on the SUNY Polytechnic Institute (SUNY Poly) campus in Albany earlier this year. The AI Hardware Center aims to advance the development of computing chips and systems that are designed and optimized for AI workloads to push the boundaries of AI performance. AiMOS will provide the modeling, simulation, and computation necessary to support the development of this hardware.

    “Computer artificial intelligence, or more appropriately, human augmented intelligence (AI), will help solve pressing problems, from healthcare to security to climate change. In order to realize AI’s full potential, special purpose computing hardware is emerging as the next big opportunity,” said Dr. John E. Kelly III, IBM Executive Vice President. “IBM is proud to have built the most powerful and smartest computers in the world today, and to be collaborating with New York State, SUNY, and RPI on the new AiMOS system. Our collective goal is to make AI systems 1,000 times more efficient within the next decade.”

    According to the recently released November 2019 Top500 and Green500 supercomputer rankings, AiMOS is the most powerful supercomputer housed at a private university. Overall, it is the 24th most powerful supercomputer in the world and the third-most energy efficient. Built using the same IBM Power Systems technology as the world’s smartest supercomputers, the US Dept. of Energy’s Summit and Sierra supercomputers, AiMOS uses a heterogeneous system architecture that includes IBM POWER9 CPUs and NVIDIA GPUs. This gives AiMOS a capacity of eight quadrillion calculations per second.


    In this video, Rensselaer Polytechnic Institute President Shirley Ann Jackson discusses AiMOS, the newest supercomputer at Rensselaer Polytechnic Institute.

    “As the home of one of the top high-performance computing systems in the U.S. and in the world, Rensselaer is excited to accelerate our ongoing research in AI, deep learning, and in fields across a broad intellectual front,” said Rensselaer President Shirley Ann Jackson. “The creation of new paradigms requires forward-thinking collaborators, and we look forward to working with IBM and the state of New York to address global challenges in ways that were previously impossible.”

    AiMOS will be available for use by public and private industry partners.

    Dr. Douglas Grose, Future President of NY CREATES said, “The unveiling of AiMOS and its incredible computational capabilities is a testament to New York State’s international high-tech leadership. As a test bed for the AI Hardware Center, AiMOS furthers the powerful potential of the AI Hardware Center and its partners, including New York State, IBM, Rensselaer, and SUNY, and we look forward to the advancement of AI systems as a result of this milestone.”

    SUNY Polytechnic Institute Interim President Grace Wang said, “SUNY Poly is proud to work with New York State, IBM, and Rensselaer Polytechnic Institute and our other partners as part of the AI Hardware Center initiative which will drive exciting artificial intelligence innovations. We look forward to efforts like this providing unique and collaborative research opportunities for our faculty and researchers, as well as leading-edge educational opportunities for our top-tier students. As the exciting benefits of AiMOS and the AI Hardware Center come to fruition, SUNY Poly is thrilled to play a key role in supporting the continued technological leadership of New York State and the United States in this critical research sector.”

    AiMOS will also support the work of Rensselaer faculty, students, and staff who are engaged in a number of ongoing collaborations that employ and advance AI technology, many of which involve IBM Research. These initiatives include the Rensselaer-IBM Artificial Intelligence Research Collaboration (AIRC), which brings researchers at both institutions together to explore new frontiers in AI; the Cognitive and Immersive Systems Lab (CISL); and The Jefferson Project, which combines Internet of Things technology and powerful analytics to help manage and protect one of New York’s largest lakes, while creating a data-based blueprint for preserving bodies of fresh water around the globe.

    “The established expertise in computation and data analytics at Rensselaer, when combined with AiMOS, will enable many of our research projects to make significant strides that simply were not possible on our previous platform,” said Christopher Carothers, director of the CCI and professor of computer science at Rensselaer. “Our message to the campus and beyond is that, if you are doing work on large-scale data analytics, machine learning, AI, and scientific computing then it should be running at the CCI.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 1:57 pm on December 3, 2019
    Tags: "When laser beams meet plasma- New data addresses gap in fusion research", Supercomputing

    From University of Rochester: “When laser beams meet plasma: New data addresses gap in fusion research”


    From University of Rochester

    December 2, 2019
    Lindsey Valich
    lvalich@ur.rochester.edu

    Researchers used the Omega Laser Facility at the University of Rochester’s Laboratory for Laser Energetics to make highly detailed measurements of laser-heated plasmas. (University photo / J. Adam Fenster)

    New research from the University of Rochester will enhance the accuracy of computer models used in simulations of laser-driven implosions. The research, published in the journal Nature Physics, addresses one of the challenges in scientists’ longstanding quest to achieve fusion.

    In laser-driven inertial confinement fusion (ICF) experiments, such as the experiments conducted at the University of Rochester’s Laboratory for Laser Energetics (LLE), short beams consisting of intense pulses of light—pulses lasting mere billionths of a second—deliver energy to heat and compress a target of hydrogen fuel. Ideally, this process would release more energy than was used to heat the system.

    Laser-driven ICF experiments require that many laser beams propagate through a plasma—a hot soup of free moving electrons and ions—to deposit their radiation energy precisely at their intended target. But, as the beams do so, they interact with the plasma in ways that can complicate the intended result.

    “ICF necessarily generates environments in which many laser beams overlap in a hot plasma surrounding the target, and it has been recognized for many years that the laser beams can interact and exchange energy,” says David Turnbull, an LLE scientist and the first author of the paper.

    To accurately model this interaction, scientists need to know exactly how the energy from the laser beam interacts with the plasma. While researchers have offered theories about the ways in which laser beams alter a plasma, none has ever before been demonstrated experimentally.

    Now, researchers at the LLE, along with their colleagues at Lawrence Livermore National Laboratory in California and the Centre National de la Recherche Scientifique in France, have directly demonstrated for the first time how laser beams modify the conditions of the underlying plasma, in turn affecting the transfer of energy in fusion experiments.

    “The results are a great demonstration of the innovation at the Laboratory and the importance of building a solid understanding of laser-plasma instabilities for the national fusion program,” says Michael Campbell, the director of the LLE.

    Using supercomputers to model fusion

    I asked U Rochester to tell me which supercomputers were used in this work.
    Statement from U Rochester:

    “Hi Richard,
    This was experimental research that was conducted using the Omega laser facility at the University of Rochester’s Laboratory for Laser Energetics. The researchers used a novel high-power laser beam with a tunable wavelength to study the energy transfer between laser beams while simultaneously measuring the plasma conditions. This research was not conducted using supercomputers, but, rather, the experiments were designed to gather data that will be input into computer models to improve the predictive capabilities of models used in supercomputer simulations of inertial confinement fusion (ICF) experiments.”

    Researchers often use supercomputers to study the implosions involved in fusion experiments. It is important, therefore, that these computer models accurately depict the physical processes involved, including the exchange of energy from the laser beams to the plasma and eventually to the target.

    For the past decade, researchers have used computer models describing the mutual laser beam interaction involved in laser-driven fusion experiments. However, the models have generally assumed that the plasma electrons follow a Maxwellian distribution, the type of equilibrium one would expect when no lasers are present.

    “But, of course, lasers are present,” says Dustin Froula, a senior scientist at the LLE.

    Froula notes that scientists predicted almost 40 years ago that lasers alter the underlying plasma conditions in important ways. In 1980, a theory was presented that predicted these non-Maxwellian distribution functions in laser plasmas due to the preferential heating of slow electrons by the laser beams. In subsequent years, Rochester graduate Bedros Afeyan ’89 (PhD) predicted that the effect of these non-Maxwellian electron distribution functions would change how laser energy is transferred between beams.
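
    The competing assumptions can be written down compactly: a Maxwellian falls off as exp(-(v/v_th)^2), while the 1980 prediction (often referred to as the Langdon effect) is that inverse-bremsstrahlung heating drives the electrons toward a super-Gaussian, exp(-(v/v_m)^m) with m between 2 and 5. The snippet below only sketches the two shapes qualitatively; the exponent and scale are assumed values for illustration, not measurements from this experiment.

    import numpy as np

    # Qualitative, unnormalized shapes only; m and v_m below are assumed illustrative values.
    v = np.linspace(0.0, 4.0, 9)                 # electron speed in units of the thermal speed v_th

    maxwellian = np.exp(-(v / 1.0) ** 2)         # equilibrium case (m = 2)
    super_gaussian = np.exp(-(v / 1.2) ** 3.5)   # laser-heated case, 2 < m <= 5

    for vi, fm, fs in zip(v, maxwellian, super_gaussian):
        print(f"v/v_th = {vi:.1f}   Maxwellian = {fm:.3f}   super-Gaussian = {fs:.3f}")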

    But lacking experimental evidence to verify that prediction, researchers did not account for it in their simulations.

    Turnbull, Froula, and physics and astronomy graduate student Avram Milder conducted experiments at the Omega Laser Facility at the LLE to make highly detailed measurements of the laser-heated plasmas. The results of these experiments show for the first time that the distribution of electron energies in a plasma is affected by their interaction with the laser radiation and can no longer be accurately described by prevailing models.

    The new research not only validates a longstanding theory, but it also shows that laser-plasma interaction strongly modifies the transfer of energy.

    “New inline models that better account for the underlying plasma conditions are currently under development, which should improve the predictive capability of integrated implosion simulations,” Turnbull says.

    This research is based upon work supported by the US Department of Energy National Nuclear Security Administration and the New York State Energy Research and Development Authority.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Rochester Campus

    The University of Rochester is one of the country’s top-tier research universities. Our 158 buildings house more than 200 academic majors, more than 2,000 faculty and instructional staff, and some 10,500 students—approximately half of whom are women.

    Learning at the University of Rochester is also on a very personal scale. Rochester remains one of the smallest and most collegiate among top research universities, with smaller classes, a low 10:1 student to teacher ratio, and increased interactions with faculty.

     
  • richardmitnick 11:00 am on November 23, 2019
    Tags: Argonne Leadership Computing Facility, Cray Intel SC18 Shasta Aurora exascale supercomputer, Supercomputing

    From Argonne Leadership Computing Facility: “Argonne teams up with Altair to manage use of upcoming Aurora supercomputer” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    November 19, 2019
    Jo Napolitano

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer

    The U.S. Department of Energy’s (DOE) Argonne National Laboratory has teamed up with the global technology company Altair to implement a new scheduling system that will be employed on the Aurora supercomputer, slated for delivery in 2021.

    Aurora will be one of the nation’s first exascale systems, capable of performing a billion billion – that’s a quintillion – calculations per second. It will be nearly 100 times faster than Argonne’s current supercomputer, Theta, which went online just two years ago.

    Aurora will be in high demand from researchers around the world and, as a result, will need a sophisticated workload manager to sort and prioritize requested jobs.

    It found a natural partner in Altair to meet that need. Founded in 1985 and headquartered in Troy, Michigan, the company provides software and cloud solutions in the areas of product development, high-performance computing (HPC) and data analytics.

    Argonne was initially planning an update to its own workload manager, COBALT (Component-Based Lightweight Toolkit), which was developed 20 years ago within the lab’s own Mathematics and Computer Science Division.

    COBALT has served the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility, for years, but after careful consideration of several factors, including cost and efficiency, the laboratory determined that a collaboration with Altair on the PBS Professional™ open source solution was the best path forward.

    “When we went to talk to Altair, we were looking for a resource manager (one of the components in a workload manager) we could use,” said Bill Allcock, manager of the Advanced Integration Group at the ALCF. “We decided to collaborate on the entire workload manager rather than just the resource manager because our future roadmaps were well aligned.”

    Altair was already working on a couple of important features that the laboratory wanted to employ with Aurora, Allcock said.

    And most importantly, the teams meshed well together.

    “Exascale will be a huge milestone in HPC — to make better products, to make better decisions, to make the world a better place,” said Bill Nitzberg, chief technology officer of Altair PBS Works™. “Getting to exascale requires innovation, especially in systems software, like job scheduling. The partnership between Altair and Argonne will enable effective exascale scheduling, not only for Aurora, but also for the wider HPC world. This is a real 1+1=3 partnership.”

    Aurora is expected to have a significant impact on nearly every field of scientific endeavor, including artificial intelligence. It will improve extreme weather forecasting, accelerate medical treatments, help map the human brain, develop new materials and further our understanding of the universe.

    It will also play a pivotal role in national security and human health.

    “We want to enable researchers to conduct the most important science possible, projects that cannot be done anywhere else in the world because they demand a machine of this size, and this partnership will help us reach this goal,” said Allcock.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 12:07 pm on November 19, 2019
    Tags: "Lenovo and Intel team up for Harvard Supercomputer and New Exascale Visionary Council", From insideHPC, Supercomputing

    From insideHPC: “Lenovo and Intel team up for Harvard Supercomputer and New Exascale Visionary Council” 

    From insideHPC


    Today Lenovo announced the deployment of Cannon, Harvard University’s first liquid-cooled supercomputer. Developed in cooperation with Intel, the new system’s advanced supercomputing infrastructure will enable discoveries in areas such as earthquake forecasting, predicting the spread of disease, and star formation.

    “Cannon performs 3-4x faster than its predecessor with the upgrade to Lenovo’s ThinkSystem SD650 NeXtScale servers, Neptune liquid cooling, and 2nd Generation Intel Xeon Platinum 8268 processors.”

    Leveraging Lenovo and Intel’s long-standing collaboration to advance HPC and artificial intelligence (AI) in the data center, Harvard’s Faculty of Arts and Sciences Research Computing unit (FASRC) sought to refresh its previous cluster, Odyssey. FASRC wanted to keep the processor count high and increase the performance of each individual processor, knowing that 25 percent of all calculations are run on a single core. Liquid cooling is paramount to support the increased levels of performance today, and the extra capacity needed to scale in the future.

    Cannon, comprising more than 30,000 2nd gen Intel Xeon Scalable processor cores, includes Lenovo’s Neptune liquid cooling technology, which uses the superior heat-conducting efficiency of water versus air. Now, critical server components can operate at lower temperatures, allowing for greater performance and energy savings. The dramatically enhanced performance enabled by the new system reflects Lenovo’s focus on bringing exascale-level technologies to a broad universe of users everywhere – what Lenovo has coined “From Exascale to Everyscale.”

    Though the Cannon storage system is spread across multiple locations, the primary compute is housed in the Massachusetts Green High Performance Computing Center, a LEED Platinum-certified data center in Holyoke, MA. The Cannon cluster includes 670 Lenovo ThinkSystem SD650 NeXtScale servers featuring direct-to-node water-cooling, and Intel Xeon Platinum 8268 processors consisting of 24 cores per socket and 48 cores per node. Each Cannon node is now several times faster than any previous cluster node, with jobs like geophysics models of the Earth performing 3-4 times faster than the previous system. In the first four weeks of production operation, Cannon completed over 4.2 million jobs utilizing over 21 million CPU hours.
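
    Those figures are consistent with near-continuous use of the machine. As a back-of-the-envelope check (assuming all 670 nodes were available for the full four weeks), the cluster ran at roughly 97 percent average utilization over that period:

    # Consistency check on the reported first-month numbers (all 670 nodes assumed available).
    nodes, cores_per_node = 670, 48
    total_cores = nodes * cores_per_node                 # 32,160 cores ("more than 30,000")

    hours_in_four_weeks = 4 * 7 * 24                     # 672 hours
    available_core_hours = total_cores * hours_in_four_weeks
    used_core_hours = 21e6                               # "over 21 million CPU hours"

    print(f"{total_cores:,} cores, ~{used_core_hours / available_core_hours:.0%} average utilization")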

    “Science is all about iteration and repeatability. But iteration is a luxury that is not always possible in the field of university research because you are often working against the clock to meet a deadline,” said Scott Yockel, director of research computing at Harvard University’s Faculty of Arts and Sciences. “With the increased compute performance and faster processing of the Cannon cluster, our researchers now have the opportunity to try something in their data experiment, fail, and try again. Allowing failure to be an option makes our researchers more competitive.”

    The additional cores and enhanced performance of the system are also attracting researchers from additional departments at the university, such as Psychology and the School of Public Health, to more frequently leverage its machine learning capabilities to speed and improve their discoveries.

    Lenovo Launches Exascale Visionary Council

    In related news, Lenovo and Intel announced the creation of an exascale visionary council called Project Everyscale. The project mission is to enable broad adoption of exascale-focused technologies for organizations of all sizes.

    Project Everyscale will address the range of component technologies being developed to make exascale computing possible. Areas of focus will touch all aspects of HPC system design, from alternative cooling technologies to efficiency, density, racks, storage, the convergence of traditional HPC and AI, and more. The visionaries on the council will bring their insights as customers to bear in setting the direction for exascale innovation that everyone can use, working together to form a cohesive picture of the industry’s future.

    “Working with Intel, we are now bringing together some of the biggest names and brightest minds of HPC to develop an innovation roadmap that will push the design and dissemination of exascale technologies to users of all sizes,” said Scott Tease, general manager for HPC and AI, Lenovo Data Center Group.


    Member organizations are leading the way on groundbreaking research into some of the world’s greatest challenges in fields such as computational chemistry, geospatial analysis, astronomy, climate change, healthcare, and meteorology.

    “Intel is proud to be an integral part of this important endeavor in supercomputing along with Lenovo and other leaders in HPC,” said Trish Damkroger, vice president and general manager of the Extreme Computing Organization at Intel. “With Project Everyscale, our goal is to democratize exascale technologies and bring leading Xeon scalable processors, accelerators, storage, fabrics, software and more to HPC customers of every scale or any workload.”
    The Council is slated to kick off its work early in 2020.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 12:22 pm on November 14, 2019 Permalink | Reply
    Tags: "Tomorrow’s Data Centers", , , Bringing the speed high data capacity and low-energy use of light (optics) to advanced internet infrastructure architecture., Supercomputing, The amount of worldwide data traffic is driving up the capacity inside data centers to unprecedented levels and today’s engineering solutions break down., The deluge of data we transmit across the globe via the internet-enabled devices and services that come online every day has required us to become much more efficient., The keys according to Blumenthal are to shorten the distance between optics and electronics., This challenge is a now job for Blumenthal’s FRESCO: FREquency Stabilized COherent Optical Low-Energy Wavelength Division Multiplexing DC Interconnects., , While still in early stages the FRESCO team’s technology is very promising.   

    From UC Santa Barbara: “Tomorrow’s Data Centers” 

    UC Santa Barbara Name bloc
    From UC Santa Barbara

    November 12, 2019
    Sonia Fernandez

    The deluge of data we transmit across the globe via the internet-enabled devices and services that come online every day has required us to become much more efficient with the power, bandwidth and physical space needed to maintain the technology of our modern online lives and businesses.

    L to r: Electrical and computer engineering professor Dan Blumenthal, and doctoral student researchers Grant Brodnik and Mark Harrington
    Photo Credit: Sonia Fernandez

    “Much of the world today is interconnected and relies on data centers for everything from business to financial to social interactions,” said Daniel Blumenthal, a professor of electrical and computer engineering at UC Santa Barbara. The amount of data now being processed is growing so fast that the power needed just to get it from one place to another along the so-called information superhighway constitutes a significant portion of the world’s total energy consumption, he said. This is particularly true of interconnects — the part of the internet infrastructure tasked with getting data from one location to another.

    “Think of interconnects as the highways and the roads that move data,” Blumenthal said. There are several levels of interconnects, from the local types that move data from one device on a circuit to the next, to versions that are responsible for linkages between data centers. The energy required to power interconnects alone is 10% of the world’s total energy consumption and climbing, thanks to the growing amount of data that these components need to turn from electronic signals to light, and back to electronic signals. The energy needed to keep the data servers cool also adds to total power consumption.

    “The amount of worldwide data traffic is driving up the capacity inside data centers to unprecedented levels and today’s engineering solutions break down,” Blumenthal explained. “Using conventional methods as this capacity explodes places a tax on the energy and cost requirements of physical equipment, so we need drastically new approaches.”

    As the demand for additional infrastructure to maintain the performance of the superhighway increases, the physical space needed for all these components and data centers is becoming a limiting factor, creating bottlenecks of information flow even as data processing chipsets increase their capacity to a whopping 100 terabits per second.

    “The challenge we have is to ramp up for when that happens,” said Blumenthal, who also serves as director for UC Santa Barbara’s Terabit Optical Ethernet Center, and represents UC Santa Barbara in Microsoft’s Optics for the Cloud Research Alliance.

    This challenge is now a job for Blumenthal’s FRESCO: FREquency Stabilized COherent Optical Low-Energy Wavelength Division Multiplexing DC Interconnects. Bringing the speed, high data capacity and low-energy use of light (optics) to advanced internet infrastructure architecture, the FRESCO team aims to solve the data center bottleneck while bringing energy usage and space needs to a more sustainable level.

    The effort is funded by ARPA-e under the OPEN 2018 program and represents an important industry-university partnership with emphasis on technology transition. The FRESCO project involves important industry partners like Microsoft and Barefoot Networks (now Intel), who are looking to transition new technologies to solve the problems of exploding chip and data center capacities.

    The keys, according to Blumenthal, are to shorten the distance between optics and electronics, while also drastically increasing the efficiency of maintaining the synchrony of the optical signal between the transmitting and receiving end of the interconnect.

    FRESCO can accomplish this by bringing the performance of optical technology — currently relegated to long-haul transmission via fiberoptic cable — to the chip and co-locating both optic and electronic components on the same switch chip.

    “The way FRESCO is able to do this is by bringing to bear techniques from large-scale physics experiments to the chip scale,” Blumenthal said. It’s a departure from the more conventional faceplate-and-plug technology, which requires signal to travel some distance to be converted before moving it along.

    From Big Physics to Small Chips

    Optical signals can be stacked in a technique known as coherent wavelength-division multiplexing (WDM), which allows signals to be sent on different frequencies — colors — over a single optical fiber. However, because of space constraints, Blumenthal said, the traditional measures used to process long-haul optical signals, including electronic digital signal processing (DSP) chips and very high-bandwidth circuits, have to be removed from the interconnect links.
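    To make the scaling concrete (an illustrative calculation with example numbers, not figures from the article), the aggregate capacity of a WDM link is simply the number of wavelength channels multiplied by the data rate each channel carries:

        # Illustrative WDM capacity arithmetic (example numbers, not from the article).
        channels = 64               # wavelengths multiplexed onto one fiber
        gbps_per_channel = 400      # data rate carried on each wavelength
        print(channels * gbps_per_channel / 1000)   # 25.6 Tb/s on a single fiber

        # Dense-WDM channels are commonly spaced 50 or 100 GHz apart, so 64
        # channels at 100 GHz spacing occupy about 6.4 THz of optical spectrum.
        spacing_ghz = 100
        print(channels * spacing_ghz / 1000)        # ~6.4 THz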

    FRESCO does away with these components with an elegant and powerful technique that “anchors” the light at both transmitting and receiving ends, creating spectrally pure stable light that Blumenthal has coined “quiet light.”

    “In order to do that we actually bring in light stabilization techniques and technologies that have been developed over the years for atomic clocks, precision metrology and gravitational wave detection, and use this stable, quiet light to solve the data center problem,” Blumenthal said. “Bringing key technologies from the big physics lab to the chip scale is the challenging and fun part of this work.”

    Specifically, he and his team have been using a phenomenon called stimulated Brillouin scattering, which is characterized by the interaction of light — photons — with sound produced inside the material through which it is traveling. These sound waves — phonons — are the result of the collective light-stimulated vibration of the material’s atoms, which act to buffer and quiet otherwise “noisy” light frequencies, creating a spectrally pure source at the transmitting and receiving ends. The second part of the solution is to anchor or stabilize these pure light sources using optical cavities that store energy with such high quality that the lasers are anchored in a way that allows them to be aligned using low-energy electronic circuits used in the radio world.
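    For reference (a standard textbook relation, not a result from the article), stimulated Brillouin scattering shifts the back-scattered light down in frequency by an amount set by the material’s refractive index and acoustic velocity:

        % Standard stimulated Brillouin frequency shift (textbook relation,
        % not taken from the article): n is the refractive index, v_a the
        % acoustic velocity in the material, and lambda_p the pump wavelength.
        \[
          \nu_B = \frac{2 n v_a}{\lambda_p}
        \]
        % For silica at lambda_p ~ 1550 nm (n ~ 1.45, v_a ~ 5.96 km/s),
        % this gives nu_B of roughly 11 GHz.

    It is this narrow, frequency-shifted Brillouin light that serves as the spectrally pure source the article describes.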

    The act of alignment requires that the light frequency and phase are kept equal so that data can be recovered. This normally requires high-power analog electronics or high-powered digital signal processors (DSPs), which are not viable solutions for bringing this capacity inside the data center: a data center contains hundreds of thousands of fiber connections, compared with tens of connections in the long-haul. Moreover, the more energy and space the technologies inside the data center consume, the more must be spent on cooling it.

    “There is very little energy needed to just keep them aligned and finding each other, similar to that of electronic circuits used for radio,” Blumenthal said of FRESCO. “That is the exciting part — we are enabling a transmission carrier at 400 THz to carry data using low-energy, simple electronic circuits, as opposed to the use of DSPs and high-bandwidth circuitry, which in essence throws a lot of processing power at the optical signal to hunt down and match the frequency and phase of the optical signal so that data can be recovered.” With the FRESCO method, the lasers at the transmitting and receiving ends are “anchored within each other’s sights in the first place, and drift very slowly on the order of minutes, requiring very little effort to track one with the other,” according to Blumenthal.
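    As a rough illustration of that point (a toy model with assumed numbers, not the FRESCO design), a slowly drifting frequency offset between two lasers can be held to a few hertz by a simple, low-rate feedback loop:

        # Toy model (illustrative only, not the FRESCO implementation): tracking a
        # slowly drifting frequency offset with a simple integrator-style loop.
        drift_rate = 100.0    # assumed relative laser drift: 100 Hz per second
        dt = 1e-3             # loop update interval: 1 ms (low-speed electronics)
        loop_gain = 0.05      # fraction of the measured error corrected per step

        offset = 0.0          # true frequency offset between the two lasers (Hz)
        correction = 0.0      # correction applied to the local laser (Hz)
        max_residual = 0.0

        for _ in range(60_000):                # simulate one minute
            offset += drift_rate * dt          # lasers drift slowly apart
            residual = offset - correction     # error seen by the receiver
            correction += loop_gain * residual # slow feedback nudges the local laser
            max_residual = max(max_residual, abs(residual))

        print(f"worst-case residual offset: {max_residual:.1f} Hz")
        # With millisecond-rate updates and modest gain the residual settles near
        # drift_rate * dt / loop_gain = 2 Hz, illustrating why slowly drifting
        # sources need only low-bandwidth, low-power electronics to stay aligned.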

    On the Horizon, and Beyond

    While still in early stages, the FRESCO team’s technology is very promising. Having developed discrete components, the team is poised to demonstrate the concept by linking those components, measuring energy use, then transmitting the highest data capacity over a single frequency with the lowest energy to date on a frequency stabilized link. Future steps include demonstrating multiple frequencies using a technology called optical frequency combs that are integral to atomic clocks, astrophysics and other precision sciences. The team is in the process of integrating these components onto a single chip, ultimately aiming to develop manufacturing processes that will allow for transition to FRESCO technology.

    This technology is likely only the tip of the iceberg when it comes to possible innovations in the realm of optical telecommunications.

    “We see our chipset replacing over a data center link what today would take between four to 10 racks of equipment,” Blumenthal said. “The fundamental knowledge gained by developing this technology could easily enable applications we have yet to invent, for example in quantum communications and computing, precision metrology and precision timing and navigation.”

    “If you look at trends, over time you can see something that in the past took up a room full of equipment become something that was personally accessible through a technology innovation — for example supercomputers that became laptops through nanometer transistors,” he said of the disruption that became the wave in personal computing and everything that it enabled. “We know now how we want to apply the FRESCO technology to the data center scaling problem, but we think there also are going to be other unforeseen applications too. This is one of the primary reasons for research exploration and investment without knowing all the answers or applications beforehand.”

    See the full article here .

    five-ways-keep-your-child-safe-school-shootings

    Please help promote STEM in your local schools.

    Stem Education Coalition
    The University of California, Santa Barbara (commonly referred to as UC Santa Barbara or UCSB) is a public research university and one of the 10 general campuses of the University of California system. Founded in 1891 as an independent teachers’ college, UCSB joined the University of California system in 1944 and is the third-oldest general-education campus in the system. The university is a comprehensive doctoral university and is organized into five colleges offering 87 undergraduate degrees and 55 graduate degrees. In 2012, UCSB was ranked 41st among “National Universities” and 10th among public universities by U.S. News & World Report. UCSB houses twelve national research centers, including the renowned Kavli Institute for Theoretical Physics.

     