Tagged: ASCR Discovery

  • richardmitnick 11:18 am on February 7, 2023 Permalink | Reply
    Tags: "High and dry", A new supercomputer drought model projects dry times ahead for much of the nation-especially the Midwest., , ASCR Discovery, , ,   

    From The DOE’s “ASCR Discovery”: “High and dry” 

    From The DOE’s “ASCR Discovery”

    February 2023

    A new supercomputer drought model projects dry times ahead for much of the nation – especially the Midwest.

    A detail from a recent map that depicts areas of intensive drought in the West and Midwest. Image courtesy of the North American Drought Monitor.

    Midwesterners needn’t bother choosing their poison: droughts or floods. They get a double dose of both.

    The region is experiencing what weather experts call a flash drought, says Rao Kotamarthi, who heads climate and earth system science at the DOE’s Argonne National Laboratory near Chicago.

    “One of the clearest indicators of climate change is that you get intense periods of precipitation,” he says. The Midwest today can experience intense downpours with drought-like conditions lasting for several weeks in between. “Now some farmers actually have to start irrigating even in northern Illinois, which is a big change from before.”

    Kotamarthi’s team published climate-modeling data in Scientific Reports [below] last year that could help U.S. policymakers better anticipate droughts and floods.

    Figure 1
    June, July, August, and September 2003 PDSI, SPEI (1-month), EDDI, and SVDI_NLDAS (SVDI). The black box represents a Flash Drought area from July 1–September 2, 2003. The USDM index is a weekly index, and dates represent the week ending that date. The SVDI index is a daily index, and the monthly value is averaged for each month. The EDDI index is averaged on the last day of each month for the previous 30 days. The SVDI, PDSI, SPEI, and EDDI plots were generated using the Matplotlib library for the Python programming language (https://matplotlib.org/). The USDM maps are courtesy of NDMC-UNL and were accessed from https://droughtmonitor.unl.edu/NADM/Maps.aspx. The USDM is jointly produced by the National Drought Mitigation Center (NDMC) at the University of Nebraska-Lincoln (UNL), the United States Department of Agriculture, and the National Oceanic and Atmospheric Administration.

    The paper expands upon an Argonne-AT&T collaboration that led to a 2019 AT&T white paper, The Road to Climate Resiliency, focusing on the southeastern U.S. AT&T and DOE’s Biological and Environmental Research and Advanced Scientific Computing Research programs supported the latest work.

    The project also led to release of a Climate Risk and Resilience Portal (ClimRR), developed by Argonne’s Center for Climate Resilience and Decision Science in collaboration with AT&T and the Federal Emergency Management Agency. ClimRR lets users explore future precipitation, temperature and wind for the continental U.S. at high spatial resolution.

    There are more than 50 metrics for gauging when a drought occurs – factors such as temperature, precipitation and evapotranspiration. None of them, however, can quickly project drought onset. The Argonne team has established the Standardized Vapor Pressure Deficit drought index (SVDI) to do just that.

    “You can calculate vapor pressure deficit with temperature and relative humidity. It doesn’t include precipitation,” says Argonne’s Brandi Gamelin, lead author of the Scientific Reports paper. Instead, it includes a measure of evaporative demand.

    “If you have higher evaporative demand, it’s going to pull more moisture out of vegetation and the soil,” drying them, she says. The team also produced a separate wildfire index that is highly correlated to SVDI. In fact, SVDI also works for drought.
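    The quantity itself is straightforward to compute. As a rough illustration of the calculation Gamelin describes – not the SVDI implementation from the Scientific Reports paper – the sketch below derives vapor pressure deficit from temperature and relative humidity using the widely used Tetens approximation for saturation vapor pressure; the function names and the simple standardization at the end are illustrative assumptions.

    ```python
    import numpy as np

    def saturation_vapor_pressure_kpa(temp_c):
        """Saturation vapor pressure (kPa) from air temperature (deg C), Tetens approximation."""
        return 0.6108 * np.exp(17.27 * temp_c / (temp_c + 237.3))

    def vapor_pressure_deficit_kpa(temp_c, rel_humidity_pct):
        """Vapor pressure deficit (kPa): saturation vapor pressure minus actual vapor pressure."""
        e_sat = saturation_vapor_pressure_kpa(temp_c)
        return e_sat * (1.0 - rel_humidity_pct / 100.0)

    # A hot, dry afternoon has far higher evaporative demand than a mild, humid one.
    print(vapor_pressure_deficit_kpa(35.0, 20.0))   # roughly 4.5 kPa
    print(vapor_pressure_deficit_kpa(22.0, 80.0))   # roughly 0.5 kPa

    # One simple way to turn a time series of VPD values into a standardized index
    # (the actual SVDI standardization in the paper may differ).
    def standardized_index(vpd_series):
        vpd_series = np.asarray(vpd_series, dtype=float)
        return (vpd_series - vpd_series.mean()) / vpd_series.std()
    ```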

    Many indices rely on measures of reduced rainfall to define drought. But “we can go months in many California locations without rainfall,” says Gamelin, a native of the state. “It’s difficult to use the same measure in California as you would use, say, in the Midwest related to drought and agriculture or wildfire risk.”

    Gamelin compared her new drought index against other available measures and showed that it works just as well.

    “Her models gave us confidence that this is a good way to go,” Kotamarthi says. “Vapor pressure deficit is not complicated either to model or measure. One of the things that we push in the paper is how this is useful in the bigger context as climate change increases flash droughts.”

    The models forecast climate change at a high spatial resolution, calculating projections for areas measuring 12 square kilometers (4.6 square miles). The team ran the code on supercomputers at the National Energy Research Scientific Computing Center at The DOE’s Lawrence Berkeley National Laboratory, and at The DOE’s Argonne Leadership Computing Facility.

    The team aims to tighten the resolution to four square kilometers (1.5 square miles). That would generate about 4 petabytes of data, the equivalent of 200 billion pages of text.

    The goal: help zoom in on the global climate model, which operates at a scale of 10,000 square kilometers (3,861 square miles), to 100 square kilometers (38.6 square miles) and now to around 16 square kilometers. The simulations focus on extreme and quickly occurring events, generating data at three-hour intervals.

    The Scientific Reports paper projected the frequency of droughts that happen once in 10, 25 and 50 years. A 50-year drought, for example, has a 2% chance of occurring in any given year and would be widespread, affecting the Midwest, Southwest and Northwest. “The areas affected by drought do increase,” Kotamarthi says. “By mid-century, you see larger portions of the Midwest experiencing drought in general.”

    Since publishing its drought index, the Argonne team has applied new machine-learning methods to identify both short-term and long-term drought. The team caps projections at 50 years to keep uncertainty values within reasonable limits.

    Last year’s study focused on a short-term drought index that worked well, Gamelin says. “Now we’re looking to understand long-term drought better with it.”

    A new study will test the index to see if it can identify where and when droughts happened between 1980 and 2021 and probe more deeply into why they began and ended. Then the researchers will apply the methods to projecting future droughts.

    Drought can dry the soil but so can wildfire. The heat from wildfires forms a barrier on mountain slopes, resulting in hydrophobic, or water-repellent, soils, “so you have a higher risk or incidence of flash flooding,” Gamelin says. These conditions often lead to droughts and floods happening in tandem. Water flowing downhill also lubricates debris flows, adding to the calamity. “That’s a risk up and down California in the mountains and hills.”

    Kotamarthi and Gamelin stress the importance of quantifying uncertainty when considering models of future droughts. “These are projections. They’re not predictions,” Gamelin says.

    To calculate uncertainty, the team statistically samples time series and location data 500 times from the Argonne model and three others that are widely used. Each new series then undergoes an extreme value analysis to create a range of minimum and maximum values. The uncertainty figures may help policymakers decide how best to plan for extreme events of varying magnitude.
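    That general recipe – resample the record, refit an extreme value distribution, and read off 10-, 25- and 50-year return levels – can be sketched compactly. The example below is only an illustration of such a bootstrap-plus-extreme-value workflow on synthetic data, using SciPy’s generalized extreme value distribution; it is not the team’s pipeline, and the 500 resamples simply echo the figure quoted above.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(42)

    # Illustrative stand-in for 40 years of annual maxima of a drought index at one location.
    annual_maxima = genextreme.rvs(c=-0.1, loc=1.0, scale=0.3, size=40, random_state=rng)

    return_periods = np.array([10, 25, 50])      # years
    exceed_prob = 1.0 / return_periods           # annual exceedance probability

    n_boot = 500
    levels = np.empty((n_boot, return_periods.size))
    for i in range(n_boot):
        resample = rng.choice(annual_maxima, size=annual_maxima.size, replace=True)
        shape, loc, scale = genextreme.fit(resample)
        # Return level: the index value exceeded with probability 1/T in a given year.
        levels[i] = genextreme.isf(exceed_prob, shape, loc=loc, scale=scale)

    low, high = np.percentile(levels, [5, 95], axis=0)
    for period, lo, hi in zip(return_periods, low, high):
        print(f"{period:>2}-year return level: {lo:.2f} to {hi:.2f} (90% bootstrap range)")
    ```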

    “Money is not unlimited,” Kotamarthi says. “You may want to make your system resilient to a 50-year drought or a once-a-year drought. We are hoping that this kind of information provides decision-makers some points to think about.”

    Scientific Reports

    See the full article here.

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of The U.S. Department of Energy

    The United States Department of Energy (DOE) is a cabinet-level department of the United States Government concerned with the United States’ policies regarding energy and safety in handling nuclear material. Its responsibilities include the nation’s nuclear weapons program; nuclear reactor production for the United States Navy; energy conservation; energy-related research; radioactive waste disposal; and domestic energy production. It also directs research in genomics; the Human Genome Project originated in a DOE initiative. DOE sponsors more research in the physical sciences than any other U.S. federal agency, the majority of which is conducted through its system of National Laboratories. The agency is led by the United States Secretary of Energy, and its headquarters are located in Southwest Washington, D.C., on Independence Avenue in the James V. Forrestal Building, named for James Forrestal, as well as in Germantown, Maryland.

    Formation and consolidation

    In 1942, during World War II, the United States started the Manhattan Project, a project to develop the atomic bomb, under the eye of the U.S. Army Corps of Engineers. After the war, in 1946, the Atomic Energy Commission (AEC) was created to control the future of the project. The Atomic Energy Act of 1946 also created the framework for the first National Laboratories. Among other nuclear projects, the AEC produced fabricated uranium fuel cores at locations such as Fernald Feed Materials Production Center in Cincinnati, Ohio. In 1974, the AEC gave way to the Nuclear Regulatory Commission, which was tasked with regulating the nuclear power industry, and the Energy Research and Development Administration, which was tasked with managing the nuclear weapons, naval reactor and energy development programs.

    The 1973 oil crisis called attention to the need to consolidate energy policy. On August 4, 1977, President Jimmy Carter signed into law The Department of Energy Organization Act of 1977 (Pub.L. 95–91, 91 Stat. 565, enacted August 4, 1977), which created the Department of Energy. The new agency, which began operations on October 1, 1977, consolidated the Federal Energy Administration; the Energy Research and Development Administration; the Federal Power Commission; and programs of various other agencies. Former Secretary of Defense James Schlesinger, who served under Presidents Nixon and Ford during the Vietnam War, was appointed as the first secretary.

    President Carter created the Department of Energy with the goal of promoting energy conservation and developing alternative sources of energy. He wanted to reduce dependence on foreign oil and cut the use of fossil fuels. With America’s international energy future uncertain, Carter acted quickly to get the department up and running in the first year of his presidency – an urgent move at a time when the oil crisis was causing shortages and inflation. After the Three Mile Island accident, Carter intervened with the department’s help, making changes within the Nuclear Regulatory Commission to fix its management and procedures. This was possible because nuclear energy and weapons are the responsibility of the Department of Energy.

    Recent

    On March 28, 2017, a supervisor in the Office of International Climate and Clean Energy asked staff to avoid the phrases “climate change,” “emissions reduction,” or “Paris Agreement” in written memos, briefings or other written communication. A DOE spokesperson denied that the phrases had been banned.

    In a May 2019 press release concerning natural gas exports from a Texas facility, the DOE used the term ‘freedom gas’ to refer to natural gas. The phrase originated from a speech made by Secretary Rick Perry in Brussels earlier that month. Washington Governor Jay Inslee decried the term as “a joke.”

    Facilities
    Supercomputing

    The Department of Energy operates a system of national laboratories and technical facilities for research and development, as follows:

    Ames Laboratory
    Argonne National Laboratory
    Brookhaven National Laboratory
    Fermi National Accelerator Laboratory
    Idaho National Laboratory
    Lawrence Berkeley National Laboratory
    Lawrence Livermore National Laboratory
    Los Alamos National Laboratory
    National Renewable Energy Laboratory
    Oak Ridge National Laboratory
    Pacific Northwest National Laboratory
    Princeton Plasma Physics Laboratory
    Sandia National Laboratories
    Savannah River National Laboratory
    SLAC National Accelerator Laboratory
    Thomas Jefferson National Accelerator Facility
    Other major DOE facilities include:
    Albany Research Center
    Bannister Federal Complex
    Bettis Atomic Power Laboratory – focuses on the design and development of nuclear power for the U.S. Navy
    Kansas City Plant
    Knolls Atomic Power Laboratory – operates for Naval Reactors Program Research under the DOE (not a National Laboratory)
    National Petroleum Technology Office
    Nevada Test Site
    New Brunswick Laboratory
    Office of Fossil Energy
    Office of River Protection
    Pantex
    Radiological and Environmental Sciences Laboratory
    Y-12 National Security Complex
    Yucca Mountain nuclear waste repository
    Other:

    Pahute Mesa Airstrip – Nye County, Nevada, supporting the Nevada National Security Site

     
  • richardmitnick 9:26 am on November 19, 2020 Permalink | Reply
    Tags: "Climate on a new scale", ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer being built at DOE's Argonne National Laboratory., , ASCR Discovery, , , Frontier Shasta based Exascale supercomputer at DOE's Oak Ridge National Laboratory.   

    From ASCR Discovery: “Climate on a new scale” 

    From ASCR Discovery

    An E3SM simulation showing global eddy activity. Credit: M. Petersen, P. Wolfram and T. Ringler/E3SM/Los Alamos National Laboratory.

    November 2020

    Note: Sandia National Laboratories’ Mark Taylor is co-author of a paper, A Performance-Portable Nonhydrostatic Atmospheric Dycore for the Energy Exascale Earth System Model Running at Cloud-Resolving Resolutions, being presented Nov. 19 at SC20.

    How stable will the Antarctic ice sheet be over the next 40 years? And what will models of important water cycle features – such as precipitation, rainfall patterns, and droughts – reveal about river flow and freshwater supplies at the watershed scale?

    These are two key questions Department of Energy (DOE) researchers are exploring via simulations on the Energy Exascale Earth System Model (E3SM). “The DOE’s Office of Science has a mission to study the Earth’s system – particularly focusing on the societal impacts around the water cycle and sea level rise,” says Mark Taylor, chief computational scientist for Sandia National Laboratories’ E3SM project. “We’re interested in how they will affect agriculture and energy production.”

    E3SM encompasses atmosphere, ocean and land models, plus sea- and land-ice models. “It’s a big operation, used by lots of groups of scientists and researchers to study all aspects of the climate system,” Taylor says.

    “Big operation” describes most climate system models and simulations, which often require enormous data-handling and computing power. DOE is tapping exascale computing, capable of a quintillion (10¹⁸) calculations per second, to help improve them.

    “The DOE wants a model to address their science questions,” Taylor says. “Each of the different models are big in their own right, so this whole process involves a lot of software – on the order of one million to two million lines of code.”

    DOE is building two exascale computers – one at Oak Ridge National Laboratory, the other at Argonne National Laboratory – and they’re expected to come on line for research in 2021. They will be among the world’s fastest supercomputers.

    ORNL Cray Frontier Shasta based Exascale supercomputer with Slingshot interconnect featuring high-performance AMD EPYC CPU and AMD Radeon Instinct GPU technology, being built at DOE’s Oak Ridge National Laboratory.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer, being built at DOE’s Argonne National Laboratory.

    Taylor is a mathematician who specializes in numerical methods for parallel computing, the ubiquitous practice of dividing a big problem between many processors to quickly reach a solution. He focuses on the E3SM project’s computing and performance end. The move to exascale, Taylor says, is “a big and challenging task.”

    The Sandia work is part of a DOE INCITE (Innovative and Novel Computational Impact on Theory and Experiment) award, for which Taylor is the principal investigator.

    The original Community Earth System Model (CESM) was in the Fortran programming language for standard central processing unit (CPU) computers and developed during a decades-long collaboration between the National Science Foundation and DOE laboratories.

    “To get to exascale, the DOE is switching to graphics processing units (GPUs), which are designed to perform complex calculations for graphics rendering,” Taylor explains. “And since most of the horsepower in exascale machines will come from GPUs, we need to adapt our code” to them.

    So far, code is the biggest challenge they’ve faced in the transition to exascale computing. “It’s been a long process,” Taylor says, one that includes developing a new programming model. “During the past four years, we’ve converted about half of the code,” rewriting some of it in the C++ language and using a directive-based language called OpenACC for another portion of the program.

    The effort, Taylor adds, requires about 150 people, spread out over six laboratories.

    Another challenge: “We wanted to figure out which types of simulations are better for traditional CPU machines,” he says, “because we really want to use the new computers for things that we couldn’t do on the older computers they’re replacing. Some things are ideally suited for CPUs, and you really shouldn’t try to get them to run on GPUs.”

    It turns out that tasks like simulating clouds work best on GPUs. So he and his colleagues are creating a new version of a cloud-resolving model. “We’re pushing the resolution in the calculations up so we can resolve the processes responsible for cloud formations and thunderstorms,” he says. “This allows us to start modeling individual thunderstorms within this global model of the Earth’s atmosphere. It’s really cool to see these realistic models of them developing – simulated on a computer.”

    Exascale power will help researchers simulate and explore changes within the hydrological cycle, with a special focus on precipitation and surface water in mountains, such as the ranges of the western United States and Amazon headwaters. It also should help with a study of how rapid melting of the Antarctic ice sheet by adjacent warming waters could trigger its collapse – with the global simulation to include dynamic ice-shelf/ocean interactions.

    DOE’s ultimate goal for E3SM is to “advance a robust predictive understanding of Earth’s climate and environmental systems” to help create sustainable solutions to energy and environmental challenges. The first version of E3SM was available for use in 2018. The next is expected to be ready in 2021.


    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of The U.S. Department of Energy

     
  • richardmitnick 10:54 am on October 14, 2020 Permalink | Reply
    Tags: "Banishing blackouts", ANL-Argonne National Laboratory, , ASCR Discovery,   

    From ASCR Discovery: “Banishing blackouts” 

    From ASCR Discovery

    An Argonne researcher upgrades supercomputer optimization algorithms to boost reliability and resilience in U.S. power systems.

    Early Career awardee Kibaek Kim’s research thrust: keeping the lights on. Image courtesy of Argonne National Laboratory.

    The most extensive blackout in North American history occurred on Aug. 14, 2003, affecting an estimated 55 million people in nine U.S. states across the Midwest and Northeast and the Canadian province of Ontario.

    Such high-impact outages are rare, and Kibaek Kim, a computational mathematician at Argonne National Laboratory, aims to keep them that way. He’s in the second year of a five-year, $2.5 million Department of Energy Office of Science Early Career Research Award to develop data-driven optimization algorithms that account for power grid uncertainties.

    “Right now, one of the issues in the grid system is that we have a lot of distributed energy resources,” says Kim, who received his doctorate from Northwestern University before joining Argonne’s Mathematics and Computer Science Division. Such resources include solar panels, electric storage systems, or other energy-related devices located at privately owned homes or commercial buildings. “These resources are not controllable by system operators – utilities, for example. These are considered as uncertain generating units.”

    Such components put uncontrollable and potentially risky uncertainties into the grid. If the system somehow becomes imbalanced, “it can lead to power outages and even wider-area blackouts,” Kim notes. “The question we’re trying to address is how to avoid these blackouts or outages, how to make the system reliable or resilient.”

    In 2004, the U.S.-Canada Power System Outage Task Force issued a report on the causes of the 2003 blackout and made recommendations to prevent similar events. “Providing reliable electricity is an enormously complex technical challenge, even on the most routine of days,” the report says. “It involves real-time assessment, control and coordination of electricity production at thousands of generators, moving electricity across an interconnected network of transmission lines, and ultimately delivering the electricity to millions of customers by means of a distribution network.”

    Kim’s project builds on experience he gathered in recent Advanced Grid Modeling program collaborations with researchers in Argonne’s Energy Systems Division. Bolstered by access to Argonne’s Laboratory Computing Resource Center, his initiative relies on machine learning to help make grid planning and operation more reliable. Just last year he led one of 18 teams selected from national laboratories, industry and academia that participated in the first Grid Optimization Competition, sponsored by DOE’s Advanced Research Projects Agency-Energy.

    His new research aims to more fully integrate techniques from two fields: numerical optimization and data science and statistics, which so far have been only loosely connected.

    “I see this as a new paradigm,” Kim says. “As we see more and more data available from sensors, computing and simulation, this is probably the time to integrate the two areas.”

    The work will include investigating what he calls “statistically robust design and control” mechanisms for complex systems, particularly data sampling in the energy sector. “Systems decisions would be made with future uncertainties in mind.”

    Kim will use a sophisticated tool called distributionally robust multistage stochastic optimization to deal with data uncertainty. “This is a relatively new area that considers realistic situations where we can make decisions over time, sequentially, with streaming observation of data points for uncertainties.”

    The optimization algorithms Kim has previously developed and now proposes for his current project will decompose, or systematically split, a large problem into smaller ones. “The adaptive decomposition that I propose here would dynamically adjust the structures, meaning we have a larger piece that would dynamically change the smaller pieces based on algorithm performance.”
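    A deliberately small sketch conveys the decomposition idea, though not Kim’s adaptive algorithm: in a two-stage planning problem under uncertainty, the second-stage cost of each sampled scenario can be evaluated independently – the piece that splits naturally across processors – while an outer search chooses the first-stage decision. The cost figures and scenario model below are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    demand_scenarios = rng.normal(loc=100.0, scale=20.0, size=1000)   # uncertain load

    RESERVE_COST = 1.0        # cost per unit of capacity reserved ahead of time
    SHORTFALL_PENALTY = 4.0   # cost per unit of unmet demand once uncertainty resolves

    def recourse_cost(reserved, demand):
        """Second-stage cost for one scenario. These evaluations are independent of
        one another, so they are the piece that splits naturally across processors."""
        return SHORTFALL_PENALTY * max(demand - reserved, 0.0)

    def expected_total_cost(reserved):
        # The big problem decomposes into one small subproblem per scenario;
        # here they are evaluated in a simple loop and averaged.
        second_stage = np.mean([recourse_cost(reserved, d) for d in demand_scenarios])
        return RESERVE_COST * reserved + second_stage

    result = minimize_scalar(expected_total_cost, bounds=(0.0, 200.0), method="bounded")
    print(f"reserve about {result.x:.1f} units; expected cost {result.fun:.1f}")
    ```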

    Kim’s work takes into account how electrical grid systems can be disrupted by extreme weather, adversarial attacks or decentralized control mechanisms, including unexpected consumption patterns. Once a system is disrupted, the control mechanism would act to rapidly restore it to normal operations.

    Motivating Kim’s interest in the project is the increasing availability of sensor data that flow from the grid’s many power systems and related computer networks. The nation’s electrical systems are increasingly incorporating smart-grid sensors, energy storage, microgrids and distributed-energy resources such as solar panels. His work also incorporates data from an Argonne-based National Science Foundation-supported project called the Array of Things that collects and processes real-time power-use and infrastructure data from geographically scattered sensors.

    This trend presents optimization problems requiring algorithms that can operate at multiple scales on powerful computers. Specifically, Kim is developing methods for this purpose to run on the Aurora supercomputer, scheduled for delivery to Argonne next year.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer.

    It will be one of the nation’s first exascale machines, capable of performing a quintillion calculations per second, almost 100 times faster than Argonne’s current top supercomputer.

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility.

    Processing ever-shifting variables in the vast, highly complex U.S. power grid requires Aurora’s high-performance computing brawn. An outage in just a few of the nation’s 20,000 power plants and 60,000 substations, with electricity flowing at nearly the speed of light over 3 million miles of power lines, can lead to more than a billion potential failure scenarios.
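    A back-of-the-envelope count shows how quickly that scenario space explodes. Assuming the billion-plus figure refers to combinations of just a few components failing together out of the roughly 80,000 plants and substations cited above, the pairs alone already number in the billions:

    ```python
    from math import comb

    components = 20_000 + 60_000     # power plants plus substations, per the figures above

    pairs = comb(components, 2)      # any two components failing together
    triples = comb(components, 3)    # any three

    print(f"{pairs:,} two-component outage combinations")      # about 3.2 billion
    print(f"{triples:,} three-component outage combinations")  # about 8.5e13
    ```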

    Kim will adapt his algorithms for parallel computing on machines like Aurora to sift through this massive data flow, a departure from current serial-computing tools that are ill-equipped to make full use of high-performance computers for optimization.

    His research will encompass two applications: resilient smart distribution systems and distributed learning to “serve as a support framework for processing data such as building energy consumption and ambient weather conditions through design and control systems for smart grids,” he notes. “With the data-support system we should be able to rapidly control these smart grids, particularly for contingency situations.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of The U.S. Department of Energy

     
  • richardmitnick 1:20 pm on July 14, 2020 Permalink | Reply
    Tags: "Supernova plunge", ASCR Discovery, , , , , , Most neutron stars become pulsars., , Neutrinos are the 800-pound gorilla because they carry the vast majority of energy away from the star during the explosions., The neutron stars left behind after supernovae are packed so tightly with neutrons and neutrinos that they have a billion times the gravity of Earth.   

    From ASCR Discovery: “Supernova plunge” 

    From ASCR Discovery

    Astrophysicists use DOE supercomputers to reveal supernovae secrets.

    A snapshot of a massive star’s core during a multi-bubble explosion. The roiling, aspherical dynamics mark the start of a supernova explosion. The gray veil is the shock wave; the arrows represent matter-parcel trajectories. Image courtesy of Adam Burrows/Princeton University and Joe Insley, Silvio Rizzi/Argonne National Laboratory.

    Behemoth exploding stars we know as supernovae are among the wildest cosmic events astronomers witness from Earth. They are powerful enough to outshine a galaxy comprised of billions of stars, including our own Milky Way, for a few days, weeks or months. Over recorded history, some of the most powerful supernovae have even been visible in daylight.

    In the past few decades, Princeton University Professor Adam Burrows and his colleagues have focused on the mechanisms that cause supernovae to burst. Though there are several types of exploding stars, the kind that commands Burrows and his team’s attention is the massive-star variety that’s roughly eight to 15 times the weight of our own sun and whose imminent death after 10 million years or so announces the birth of a neutron star, a truly oddball cosmic object.

    Initially the team used Department of Energy (DOE) supercomputing time primarily to simulate the high-powered explosions – which have the potential energy of trillions upon trillions of nuclear weapons – in one or two dimensions. More recently, they’re combining advanced software coding and Argonne National Laboratory’s Cray XC40 system, named Theta, a massively parallel supercomputer, to simulate many more supernova explosions in 3-D.

    Volume rendering of entropy from a 3-D simulation of a core-collapse supernova explosion, including rotation and magnetic fields. The shock wave is visible in blue; plumes of neutrino-driven convection (orange) are above the nascent neutron star (purple). This simulation was executed as part of the INCITE project “Extreme-scale Simulation of Pulsars and Magnetars from Realistic Progenitors.” Image courtesy of Sean Couch, Michigan State University.

    The nature of supernovae is one of astrophysics’ greatest unsolved problems, Burrows says. “Improved calculations and techniques, coupled with access” to high-performance computing (HPC) systems, “have allowed us to gain new insights into the characteristics of supernovae explosions. That’s a breakthrough.”

    These exploding stars, known as core-collapse supernovae, are heavyweights that age until they consume all the hydrogen and helium in their cores. But they still have the mass and pressure needed to fuse carbon, letting them create a suite of heavier elements, including iron, which become their new cores. In this phase these stars resemble a layered onion, with heavy elements inside and lighter elements outside.

    Once the star’s iron core grows to a particular mass, searing temperatures cause it to implode, unable to resist gravity. The protons of atoms inside the core capture electrons, and extreme gravity mashes them together to create neutrons and tiny but mighty subatomic particles called neutrinos.

    Then, huge amounts of infalling material bounce off the core and rebound outward. This creates a titanic shockwave, believed to contain huge amounts of neutrinos, that explodes into a massive, asymmetrical fireball and spews bubbles, tendrils and fingers of material, including all the elements found on Earth, into space.

    “Think of it like a pan of boiling water,” Burrows says. “Neutrinos are heating the star material, causing it to overturn and convect, which drives turbulence. We need to get the turbulence in these simulations right to better understand the cycles of core-collapse supernovae.”

    Neutrinos are so-called ghost particles because they have no electrical charge and whiz through space at about light speed. These particles – a million times lighter than electrons and formed celestially during the Big Bang, in supernovae and in the sun – are so ubiquitous that scientists say trillions pass, unabated, through a human body each second.

    The neutron stars left behind after supernovae are packed so tightly with neutrons and neutrinos that they have a billion times the gravity of Earth. They are so dense they morph into objects as small as Philadelphia and can spin a thousand times a second. A neutron star is so dense a teaspoonful would outweigh Mt. Everest.

    Most neutron stars become pulsars, which are rapidly rotating objects with significant magnetic fields. From Earth, they can look like blinking stars.

    Dame Susan Jocelyn Bell Burnell discovered pulsars with radio astronomy. She is shown at the Mullard Radio Astronomy Observatory, Cambridge University, in a photograph taken for the Daily Herald newspaper in 1968. She was denied the Nobel Prize.

    “Radiation from the magnetic poles of pulsars acts like a lighthouse beam – we see it, then it rotates out of our line of sight, then we see it when it rotates back in again,” says Sean Couch, a Michigan State University assistant professor who heads a second team awarded Argonne supercomputer time to chart supernovae properties. “We want to know how these magnetic fields form, and how pulsars are born.”

    How complex are the 3-D supercomputer calculations? Depicting a second or two of a supernova’s life and death can take weeks or months.

    “Our 3-D simulations have gone from just getting stars to blow up to putting our results up against real data to see if we can build a truly predictive and explanatory model of core-collapse supernovae,” Couch says. “Neutrinos are the 800-pound gorilla because they carry the vast majority of energy away from the star during the explosions.”

    Renewal support for the core-collapse supernovae research Burrows, Couch and their respective teams conduct comes from DOE’s Innovative and Novel Computational Impact on Theory and Experiment, or INCITE program. It annually grants large chunks of HPC time to high-impact projects in science, engineering and computer science.

    Burrows’ team was awarded 2 million node-hours on Theta, while Couch and his team won a million node-hours on the same machine, housed at the Argonne Leadership Computing Facility, a DOE Office of Science user facility.

    ANL ALCF Theta Cray XC40 supercomputer

    A third team headed by DOE Oak Ridge National Laboratory scientist William Raphael Hix is embarking on another INCITE effort to better understand these astronomical fireworks. It’s armed with 600,000 hours on Summit, an IBM AC922 supercomputer that is the nation’s most powerful.

    ORNL IBM AC922 SUMMIT supercomputer, which was No. 1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    Hix and his colleagues are exploring how stellar rotation affects the explosion mechanism of core-collapse supernovae and associated observations. The team also is interested in simulating the raging winds that blow off the surface of newly made neutron stars during these blasts.

    All three groups belong to TEAMS, for Toward Exascale Astrophysics of Mergers and Supernovae, led by Hix. It’s a DOE partnership between the SciDAC (Scientific Discovery through Advanced Computing) program and the Office of Nuclear Physics.

    “The fundamental problem with astrophysics, compared to other fields of physics, is we don’t get to design our experiments,” says Hix, who also is a University of Tennessee professor. “Mother Nature periodically conducts an experiment in our view, and we race to collect whatever information we can.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of The U.S. Department of Energy

     
  • richardmitnick 3:13 pm on November 20, 2019 Permalink | Reply
    Tags: ASCR Discovery

    From ASCR Discovery: “Tracking tungsten” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing



    November 2019

    Supercomputer simulations provide a snapshot of how plasma reacts with – and can damage – components in large fusion reactors.

    A cross-section view of plasma (hotter yellow to cooler blues and purples) as it interacts with the tungsten surface of a tokamak fusion reactor divertor (gray walls in lower half of image), which funnels away gases and impurities. Tungsten atoms can sputter, migrate and redeposit (red squiggles), and smaller ions of helium, deuterium and tritium (red circles) can implant. Some of these interactions are beneficial, but other effects can degrade the tungsten surface and deplete and even quench the fusion reaction over time. Image courtesy of Tim Younkin, University of Tennessee.

    Nuclear fusion offers the tantalizing possibility of clean, sustainable power – if tremendous scientific and engineering challenges are overcome. One key issue: Nuclear engineers must understand how extreme temperatures, particle speeds and magnetic field variations will affect the plasma – the superheated gas where fusion happens – and the reactor materials designed to contain it. Predicting these plasma-material interactions is critical for understanding the function and safety of these machines.

    Brian Wirth of the University of Tennessee and the Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) is working with colleagues on one piece of this complex challenge: simulating tungsten, the metal that armors a key reactor component in ITER, the world’s largest tokamak fusion reactor, based in France.

    ITER Tokamak in Saint-Paul-lès-Durance, which is in southern France

    ITER is expected to begin first plasma experiments in 2025 with the hope of producing 10 times more power than is required to heat it. Wirth’s team is part of DOE’s Scientific Discovery through Advanced Computing (SciDAC) program and has collaborated with Advanced Tokamak Modeling (AToM), another SciDAC project, to develop computer codes that model the full range of plasma physics and material reactions inside a tokamak.

    “There’s no place today in a laboratory that can provide a similar environment to what we’re expecting on ITER,” Wirth says. “SciDAC and the high-performance computing (HPC) environment really give us an opportunity to simulate in advance how we expect the materials to perform, how we expect the plasma to perform, how we expect them to interact and talk to each other.” Modeling these features will help scientists learn about the effects of particular conditions and how long components might last. Such insights could support better design choices for fusion reactors.

    A tokamak’s doughnut-shaped reaction chamber confines rapidly moving, extremely hot, gaseous hydrogen ions – deuterium and tritium – and electrons within a strong magnetic field as a plasma, the fourth state of matter. The ions collide and fuse, spitting out alpha particles (two neutrons and two protons bound together) and neutrons. The particles release their kinetic energy as heat, which can boil water to produce steam that spins electricity-generating turbines. Today’s tokamaks don’t employ temperatures and magnetic fields high enough to produce self-sustaining fusion, but ITER could approach those benchmarks, over the next decades, toward producing 500 MW from 50 MW of input heat.

    Fusion plasmas must reach core temperatures up to hundreds of millions of degrees, and tokamak components could routinely experience temperatures approaching a thousand degrees – extreme conditions across a large range. Wirth’s group focuses on a component called the divertor, comprising 54 cassette assemblies that ring the doughnut’s base to funnel away waste gas and impurities. Each assembly includes a tungsten-armored plate supported by stainless steel. The divertor faces intensive plasma interactions. As the deuterium and tritium ions fuse, fast-moving neutrons, alpha particles and debris fall to the bottom of the reaction vessel and strike the divertor surface. Though only one part of the larger system, interactions between the metal and the reactive plasma have important implications for sustaining a fusion reaction and the durability of the divertor materials.

    Until recently, carbon fiber composites protected divertors and other plasma-facing tokamak components, but such surfaces can react with tritium and retain it, a process that also limits recycling, the return of tritium to the plasma to continue the fusion reaction. Tungsten, with a melting point of more than 3,400 degrees Celsius, is expected to be more resilient. However, as plasma interacts with it, the ions can implant in the metal, forming bubbles or even diffusing hundreds of nanometers below the surface. Wirth and his colleagues are looking at how that process degrades the tungsten and quantifying the extent to which these interactions deplete tritium from the plasma. Both of these issues affect the rate of fusion reactions over time and can even entirely shut down, or quench, the fusion plasma.

    Exploring these questions requires integrating approaches at different time and length scales. The researchers use other SciDAC project codes to model the fundamental characteristics of the background plasma at steady state and how that energetic soup will interact with the divertor surface. Those results feed into hPIC and F-TRIDYN, codes developed by Davide Curreli at the University of Illinois at Urbana-Champaign that describe the angles and energies of ions and alpha particles as they strike the tungsten surface. Building on those results, Wirth’s team can apply its own codes to characterize plasma particles as they interact with the tungsten and affect its surface.

    Developing these codes required combining top-down and bottom-up design approaches. To understand tungsten and its interaction with the helium ions (alpha particles) the fusion reaction produces, Wirth’s team has used molecular dynamics (MD) techniques. The simulations examined 20 million atoms, a relatively modest number compared with the largest calculations that approach 100 times that size, he notes. But they follow the materials for longer times, approximately 1.5 microseconds – about 1,500 times longer than most MD simulations. Those longer spans provide physics benchmarks for the top-down approach they developed to simulate the interactions of tungsten and plasma particles within cluster dynamics in a code called Xolotl, after the Aztec god of lightning and death. As part of this work, University of Tennessee graduate student Tim Younkin also has developed GITR (pronounced “guitar” for Global Impurity Transport). “With GITR we simulate all the species that are eroded off the surface, where do they ionize, what are their orbits following the plasma physics and dynamics of the electromagnetism, where do they redeposit,” Wirth says.
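    To give a flavor of what “following the orbits” means at the lowest level, the sketch below advances a single charged particle through prescribed electric and magnetic fields using the standard Boris scheme, a common integrator in plasma particle-tracking codes. It is a generic illustration rather than GITR’s implementation, and the charge-to-mass ratio, field values and time step are placeholder assumptions.

    ```python
    import numpy as np

    # Charged-particle orbit integration with the standard Boris scheme – a generic
    # sketch of the kind of trajectory step an impurity-transport code takes, not
    # GITR's implementation. Fields, charge state and time step are placeholders.

    Q_OVER_M = 5.2e5                          # C/kg, roughly a singly ionized tungsten ion
    E_FIELD = np.array([0.0, 0.0, 1.0e3])     # V/m, assumed uniform for this sketch
    B_FIELD = np.array([0.0, 0.0, 2.0])       # tesla, assumed uniform for this sketch

    def boris_step(x, v, dt):
        """Advance position x and velocity v by one time step dt."""
        qmdt2 = Q_OVER_M * dt / 2.0
        v_minus = v + qmdt2 * E_FIELD                  # first half of the electric kick
        t = qmdt2 * B_FIELD                            # magnetic rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)        # rotation about the magnetic field
        v_new = v_plus + qmdt2 * E_FIELD               # second half of the electric kick
        return x + dt * v_new, v_new

    x = np.zeros(3)                       # particle starts at the surface
    v = np.array([1.0e4, 0.0, 0.0])       # m/s, a sputtered ion leaving the wall
    for _ in range(1000):
        x, v = boris_step(x, v, dt=1.0e-9)
    print("position after 1 microsecond:", x)
    ```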

    The combination of codes has simulated several divertor operational scenarios on ITER, including a 100-second-long discharge of deuterium and tritium plasma designed to generate 100 MW of fusion power, about 20 percent of what researchers plan to achieve on ITER. Overall the team found that the plasma causes tungsten to erode and re-deposit. Helium particles tend to erode tungsten, which could be a problem, Wirth says, though sometimes they also seem to block tritium from embedding deep within the tungsten, which could be beneficial overall because it would improve recycling.

    Although these simulations are contributing important insights, they are just the first steps toward understanding realistic conditions within ITER. These initial models simulate plasma with steady heat and ion-particle fluxes, but conditions in an operating tokamak constantly change, Wirth notes, and could affect overall material performance. His group plans to incorporate those changes in future simulations.

    The researchers also want to model beryllium, an element used to armor the main fusion chamber walls. Beryllium will also be eroded, transported and deposited into divertors, possibly altering the tungsten surface’s behavior.

    The researchers must validate all of these results with experiments, some of which must await ITER’s operation. Wirth and his team also collaborate with the smaller WEST tokamak in France on experiments to validate their coupled SciDAC plasma-surface interaction codes.

    Ultimately Wirth hopes these integrated codes will provide HPC tools that can truly predict physical response in these extreme systems. With that validation, he says, “we can think about using them to design better-functioning material components for even more aggressive operating conditions that could enable fusion to put energy on the grid.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of The U.S. Department of Energy

     
  • richardmitnick 6:15 pm on May 22, 2019 Permalink | Reply
    Tags: ASCR Discovery

    From ASCR Discovery: “Lessons machine-learned” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing


    May 2019

    The University of Arizona’s Joshua Levine is using his Department of Energy Early Career Research Program award to combine machine learning and topology data-analysis tools to better understand trends within climate simulations. These map pairs represent data from January 1950 (top) and January 2010. The left panels depict near-surface air temperatures from hot (red) to cool (blue). In the multicolored images, Levine has used topological, or shape-based, data analysis to organize and color-code the temperature data into a tree-like hierarchy. As time passes, the data behavior around the North Pole (right panels) breaks into smaller chunks. These changes highlight the need for machine-learning tools to understand how these structures evolve over time. Images courtesy of Joshua Levine, University of Arizona, with data from CMIP6/ESGF.

    Quantifying the risks buried nuclear waste pose to soil and water near the Department of Energy’s (DOE’s) Hanford site in Washington state is not easy. Researchers can’t measure the earth’s permeability, a key factor in how far chemicals might travel, and mathematical models of how substances move underground are incomplete, says Paris Perdikaris of the University of Pennsylvania.

    But where traditional experimental and computational tools fall short, artificial intelligence algorithms can help, building their own inferences based on patterns in the data. “We can’t directly measure the quantities we’re interested in,” he says. “But using this underlying mathematical structure, we can construct machine-learning algorithms that can predict what we care about.”

    Perdikaris’ project is one of several sponsored by the DOE Early Career Research Program that apply machine-learning methods. One piece of his challenge is combining disparate data types such as images, simulations and time-resolved sensor information to find patterns. He will also constrain these models using physics and math, so the resulting predictions respect the underlying science and don’t make spurious connections based on data artifacts. “The byproduct of this is that you can significantly reduce the amount of data you need to make robust predictions. So you can save a lot in data efficiency terms.”
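    The flavor of constraining a fit with known physics comes through even in a toy one-dimensional example: reconstruct a field from a handful of noisy measurements while penalizing violations of an assumed governing relation, here u''(x) = f(x) enforced with finite differences. The sketch below is a linear least-squares stand-in for that general idea, not Perdikaris’ machine-learning formulation, and the grid, noise level and weighting are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Grid and the assumed "physics": the field u is taken to satisfy u''(x) = f(x).
    n = 50
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u_true = np.sin(np.pi * x)
    f = -np.pi**2 * np.sin(np.pi * x)       # known source term consistent with u_true

    # Sparse, noisy observations of u at a handful of interior grid points.
    obs_idx = rng.choice(np.arange(1, n - 1), size=5, replace=False)
    y = u_true[obs_idx] + 0.02 * rng.standard_normal(obs_idx.size)

    # Data-fit rows: select the observed grid points.
    S = np.zeros((obs_idx.size, n))
    S[np.arange(obs_idx.size), obs_idx] = 1.0

    # Physics rows: second-difference approximation of u'' at interior points.
    D2 = np.zeros((n - 2, n))
    for i in range(1, n - 1):
        D2[i - 1, i - 1:i + 2] = np.array([1.0, -2.0, 1.0]) / h**2

    # Boundary rows: u(0) = u(1) = 0 is assumed known.
    B = np.zeros((2, n))
    B[0, 0] = 1.0
    B[1, -1] = 1.0

    lam = 1e-3    # relative weight on the physics residual
    A = np.vstack([S, lam * D2, B])
    b = np.concatenate([y, lam * f[1:-1], [0.0, 0.0]])

    # Least-squares fit that balances matching the data and obeying the physics.
    u_fit, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("max error vs. true field:", float(np.abs(u_fit - u_true).max()))
    ```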

    Another key obstacle is quantifying the uncertainty within these calculations. Missing aspects of the physical model or physical data can affect the prediction’s quality. Besides studying subsurface transport, such algorithms could also be useful for designing new materials.

    Machine learning belongs to a branch of artificial intelligence algorithms that already support our smartphone assistants, manage our home devices and curate our movie and music playlists. Many machine-learning algorithms depend on tools known as neural networks, which mimic the human brain’s ability to filter, classify and draw insights from the patterns within data. Machine-learning methods could help scientists interpret a range of information. In some disciplines, experiments generate more data than researchers can hope to analyze on their own. In others, scientists might be looking for insights about their data and observations.

    But industry’s tools alone won’t solve science’s problems. Today’s machine-learning algorithms, though powerful, make inferences researchers can’t verify against established theory. And such algorithms might flag experimental noise as meaningful. But with algorithms designed to handle science’s tenets, machine learning could boost computational efficiency, allow researchers to compare, integrate and improve physical models, and shift the ways that scientists work.

    Much of industrial artificial intelligence work started with distinguishing, say, cats from Corvettes – analyzing millions of digital images in which data are abundant and have regular, pixelated structures. But with science, researchers don’t have the same luxury. Unlike the ubiquitous digital photos and language snippets that have powered image and voice recognition, scientific data can be expensive to generate, such as in molecular research experiments or large-scale simulations, says Argonne National Laboratory’s Prasanna Balaprakash.

    With his early-career award, he’s designing machine-learning methods that incorporate scientific knowledge. “How do we leverage that? How do we bring in the physics, the domain knowledge, so that an algorithm doesn’t need a lot of data to learn?” He’s also focused on adapting machine-learning algorithms to accept a wider range of data types, including graph-like structures used for encoding molecules or large-scale traffic network scenarios.

    Balaprakash also is exploring ways to automate the development of new machine-learning algorithms on supercomputers – a neural network for designing new neural networks. Writing these algorithms requires a lot of trial-and-error work, and a neural network built with one data type often can’t be used on a new data type.

    Although some fields have data bottlenecks, in other situations scientific instruments generate gobs of data – gigabytes, even petabytes, of results that are beyond human capability to review and analyze. Machine learning could help researchers sift this information and glean important insights. For example, experiments on Sandia National Laboratories’ Z machine, which compresses energy to produce X-rays and to study nuclear fusion, spew out data about material properties under these extreme conditions.

    Sandia Z machine

    When superheated, samples studied in the Z machine mix in a complex process that researchers don’t fully understand yet, says Sandia’s Eric Cyr. He’s exploring data-driven algorithms that can divine an initial model of this mixing, giving theoretical physicists a starting point to work from. In addition, combining machine-learning tools with simulation data could help researchers streamline their use of the Z machine, reducing the number of experiments needed to achieve accurate results and minimizing costs.

    To reach that goal, Cyr focuses on scalable machine algorithms, a technology known as layer-parallel methods. Today’s machine-learning algorithms have expanded from a handful of processing layers to hundreds. As researchers spread these layers over multiple graphics processing units (GPUs), the computational efficiency eventually breaks down. Cyr’s algorithms would split the neural-network layers across processors as the algorithm trains on the problem of interest, he says. “That way if you want to double the number of layers, basically make your neural network twice as deep, you can use twice as many processors and do it in the same amount of time.”

    With problems such as climate and weather modeling, researchers struggle to incorporate the vast range of scales, from globe-circling currents to local eddies. To tackle this problem, Oklahoma State University’s Omer San will apply machine learning to study turbulence in these types of geophysical flows. Researchers must construct a computational grid to run these simulations, but they have to define the scale of the mesh, perhaps 100 kilometers across, to encompass the globe and produce a calculation of manageable size. At that scale, it’s impossible to simulate a range of smaller factors, such as vortices just a few meters wide that can produce important, outsized effects across the whole system because of nonlinear interactions. Machine learning could provide a way to add back in some of these fine details, San says, like software that sharpens a blurry photo.

    Machine learning also could help guide researchers as they choose from the available closure models, or ways to model smaller-scale features, as they examine various flow types. It could be a decision-support system, San says, using local data to determine whether Model A or Model B is a better choice. His group also is examining ways to connect existing numerical methods within neural networks, to allow those techniques to partially inform the systems during the learning process, rather than doing blind analysis. San wants “to connect all of these dots: physics, numerics and the learning framework.”

    Machine learning also promises to help researchers extend the use of mathematical strategies that already support data analysis. At the University of Arizona, Joshua Levine is combining machine learning with topological data-analysis tools.

    These strategies capture data’s shape, which can be useful for visualizing and understanding climate patterns, such as surface temperatures over time. Levine wants to extend topology, which helps researchers analyze a single simulation, to multiple climate simulations with different parameters to understand them as a whole.

    As climate scientists use different models, they often struggle to figure out which ones are correct. “More importantly, we don’t always know where they agree and disagree,” Levine says. “It turns out agreement is a little bit more tractable as a problem.” Researchers can do coarse comparisons – calculating the average temperature across the Earth and checking the models to see if those simple numbers agree. But that basic comparison says little about what happened within a simulation.

    Topology can help match those average values with their locations, Levine says. “So it’s not just that it was hotter over the last 50 years, but maybe it was much hotter in Africa over the last 50 years than it was in South America.”
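    One building block of that shape-based view is simple to sketch: pick a threshold, and track the connected regions of the field that sit above it – the pieces a merge tree summarizes as the threshold sweeps. The snippet below labels those regions for a single threshold on a synthetic temperature-anomaly field and reports each region’s size and rough location; it illustrates the concept only and is not Levine’s tooling, and the grid, smoothing and threshold are arbitrary.

    ```python
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(7)

    # Synthetic stand-in for a gridded near-surface temperature anomaly field (lat x lon).
    field = ndimage.gaussian_filter(rng.standard_normal((90, 180)), sigma=5)

    # Superlevel set at one threshold: the connected "warm" regions a merge tree
    # would track as the threshold sweeps from high to low.
    threshold = 0.05
    labels, n_regions = ndimage.label(field > threshold)

    print(f"{n_regions} connected warm regions above {threshold}")
    for region_id in range(1, n_regions + 1):
        mask = labels == region_id
        row, col = ndimage.center_of_mass(mask)
        print(f"  region {region_id}: {int(mask.sum())} grid cells, "
              f"centered near (row {row:.0f}, col {col:.0f}), "
              f"mean anomaly {field[mask].mean():+.3f}")
    ```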

    All of these projects involve blending machine learning with other disciplines to capitalize on each area’s relative strengths. Computational physics, for example, is built on well-defined principles and mathematical models. Such models provide a good baseline for study, Penn’s Perdikaris says. “But they’re a little bit sterilized and they don’t directly reflect the complexity of the real world.” By contrast, up to now machine learning has only relied on data and observations, he says, throwing away a scientist’s physical knowledge of the world. “Bridging the two approaches will be key in advancing our understanding and enhancing our ability to analyze and predict complex phenomena in the future.”

    Although Argonne’s Balaprakash notes that machine learning has been oversold in some cases, he also believes it will be a transformative research tool, much like the Hubble telescope was for astronomy. “It’s a really promising research area.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of The U.S. Department of Energy

     
  • richardmitnick 1:38 pm on December 5, 2018 Permalink | Reply
    Tags: ASCR Discovery, Astronomical magnetism, NASA’s Pleiades supercomputer, Nick Featherstone – University of Colorado Boulder

    From ASCR Discovery: “Astronomical magnetism” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing

    Modeling solar and planetary magnetic fields is a big job that requires a big code.

    Convection models of the sun, with increasing amounts of rotation from left to right. Warm flows (red) rise to the surface while others cool (blue). These simulations are the most comprehensive high-resolution models of solar convection so far. See video here.

    Image courtesy of Nick Featherstone, University of Colorado Boulder.

    It’s easy to take the Earth’s magnetic field for granted. It’s always on the job, shielding our life-giving atmosphere from the corrosive effects of unending solar radiation. Its constant presence also gives animals – and us – clues to find our way around.

    This vital force has protected the planet since long before humans evolved, yet its source – the giant generator of a heat-radiating, electricity-conducting liquid iron core swirling as the planet rotates – still holds mysteries. Understanding the vast and complex turbulent features of Earth’s dynamo – and that of other planets and celestial bodies – has challenged physicists for decades.

    “You can always do the problem you want to, but just a little bit,” says Nick Featherstone, research associate at the University of Colorado Boulder. Thanks to his efforts, however, researchers now have a computer code that lets them come closer than ever to simulating these features in detail across a whole planet or star. The program, known as Rayleigh, is open-source and available to anyone.

    To demonstrate the power of Rayleigh’s algorithms, a research team has simulated the dynamics of the sun, Jupiter and Earth in unprecedented detail. The project has been supported with a Department of Energy Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program allocation of 260 million processor hours on Mira, an IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility, a Department of Energy user facility.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    2
    Earth’s liquid metal core produces a complex combination of outward (red) and inward (blue) flows in this dynamo simulation. Image courtesy of Rakesh Yadav, Harvard University.

    This big code stemmed from Featherstone’s research in solar physics. Previously scientists had used computation to model solar features on as many as a few hundred processor cores simultaneously, or in parallel. But Featherstone wanted to tackle larger problems that were intractable using available technology. “I spent a lot of time actually looking at the parallel algorithms that were used in that code and seeing where I could change things,” he says.

    When University of California, Los Angeles geophysicist Jonathan Aurnou saw Featherstone present his work at a conference in 2012, he was immediately impressed. “Nick has built this huge, huge capability,” says Aurnou, who leads the Geodynamo Working Group in the Computational Infrastructure for Geodynamics (CIG) based at the University of California, Davis. Though stars and planets can behave very differently, the dynamo in these bodies can be modeled with adjustments to the same fundamental algorithms.

    Aurnou soon recruited Featherstone to develop a community code – one researchers could share and improve – based on his earlier algorithms. The team initially performed simulations on up to 10,000 cores of NASA’s Pleiades supercomputer.

    NASA SGI Intel Advanced Supercomputing Center Pleiades Supercomputer

    But the scientists wanted to go bigger. Previous codes are like claw hammers, but “this code – it’s a 30-pound sledge,” Aurnou says. “That changes what you can swing at.”

    In 2014 Aurnou, Featherstone and their colleagues proposed three big INCITE projects focusing on three bodies in our solar system: the sun, a star; Jupiter, a gas giant planet; and Earth, a rocky planet. Mira’s 786,000 processor cores let the team scale up their calculations by a factor of 100, Featherstone says. Adds Aurnou, “You can think of Mira as a place to let codes run wild, a safari park for big codes.”

    The group focused on one problem each year, starting with Featherstone’s specialty: the sun. In its core, hydrogen atoms fuse to form helium, releasing high-energy photons that bounce around a dense core for thousands of years. They eventually diffuse to an outer convecting layer, where they warm plasma pockets, causing them to rise to the surface. Finally, the energy reaches the surface, the photosphere, where it can escape, reaching Earth as light within minutes. Like planets, the sun rotates, producing chaotic forces and its own magnetic poles that reverse every 11 years. The processes that cause this magnetic reversal remain largely unknown.

    Featherstone broke down this complex mixture of activity into components across the whole star. “What I’ve been able to do with the INCITE program is to start modeling convection in the sun both with and without rotation turned on and at very, very high resolution,” Featherstone says. The researchers plan to incorporate magnetism into the models next.

    The team then moved on to Jupiter, aiming to predict and model the results of NASA’s Juno probe, which orbits that planet. In Jupiter’s core – the innermost 95 percent – hydrogen is compressed so tightly that the electrons pop off. The mass behaves like a metal ball, Aurnou says. Its core also releases heat in an amount equal to what the planet receives from the sun. All that convective turbulence also rotates, creating a potent planetary magnetic field, he says.

    Until recent results from Juno, scientists didn’t know that surface jets on Jupiter extend deep – thousands of kilometers – into the planet. Juno’s images reveal clusters of geometric turbulence – pentagons, octagons and more – grouped around the Jovian poles.

    3
    A model of interacting vortices simulating turbulent jets that resemble those observed on Jupiter. Yellow features are rotating counterclockwise, while blue features rotate clockwise. Image courtesy of Moritz Heimpel, University of Alberta.

    Even before the Juno results were published in March, the CIG team had simulated deep jets and their interactions with Jupiter’s surface and magnetic core. The team is well-poised to help physicists better understand these unusual stormy features, Aurnou adds. “We’re going to be using our big simulations and the analysis that we’re now carrying out to try to understand the Juno data.”

    In its third year the team modeled the behavior of Earth’s magnetic field, a system where they had far more data from observations. Nonetheless, our home still harbors geophysical puzzles. Earth has an outer core of molten iron and a hard rocky crust that contains it. The magnetic poles drift – and can even flip – but the process takes a few hundred thousand years and doesn’t occur on a regular schedule. “Earth’s magnetic field is complex – messy – both in time and space,” Aurnou says. “That mess is where all the fun is.”

    Turbulence is difficult to simulate because it includes the cumulative effects of minuscule changes coupled with processes that are occurring over large parts of a planet.

    “[In our Earth model] we’ve made, in a sense, as messy a dynamo simulation as possible,” Aurnou says. Previous researchers modeling Earth have argued that tweaks to physics were needed to explain features such as the constant magnetic-pole shifts. “We’ve actually found with our Mira runs, that, no, we don’t need any extra ingredients. We just need turbulence.”

    With these results, the team hopes to pare down simulations to incorporate the simplest set of inputs needed to understand our complex terrestrial system.

    The INCITE project results are fueling new research opportunities already. Based on the team’s solar findings, in 2017 Featherstone received a $1 million grant from NASA’s Heliophysics Grand Challenge program, which supports research into solar physics problems that require both theory and observation.

    The project shows how federal funding can dovetail to help important science reach its potential, Aurnou says. CIG originally hired Featherstone using National Science Foundation funds, which led to the INCITE grant, followed by this NASA project, which will model even more of the sun’s fundamental physics. That information could help protect astronauts from solar radiation and shield our electrical grids from damage and outages during periods of high solar activity.

    Eventually the team would like to model the reversal of magnetic poles on Earth, which requires accounting for daily rotation over hundreds of thousands of years. “That’s going to cost us,” Aurnou says. “We need to get a more efficient code for that and faster computers.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of the U.S. Department of Energy

     
  • richardmitnick 5:58 pm on October 17, 2018 Permalink | Reply
    Tags: ASCR Discovery, , , Quantum predictions   

    From ASCR Discovery: “Quantum predictions” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing

    1
    Mechanical strain, pressure or temperature changes or adding chemical doping agents can prompt an abrupt switch from insulator to conductor in materials such as nickel oxide (pictured here). Nickel ions (blue) and oxygen ions (red) surround a dopant ion of potassium (yellow). Quantum Monte Carlo methods can accurately predict regions where charge density (purple) will accumulate in these materials. Image courtesy of Anouar Benali, Argonne National Laboratory.

    Solving a complex problem quickly requires careful tradeoffs – and simulating the behavior of materials is no exception. To get answers that predict molecular workings feasibly, scientists must swap in mathematical approximations that speed computation at accuracy’s expense.

    But magnetism, electrical conductivity and other properties can be quite delicate, says Paul R.C. Kent of the Department of Energy’s (DOE’s) Oak Ridge National Laboratory. These properties depend on quantum mechanics, the movements and interactions of myriad electrons and atoms that form materials and determine their properties. Researchers who study such features must model large groups of atoms and molecules rather than just a few. This problem’s complexity demands boosting computational tools’ efficiency and accuracy.

    That’s where a method called quantum Monte Carlo (QMC) modeling comes in. Many other techniques approximate electrons’ behavior as an overall average, for example, rather than considering them individually. QMC enables accounting for the individual behavior of all of the electrons without major approximations, reducing systematic errors in simulations and producing reliable results, Kent says.
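
    For a flavor of how Monte Carlo sampling enters a quantum calculation, here is a toy variational Monte Carlo estimate of the ground-state energy of a single particle in a one-dimensional harmonic oscillator (units where hbar = m = omega = 1). The trial wavefunction and sampling scheme are textbook choices picked for brevity; production codes such as QMCPACK handle many interacting electrons and are far more sophisticated.

# Toy variational Monte Carlo for a 1-D harmonic oscillator (hbar = m = omega = 1).
# Textbook illustration only; real QMC treats many interacting electrons.
import numpy as np

rng = np.random.default_rng(42)
alpha = 0.4   # variational parameter in the trial wavefunction psi(x) ~ exp(-alpha * x**2)

def local_energy(x):
    # E_L(x) = -(1/2) psi''/psi + x^2/2 for the Gaussian trial wavefunction above
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

x = 0.0
energies = []
for step in range(100_000):
    x_new = x + rng.uniform(-1.0, 1.0)
    # Metropolis step sampling |psi|^2, which is proportional to exp(-2 * alpha * x^2)
    if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
        x = x_new
    if step >= 5_000:             # discard burn-in samples
        energies.append(local_energy(x))

print(f"VMC energy estimate: {np.mean(energies):.4f}  (exact ground state: 0.5)")

    Because the trial parameter alpha is deliberately set away from its optimal value of 0.5, the estimate lands slightly above the exact energy, as the variational principle guarantees.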

    Kent’s interest in QMC dates back to his Ph.D. research at Cambridge University in the 1990s. At ORNL, he recently returned to the method because advances in both supercomputer hardware and in algorithms had allowed researchers to improve its accuracy.

    “We can do new materials and a wider fraction of elements across the periodic table,” Kent says. “More importantly, we can start to do some of the materials and properties where the more approximate methods that we use day to day are just unreliable.”

    Even with these advances, simulations of these types of materials – ones that include up to a few hundred atoms and thousands of electrons – require computational heavy lifting. Kent leads a DOE Basic Energy Sciences Center, the Center for Predictive Simulations of Functional Materials (CPSFM), which includes researchers from ORNL, Argonne National Laboratory, Sandia National Laboratories, Lawrence Livermore National Laboratory, the University of California, Berkeley and North Carolina State University.

    Their work is supported by a DOE Innovative and Novel Computational Impact on Theory and Experiments (INCITE) allocation of 140 million processor hours, split between Oak Ridge Leadership Computing Facility’s Titan and Argonne Leadership Computing Facility’s Mira supercomputers. Both computing centers are DOE Office of Science user facilities.

    ORNL Cray Titan XK7 Supercomputer

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    To take QMC to the next level, Kent and colleagues start with materials such as vanadium dioxide that display unusual electronic behavior. At cooler temperatures, this material insulates against the flow of electricity. But at just above room temperature, vanadium dioxide abruptly changes its structure and behavior.

    Suddenly this material becomes metallic and conducts electricity efficiently. Scientists still don’t understand exactly how and why this occurs. Factors such as mechanical strain, pressure or doping the materials with other elements also induce this rapid transition from insulator to conductor.

    However, if scientists and engineers could control this behavior, these materials could be used as switches, sensors or, possibly, the basis for new electronic devices. “This big change in conductivity of a material is the type of thing we’d like to be able to predict reliably,” Kent says.

    Laboratory researchers also are studying these insulator-to-conductor transitions in experiments. That validation effort lends confidence to the predictive power of their computational methods in a range of materials. The team has built open-source software, known as QMCPACK, that is now available online and on all of the DOE Office of Science computational facilities.

    Kent and his colleagues hope to build up to high-temperature superconductors and other complex and mysterious materials. Although scientists know these materials’ broad properties, Kent says, “we can’t relate those to the actual structure and the elements in the materials yet. So that’s a really grand challenge for the condensed-matter physics field.”

    The most accurate quantum mechanical modeling methods restrict scientists to examining just a few atoms or molecules. When scientists want to study larger systems, the computation costs rapidly become unwieldy. QMC offers a compromise: a calculation’s size increases cubically relative to the number of electrons, a more manageable challenge. QMC incorporates only a few controlled approximations and can be applied to the numerous atoms and electrons needed. It’s well suited for today’s petascale supercomputers – capable of one quadrillion calculations or more each second – and tomorrow’s exascale supercomputers, which will be at least a thousand times faster. The method maps simulation elements relatively easily onto the compute nodes in these systems.
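
    To make that cubic scaling concrete, a quick back-of-the-envelope estimate (the baseline electron count is arbitrary):

# Back-of-the-envelope illustration of cubic cost scaling; numbers are invented.
def relative_cost(n_electrons, n_reference=1_000):
    """Cost relative to a reference system, assuming work grows as the cube of the electron count."""
    return (n_electrons / n_reference) ** 3

for n in (1_000, 2_000, 4_000):
    print(f"{n:>5} electrons -> roughly {relative_cost(n):.0f}x the reference cost")
# Doubling the electron count costs about 8x; quadrupling it costs about 64x.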

    The CPSFM team continues to optimize QMCPACK for ever-faster supercomputers, including OLCF’s Summit, which will be fully operational in January 2019.

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    The higher memory capacity on that machine’s Nvidia Volta GPUs – 16 gigabytes per graphics processing unit compared with 6 gigabytes on Titan – already boosts computation speed. With the help of OLCF’s Ed D’Azevedo and Andreas Tillack, the researchers have implemented improved algorithms that can double the speed of their larger calculations.

    QMCPACK is part of DOE’s Exascale Computing Project, and the team is already anticipating additional scaling challenges for running QMCPACK on future machines. To perform the desired simulations within roughly 12 hours on an exascale supercomputer, Kent estimates that they’ll need algorithms that are 30 times more scalable than those within the current version.

    Depiction of ANL ALCF Cray Shasta Aurora exascale supercomputer

    Even with improved hardware and algorithms, QMC calculations will always be expensive. So Kent and his team would like to use QMCPACK to understand where cheaper methods go wrong so that they can improve them. Then they can save QMC calculations for the most challenging problems in materials science, Kent says. “Ideally we will learn what’s causing these materials to be very tricky to model and then improve cheaper approaches so that we can do much wider scans of different materials.”

    The combination of improved QMC methods and a suite of computationally cheaper modeling approaches could lead the way to new materials and an understanding of their properties. Designing and testing new compounds in the laboratory is expensive, Kent says. Scientists could save valuable time and resources if they could first predict the behavior of novel materials in a simulation.

    Plus, he notes, reliable computational methods could help scientists understand properties and processes that depend on individual atoms that are extremely difficult to observe using experiments. “That’s a place where there’s a lot of interest in going after the fundamental science, predicting new materials and enabling technological applications.”

    Oak Ridge National Laboratory is supported by the Department of Energy’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of the U.S. Department of Energy

     
  • richardmitnick 11:57 am on August 22, 2018 Permalink | Reply
    Tags: , , , ASCR Discovery, Fine-tuning physics, ,   

    From ASCR Discovery and Argonne National Lab: “Fine-tuning physics” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing

    August 2018

    Argonne applies supercomputing heft to boost precision in particle predictions.

    2
    A depiction of a scattering event at the Large Hadron Collider. Image courtesy of Argonne National Laboratory.

    Advancing science at the smallest scales calls for vast data from the world’s most powerful particle accelerator, leavened with the precise theoretical predictions made possible through many hours of supercomputer processing.

    The combination has worked before, when scientists from the Department of Energy’s Argonne National Laboratory provided timely predictions about the Higgs particle at the Large Hadron Collider in Switzerland. Their predictions contributed to the 2012 discovery of the Higgs, the subatomic particle that gives elementary particles their mass.


    CERN CMS Higgs Event


    CERN ATLAS Higgs Event

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    “That we are able to predict so precisely what happens around us in nature is a remarkable achievement,” Argonne physicist Radja Boughezal says. “To put all these pieces together to get a number that agrees with the measurement that was made with something so complicated as the LHC is always exciting.”

    Earlier this year, she was allocated more than 98 million processor hours on the Mira and Theta supercomputers at the Argonne Leadership Computing Facility, a DOE Office of Science user facility, through DOE’s INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ANL ALCF Theta Cray XC40 supercomputer

    Her previous INCITE allocation helped solve problems that scientists saw as insurmountable just two or three years ago.

    These problems stem from the increasingly intricate and precise measurements and theoretical calculations associated with scrutinizing the Higgs boson and from searches for subtle deviations from the standard model that underpins the behavior of matter and energy.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.


    Standard Model of Particle Physics from Symmetry Magazine

    The approach she and her associates developed led to early, high-precision LHC predictions that describe so-called strong-force interactions between quarks and gluons, the constituents of subatomic particles such as protons and neutrons.

    The theory governing strong-force interactions is called QCD, for quantum chromodynamics. In QCD, the quantity that sets the strength of the strong force is called the strong coupling constant.

    “At high energies, when collisions happen, quarks and gluons are very close to each other, so the strong force is very weak. It’s almost turned off,” Boughezal explains. Because the coupling is so small in that regime, physicists can expand their predictions in powers of it – a method called perturbative expansion – which gives them a yardstick for their calculations. Perturbative expansion is “a method we have used over and over to get these predictions, and it has provided powerful tests of QCD to date.”
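
    Schematically, such a perturbative prediction is a series in the small coupling. The expression below is a generic textbook form, with placeholder coefficients c_1 and c_2 and renormalization scale mu, not the team’s actual result:

% Illustrative form of a perturbative QCD expansion (placeholders, not the team's formulas)
\sigma \;=\; \sigma_{\mathrm{LO}}\left[\,1 + c_1\,\alpha_s(\mu) + c_2\,\alpha_s^2(\mu) + \mathcal{O}\!\left(\alpha_s^3\right)\right],
\qquad \alpha_s(M_Z) \approx 0.118

    Each additional order in alpha_s tightens the theoretical uncertainty, which is why these higher-precision predictions demand so much computing.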

    Crucial to these tests is the N-jettiness framework Boughezal and her Argonne and Northwestern University collaborators devised to obtain high-precision predictions for particle scattering processes. Specially adapted for high-performance computing systems, the framework’s novelty stems from its incorporation of existing low-precision numerical codes to achieve part of the desired result. The scientists fill in algorithmic gaps with simple analytic calculations.

    The LHC data lined up completely with predictions the team had obtained from running the N-jettiness code on the Mira supercomputer at Argonne. The agreement carries important implications for the precision goals physicists are setting for future accelerators such as the proposed Electron-Ion Collider (EIC).

    “One of the things that has puzzled us for 30 years is the spin of the proton,” Boughezal says. Planners hope the EIC reveals how the spin of the proton, matter’s basic building block, emerges from its elementary constituents, quarks and gluons.

    Boughezal also is working with LHC scientists in the search for dark matter, which – together with dark energy – accounts for about 96 percent of the contents of the universe. The remainder is ordinary matter, the atoms and molecules that form stars, planets and people.

    “Scientists believe that the mysterious dark matter in the universe could leave a missing energy footprint at the LHC,” she says. Such a footprint would reveal the existence of a new particle that’s currently missing from the standard model. Dark matter particles interact weakly with the LHC’s detectors. “We cannot see them directly.”

    They could, however, be produced with a jet – a spray of standard-model particles made from LHC proton collisions. “We can measure that jet. We can see it. We can tag it.” And by using simple laws of physics such as the conservation of momentum, even if the particles are invisible, scientists would be able to detect them by measuring the jet’s energy.
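
    The bookkeeping behind that statement is simple vector arithmetic in the plane transverse to the beams: whatever the detector cannot see must balance the momentum of what it does see. The sketch below uses invented jet momenta, not LHC data.

# Toy missing-transverse-momentum calculation; the visible momenta are invented.
import numpy as np

# Transverse-momentum vectors (px, py) in GeV for the visible objects
# reconstructed in a made-up event: one hard jet plus soft debris.
visible = np.array([
    [250.0,  10.0],   # hard jet
    [-15.0,  -8.0],   # soft hadronic activity
    [ -5.0,   3.0],   # more soft activity
])

# Momentum conservation in the transverse plane: whatever escaped unseen
# must balance the vector sum of what was seen.
missing = -visible.sum(axis=0)
met = np.linalg.norm(missing)
print(f"Missing transverse momentum: {met:.1f} GeV")

    A large imbalance recoiling against a single hard jet is exactly the “missing energy footprint” described above.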

    For example, when subatomic particles called Z bosons are produced with particle jets, the bosons can decay into neutrinos, ghostly specks that rarely interact with ordinary matter. The neutrinos appear as missing energy in the LHC’s detectors, just as a dark matter particle would.

    In July 2017, Boughezal and three co-authors published a paper in the Journal of High Energy Physics. It was the first to describe new proton-structure details derived from precision high-energy Z-boson experimental data.

    
    “If you want to know whether what you have produced is actually coming from a standard model process or something else that we have not seen before, you need to predict your standard model process very well,” she says. If the theoretical predictions deviate from the experimental data, it suggests new physics at play.


    In fact, Boughezal and her associates have precisely predicted the standard-model jet process, and the prediction agrees with the data. “So far we haven’t produced dark matter at the LHC.”

    Previously, however, the results were so imprecise – and the margin of uncertainty so high – that physicists couldn’t tell whether they’d produced a standard-model jet or something entirely new.

    What surprises will higher-precision calculations reveal in future LHC experiments?

    “There is still a lot of territory that we can probe and look for something new,” Boughezal says. “The standard model is not a complete theory because there is a lot it doesn’t explain, like dark matter. We know that there has to be something bigger than the standard model.”

    Argonne is managed by UChicago Argonne LLC for the DOE Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of the U.S. Department of Energy

     
  • richardmitnick 3:07 pm on March 15, 2017 Permalink | Reply
    Tags: ASCR Discovery, Coding a Starkiller, , ,   

    From OLCF via ASCR and DOE: “Coding a Starkiller” 

    i1

    Oak Ridge National Laboratory

    OLCF

    ASCR

    March 2017

    The Titan supercomputer and a tool called Starkiller help Stony Brook University-led team simulate key moments in exploding stars.

    1
    A volume rendering of the density after 0.6 and 0.9 solar mass white dwarfs merge. The image is derived from a calculation performed on the Oak Ridge Leadership Computing Facility’s Titan supercomputer. The model used Castro, an adaptive mesh astrophysical radiation hydrodynamics simulation code. Image courtesy of Stony Brook University / Max Katz et al.

    The spectacular Supernova 1987A, whose light reached Earth on Feb. 23 of the year it’s named for, captured the public’s fancy. It’s located in the Large Magellanic Cloud, a dwarf galaxy that orbits the Milky Way. It had been four centuries since earthlings had witnessed light from a star exploding in our galaxy.

    1
    NASA

    A supernova’s awesome light show heralds a giant star’s death, and the next supernova’s post-mortem will generate reams of data, compared to the paltry dozen or so neutrinos and X-rays harvested from the 1987 event.

    Astrophysicists Michael Zingale and Bronson Messer aren’t waiting. They’re aggressively anticipating the next supernova by leading teams in high-performance computer simulations of explosive stellar events, including different supernova types and their accompanying X-ray bursts. Zingale, of Stony Brook University, and Messer, of the Department of Energy’s Oak Ridge National Laboratory (ORNL), are in the midst of an award from the DOE Office of Science’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. It provides an allocation of 45 million processor hours of computer time on Titan, a Cray XK7 that’s one of the world’s most powerful supercomputers, at the Oak Ridge Leadership Computing Facility, or OLCF – a DOE Office of Science user facility.

    The simulations run on workhorse codes developed by the INCITE collaborators and at the DOE’s Lawrence Berkeley National Laboratory – codes that “are often modified toward specific problems,” Zingale says. “And the common problem we share with ORNL is that we have to put more and more of our algorithms on the Titan graphics processor units (GPUs),” specialized computer chips that accelerate calculations. While the phenomena they’re modeling “are really far away and on scales that are hard to imagine,” the codes have other applications closer to home: “terrestrial phenomena, like terrestrial combustion.” The team’s codes – Maestro, Castro, Chimera and FLASH – are freely available to other modelers through the online code repository GitHub.

    With a previous INCITE award, the researchers realized the possibility of attacking the GPU problem together. They envisioned codes composed of multiphysics modules that compute the pieces common to most kinds of explosive events, Messer says. They dubbed the growing collection of GPU-enabled modules Starkiller.

    “Starkiller ties this INCITE project together,” he says. “We realized we didn’t want to reinvent the wheel with each new simulation.” For example, a module that tracks nuclear burning helps the researchers create larger networks for nucleosynthesis, a supernova process in which elements form in the turbulent flow on the stellar surface.

    “In the past, we were able to do only a little more than a dozen different elements, and now we’re routinely doing 150,” Messer says. “We can make the GPU run so much faster. That’s part of Titan’s advantage to us.”
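
    For flavor only, the sketch below integrates a toy two-species “burning” network as a stiff system of ordinary differential equations: fuel converts to ash at a temperature-sensitive rate while releasing energy. The species, rate law and numbers are invented and bear no relation to the Starkiller modules’ actual reaction networks, which track up to 150 nuclear species.

# Toy two-species nuclear-burning network; species, rates and numbers are invented.
import numpy as np
from scipy.integrate import solve_ivp

Q = 5.0e17            # energy released per gram of fuel burned (erg/g) -- invented
A = 10.0              # rate prefactor (1/s) -- invented
T_ACT = 5.0e9         # activation temperature (K) -- invented

def rhs(t, y, temperature=2.0e9):
    fuel, ash, energy = y
    rate = A * np.exp(-T_ACT / temperature) * fuel   # Arrhenius-like burning rate
    return [-rate, rate, Q * rate]

sol = solve_ivp(rhs, (0.0, 1.0), y0=[1.0, 0.0, 0.0], method="BDF", rtol=1e-8)
print(f"fuel left: {sol.y[0, -1]:.3f}, ash: {sol.y[1, -1]:.3f}, "
      f"energy released: {sol.y[2, -1]:.2e} erg/g")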

    Supernova 1987A, a type II supernova, arose from the gravitational collapse of a stellar core, the consistent fate of massive stars. Type Ia supernovae follow from intense thermonuclear burning that eventually drives the explosion of a white dwarf – the compact remnant left behind when a star has exhausted its nuclear fuel. Zingale’s group is focused on type Ia, Messer’s on type II. A type II leaves a remnant star; a type Ia does not.

    Stars like the sun burn hydrogen into helium and, over enormous stretches of time, burn the helium into carbon. Once our sun starts burning carbon, it will gradually peter out, Messer says, because it’s not massive enough to turn the carbon into something heavier.

    “A star begins life as a big ball of hydrogen, and its whole life is this fight between gravity trying to suck it into the middle and thermonuclear reactions keeping it supported against its own gravity,” he adds. “Once it gets to the point where it’s burning some carbon, the sun will just give up. It will blow a big smoke ring into space and become a planetary nebula, and at the center it will become a white dwarf.”

    Zingale is modeling two distinct thermonuclear modes. One is for a white dwarf in a binary system – two stars orbiting one another – that consumes additional material from its partner. As the white dwarf grows in mass, it gets hotter and denser in the center, creating conditions that drive thermonuclear reactions.

    “This star is made mostly of carbon and oxygen,” Zingale says. “When you get up to a few hundred million K, you have densities of a few billion grams per cubic centimeter. Carbon nuclei get fused and make things like neon and sodium and magnesium, and the star gets energy out in that process. We are modeling the star’s convection, the creation of a rippling burning front that converts the carbon and oxygen into heavier elements such as iron and nickel. This creates such an enormous amount of energy that it overcomes the force of gravity that’s holding the star together, and the whole thing blows apart.”

    The other mode is being modeled with former Stony Brook graduate student and INCITE co-principal investigator Max Katz, who wants to understand whether merging stars can create a burning point that leads to a supernova, as some observations suggest. His simulations feature two white dwarfs so close that they emit gravitational radiation, robbing energy from the system and causing the stars to spiral inward. Eventually, they get so close that the more massive one rips the lesser apart via tidal forces.
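
    The energy drain Katz models is the familiar gravitational-wave inspiral. For a circular orbit, the classic Peters (1964) formula gives the time to merger; plugging in illustrative white-dwarf masses and an assumed separation shows how sensitively that time depends on how close the pair starts:

# Gravitational-wave merger time for a circular binary (Peters 1964 formula).
# The masses and separation below are illustrative, not taken from the simulations.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
YEAR = 3.156e7      # seconds per year

m1, m2 = 0.9 * M_SUN, 0.6 * M_SUN   # illustrative white-dwarf masses
a = 2.0e7                           # assumed orbital separation, meters (about 20,000 km)

t_merge = 5.0 * c**5 * a**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))
print(f"Time to merger by gravitational-wave emission: {t_merge / YEAR:.1f} years")

    Halving the assumed separation shortens the merger time sixteenfold, since it scales as the fourth power of the separation.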

    Zingale’s group also continues to model the convective burning on the surfaces of neutron stars, known as X-ray bursts, providing a springboard to more in-depth studies. He says they’re the first to simulate them in three dimensions. That work and additional supernova studies were supported by the DOE Office of Science and performed at OLCF and the National Energy Research Scientific Computing Center, a DOE Office of Science user facility at Lawrence Berkeley National Laboratory.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

    i2

    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid-architecture systems – a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to processing an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     