Tagged: Supercomputing

  • richardmitnick 8:28 pm on January 23, 2022
    Tags: "Updated exascale system for Earth simulations delivers twice the speed", , Supercomputing,   

    From DOE’s Oak Ridge National Laboratory (US): “Updated exascale system for Earth simulations delivers twice the speed” 


    January 20, 2022

    Kimberly A Askey
    askeyka@ornl.gov
    865.576.2841

    The Energy Exascale Earth System Model project reliably simulates aspects of Earth system variability and projects decadal changes that will critically impact the U.S. energy sector in the future. A new version of the model delivers twice the performance of its predecessor. Credit: E3SM, Dept. of Energy.

    A new version of the Energy Exascale Earth System Model, or E3SM, is two times faster than an earlier version released in 2018.

    Earth system models have weather-scale resolution and use advanced computers to simulate aspects of Earth’s variability and anticipate decadal changes that will critically impact the U.S. energy sector in coming years.

    Scientists at the Department of Energy’s Oak Ridge National Laboratory are part of the team that developed version 2 of the model — E3SMv2 — which was released to the scientific community in September 2021.

    “E3SMv2 delivered twice the performance over E3SMv1 when using identical computational resources,” said ORNL computational scientist Sarat Sreepathi, who co-leads the E3SM Performance Group. “This is a significant achievement as the performance boost is reflected while running the fully integrated Earth system model and not just confined to smaller model components.”

    The Earth, with its myriad interactions of atmosphere, oceans, land and ice components, presents an extraordinarily complex system for investigation. Earth system simulation involves solving approximations of physical, chemical and biological governing equations on spatial grids at resolutions that are as fine in scale as computing resources will allow.

    “Even with the addition of new features in E3SMv2 to the atmosphere model and how it represents precipitation and clouds, we still doubled the model throughput,” Sreepathi said. “To put it another way, we cut the computational run time or time-to-solution in half.”
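    That factor-of-two arithmetic is simple but worth making concrete. Below is a minimal sketch in Python using made-up throughput figures, not actual E3SM benchmark numbers, to show how doubling throughput on fixed resources halves the wall-clock time for a campaign of a given length.

    # Hypothetical throughput figures, in simulated years per wall-clock day (SYPD).
    v1_sypd = 10.0
    v2_sypd = 2.0 * v1_sypd          # "twice the performance" on identical resources

    campaign_years = 500             # an arbitrary simulation campaign length
    print(campaign_years / v1_sypd)  # 50 wall-clock days with E3SMv1
    print(campaign_years / v2_sypd)  # 25 wall-clock days with E3SMv2: half the time-to-solution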

    “E3SMv2 allows us to more realistically simulate the present, which gives us more confidence to simulate the future,” said David Bader, Lawrence Livermore National Laboratory scientist and lead of the E3SM project. “The increase in computing power allows us to add more detail to processes and interactions that results in more accurate and useful simulations than the previous version.”

    Achieving these improvements required collaboration across the national laboratory system. Sreepathi, along with ORNL’s Gaurab KC and Youngsung Kim, accelerated the effort by creating a comprehensive monitoring framework called PACE, or Performance Analytics for Computational Experiments. The PACE web portal provided both an automatic data collection system and a streamlined interface for scientists to evaluate the performance of E3SM experiments executed on DOE supercomputers. These data facilitated feedback-driven E3SMv2 model development and allowed researchers to optimize their experiments.
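    The article does not describe PACE's internals, but to illustrate the kind of per-experiment record such a monitoring framework might collect automatically, here is a minimal sketch; every field name and value below is a hypothetical placeholder, not PACE's actual schema or API.

    # Illustrative sketch only: field names and values are hypothetical and do not
    # reflect PACE's actual data model or interface.
    from dataclasses import dataclass

    @dataclass
    class ExperimentPerformance:
        case_name: str                  # which E3SM experiment was run
        machine: str                    # which DOE supercomputer ran it
        node_count: int                 # computational resources used
        simulated_years_per_day: float  # model throughput
        wall_clock_hours: float         # observed time-to-solution

        def speedup_over(self, baseline: "ExperimentPerformance") -> float:
            """Throughput ratio against a baseline run on the same resources."""
            return self.simulated_years_per_day / baseline.simulated_years_per_day

    v1 = ExperimentPerformance("v1-control", "some-doe-machine", 256, 10.0, 120.0)  # made-up values
    v2 = ExperimentPerformance("v2-control", "some-doe-machine", 256, 20.0, 60.0)
    print(v2.speedup_over(v1))  # 2.0, i.e. "twice the performance"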

    “Using the PACE web portal helped the multi-laboratory team understand how new model features were impacting computational performance,” said Sreepathi. “We were able to accurately track the evolution of the model’s performance.”

    The E3SM project reliably simulates aspects of Earth system variability, including regional air and water temperatures, which can strain energy grids; water availability, which affects power plant operations; extreme water-cycle events, such as floods and droughts, which impact infrastructure and bioenergy resources; and sea-level rise and coastal flooding, which threaten coastal infrastructure.

    In addition, the resolution has been refined thanks to more powerful computers. There are now two fully coupled configurations: a 100-kilometer, or km, globally uniform resolution atmosphere model and a regionally refined model, or RRM, with a resolution of 25 km over North America and 100 km elsewhere. The refined mesh configuration is particularly well suited for DOE applications.

    “Thanks to the performance improvements, the RRM configuration of E3SMv2 runs as fast as E3SMv1 did in its standard resolution configuration (100 km) a few years ago. We are essentially getting the much higher resolution for ‘free,’” said LLNL atmospheric scientist Chris Golaz.
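    A back-of-envelope way to see why the refined mesh is so much cheaper than refining the whole globe is to count horizontal grid columns. The sketch below uses approximate areas and ignores the smaller time step a finer grid also needs, so it is only a rough estimate.

    # Back-of-envelope column counts; areas are approximate and the estimate ignores
    # the shorter time step that finer grids also require.
    earth_area_km2 = 5.1e8          # total surface area of Earth, roughly
    north_america_km2 = 2.5e7       # approximate area of North America

    def columns(area_km2, dx_km):
        """Number of horizontal grid columns at grid spacing dx_km."""
        return area_km2 / dx_km**2

    uniform_100km = columns(earth_area_km2, 100)   # ~51,000 columns
    uniform_25km = columns(earth_area_km2, 25)     # ~816,000 columns
    rrm = columns(earth_area_km2 - north_america_km2, 100) + columns(north_america_km2, 25)

    print(f"uniform 100 km: {uniform_100km:,.0f} columns")
    print(f"uniform 25 km:  {uniform_25km:,.0f} columns")
    print(f"RRM (25 km over North America): {rrm:,.0f} columns")
    # The RRM has only ~1.7x the columns of the 100 km grid, versus ~16x for a
    # globally uniform 25 km grid.

    With roughly 1.7 times the columns of the uniform 100 km grid, the factor-of-two throughput gain makes it plausible that the RRM configuration runs about as fast as the old standard-resolution configuration, which is consistent with the quote above.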

    The team is now conducting the simulation campaign with E3SMv2. Team members have already simulated several thousand years, and are planning to run several thousand more.

    The project includes more than 100 scientists and software engineers at multiple DOE laboratories as well as several universities; the DOE laboratories include DOE’s Argonne National Laboratory (US), DOE’s Brookhaven National Laboratory (US), DOE’s Lawrence Livermore National Laboratory (US), DOE’s Lawrence Berkeley National Laboratory (US), DOE’s Los Alamos National Laboratory (US), Oak Ridge, DOE’s Pacific Northwest National Laboratory (US) and DOE’s Sandia National Laboratories (US). In recognition of unifying the DOE Earth system modeling community to perform high-resolution coupled simulations, the E3SM executive committee was awarded the Secretary of Energy’s Achievement Award in 2015.

    In addition, the E3SM project benefits from DOE programmatic collaborations, including The Exascale Computing Project and research efforts in Scientific Discovery Through Advanced Computing, Climate Model Development and Validation, Atmospheric Radiation Measurement, Program for Climate Model Diagnosis and Intercomparison, International Land Model Benchmarking Project, Community Earth System Model and Next Generation Ecosystem Experiments for the Arctic and the Tropics.

    The E3SM project is supported by the Biological and Environmental Research program in DOE’s Office of Science.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    Established in 1942, DOE’s Oak Ridge National Laboratory (US) is the largest science and energy national laboratory in the Department of Energy system (by size) and third largest by annual budget. It is located in the Roane County section of Oak Ridge, Tennessee. Its scientific programs focus on materials, neutron science, energy, high-performance computing, systems biology and national security, sometimes in partnership with the state of Tennessee, universities and other industries.

    ORNL has several of the world’s top supercomputers, including Summit, ranked by the TOP500 as Earth’s second-most powerful.

    ORNL OLCF IBM AC922 Summit supercomputer, formerly No. 1 on the TOP500.

    The lab is a leading neutron and nuclear power research facility that includes the Spallation Neutron Source and High Flux Isotope Reactor.

    It hosts the Center for Nanophase Materials Sciences, the BioEnergy Science Center, and the Consortium for Advanced Simulation of Light Water Nuclear Reactors.

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

    Areas of research

    ORNL conducts research and development activities that span a wide range of scientific disciplines. Many research areas have a significant overlap with each other; researchers often work in two or more of the fields listed here. The laboratory’s major research areas are described briefly below.

    Chemical sciences – ORNL conducts both fundamental and applied research in a number of areas, including catalysis, surface science and interfacial chemistry; molecular transformations and fuel chemistry; heavy element chemistry and radioactive materials characterization; aqueous solution chemistry and geochemistry; mass spectrometry and laser spectroscopy; separations chemistry; materials chemistry including synthesis and characterization of polymers and other soft materials; chemical biosciences; and neutron science.
    Electron microscopy – ORNL’s electron microscopy program investigates key issues in condensed matter, materials, chemical and nanosciences.
    Nuclear medicine – The laboratory’s nuclear medicine research is focused on the development of improved reactor production and processing methods to provide medical radioisotopes, the development of new radionuclide generator systems, the design and evaluation of new radiopharmaceuticals for applications in nuclear medicine and oncology.
    Physics – Physics research at ORNL is focused primarily on studies of the fundamental properties of matter at the atomic, nuclear, and subnuclear levels and the development of experimental devices in support of these studies.
    Population – ORNL provides federal, state and international organizations with a gridded population database, called LandScan, for estimating ambient population. LandScan is a raster image, or grid, of population counts, which provides human population estimates every 30 x 30 arc seconds, translating roughly to population estimates for 1-kilometer-square windows, or grid cells, at the equator, with cell width decreasing at higher latitudes (see the conversion sketch after this list). Though many population datasets exist, LandScan is considered the best global spatial population dataset. Updated annually (although data releases are generally one year behind the current year), it offers continuous, updated values of population based on the most recent information. LandScan data are accessible through GIS applications and a USAID public domain application called Population Explorer.
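    As a quick check of the 30-arc-second figure quoted above, the sketch below converts the cell size to kilometers as a function of latitude; the only inputs are the rough value of about 111 km per degree and basic trigonometry.

    import math

    ARC_SECONDS = 30
    DEG = ARC_SECONDS / 3600.0           # 30 arc seconds = 1/120 of a degree
    KM_PER_DEG = 111.32                  # approximate km per degree at the equator

    def cell_width_km(latitude_deg: float) -> float:
        """East-west width of a 30-arc-second cell at the given latitude."""
        return DEG * KM_PER_DEG * math.cos(math.radians(latitude_deg))

    for lat in (0, 30, 60):
        print(f"latitude {lat:2d} deg: {cell_width_km(lat):.2f} km wide")
    # At the equator a cell is ~0.93 km across; the width shrinks toward the poles,
    # which is why LandScan cells correspond to roughly 1 km^2 windows at the equator.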

     
  • richardmitnick 2:35 pm on January 21, 2022
    Tags: "In a Numerical Coincidence Some See Evidence for String Theory", "Massive Gravity", "String universality": a monopoly of string theories among viable fundamental theories of nature, , Asymptotically safe quantum gravity, , Graviton: A graviton is a closed string-or loop-in its lowest-energy vibration mode in which an equal number of waves travel clockwise and counterclockwise around the loop., Lorentz invariance: the same laws of physics must hold from all vantage points., , , , , Supercomputing, ,   

    From Quanta Magazine (US): “In a Numerical Coincidence Some See Evidence for String Theory” 


    January 21, 2022
    Natalie Wolchover

    Dorine Leenders for Quanta Magazine.

    In a quest to map out a quantum theory of gravity, researchers have used logical rules to calculate how much Einstein’s theory must change. The result matches string theory perfectly.

    Quantum gravity researchers use “α” to denote the size of the biggest quantum correction to Albert Einstein’s Theory of General Relativity.

    Recently, three physicists calculated a number pertaining to the quantum nature of gravity. When they saw the value, “we couldn’t believe it,” said Pedro Vieira, one of the three.

    Gravity’s quantum-scale details are not something physicists usually know how to quantify, but the trio attacked the problem using an approach that has lately been racking up stunners in other areas of physics. It’s called the bootstrap.

    To bootstrap is to deduce new facts about the world by figuring out what’s compatible with known facts — science’s version of picking yourself up by your own bootstraps. With this method, the trio found a surprising coincidence: Their bootstrapped number closely matched the prediction for the number made by string theory. The leading candidate for the fundamental theory of gravity and everything else, string theory holds that all elementary particles are, close-up, vibrating loops and strings.

    Vieira, Andrea Guerrieri of The Tel Aviv University (IL), and João Penedones of The EPFL (Swiss Federal Institute of Technology in Lausanne) [École polytechnique fédérale de Lausanne](CH) reported their number and the match with string theory’s prediction in Physical Review Letters in August 2021. Quantum gravity theorists have been reading the tea leaves ever since.

    Some interpret the result as a new kind of evidence for string theory, a framework that sorely lacks even the prospect of experimental confirmation, due to the pointlike minuteness of the postulated strings.

    “The hope is that you could prove the inevitability of string theory using these ‘bootstrap’ methods,” said David Simmons-Duffin, a theoretical physicist at The California Institute of Technology (US). “And I think this is a great first step towards that.”

    From left: Pedro Vieira, Andrea Guerrieri and João Penedones.
    Credit: Gabriela Secara / The Perimeter Institute for Theoretical Physics (CA); Courtesy of Andrea Guerrieri; The Swiss National Centres of Competence in Research (NCCRs) [Pôle national suisse de recherche en recherche][Schweizerisches Nationales Kompetenzzentrum für Forschung](CH) SwissMAP (CH)

    Irene Valenzuela, a theoretical physicist at the Institute for Theoretical Physics at The Autonomous University of Madrid [Universidad Autónoma de Madrid](ES), agreed. “One of the questions is if string theory is the unique theory of quantum gravity or not,” she said. “This goes along the lines that string theory is unique.”

    Other commentators saw that as too bold a leap, pointing to caveats about the way the calculation was done.

    Einstein, Corrected

    The number that Vieira, Guerrieri and Penedones calculated is the minimum possible value of “α” (alpha). Roughly, “α” is the size of the first and largest mathematical term that you have to add to Albert Einstein’s gravity equations in order to describe, say, an interaction between two gravitons — the presumed quantum units of gravity.

    Albert Einstein’s 1915 Theory of General Relativity paints gravity as curves in the space-time continuum created by matter and energy. It perfectly describes large-scale behavior such as a planet orbiting a star. But when matter is packed into too-small spaces, General Relativity short-circuits. “Some correction to Einsteinian gravity has to be there,” said Simon Caron-Huot, a theoretical physicist at McGill University (CA).

    Physicists can tidily organize their lack of knowledge of gravity’s microscopic nature using a scheme devised in the 1960s by Kenneth Wilson and Steven Weinberg: They simply add a series of possible “corrections” to General Relativity that might become important at short distances. Say you want to predict the chance that two gravitons will interact in a certain way. You start with the standard mathematical term from Relativity, then add new terms (using any and all relevant variables as building blocks) that matter more as distances get smaller. These mocked-up terms are fronted by unknown numbers labeled “α”, “β”, “γ” and so on, which set their sizes. “Different theories of quantum gravity will lead to different such corrections,” said Vieira, who has joint appointments at The Perimeter Institute for Theoretical Physics (CA), and The International Centre for Theoretical Physics at The South American Institute for Fundamental Research [Instituto sul-Americano de Pesquisa Fundamental] (BR). “So these corrections are our first way to tell such possibilities apart.”
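    Schematically, the correction scheme described above can be written as an effective action. This generic form is for illustration only; the length scale ℓ and the operator symbols are notation introduced here, not the authors’ exact expressions.

        S \;\simeq\; \frac{1}{2\kappa^{2}}\int d^{D}x\,\sqrt{-g}\,\Big[\,R \;+\; \alpha\,\ell^{2}\,\mathcal{O}_{1}(R) \;+\; \beta\,\ell^{4}\,\mathcal{O}_{2}(R) \;+\; \gamma\,\ell^{6}\,\mathcal{O}_{3}(R) \;+\; \cdots \Big]

    Each O_i stands for a combination of curvature terms allowed by the symmetries, and ℓ is a short length scale, so successive terms are suppressed by more powers of ℓ and only matter at short distances; the dimensionless coefficients α, β, γ set their sizes.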

    In practice, “α” has only been explicitly calculated in string theory, and even then only for highly symmetric 10-dimensional universes. The English string theorist Michael Green and colleagues determined in the 1990s that in such worlds “α” must be at least 0.1389. In a given stringy universe it might be higher; how much higher depends on the string coupling constant, or a string’s propensity to spontaneously split into two. (This coupling constant varies between versions of string theory, but all versions unite in a master framework called “M-theory”, where string coupling constants correspond to different positions in an extra 11th dimension.)

    Meanwhile, alternative quantum gravity ideas remain unable to make predictions about “α”. And since physicists can’t actually detect gravitons — the force of gravity is too weak — they haven’t been able to directly measure “α” as a way of investigating and testing quantum gravity theories.

    Then a few years ago, Penedones, Vieira and Guerrieri started talking about using the bootstrap method to constrain what can happen during particle interactions. They first successfully applied the approach to particles called pions. “We said, OK, here it’s working very well, so why not go for gravity?” Guerrieri said.

    Bootstrapping the Bound

    The trick of using accepted truths to constrain unknown possibilities was devised by particle physicists in the 1960s, then forgotten, then revived to fantastic effect over the past decade by researchers with supercomputers, which can solve the formidable formulas that bootstrapping tends to produce.

    Guerrieri, Vieira and Penedones set out to determine what “α” has to be in order to satisfy two consistency conditions. The first, known as unitarity, states that the probabilities of different outcomes must always add up to 100%. The second, known as Lorentz invariance, says that the same laws of physics must hold from all vantage points.

    The trio specifically considered the range of values of “α” permitted by those two principles in supersymmetric 10D universes. Not only is the calculation simple enough to pull off in that setting (not so, currently, for “α” in 4D universes like our own), but it also allowed them to compare their bootstrapped range to string theory’s prediction that “α” in that 10D setting is 0.1389 or higher.

    Unitarity and Lorentz invariance impose constraints on what can happen in a two-graviton interaction in the following way: When the gravitons approach and scatter off each other, they might fly apart as two gravitons, or morph into three gravitons or any number of other particles. As you crank up the energies of the approaching gravitons, the chance they’ll emerge from the encounter as two gravitons changes — but unitarity demands that this probability never surpass 100%. Lorentz invariance means the probability can’t depend on how an observer is moving relative to the gravitons, restricting the form of the equations. Together the rules yield a complicated bootstrapped expression that “α” must satisfy. Guerrieri, Penedones and Vieira programmed the Perimeter Institute’s computer clusters to solve for values that make the two-graviton interactions unitary and Lorentz-invariant.
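    Written schematically (this is the textbook form of such constraints, not the authors’ exact equations), Lorentz invariance lets the two-graviton amplitude be expanded in partial waves of the center-of-mass energy s and scattering angle θ, and unitarity caps each partial wave:

        A(s,\cos\theta) \;=\; \sum_{\ell} n_{\ell}\, a_{\ell}(s)\, P_{\ell}(\cos\theta), \qquad \operatorname{Im}\, a_{\ell}(s) \;\ge\; \big|a_{\ell}(s)\big|^{2}

    Here the n_ℓ are normalization constants, the P_ℓ are the angular polynomials appropriate to the spacetime dimension (Legendre in 4D, Gegenbauer more generally), and the inequality, valid with a suitable normalization of the partial waves, is just the statement that no scattering probability can exceed 100%.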

    The computer spit out its lower bound for “α”: 0.14, give or take a hundredth — an extremely close and potentially exact match with string theory’s lower bound of 0.1389. In other words, string theory seems to span the whole space of allowed “α” values — at least in the 10D place where the researchers checked. “That was a huge surprise,” Vieira said.

    10-Dimensional Coincidence

    What might the numerical coincidence mean? According to Simmons-Duffin, whose work a few years ago helped drive the bootstrap’s resurgence, “they’re trying to tackle a question [that’s] fundamental and important. Which is: To what extent does string theory as we know it cover the space of all possible theories of quantum gravity?”

    String theory emerged in the 1960s as a putative picture of the stringy glue that binds composite particles called mesons. A different description ended up prevailing for that purpose, but years later people realized that string theory could set its sights higher: If strings are small — so small they look like points — they could serve as nature’s elementary building blocks. Electrons, photons and so on would all be the same kind of fundamental string strummed in different ways. The theory’s selling point is that it gives a quantum description of gravity: A graviton is a closed string, or loop, in its lowest-energy vibration mode, in which an equal number of waves travel clockwise and counterclockwise around the loop. This feature would underlie macroscopic properties of gravity like the corkscrew-patterned polarization of gravitational waves.

    But matching the theory to all other aspects of reality takes some fiddling. To get rid of negative energies that would correspond to unphysical, faster-than-light particles, string theory needs a property called “Supersymmetry”, which doubles the number of its string vibration modes. Every vibration mode corresponding to a matter particle must come with another mode signifying a force particle. String theory also requires the existence of 10 space-time dimensions for the strings to wiggle around in. Yet we haven’t found any supersymmetric partner particles, and our universe looks 4D, with three dimensions of space and one of time.

    Standard Model of Supersymmetry

    Both of these data points present something of a problem.

    If string theory describes our world, Supersymmetry must be broken here. That means the partner particles, if they exist, must be far heavier than the known set of particles — too heavy to muster in experiments. And if there really are 10 dimensions, six must be curled up so small they’re imperceptible to us — tight little knots of extra directions you can go in at any point in space. These “compactified” dimensions in a 4D-looking universe could have countless possible arrangements, all affecting strings (and numbers like “α”) differently.

    Broken Supersymmetry and invisible dimensions have led many quantum gravity researchers to seek or prefer alternative, non-stringy ideas.

    Mordehai Milgrom, MOND theorist, is an Israeli physicist and professor in the department of Condensed Matter Physics at The Weizmann Institute of Science (IL) in Rehovot, Israel http://cosmos.nautil.us

    MOND Rotation Curves with MOND Tully-Fisher


    But so far the rival approaches have struggled to produce the kind of concrete calculations about things like graviton interactions that string theory can.

    Some physicists hope to see string theory win hearts and minds by default, by being the only microscopic description of gravity that’s logically consistent. If researchers can prove “string universality,” as this is sometimes called — a monopoly of string theories among viable fundamental theories of nature — we’ll have no choice but to believe in hidden dimensions and an inaudible orchestra of strings.

    To string theory sympathizers, the new bootstrap calculation opens a route to eventually proving string universality, and it gets the journey off to a rip-roaring start.

    Other researchers disagree with those implications. Astrid Eichhorn, a theoretical physicist at The South Danish University [Syddansk Universitet](DK) and The Ruprecht Karl University of Heidelberg [Ruprecht-Karls-Universität Heidelberg](DE) who specializes in a non-stringy approach called asymptotically safe quantum gravity, told me, “I would consider the relevant setting to collect evidence for or against a given quantum theory of gravity to be four-dimensional and non-supersymmetric” universes, since this “best describes our world, at least so far.”

    Eichhorn pointed out that there might be unitary, Lorentz-invariant descriptions of gravitons in 4D that don’t make any sense in 10D. “Simply by this choice of setting one might have ruled out alternative quantum gravity approaches” that are viable, she said.

    Vieira acknowledged that string universality might hold only in 10 dimensions, saying, “It could be that in 10D with supersymmetry, there’s only string theory, and when you go to 4D, there are many theories.” But, he said, “I doubt it.”

    Another critique, though, is that even if string theory saturates the range of allowed “α” values in the 10-dimensional setting the researchers probed, that doesn’t stop other theories from lying in the permitted range. “I don’t see any practical way we’re going to conclude that string theory is the only answer,” said Andrew Tolley of Imperial College London (UK).

    Just the Beginning

    Assessing the meaning of the coincidence will become easier if bootstrappers can generalize and extend similar results to more settings. “At the moment, many, many people are pursuing these ideas in various variations,” said Alexander Zhiboedov, a theoretical physicist at The European Organization for Nuclear Research [Organización Europea para la Investigación Nuclear][Organisation européenne pour la recherche nucléaire] [Europäische Organisation für Kernforschung](CH) [CERN], Europe’s particle physics laboratory.

    Guerrieri, Penedones and Vieira have already completed a “dual” bootstrap calculation, which bounds “α” from below by ruling out solutions less than the minimum rather than solving for viable “α” values above the bound, as they did previously. This dual calculation shows that their computer clusters didn’t simply miss smaller allowed “α” values, which would correspond to additional viable quantum gravity theories outside string theory’s range.

    They also plan to bootstrap the lower bound for worlds with nine large dimensions, where string theory calculations are still under some control (since only one dimension is curled up), to look for more evidence of a correlation. Aside from “α”, bootstrappers also aim to calculate “β” and “γ” — the allowed sizes of the second- and third-biggest quantum gravity corrections— and they have ideas for how to approach harder calculations about worlds where supersymmetry is broken or nonexistent, as it appears to be in reality. In this way they’ll try to carve out the space of allowed quantum gravity theories, and test string universality in the process.

    Claudia de Rham, a theorist at Imperial College, emphasized the need to be “agnostic,” noting that bootstrap principles are useful for exploring more ideas than just string theory. She and Tolley have used positivity — the rule that probabilities are always positive — to constrain a theory called “Massive Gravity”, which may or may not be a realization of string theory. They discovered potentially testable consequences, showing that massive gravity only satisfies positivity if certain exotic particles exist. De Rham sees bootstrap principles and positivity bounds as “one of the most exciting research developments at the moment” in fundamental physics.

    “No one has done this job of taking everything we know and taking consistency and putting it together,” said Zhiboedov. It’s “exciting,” he added, that theorists have work to do “at a very basic level.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine (US) is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 4:07 pm on January 20, 2022
    Tags: "Going beyond the exascale", , , Classical computers have been central to physics research for decades., , , , Fermilab has used classical computing to simulate lattice quantum chromodynamics., , , , Planning for a future that is still decades out., Quantum computers could enable physicists to tackle questions even the most powerful computers cannot handle., , Quantum computing is here—sort of., , Solving equations on a quantum computer requires completely new ways of thinking about programming and algorithms., Supercomputing, , The biggest place where quantum simulators will have an impact is in discovery science.   

    From Symmetry: “Going beyond the exascale” 


    01/20/22
    Emily Ayshford

    Illustration by Sandbox Studio, Chicago with Ana Kova.

    Quantum computers could enable physicists to tackle questions even the most powerful computers cannot handle.

    After years of speculation, quantum computing is here—sort of.

    Physicists are beginning to consider how quantum computing could provide answers to the deepest questions in the field. But most aren’t getting caught up in the hype. Instead, they are taking what for them is a familiar tack—planning for a future that is still decades out, while making room for pivots, turns and potential breakthroughs along the way.

    “When we’re working on building a new particle collider, that sort of project can take 40 years,” says Hank Lamm, an associate scientist at The DOE’s Fermi National Accelerator Laboratory (US). “This is on the same timeline. I hope to start seeing quantum computing provide big answers for particle physics before I die. But that doesn’t mean there isn’t interesting physics to do along the way.”

    Equations that overpower even supercomputers.

    Classical computers have been central to physics research for decades, and simulations that run on classical computers have guided many breakthroughs. Fermilab, for example, has used classical computing to simulate lattice quantum chromodynamics. Lattice QCD is a set of equations that describe the interactions of quarks and gluons via the strong force.

    Theorists developed lattice QCD in the 1970s. But applying its equations proved extremely difficult. “Even back in the 1980s, many people said that even if they had an exascale computer [a computer that can perform a billion billion calculations per second], they still couldn’t calculate lattice QCD,” Lamm says.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer, to be built at DOE’s Argonne National Laboratory (US).

    Depiction of ORNL Cray Frontier Shasta based Exascale supercomputer with Slingshot interconnect featuring high-performance AMD EPYC CPU and AMD Radeon Instinct GPU technology, being built at DOE’s Oak Ridge National Laboratory (US).

    But that turned out not to be true.

    Within the past 10 to 15 years, researchers have discovered the algorithms needed to make their calculations more manageable, while learning to understand theoretical errors and how to ameliorate them. These advances have allowed them to use a lattice simulation, a simulation that uses a volume of a specified grid of points in space and time as a substitute for the continuous vastness of reality.

    Lattice simulations have allowed physicists to calculate the mass of the proton—a particle made up of quarks and gluons all interacting via the strong force—and find that the theoretical prediction lines up well with the experimental result. The simulations have also allowed them to accurately predict the temperature at which quarks should detach from one another in a quark-gluon plasma.

    Quark-Gluon Plasma from BNL Relativistic Heavy Ion Collider (US).

    DOE’s Brookhaven National Laboratory(US) RHIC Campus

    The limit of these calculations? Along with being approximate, since they are confined to a hypothetical volume of space, they can compute only certain properties efficiently. Try to look at more than that, and even the biggest high-performance computer cannot handle all of the possibilities.

    Enter quantum computers.

    Quantum computers are all about possibilities. Classical computers don’t have the memory to compute the many possible outcomes of lattice QCD problems, but quantum computers take advantage of quantum mechanics to calculate differently.
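    One way to make the memory problem concrete: simply storing the full quantum state of n two-level degrees of freedom on a classical machine takes 2^n complex amplitudes. The short sketch below is a generic illustration of that scaling, not a lattice-QCD code.

    # Memory needed to hold a full state vector of n qubits (or any n two-level
    # quantum degrees of freedom) as complex double-precision numbers.
    BYTES_PER_AMPLITUDE = 16  # one complex128 value

    def state_vector_bytes(n: int) -> int:
        return (2 ** n) * BYTES_PER_AMPLITUDE

    for n in (30, 50, 80):
        gib = state_vector_bytes(n) / 2**30
        print(f"{n} qubits: {gib:,.0f} GiB")
    # 30 qubits already needs ~16 GiB; 50 qubits needs ~16 million GiB (16 PiB);
    # 80 qubits far exceeds the world's total data storage. A quantum computer
    # holds this state natively in the qubits themselves.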

    Quantum computing isn’t an easy answer, though. Solving equations on a quantum computer requires completely new ways of thinking about programming and algorithms.

    Using a classical computer, when you program code, you can look at its state at all times. You can check a classical computer’s work before it’s done and troubleshoot if things go wrong. But under the laws of quantum mechanics, you cannot observe any intermediate step of a quantum computation without corrupting the computation; you can observe only the final state.

    That means you can’t store any information in an intermediate state and bring it back later, and you cannot clone information from one set of qubits into another, making error correction difficult.
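    Here is a minimal numerical illustration of the no-peeking rule, written as a generic one-qubit simulation rather than code for any real quantum programming framework: two Hadamard gates in a row return a qubit deterministically to its starting state, but inserting a measurement between them collapses the superposition and randomizes the final answer.

    import numpy as np

    rng = np.random.default_rng(0)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    ket0 = np.array([1.0, 0.0])                    # the |0> state

    def measure(state):
        """Projective measurement in the computational basis: collapses the state."""
        p0 = abs(state[0]) ** 2
        outcome = 0 if rng.random() < p0 else 1
        collapsed = np.zeros(2)
        collapsed[outcome] = 1.0
        return outcome, collapsed

    def final_outcome(peek: bool) -> int:
        state = H @ ket0                 # create a superposition
        if peek:                         # observing the intermediate step...
            _, state = measure(state)    # ...collapses the superposition
        state = H @ state                # second Hadamard
        outcome, _ = measure(state)      # final measurement
        return outcome

    trials = 10_000
    print("no peek:", sum(final_outcome(False) for _ in range(trials)) / trials)  # ~0.0
    print("peeking:", sum(final_outcome(True) for _ in range(trials)) / trials)   # ~0.5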

    “It can be a nightmare designing an algorithm for quantum computation,” says Lamm, who spends his days trying to figure out how to do quantum simulations for high-energy physics. “Everything has to be redesigned from the ground up. We are right at the beginning of understanding how to do this.”

    Just getting started

    Quantum computers have already proved useful in basic research. Condensed matter physicists—whose research relates to phases of matter—have spent much more time than particle physicists thinking about how quantum computers and simulators can help them. They have used quantum simulators to explore quantum spin liquid states [Science] and to observe a previously unobserved phase of matter called a prethermal time crystal [Science].

    “The biggest place where quantum simulators will have an impact is in discovery science, in discovering new phenomena like this that exist in nature,” says Norman Yao, an assistant professor at The University of California-Berkeley (US) and co-author on the time crystal paper.

    Quantum computers are showing promise in particle physics and astrophysics. Many physics and astrophysics researchers are using quantum computers to simulate “toy problems”—small, simple versions of much more complicated problems. They have, for example, used quantum computing to test parts of theories of quantum gravity [npj Quantum Information] or create proof-of-principle models, like models of the parton showers that emit from particle colliders [Physical Review Letters] such as the Large Hadron Collider.

    The European Organization for Nuclear Research [Organización Europea para la Investigación Nuclear][Organisation européenne pour la recherche nucléaire] [Europäische Organisation für Kernforschung](CH) [CERN].

    The European Organization for Nuclear Research [Organización Europea para la Investigación Nuclear][Organisation européenne pour la recherche nucléaire] [Europäische Organisation für Kernforschung](CH)[CERN] map.

    CERN LHC tube in the tunnel. Credit: Maximilien Brice and Julien Marius Ordan.

    SixTRack CERN LHC particles.

    “Physicists are taking on the small problems, ones that they can solve with other ways, to try to understand how quantum computing can have an advantage,” says Roni Harnik, a scientist at Fermilab. “Learning from this, they can build a ladder of simulations, through trial and error, to more difficult problems.”

    But just which approaches will succeed, and which will lead to dead ends, remains to be seen. Estimates of how many qubits will be needed to simulate big enough problems in physics to get breakthroughs range from thousands to (more likely) millions. Many in the field expect this to be possible in the 2030s or 2040s.

    “In high-energy physics, problems like these are clearly a regime in which quantum computers will have an advantage,” says Ning Bao, associate computational scientist at DOE’s Brookhaven National Laboratory (US). “The problem is that quantum computers are still too limited in what they can do.”

    Starting with physics

    Some physicists are coming at things from a different perspective: They’re looking to physics to better understand quantum computing.

    John Preskill is a physics professor at The California Institute of Technology (US) and an early leader in the field of quantum computing. A few years ago, he and Patrick Hayden, professor of physics at Stanford University (US), showed that if you entangled two photons and threw one into a black hole, decoding the information that eventually came back out via Hawking radiation would be significantly easier than if you had used non-entangled particles. Physicists Beni Yoshida and Alexei Kitaev then came up with an explicit protocol for such decoding, and Yao went a step further, showing that protocol could also be a powerful tool in characterizing quantum computers.

    “We took something that was thought about in terms of high-energy physics and quantum information science, then thought of it as a tool that could be used in quantum computing,” Yao says.

    That sort of cross-disciplinary thinking will be key to moving the field forward, physicists say.

    “Everyone is coming into this field with different expertise,” Bao says. “From computing, or physics, or quantum information theory—everyone gets together to bring different perspectives and figure out problems. There are probably many ways of using quantum computing to study physics that we can’t predict right now, and it will just be a matter of getting the right two people in a room together.”

    See the full article here.



    Please help promote STEM in your local schools.


    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 9:59 pm on January 15, 2022
    Tags: "Scientists use Summit supercomputer and deep learning to predict protein functions at genome scale", A grand challenge in biology: translating genetic code into meaningful functions., , “Structure determines function” is the adage when it comes to proteins., , , Deep learning is shifting the paradigm by quickly narrowing the vast field of candidate genes to the most interesting few for further study., , Geneticists are now dealing with the amount of data that astrophysicists deal with., High-performance computing is necessary to take that sequencing data and come up with useful inferences to narrow the field for experiments., One of the tools in the deep learning pipeline is called Sequence Alignments from deep-Learning of Structural Alignments or SAdLSA., Supercomputing, The research team is focusing on organisms critical to DOE missions., With advances in DNA sequencing technology data are available for about 350 million protein sequences — a number that continues to climb.   

    From DOE’s Oak Ridge National Laboratory (US): “Scientists use Summit supercomputer and deep learning to predict protein functions at genome scale” 


    January 10, 2022

    Kimberly A Askey
    askeyka@ornl.gov
    865.576.2841

    This protein drives key processes for sulfide use in many microorganisms that produce methane, including Thermosipho melanesiensis. Researchers used supercomputing and deep learning tools to predict its structure, which has eluded experimental methods such as crystallography. Credit: Ada Sedova/ORNL, Department of Energy (US).

    A team of scientists led by the Department of Energy’s Oak Ridge National Laboratory and The Georgia Institute of Technology (US) is using supercomputing and revolutionary deep learning tools to predict the structures and roles of thousands of proteins with unknown functions.

    Their deep learning-driven approaches infer protein structure and function from DNA sequences, accelerating new discoveries that could inform advances in biotechnology, biosecurity, bioenergy and solutions for environmental pollution and climate change.

    Researchers are using the Summit supercomputer [below] at ORNL and tools developed by Google’s DeepMind and Georgia Tech to speed the accurate identification of protein structures and functions across the entire genomes of organisms. The team recently published [IEEE Xplore] details of the high-performance computing toolkit and its deployment on Summit.

    These powerful computational tools are a significant leap toward resolving a grand challenge in biology: translating genetic code into meaningful functions.

    Proteins are a key component of solving this challenge. They are also central to resolving many scientific questions about the health of humans, ecosystems and the planet. As the workhorses of the cell, proteins drive nearly every process necessary for life — from metabolism to immune defense to communication between cells.

    “Structure determines function” is the adage when it comes to proteins; their complex 3D shapes guide how they interact with other proteins to do the work of the cell. Understanding a protein’s structure and function based on lengthy strings of nucleotides — written as the letters A, C, T and G — that make up DNA has long been a bottleneck in the life sciences as researchers relied on educated guesses and painstaking laboratory experiments to validate structures.

    With advances in DNA sequencing technology, data are available for about 350 million protein sequences — a number that continues to climb. Because of the need for extensive experimental work to determine three-dimensional structures, scientists have only solved the structures for about 170,000 of those proteins. This is a tremendous gap.
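    The scale of that gap is easy to put in perspective with the numbers quoted above:

    # Using the figures quoted above: ~350 million known protein sequences versus
    # ~170,000 experimentally solved structures.
    sequences = 350_000_000
    solved_structures = 170_000
    print(f"Structures exist for roughly {100 * solved_structures / sequences:.3f}% "
          f"of known sequences, about 1 in {sequences // solved_structures:,}.")
    # Roughly 0.05% coverage, about one solved structure per 2,000 known sequences.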

    “We’re now dealing with the amount of data that astrophysicists deal with, all because of the genome sequencing revolution,” said ORNL researcher Ada Sedova. “We want to be able to use high-performance computing to take that sequencing data and come up with useful inferences to narrow the field for experiments. We want to quickly answer questions such as ‘what does this protein do, and how does it affect the cell? How can we harness proteins to achieve goals such as making needed chemicals, medicines and sustainable fuels, or to engineer organisms that can help mitigate the effects of climate change?’”

    The research team is focusing on organisms critical to DOE missions. They have modeled the full proteomes — all the proteins coded in an organism’s genome — for four microbes, each with approximately 5,000 proteins. Two of these microbes have been found to generate important materials for manufacturing plastics. The other two are known to break down and transform metals. The structural data can inform new advances in synthetic biology and strategies to reduce the spread of contaminants such as mercury in the environment.

    The team also generated models of the 24,000 proteins at work in sphagnum moss. Sphagnum plays a critical role in storing vast amounts of carbon in peat bogs, which hold more carbon than all the world’s forests. These data can help scientists pinpoint which genes are most important in enhancing sphagnum’s ability to sequester carbon and withstand climate change.

    Speeding scientific discovery

    In search of the genes that enable sphagnum moss to tolerate rising temperatures, ORNL scientists start by comparing its DNA sequences to the model organism Arabidopsis, a thoroughly investigated plant species in the mustard family.

    “Sphagnum moss is about 515 million years diverged from that model,” said Bryan Piatkowski, a biologist and ORNL Liane B. Russell Fellow. “Even for plants more closely related to Arabidopsis, we don’t have a lot of empirical evidence for how these proteins behave. There is only so much we can infer about function from comparing the nucleotide sequences with the model.”

    Being able to see the structures of proteins adds another layer that can help scientists home in on the most promising gene candidates for experiments.

    Piatkowski, for instance, has been studying moss populations from Maine to Florida with the aim of identifying differences in their genes that could be adaptive to climate. He has a long list of genes that might regulate heat tolerance. Some of the gene sequences are only different by one nucleotide, or in the language of the genetic code, by a single letter.

    “These protein structures will help us look for whether these nucleotide changes cause changes to the protein function and if so, how? Do those protein changes end up helping plants survive in extreme temperatures?” Piatkowski said.

    Looking for similarities in sequences to determine function is only part of the challenge. DNA sequences are translated into the amino acids that make up proteins. Through evolution, some of the sequences can mutate over time, replacing one amino acid with another that has similar properties. These changes do not always cause differences in function.

    “You could have proteins with very different sequences — less than 20% sequence match — and get the same structure and possibly the same function,” Sedova said. “Computational tools that only compare sequences can fail to find two proteins with very similar structures.”

    Until recently, scientists have not had tools that can reliably predict protein structure based on genetic sequences. Applying these new deep learning tools is a game changer.

    Though protein structure and function will still need confirmation via physical experiments and methods such as X-ray crystallography, deep learning is shifting the paradigm by quickly narrowing the vast field of candidate genes to the most interesting few for further study.

    Revolutionary tools

    One of the tools in the deep learning pipeline is called Sequence Alignments from deep-Learning of Structural Alignments or SAdLSA. Developed by collaborators Mu Gao and Jeffrey Skolnick at Georgia Tech, the computational tool is trained in a similar way as other deep learning models that predict protein structure. SAdLSA has the capability to compare sequences by implicitly understanding the protein structure, even if the sequences only share 10% similarity.

    “SAdLSA can detect distantly related proteins that may or may not have the same function,” said Jerry Parks, ORNL computational chemist and group leader. “Combine that with AlphaFold, which provides a 3D structural model of the protein, and you can analyze the active site to determine which amino acids are doing the chemistry and how they contribute to the function.”
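    Conceptually, then, the pipeline chains structure-aware sequence comparison, structure prediction and active-site analysis. The sketch below is illustrative pseudocode only: the helper functions are invented stand-ins with trivial bodies, not the real SAdLSA, AlphaFold 2 or ORNL toolkit interfaces.

    # Hypothetical pipeline sketch. The three helpers are placeholders for the real
    # tools (SAdLSA, AlphaFold 2 and downstream active-site analysis); their names,
    # signatures and bodies are invented for illustration only.

    def run_sadlsa(sequence, reference_db):
        # Placeholder for structure-aware alignment against a reference database,
        # meant to flag distant relatives that plain sequence comparison would miss.
        return list(reference_db)[:5]

    def run_alphafold(sequence):
        # Placeholder for 3D structure prediction; a real model would be atomic coordinates.
        return {"sequence": sequence, "coordinates": None}

    def find_active_site(structure, relatives):
        # Placeholder for identifying which residues likely do the chemistry.
        return {"residues": [], "informed_by": relatives}

    def annotate_protein(sequence, reference_db):
        """Chain the three steps the article describes: compare, predict, analyze."""
        relatives = run_sadlsa(sequence, reference_db)    # distant homologs (<20% identity)
        structure = run_alphafold(sequence)               # predicted 3D model
        putative_function = find_active_site(structure, relatives)
        return {"relatives": relatives, "structure": structure,
                "putative_function": putative_function}   # hypotheses to test in the lab

    print(annotate_protein("MKT...", ["seqA", "seqB"]))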

    DeepMind’s tool, AlphaFold 2, demonstrated accuracy approaching that of X-ray crystallography in determining the structures of unknown proteins in the 2020 Critical Assessment of protein Structure Prediction, or CASP, competition. In this worldwide biennial experiment, organizers use unpublished protein structures that have been solved and validated to gauge the success of state-of-the-art software programs in predicting protein structure.

    AlphaFold 2 is the first and only program to achieve this level of accuracy since CASP began in 1994. As a bonus, it can also predict protein-protein interactions. This is important as proteins rarely work in isolation.

    “I’ve used AlphaFold to generate models of protein complexes, and it works phenomenally well,” Parks said. “It predicts not only the structure of the individual proteins but also how they interact with each other.”

    With AlphaFold’s success, the European Bioinformatics Institute, or EBI, has partnered with DeepMind to model over 100 million proteins — starting with model organisms and those with applications for medicine and human health.

    ORNL researchers and their collaborators are complementing EBI’s efforts by focusing on organisms that are critical to DOE missions. They are working to make the toolkit available to other users on Summit and to share the thousands of protein structures they’ve modeled as downloadable datasets to facilitate science.

    “This is a technology that is difficult for many research groups to just spin up,” Sedova said. “We hope to make it more accessible now that we’ve formatted it for Summit.”

    Using AlphaFold 2, with its many software modules and 1.5 terabyte database, requires significant amounts of memory and many powerful parallel processing units. Running it on Summit was a multi-step process that required a team of experts at the Oak Ridge Leadership Computing Facility, a DOE Office of Science user facility.

    ORNL’s Ryan Prout, Subil Abraham, Nicholas Quentin Haas, Wael Elwasif and Mark Coletti were critical to the implementation process, which relied in part on a unique capability called a Singularity container that was originally developed by DOE’s Lawrence Berkeley National Laboratory (US). Mu Gao contributed by deconstructing DeepMind’s AlphaFold 2 workflow so it could make efficient use of the OLCF resources, including Summit and the Andes system.

    The work will evolve as the tools change, including the advancement to exascale computing with the Frontier system being built at ORNL, expected to exceed a quintillion, or 10^18, calculations per second.

    Depiction of ORNL Cray Frontier Shasta based Exascale supercomputer with Slingshot interconnect featuring high-performance AMD EPYC CPU and AMD Radeon Instinct GPU technology, being built at DOE’s Oak Ridge National Laboratory.

    Sedova is excited about the possibilities.

    “With these kinds of tools in our tool belt that are both structure-based and deep learning-based, this resource can help give us information about these proteins of unknown function — sequences that have no matches to other sequences in the entire repository of known proteins,” Sedova said. “This unlocks a lot of new knowledge and potential to address national priorities through bioengineering. For instance, there are potentially many enzymes with useful functions that have not yet been discovered.”

    The research team includes ORNL’s Ada Sedova and Jerry Parks, Georgia Tech’s Jeffrey Skolnick and Mu Gao and Jianlin Cheng from The University of Missouri (US). Sedova virtually presented their work at the Machine Learning in HPC Environments workshop chaired by ORNL’s Seung-Hwan Lim as part of SC21, the International Conference for High Performance Computing, Networking, Storage and Analysis.

    The project is supported through the Biological and Environmental Research program in DOE’s Office of Science and through an award from the DOE Office of Advanced Scientific Computing Research’s Leadership Computing Challenge. Piatkowski’s research on sphagnum moss is supported through ORNL’s Laboratory Directed Research and Development funds.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    Established in 1942, DOE’s Oak Ridge National Laboratory (US) is the largest science and energy national laboratory in the Department of Energy system (by size) and third largest by annual budget. It is located in the Roane County section of Oak Ridge, Tennessee. Its scientific programs focus on materials, neutron science, energy, high-performance computing, systems biology and national security, sometimes in partnership with the state of Tennessee, universities and other industries.

    ORNL has several of the world’s top supercomputers, including Summit, ranked by the TOP500 as Earth’s second-most powerful.

    ORNL OLCF IBM AC922 Summit supercomputer, formerly No. 1 on the TOP500.

    The lab is a leading neutron and nuclear power research facility that includes the Spallation Neutron Source and High Flux Isotope Reactor.

    It hosts the Center for Nanophase Materials Sciences, the BioEnergy Science Center, and the Consortium for Advanced Simulation of Light Water Nuclear Reactors.

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

    Areas of research

    ORNL conducts research and development activities that span a wide range of scientific disciplines. Many research areas have a significant overlap with each other; researchers often work in two or more of the fields listed here. The laboratory’s major research areas are described briefly below.

    Chemical sciences – ORNL conducts both fundamental and applied research in a number of areas, including catalysis, surface science and interfacial chemistry; molecular transformations and fuel chemistry; heavy element chemistry and radioactive materials characterization; aqueous solution chemistry and geochemistry; mass spectrometry and laser spectroscopy; separations chemistry; materials chemistry including synthesis and characterization of polymers and other soft materials; chemical biosciences; and neutron science.
    Electron microscopy – ORNL’s electron microscopy program investigates key issues in condensed matter, materials, chemical and nanosciences.
    Nuclear medicine – The laboratory’s nuclear medicine research is focused on the development of improved reactor production and processing methods to provide medical radioisotopes, the development of new radionuclide generator systems, the design and evaluation of new radiopharmaceuticals for applications in nuclear medicine and oncology.
    Physics – Physics research at ORNL is focused primarily on studies of the fundamental properties of matter at the atomic, nuclear, and subnuclear levels and the development of experimental devices in support of these studies.
    Population – ORNL provides federal, state and international organizations with a gridded population database, called LandScan, for estimating ambient population. LandScan is a raster image, or grid, of population counts, which provides human population estimates every 30 x 30 arc seconds, translating roughly to population estimates for 1-kilometer-square windows, or grid cells, at the equator, with cell width decreasing at higher latitudes. Though many population datasets exist, LandScan is considered the best global spatial population dataset. Updated annually (although data releases are generally one year behind the current year), it offers continuous, updated values of population based on the most recent information. LandScan data are accessible through GIS applications and a USAID public domain application called Population Explorer.

     
  • richardmitnick 2:37 pm on November 13, 2021
    Tags: "6 Things to Know About Supercomputing at NASA", , , Supercomputing,   

    From The National Aeronautics and Space Administration (US): “6 Things to Know About Supercomputing at NASA”

    From The National Aeronautics and Space Administration (US)

    Nov 12, 2021

    Rachel Hoover
    The NASA Ames Research Center (US), Silicon Valley, Calif.
    650-604-4789
    rachel.hoover@nasa.gov

    Rob Gutro
    The Goddard Space Flight Center-NASA (US), Greenbelt, Maryland
    301-286-4044
    robert.j.gutro@nasa.gov

    From exploring the solar system and outer space to improving life here on Earth, supercomputing is vital to NASA missions. The agency will host a virtual exhibit to showcase how these powerful machines enable science and engineering advances during the International Conference for High Performance Computing, Networking, Storage and Analysis. This year’s conference is being held both in St. Louis and virtually, Nov. 14–19, 2021.

    Here are six things to know about NASA supercomputing research projects:

    1. Supercomputers lead to environmentally friendly, all-electric plane designs
    A simulation snapshot shows NASA’s X-57 Maxwell airplane with all of the high-lift propellers operating. Credits: Karen Deere and Jeffrey Viken/NASA.

    NASA’s X-57 Maxwell is a revolutionary experimental airplane designed to demonstrate that an all-electric airplane can be more efficient, quieter, and more environmentally friendly than those powered by traditional gas piston engines. The X-57 uses a unique distributed electric propulsion, or DEP, system that includes 12 electrically powered propellers for additional lift at takeoff and landing.

    NASA created a computational fluid dynamics, or CFD, aerodynamic database for a piloted simulator, airworthiness assessments, and safety analyses prior to the first flight. Aerospace engineers at NASA’s Langley Research Center in Hampton, Virginia, are using the Pleiades supercomputer at the NASA Advanced Supercomputing, or NAS, facility at the agency’s Ames Research Center in California’s Silicon Valley to perform their CFD simulations.

    NASA SGI Intel Advanced Supercomputing Center Pleiades Supercomputer, housed at the NASA Advanced Supercomputing (NAS) facility at NASA’s Ames Research Center.

This year the researchers focused their simulations on determining the impacts of DEP motor failures; the results are used in simulators so test pilots can prepare for the possibility of system failures during real X-57 flight tests. This work will directly improve flight safety during the real flight tests and, ultimately, help protect the life of the pilot.

    2. Massive simulations help keep astronauts safe
    3
    A simulated view from behind the main flame deflector at NASA’s Kennedy Space Center’s Launch Complex 39B, showing liquid water in blue, water vapor in white, and vehicle exhaust in purple and yellow. Credit: Jordan Angel and Timothy Sandstrom/ NASA.

    With NASA’s first Artemis mission scheduled to take off in 2022, engineers and researchers across the agency are steadily working to ensure the safety and success of the missions that will use NASA’s Space Launch System, or SLS, and the Orion crew module.

    To support upcoming missions from Launch Complex 39B at NASA’s Kennedy Space Center, engineers at the NAS facility developed a new CFD approach to accurately simulate the launch environment, including its water-based sound suppression system, to estimate acoustic loads during liftoff — a necessary step for the design and certification of the agency’s vehicles and flight instruments.

Concurrently, engineers at NASA’s Marshall Space Flight Center in Huntsville, Alabama, are running simulations on NASA supercomputers to better understand and predict non-acoustic aspects of the SLS liftoff environment — specifically, how the solid rocket boosters will interact with the water-based sound suppression system at ignition, which may create large, but short-lived, pressure forces.

    Orion also has a wide range of safety systems, including a state-of-the-art launch abort system that can quickly pull astronauts to safety in the event of an emergency. A team at NAS is using detailed turbulence-resolving simulations to understand how the system’s abort motor firing can impact the crew module for abort scenarios that were not flight tested — for example, a high-altitude, near-hypersonic abort scenario at the edge of Earth’s atmosphere.

    3. Drought and flood forecasts help responses to water and food security crises
    4
A snapshot of a simulation modeling drought potential over East Africa from the NASA Hydrological Forecast and Analysis System. Abnormally dry areas are highlighted in shades of red and orange.
    Credit: Abheera Hazra and Trent Schindler/ NASA.

    For millions of Africans, droughts and floods affect not only their livelihoods but also life itself when water and food become scarce. Government agencies and relief organizations rely on early warning systems to help residents cope with extreme or prolonged events.

Water scientists at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, combined computer models and observational data into the NASA Hydrological Forecast and Analysis System, or NHyFAS.

    NHyFAS produces monthly drought and flood potential forecasts for Africa and the Middle East. Since it runs several land surface models in an ensemble, NHyFAS requires the processing power of the Discover supercomputer operated by the NASA Center for Climate Simulation, or NCCS, at NASA Goddard.

    NASA Discover SGI Supercomputer- NASA’s Center for Climate Simulation Primary Computing Platform.

From the NCCS DataPortal, NHyFAS forecast products go directly to food security analysts working with the U.S. Agency for International Development’s Famine Early Warning Systems Network to inform humanitarian response.

    4. Ultra-high resolution simulations help scientists study impacts of climate change
    5
    A snapshot from an ocean-atmosphere simulation showing large- and medium-scale eddies, tides, and the surface signature of internal waves. This enormous simulation ran for nearly a year on NASA’s Pleiades and Aitken supercomputers. Credit: Nina McCurdy and David Ellsworth/NASA.

    Over the last few years, NASA supercomputing resources have revolutionized Earth science by enabling increasingly realistic global simulations of the atmosphere and the ocean, using two flagship NASA data assimilating models — the Goddard Earth Observing System and Estimating the Circulation and Climate of the Ocean. Now, researchers have combined these models to produce a high-resolution simulation of air-sea interactions around the globe.

    By analyzing the results using advanced visualization tools developed by experts at the NAS facility, scientists are gaining a deeper understanding of how the atmosphere and the ocean interact, and how that interaction influences storm tracks, ocean eddies, and equatorial currents. Insights gained also will help researchers design future Earth-observing satellite missions. Learning more about how all of these complex processes work together helps us understand our planet’s climate and weather, and how both are changing.

    5. Dazzling simulations let you fly through our galaxy in virtual reality
    6
This screenshot from inside a virtual reality headset shows the supermassive black hole at the center of the Milky Way Galaxy. The simulation includes the stars, their orbit lines, stellar winds (shown in red and yellow), and their X-ray emission (shown in blue and cyan) within a few light-years of the black hole. Credit: Christopher Russell/NASA/ The University of Delaware (US)/The Catholic University of America (US).

    NASA’s Chandra X-ray Observatory spent nearly three months peering at the supermassive black hole in the middle of the Milky Way, revealing a nearby reservoir of extremely hot gas.

    National Aeronautics and Space Administration Chandra X-ray telescope(US)

    SGR A* Credit: Pennsylvania State University(US) and National Aeronautics Space Agency(US) Chandra X-ray Observatory (US) .
    To help explain the observations, a team of university researchers developed and ran models of the center of our galaxy on the Pleiades supercomputer [above], simulating the evolution of the black hole and the 25 key stars that orbit it.

    The simulations show how the stars spew materials that flow supersonically into space until they collide. The ensuing shocks create the hot gas that Chandra sees. The model closely matches the observations, giving the researchers confidence that the unobservable features in the simulation are on the right track.

    To showcase their results, the team created the Galactic Center VR app, which includes the black hole, stars, ejected stellar winds, and X-ray emissions. Users can move forward or backward in time over 500 years of evolution and pause the simulation to get an in-depth view.

    6. Innovations in computing are expanding research and improving collaboration
    7
    An illustration showing how data from NASA remote sensing instruments and Earth science models developed through Open Science tools enables access to science by scientists and engineers worldwide. Credit: Sujay Kumar and Barbara Talbott/NASA.

    Among these innovations is NASA’s Modular Supercomputing Facility, managed by the NAS Division at Ames, which uses an environmentally conscious and expandable design to provide power efficiency and cost savings.

    The NAS Division also is expanding its research in machine learning to help improve processing of vast amounts of data from Earth- and space-based instruments, and to explore advances in areas such as pattern recognition and anomaly detection — building tools that will assist NASA in unprecedented ways in future missions.

    Another innovative system is the Science Managed Cloud Environment, or SMCE, recently created by engineers at NASA Goddard. The SMCE team and collaborators developed the NASA Earth Information System, or EIS: a flexible, rapid-response computing capability that leverages the versatility of the Amazon Web Services Cloud to yield a collaborative, Open Science environment. EIS delivers analysis-ready products for use by scientists and non-scientists alike, with pilot studies focused on fire, freshwater, and sea level change.

    For more information about NASA’s virtual exhibit at the International Conference for High Performance Computing, Networking, Storage and Analysis, visit:

    https://www.nas.nasa.gov/sc21

    For more information about supercomputers run by NASA’s High-End Computing Program, visit:

    https://hec.nasa.gov/

    See the full article here .


    Please help promote STEM in your local schools.
    Stem Education Coalition

The National Aeronautics and Space Administration (NASA) is the agency of the United States government that is responsible for the nation’s civilian space program and for aeronautics and aerospace research.

    President Dwight D. Eisenhower established the National Aeronautics and Space Administration (NASA) in 1958 with a distinctly civilian (rather than military) orientation encouraging peaceful applications in space science. The National Aeronautics and Space Act was passed on July 29, 1958, disestablishing NASA’s predecessor, the National Advisory Committee for Aeronautics (NACA). The new agency became operational on October 1, 1958.

    Since that time, most U.S. space exploration efforts have been led by NASA, including the Apollo moon-landing missions, the Skylab space station, and later the Space Shuttle. Currently, NASA is supporting the International Space Station and is overseeing the development of the Orion Multi-Purpose Crew Vehicle and Commercial Crew vehicles. The agency is also responsible for the Launch Services Program (LSP) which provides oversight of launch operations and countdown management for unmanned NASA launches. Most recently, NASA announced a new Space Launch System that it said would take the agency’s astronauts farther into space than ever before and lay the cornerstone for future human space exploration efforts by the U.S.

NASA science is focused on better understanding Earth through the Earth Observing System, advancing heliophysics through the efforts of the Science Mission Directorate’s Heliophysics Research Program, exploring bodies throughout the Solar System with advanced robotic missions such as New Horizons, and researching astrophysics topics, such as the Big Bang, through the Great Observatories [Hubble, Chandra, Spitzer] and associated programs. NASA shares data with various national and international organizations, such as the Greenhouse Gases Observing Satellite [JAXA].

     
  • richardmitnick 9:26 am on October 30, 2021 Permalink | Reply
    Tags: "Taming The Data Deluge", , , , , , Brain imaging neuroscience, , , , , , , , , , , Supercomputing   

    From Kavli MIT Institute For Astrophysics and Space Research : “Taming The Data Deluge” 

    KavliFoundation

    http://www.kavlifoundation.org/institutes

    MIT Kavli Institute for Astrophysics and Space Research.

    From Kavli MIT Institute For Astrophysics and Space Research

    October 29, 2021

    Sandi Miller | Department of Physics

An oncoming tsunami of data threatens to overwhelm huge data-rich research projects in areas ranging from the tiny neutrino to exploding supernovae, as well as the mysteries deep within the brain.

    2
    Left to right: Erik Katsavounidis of MIT’s Kavli Institute, Philip Harris of the Department of Physics, and Song Han of the Department of Electrical Engineering and Computer Science are part of a team from nine institutions that secured $15 million in National Science Foundation funding to set up the Accelerated AI Algorithms for Data-Driven Discovery (A3D3) Institute. Photo: Sandi Miller.

    When LIGO picks up a gravitational-wave signal from a distant collision of black holes and neutron stars, a clock starts ticking for capturing the earliest possible light that may accompany them: time is of the essence in this race.

Caltech/MIT Advanced aLIGO.

    Data collected from electrical sensors monitoring brain activity are outpacing computing capacity. Information from the Large Hadron Collider (LHC)’s smashed particle beams will soon exceed 1 petabit per second.

    To tackle this approaching data bottleneck in real-time, a team of researchers from nine institutions led by The University of Washington (US), including The Massachusetts Institute of Technology (US), has received $15 million in funding to establish the Accelerated AI Algorithms for Data-Driven Discovery (A3D3) Institute. From MIT, the research team includes Philip Harris, assistant professor of physics, who will serve as the deputy director of the A3D3 Institute; Song Han, assistant professor of electrical engineering and computer science, who will serve as the A3D3’s co-PI; and Erik Katsavounidis, senior research scientist with the MIT Kavli Institute for Astrophysics and Space Research.

    Infused with this five-year Harnessing the Data Revolution Big Idea grant, and jointly funded by the Office of Advanced Cyberinfrastructure, A3D3 will focus on three data-rich fields: multi-messenger astrophysics, high-energy particle physics, and brain imaging neuroscience. By enriching AI algorithms with new processors, A3D3 seeks to speed up AI algorithms for solving fundamental problems in collider physics, neutrino physics, astronomy, gravitational-wave physics, computer science, and neuroscience.

    “I am very excited about the new Institute’s opportunities for research in nuclear and particle physics,” says Laboratory for Nuclear Science Director Boleslaw Wyslouch. “Modern particle detectors produce an enormous amount of data, and we are looking for extraordinarily rare signatures. The application of extremely fast processors to sift through these mountains of data will make a huge difference in what we will measure and discover.”

    The seeds of A3D3 were planted in 2017, when Harris and his colleagues at DOE’s Fermi National Accelerator Laboratory (US) and The European Organization for Nuclear Research [Organisation européenne pour la recherche nucléaire] [Europäische Organisation für Kernforschung](CH) [CERN] decided to integrate real-time AI algorithms to process the incredible rates of data at the LHC. Through email correspondence with Han, Harris’ team built a compiler, HLS4ML, that could run an AI algorithm in nanoseconds.

    “Before the development of HLS4ML, the fastest processing that we knew of was roughly a millisecond per AI inference, maybe a little faster,” says Harris. “We realized all the AI algorithms were designed to solve much slower problems, such as image and voice recognition. To get to nanosecond inference timescales, we recognized we could make smaller algorithms and rely on custom implementations with Field Programmable Gate Array (FPGA) processors in an approach that was largely different from what others were doing.”
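To illustrate the “small model plus custom FPGA implementation” idea Harris describes, the sketch below follows the hls4ml package’s published Keras-conversion flow. The layer sizes, output directory, and FPGA part string are placeholders, and function names or arguments may differ between hls4ml versions, so treat this as a hedged outline rather than the team’s actual workflow.

```python
# Sketch of the hls4ml flow: tiny Keras model -> FPGA firmware project.
# Requires: tensorflow, hls4ml (pip install hls4ml). Calls follow the
# package's published tutorials and may vary by version.
import hls4ml
from tensorflow import keras

# A deliberately small network: nanosecond-scale inference relies on models
# compact enough to be unrolled into FPGA logic.
model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    keras.layers.Dense(4, activation="softmax"),
])

# Derive a per-model conversion configuration (numerical precision, reuse factor, ...).
config = hls4ml.utils.config_from_keras_model(model, granularity="model")

# Convert to an HLS project targeting an example Xilinx part (placeholder string).
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls4ml_demo",
    part="xcu250-figd2104-2L-e",
)

hls_model.compile()   # builds a C++ emulation library for quick functional checks
# hls_model.build()   # runs full HLS synthesis (requires Vivado/Vitis installed)
```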

    A few months later, Harris presented their research at a physics faculty meeting, where Katsavounidis became intrigued. Over coffee in Building 7, they discussed combining Harris’ FPGA with Katsavounidis’s use of machine learning for finding gravitational waves. FPGAs and other new processor types, such as graphics processing units (GPUs), accelerate AI algorithms to more quickly analyze huge amounts of data.

    “I had worked with the first FPGAs that were out in the market in the early ’90s and have witnessed first-hand how they revolutionized front-end electronics and data acquisition in big high-energy physics experiments I was working on back then,” recalls Katsavounidis. “The ability to have them crunch gravitational-wave data has been in the back of my mind since joining LIGO over 20 years ago.”

    Two years ago they received their first grant, and the University of Washington’s Shih-Chieh Hsu joined in. The team initiated the Fast Machine Lab, published about 40 papers on the subject, built the group to about 50 researchers, and “launched a whole industry of how to explore a region of AI that has not been explored in the past,” says Harris. “We basically started this without any funding. We’ve been getting small grants for various projects over the years. A3D3 represents our first large grant to support this effort.”

    “What makes A3D3 so special and suited to MIT is its exploration of a technical frontier, where AI is implemented not in high-level software, but rather in lower-level firmware, reconfiguring individual gates to address the scientific question at hand,” says Rob Simcoe, director of MIT Kavli Institute for Astrophysics and Space Research and the Francis Friedman Professor of Physics. “We are in an era where experiments generate torrents of data. The acceleration gained from tailoring reprogrammable, bespoke computers at the processor level can advance real-time analysis of these data to new levels of speed and sophistication.”

    The Huge Data from the Large Hadron Collider

    With data rates already exceeding 500 terabits per second, the LHC processes more data than any other scientific instrument on earth. Its future aggregate data rates will soon exceed 1 petabit per second, the biggest data rate in the world.

    “Through the use of AI, A3D3 aims to perform advanced analyses, such as anomaly detection, and particle reconstruction on all collisions happening 40 million times per second,” says Harris.

    The goal is to find within all of this data a way to identify the few collisions out of the 3.2 billion collisions per second that could reveal new forces, explain how Dark Matter is formed, and complete the picture of how fundamental forces interact with matter. Processing all of this information requires a customized computing system capable of interpreting the collider information within ultra-low latencies.

“The challenge of running this on all of the 100s of terabits per second in real-time is daunting and requires a complete overhaul of how we design and implement AI algorithms,” says Harris. “With large increases in the detector resolution leading to data rates that are even larger, the challenge of finding the one collision, among many, will become even more daunting.”
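For a rough sense of scale, the unit bookkeeping below (my own arithmetic, not from the article) divides the quoted future aggregate rate by the quoted collision rate to get the data volume per bunch crossing.

```python
# Rough unit bookkeeping for the quoted LHC rates (illustrative only).
crossings_per_s = 40e6          # collisions happening 40 million times per second (from the article)
aggregate_bits_per_s = 1e15     # ~1 petabit per second (quoted future aggregate rate)

bits_per_crossing = aggregate_bits_per_s / crossings_per_s
bytes_per_crossing = bits_per_crossing / 8

print(f"{bits_per_crossing:.2e} bits ~= {bytes_per_crossing / 1e6:.1f} MB per crossing")
# -> roughly 3 MB of detector data arriving every 25 nanoseconds,
#    which is why the selection has to happen in ultra-low-latency hardware.
```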

    The Brain and the Universe

Thanks to advances in techniques such as medical imaging and electrical recordings from implanted electrodes, neuroscience is also gathering larger amounts of data on how the brain’s neural networks process responses to stimuli and motor information. A3D3 plans to develop and implement high-throughput and low-latency AI algorithms to process, organize, and analyze massive neural datasets in real time, to probe brain function in order to enable new experiments and therapies.

    With Multi-Messenger Astrophysics (MMA), A3D3 aims to quickly identify astronomical events by efficiently processing data from gravitational waves, gamma-ray bursts, and neutrinos picked up by telescopes and detectors.

The A3D3 team also includes a multi-disciplinary group of 15 other researchers from the project lead, the University of Washington, along with The California Institute of Technology (US), Duke University (US), Purdue University (US), The University of California-San Diego (US), The University of Illinois-Urbana-Champaign (US), The University of Minnesota (US), and The University of Wisconsin-Madison (US). It will include neutrino research at The University of Wisconsin IceCube Neutrino Observatory (US) and The Fermi National Accelerator Laboratory DUNE/LBNF experiment (US) and visible astronomy at The Zwicky Transient Facility (US), and will organize deep-learning workshops and boot camps to train students and researchers on how to contribute to the framework and widen the use of fast AI strategies.

    “We have reached a point where detector network growth will be transformative, both in terms of event rates and in terms of astrophysical reach and ultimately, discoveries,” says Katsavounidis. “‘Fast’ and ‘efficient’ is the only way to fight the ‘faint’ and ‘fuzzy’ that is out there in the universe, and the path for getting the most out of our detectors. A3D3 on one hand is going to bring production-scale AI to gravitational-wave physics and multi-messenger astronomy; but on the other hand, we aspire to go beyond our immediate domains and become the go-to place across the country for applications of accelerated AI to data-driven disciplines.”

    Science paper:
    Hardware-accelerated Inference for Real-Time Gravitational-Wave Astronomy

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Mission Statement

    The mission of the MIT Kavli Institute (MKI) for Astrophysics and Space Research is to facilitate and carry out the research programs of faculty and research staff whose interests lie in the broadly defined area of astrophysics and space research. Specifically, the MKI will

    Provide an intellectual home for faculty, research staff, and students engaged in space- and ground-based astrophysics
    Develop and operate space- and ground-based instrumentation for astrophysics
    Engage in technology development
    Maintain an engineering and technical core capability for enabling and supporting innovative research
    Communicate to students, educators, and the public an understanding of and an appreciation for the goals, techniques and results of MKI’s research.

    The Kavli Foundation, based in Oxnard, California, is dedicated to the goals of advancing science for the benefit of humanity and promoting increased public understanding and support for scientists and their work.

    The Foundation’s mission is implemented through an international program of research institutes, professorships, and symposia in the fields of astrophysics, nanoscience, neuroscience, and theoretical physics as well as prizes in the fields of astrophysics, nanoscience, and neuroscience.

    To date, The Kavli Foundation has made grants to establish Kavli Institutes on the campuses of 20 major universities. In addition to the Kavli Institutes, nine Kavli professorships have been established: three at Harvard University, two at University of California, Santa Barbara, one each at University of California, Los Angeles, University of California, Irvine, Columbia University, Cornell University, and California Institute of Technology.

    The Kavli Institutes:

    The Kavli Foundation’s 20 institutes focus on astrophysics, nanoscience, neuroscience and theoretical physics.

    Astrophysics

    The Kavli Institute for Particle Astrophysics and Cosmology at Stanford University
    The Kavli Institute for Cosmological Physics, University of Chicago
    The Kavli Institute for Astrophysics and Space Research at the Massachusetts Institute of Technology
    The Kavli Institute for Astronomy and Astrophysics at Peking University
    The Kavli Institute for Cosmology at the University of Cambridge
    The Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo

    Nanoscience

    The Kavli Institute for Nanoscale Science at Cornell University
    The Kavli Institute of Nanoscience at Delft University of Technology in the Netherlands
    The Kavli Nanoscience Institute at the California Institute of Technology
    The Kavli Energy NanoSciences Institute at University of California, Berkeley and the Lawrence Berkeley National Laboratory
    The Kavli Institute for NanoScience Discovery at the University of Oxford

    Neuroscience

    The Kavli Institute for Brain Science at Columbia University
    The Kavli Institute for Brain & Mind at the University of California, San Diego
    The Kavli Institute for Neuroscience at Yale University
    The Kavli Institute for Systems Neuroscience at the Norwegian University of Science and Technology
    The Kavli Neuroscience Discovery Institute at Johns Hopkins University
    The Kavli Neural Systems Institute at The Rockefeller University
    The Kavli Institute for Fundamental Neuroscience at the University of California, San Francisco

    Theoretical physics

    Kavli Institute for Theoretical Physics at the University of California, Santa Barbara
    The Kavli Institute for Theoretical Physics China at the University of Chinese Academy of Sciences

     
  • richardmitnick 12:08 pm on October 25, 2021 Permalink | Reply
    Tags: "Astrophysicists Reveal Largest-Ever Suite of Universe Simulations", , , , , , , Supercomputing   

    From Harvard-Smithsonian Center for Astrophysics (US): “Astrophysicists Reveal Largest-Ever Suite of Universe Simulations” 

    From Harvard-Smithsonian Center for Astrophysics (US)

    10.24.21

    Nadia Whitehead
    Public Affairs Officer
    Center for Astrophysics | Harvard & Smithsonian
    nadia.whitehead@cfa.harvard.edu
    617-721-7371

    Anastasia Greenebaum
    Communications Director
    Simons Foundation
    press@simonsfoundation.org
    212-524-6097

    To understand how the universe formed, astronomers have created AbacusSummit, more than 160 simulations of how gravity may have shaped the distribution of dark matter.

    1
    Lucy Reading-Ikkanda/Simons Foundation.

The simulation suite, dubbed AbacusSummit, will be instrumental for extracting secrets of the universe from upcoming surveys of the cosmos, its creators predict. They present AbacusSummit in several papers published this week in Monthly Notices of the Royal Astronomical Society (MNRAS).

AbacusSummit is the product of researchers at the Flatiron Institute Center for Computational Astrophysics (US) (CCA) in New York City and the Center for Astrophysics | Harvard & Smithsonian. Made up of more than 160 simulations, it models how particles in the universe move about due to their gravitational attraction. Such models, known as N-body simulations, capture the behavior of dark matter, a mysterious and invisible component that makes up 27 percent of the universe and interacts only via gravity.

    “This suite is so big that it probably has more particles than all the other N-body simulations that have ever been run combined — though that’s a hard statement to be certain of,” says Lehman Garrison, lead author of one of the new papers and a CCA research fellow.

    Garrison led the development of the AbacusSummit simulations along with graduate student Nina Maksimova and professor of astronomy Daniel Eisenstein, both of the Center for Astrophysics. The simulations ran on the U.S. Department of Energy’s Summit supercomputer at the DOE’s ORNL Leadership Computing Facility (US).

ORNL OLCF IBM AC922 Summit supercomputer, formerly No. 1 on the TOP500.

    Several space surveys will produce maps of the cosmos with unprecedented detail in the coming years. These include the Dark Energy Spectroscopic Instrument (DESI), the Nancy Grace Roman Space Telescope, the Vera C. Rubin Observatory and the Euclid spacecraft. One of the goals of these big-budget missions is to improve estimations of the cosmic and astrophysical parameters that determine how the universe behaves and how it looks.

    National Science Foundation(US) NOIRLab (US) NOAO Kitt Peak National Observatory (US) on Kitt Peak of the Quinlan Mountains in the Arizona-Sonoran Desert on the Tohono O’odham Nation, 88 kilometers (55 mi) west-southwest of Tucson, Arizona, Altitude 2,096 m (6,877 ft), annotated.

    Scientists will make those improved estimations by comparing the new observations to computer simulations of the universe with different values for the various parameters — such as the nature of the dark energy pulling the universe apart.

    “The coming generation of cosmological surveys will map the universe in great detail and explore a wide range of cosmological questions,” says Eisenstein, a co-author on the new MNRAS papers. “But leveraging this opportunity requires a new generation of ambitious numerical simulations. We believe that AbacusSummit will be a bold step for the synergy between computation and experiment.”

    The decade-long project was daunting. N-body calculations — which attempt to compute the movements of objects, like planets, interacting gravitationally — have been a foremost challenge in the field of physics since the days of Isaac Newton. The trickiness comes from each object interacting with every other object, no matter how far away. That means that as you add more things, the number of interactions rapidly increases.

    There is no general solution to the N-body problem for three or more massive bodies. The calculations available are simply approximations. A common approach is to freeze time, calculate the total force acting on each object, then nudge each one based on the net force it experiences. Time is then moved forward slightly, and the process repeats.
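The freeze-sum-nudge-advance cycle described above can be written down directly for a small number of bodies. Below is a minimal direct-summation sketch in that spirit (it is not the Abacus code); the units, softening length, and step size are arbitrary illustrative choices.

```python
import numpy as np

def step(pos, vel, mass, dt, G=1.0, soft=1e-3):
    """One 'freeze time, sum forces, nudge everything' update (direct summation)."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]                               # vectors from body i to every other body
        r2 = (d * d).sum(axis=1) + soft**2             # softened squared distances
        r2[i] = np.inf                                 # no self-force
        acc[i] = G * (mass[:, None] * d / r2[:, None]**1.5).sum(axis=0)
    vel = vel + acc * dt                               # nudge velocities by the net force
    pos = pos + vel * dt                               # then advance positions and repeat
    return pos, vel

# Tiny example: 100 random bodies evolved for 1000 small steps.
rng = np.random.default_rng(0)
pos = rng.standard_normal((100, 3))
vel = np.zeros((100, 3))
mass = np.full(100, 1.0 / 100)
for _ in range(1000):
    pos, vel = step(pos, vel, mass, dt=1e-3)
```

The cost of this naive approach grows with the square of the number of bodies, which is exactly why codes like Abacus need cleverer algorithms at trillion-particle scales.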

    Using that approach, AbacusSummit handled colossal numbers of particles thanks to clever code, a new numerical method and lots of computing power. The Summit supercomputer was the world’s fastest at the time the team ran the calculations; it is still the fastest computer in the U.S.

The team designed the codebase for AbacusSummit, called Abacus, to take full advantage of Summit’s parallel processing power, whereby multiple calculations can run simultaneously. In particular, Summit boasts lots of graphics processing units, or GPUs, that excel at parallel processing.

    Running N-body calculations using parallel processing requires careful algorithm design because an entire simulation requires a substantial amount of memory to store. That means Abacus can’t just make copies of the simulation for different nodes of the supercomputer to work on. The code instead divides each simulation into a grid. An initial calculation provides a fair approximation of the effects of distant particles at any given point in the simulation (which play a much smaller role than nearby particles). Abacus then groups nearby cells and splits them off so that the computer can work on each group independently, combining the approximation of distant particles with precise calculations of nearby particles.
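As a cartoon of the near/far decomposition described above, the toy below bins particles into grid cells, sums neighboring cells exactly, and replaces everything farther away with each cell’s total mass at its center of mass. The real Abacus far field uses a far more sophisticated expansion; this sketch only illustrates the splitting idea, and all names and parameters are invented for the example.

```python
import numpy as np
from collections import defaultdict

def potential_near_far(pos, mass, box=1.0, ncell=4, G=1.0, soft=1e-3):
    """Toy near/far split: exact sums for neighboring cells, cell centers of mass beyond."""
    cells = np.floor(pos / box * ncell).astype(int)        # 3D cell index of every particle
    members = defaultdict(list)
    for i, c in enumerate(map(tuple, cells)):
        members[c].append(i)
    # Far field: one (total mass, center of mass) summary per occupied cell.
    summary = {c: (mass[idx].sum(), np.average(pos[idx], axis=0, weights=mass[idx]))
               for c, idx in members.items()}
    phi = np.zeros(len(mass))
    for i in range(len(mass)):
        ci = tuple(cells[i])
        near = [c for c in members if all(abs(a - b) <= 1 for a, b in zip(c, ci))]
        for c in near:                                      # exact pairwise sum nearby
            for j in members[c]:
                if j != i:
                    phi[i] -= G * mass[j] / (np.linalg.norm(pos[j] - pos[i]) + soft)
        for c, (m, com) in summary.items():                 # crude approximation for the rest
            if c not in near:
                phi[i] -= G * m / (np.linalg.norm(com - pos[i]) + soft)
    return phi

rng = np.random.default_rng(1)
points = rng.random((200, 3))                               # positions in a unit box
print(potential_near_far(points, np.full(200, 1.0 / 200))[:3])
```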

    “The Abacus algorithm is well matched to the capabilities of modern supercomputers, as it provides a very regular pattern of computation for the massive parallelism of GPU co-processors,” Maksimova says.

    Thanks to its design, Abacus achieved very high speeds, updating 70 million particles per second per node of the Summit supercomputer, while also performing analysis of the simulations as they ran. Each particle represents a clump of dark matter with 3 billion times the mass of the sun.

    “Our vision was to create this code to deliver the simulations that are needed for this particular new brand of galaxy survey,” says Garrison. “We wrote the code to do the simulations much faster and much more accurate than ever before.”

    Eisenstein, who is a member of the DESI collaboration — which recently began its survey to map an unprecedented fraction of the universe — says he is eager to use Abacus in the future.

    “Cosmology is leaping forward because of the multidisciplinary fusion of spectacular observations and state-of-the-art computing,” he says. “The coming decade promises to be a marvelous age in our study of the historical sweep of the universe.”

    Additional co-creators of Abacus and AbacusSummit include Sihan Yuan of Stanford University (US), Philip Pinto of The University of Arizona (US), Sownak Bose of Durham University (UK) and Center for Astrophysics researchers Boryana Hadzhiyska, Thomas Satterthwaite and Douglas Ferrer. The simulations ran on the Summit supercomputer under an Advanced Scientific Computing Research Leadership Computing Challenge allocation.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    The Harvard-Smithsonian Center for Astrophysics (US) combines the resources and research facilities of the Harvard College Observatory(US) and the Smithsonian Astrophysical Observatory(US) under a single director to pursue studies of those basic physical processes that determine the nature and evolution of the universe. The Smithsonian Astrophysical Observatory(US) is a bureau of the Smithsonian Institution(US), founded in 1890. The Harvard College Observatory, founded in 1839, is a research institution of the Faculty of Arts and Sciences, Harvard University(US), and provides facilities and substantial other support for teaching activities of the Department of Astronomy.

    Founded in 1973 and headquartered in Cambridge, Massachusetts, the CfA leads a broad program of research in astronomy, astrophysics, Earth and space sciences, as well as science education. The CfA either leads or participates in the development and operations of more than fifteen ground- and space-based astronomical research observatories across the electromagnetic spectrum, including the forthcoming Giant Magellan Telescope(CL) and the Chandra X-ray Observatory(US), one of NASA’s Great Observatories.

    Hosting more than 850 scientists, engineers, and support staff, the CfA is among the largest astronomical research institutes in the world. Its projects have included Nobel Prize-winning advances in cosmology and high energy astrophysics, the discovery of many exoplanets, and the first image of a black hole. The CfA also serves a major role in the global astrophysics research community: the CfA’s Astrophysics Data System(ADS)(US), for example, has been universally adopted as the world’s online database of astronomy and physics papers. Known for most of its history as the “Harvard-Smithsonian Center for Astrophysics”, the CfA rebranded in 2018 to its current name in an effort to reflect its unique status as a joint collaboration between Harvard University and the Smithsonian Institution. The CfA’s current Director (since 2004) is Charles R. Alcock, who succeeds Irwin I. Shapiro (Director from 1982 to 2004) and George B. Field (Director from 1973 to 1982).

    The Center for Astrophysics | Harvard & Smithsonian is not formally an independent legal organization, but rather an institutional entity operated under a Memorandum of Understanding between Harvard University and the Smithsonian Institution. This collaboration was formalized on July 1, 1973, with the goal of coordinating the related research activities of the Harvard College Observatory (HCO) and the Smithsonian Astrophysical Observatory (SAO) under the leadership of a single Director, and housed within the same complex of buildings on the Harvard campus in Cambridge, Massachusetts. The CfA’s history is therefore also that of the two fully independent organizations that comprise it. With a combined lifetime of more than 300 years, HCO and SAO have been host to major milestones in astronomical history that predate the CfA’s founding.

    History of the Smithsonian Astrophysical Observatory (SAO)

    Samuel Pierpont Langley, the third Secretary of the Smithsonian, founded the Smithsonian Astrophysical Observatory on the south yard of the Smithsonian Castle (on the U.S. National Mall) on March 1,1890. The Astrophysical Observatory’s initial, primary purpose was to “record the amount and character of the Sun’s heat”. Charles Greeley Abbot was named SAO’s first director, and the observatory operated solar telescopes to take daily measurements of the Sun’s intensity in different regions of the optical electromagnetic spectrum. In doing so, the observatory enabled Abbot to make critical refinements to the Solar constant, as well as to serendipitously discover Solar variability. It is likely that SAO’s early history as a solar observatory was part of the inspiration behind the Smithsonian’s “sunburst” logo, designed in 1965 by Crimilda Pontes.

    In 1955, the scientific headquarters of SAO moved from Washington, D.C. to Cambridge, Massachusetts to affiliate with the Harvard College Observatory (HCO). Fred Lawrence Whipple, then the chairman of the Harvard Astronomy Department, was named the new director of SAO. The collaborative relationship between SAO and HCO therefore predates the official creation of the CfA by 18 years. SAO’s move to Harvard’s campus also resulted in a rapid expansion of its research program. Following the launch of Sputnik (the world’s first human-made satellite) in 1957, SAO accepted a national challenge to create a worldwide satellite-tracking network, collaborating with the United States Air Force on Project Space Track.

    With the creation of National Aeronautics and Space Administration(US) the following year and throughout the space race, SAO led major efforts in the development of orbiting observatories and large ground-based telescopes, laboratory and theoretical astrophysics, as well as the application of computers to astrophysical problems.

    History of Harvard College Observatory (HCO)

    Partly in response to renewed public interest in astronomy following the 1835 return of Halley’s Comet, the Harvard College Observatory was founded in 1839, when the Harvard Corporation appointed William Cranch Bond as an “Astronomical Observer to the University”. For its first four years of operation, the observatory was situated at the Dana-Palmer House (where Bond also resided) near Harvard Yard, and consisted of little more than three small telescopes and an astronomical clock. In his 1840 book recounting the history of the college, then Harvard President Josiah Quincy III noted that “…there is wanted a reflecting telescope equatorially mounted…”. This telescope, the 15-inch “Great Refractor”, opened seven years later (in 1847) at the top of Observatory Hill in Cambridge (where it still exists today, housed in the oldest of the CfA’s complex of buildings). The telescope was the largest in the United States from 1847 until 1867. William Bond and pioneer photographer John Adams Whipple used the Great Refractor to produce the first clear Daguerrotypes of the Moon (winning them an award at the 1851 Great Exhibition in London). Bond and his son, George Phillips Bond (the second Director of HCO), used it to discover Saturn’s 8th moon, Hyperion (which was also independently discovered by William Lassell).

    Under the directorship of Edward Charles Pickering from 1877 to 1919, the observatory became the world’s major producer of stellar spectra and magnitudes, established an observing station in Peru, and applied mass-production methods to the analysis of data. It was during this time that HCO became host to a series of major discoveries in astronomical history, powered by the Observatory’s so-called “Computers” (women hired by Pickering as skilled workers to process astronomical data). These “Computers” included Williamina Fleming; Annie Jump Cannon; Henrietta Swan Leavitt; Florence Cushman; and Antonia Maury, all widely recognized today as major figures in scientific history. Henrietta Swan Leavitt, for example, discovered the so-called period-luminosity relation for Classical Cepheid variable stars, establishing the first major “standard candle” with which to measure the distance to galaxies. Now called “Leavitt’s Law”, the discovery is regarded as one of the most foundational and important in the history of astronomy; astronomers like Edwin Hubble, for example, would later use Leavitt’s Law to establish that the Universe is expanding, the primary piece of evidence for the Big Bang model.

    Upon Pickering’s retirement in 1921, the Directorship of HCO fell to Harlow Shapley (a major participant in the so-called “Great Debate” of 1920). This era of the observatory was made famous by the work of Cecelia Payne-Gaposchkin, who became the first woman to earn a Ph.D. in astronomy from Radcliffe College (a short walk from the Observatory). Payne-Gapochkin’s 1925 thesis proposed that stars were composed primarily of hydrogen and helium, an idea thought ridiculous at the time. Between Shapley’s tenure and the formation of the CfA, the observatory was directed by Donald H. Menzel and then Leo Goldberg, both of whom maintained widely recognized programs in solar and stellar astrophysics. Menzel played a major role in encouraging the Smithsonian Astrophysical Observatory to move to Cambridge and collaborate more closely with HCO.

    Joint history as the Center for Astrophysics (CfA)

    The collaborative foundation for what would ultimately give rise to the Center for Astrophysics began with SAO’s move to Cambridge in 1955. Fred Whipple, who was already chair of the Harvard Astronomy Department (housed within HCO since 1931), was named SAO’s new director at the start of this new era; an early test of the model for a unified Directorship across HCO and SAO. The following 18 years would see the two independent entities merge ever closer together, operating effectively (but informally) as one large research center.

    This joint relationship was formalized as the new Harvard–Smithsonian Center for Astrophysics on July 1, 1973. George B. Field, then affiliated with UC Berkeley(US), was appointed as its first Director. That same year, a new astronomical journal, the CfA Preprint Series was created, and a CfA/SAO instrument flying aboard Skylab discovered coronal holes on the Sun. The founding of the CfA also coincided with the birth of X-ray astronomy as a new, major field that was largely dominated by CfA scientists in its early years. Riccardo Giacconi, regarded as the “father of X-ray astronomy”, founded the High Energy Astrophysics Division within the new CfA by moving most of his research group (then at American Sciences and Engineering) to SAO in 1973. That group would later go on to launch the Einstein Observatory (the first imaging X-ray telescope) in 1976, and ultimately lead the proposals and development of what would become the Chandra X-ray Observatory. Chandra, the second of NASA’s Great Observatories and still the most powerful X-ray telescope in history, continues operations today as part of the CfA’s Chandra X-ray Center. Giacconi would later win the 2002 Nobel Prize in Physics for his foundational work in X-ray astronomy.

    Shortly after the launch of the Einstein Observatory, the CfA’s Steven Weinberg won the 1979 Nobel Prize in Physics for his work on electroweak unification. The following decade saw the start of the landmark CfA Redshift Survey (the first attempt to map the large scale structure of the Universe), as well as the release of the Field Report, a highly influential Astronomy & Astrophysics Decadal Survey chaired by the outgoing CfA Director George Field. He would be replaced in 1982 by Irwin Shapiro, who during his tenure as Director (1982 to 2004) oversaw the expansion of the CfA’s observing facilities around the world.

    CfA-led discoveries throughout this period include canonical work on Supernova 1987A, the “CfA2 Great Wall” (then the largest known coherent structure in the Universe), the best-yet evidence for supermassive black holes, and the first convincing evidence for an extrasolar planet.

    The 1990s also saw the CfA unwittingly play a major role in the history of computer science and the internet: in 1990, SAO developed SAOImage, one of the world’s first X11-based applications made publicly available (its successor, DS9, remains the most widely used astronomical FITS image viewer worldwide). During this time, scientists at the CfA also began work on what would become the Astrophysics Data System (ADS), one of the world’s first online databases of research papers. By 1993, the ADS was running the first routine transatlantic queries between databases, a foundational aspect of the internet today.

    The CfA Today

    Research at the CfA

Charles Alcock, known for a number of major works related to massive compact halo objects, was named the third director of the CfA in 2004. Today Alcock oversees one of the largest and most productive astronomical institutes in the world, with more than 850 staff and an annual budget in excess of $100M. The Harvard Department of Astronomy, housed within the CfA, maintains a continual complement of approximately 60 Ph.D. students, more than 100 postdoctoral researchers, and roughly 25 undergraduate majors in astronomy and astrophysics from Harvard College. SAO, meanwhile, hosts a long-running and highly rated REU Summer Intern program as well as many visiting graduate students. The CfA estimates that roughly 10% of the professional astrophysics community in the United States spent at least a portion of their career or education there.

    The CfA is either a lead or major partner in the operations of the Fred Lawrence Whipple Observatory, the Submillimeter Array, MMT Observatory, the South Pole Telescope, VERITAS, and a number of other smaller ground-based telescopes. The CfA’s 2019-2024 Strategic Plan includes the construction of the Giant Magellan Telescope as a driving priority for the Center.

    CFA Harvard Smithsonian Submillimeter Array on MaunaKea, Hawaii, USA, Altitude 4,205 m (13,796 ft).

South Pole Telescope SPTPOL. The SPT collaboration is made up of over a dozen (mostly North American) institutions, including The University of Chicago (US); The University of California Berkeley (US); Case Western Reserve University (US); Harvard/Smithsonian Astrophysical Observatory (US); The University of Colorado, Boulder; McGill University (CA); The University of Illinois, Urbana-Champaign; The University of California, Davis; Ludwig Maximilians Universität München (DE); DOE’s Argonne National Laboratory; and The National Institute of Standards and Technology. It is funded by the National Science Foundation (US).

    Along with the Chandra X-ray Observatory, the CfA plays a central role in a number of space-based observing facilities, including the recently launched Parker Solar Probe, Kepler Space Telescope, the Solar Dynamics Observatory (SDO), and HINODE. The CfA, via the Smithsonian Astrophysical Observatory, recently played a major role in the Lynx X-ray Observatory, a NASA-Funded Large Mission Concept Study commissioned as part of the 2020 Decadal Survey on Astronomy and Astrophysics (“Astro2020”). If launched, Lynx would be the most powerful X-ray observatory constructed to date, enabling order-of-magnitude advances in capability over Chandra.

    NASA Parker Solar Probe Plus named to honor Pioneering Physicist Eugene Parker.

    SAO is one of the 13 stakeholder institutes for the Event Horizon Telescope Board, and the CfA hosts its Array Operations Center. In 2019, the project revealed the first direct image of a black hole.

    The result is widely regarded as a triumph not only of observational radio astronomy, but of its intersection with theoretical astrophysics. Union of the observational and theoretical subfields of astrophysics has been a major focus of the CfA since its founding.

    In 2018, the CfA rebranded, changing its official name to the “Center for Astrophysics | Harvard & Smithsonian” in an effort to reflect its unique status as a joint collaboration between Harvard University and the Smithsonian Institution. Today, the CfA receives roughly 70% of its funding from NASA, 22% from Smithsonian federal funds, and 4% from the National Science Foundation. The remaining 4% comes from contributors including the United States Department of Energy, the Annenberg Foundation, as well as other gifts and endowments.

     
  • richardmitnick 3:52 pm on October 21, 2021 Permalink | Reply
    Tags: "What happens when a meteor hits the atmosphere?", , Astronomers are leaps and bounds ahead of where they were 20 years ago in terms being able to model meteor ablation., , Every second millions of pieces of dirt that are smaller than a grain of sand strike Earth's upper atmosphere., Meteor ablation physics, Meteor composition helps astronomers characterize the space environment of our solar system., Meteors play an important role in upper atmospheric science not just for the Earth but for other planets as well., Scientists also track with radar the plasma generated by meteors., Scientists are using supercomputers to help understand how tiny meteors-invisible to the naked eye-liberate electrons that can be detected by radar., Supercomputing,   

    From Texas Advanced Computing Center (US) : “What happens when a meteor hits the atmosphere?” 

    From Texas Advanced Computing Center (US)

    October 21, 2021
    Jorge Salazar, Texas Advanced Computing Center

    1
    XSEDE Stampede2 [below] simulations are helping reveal the physics of what happens when a meteor strikes the atmosphere. Credit: Jacek Halicki/CC BY-SA 4.0.

    In the heavens above, it’s raining dirt.

Every second, millions of pieces of dirt smaller than a grain of sand strike Earth’s upper atmosphere. At about 100 kilometers altitude, these bits of dust, mainly debris from asteroid collisions, zing through the sky at 10 to 100 times the speed of a bullet, vaporizing as they go. The bigger ones make streaks in the sky, meteors that take our breath away.

Scientists are using supercomputers to help understand how tiny meteors, invisible to the naked eye, liberate electrons that can be detected by radar. These radar echoes can characterize a meteor’s speed, direction and rate of deceleration with high precision, allowing its origin to be determined. Because this falling space dust helps seed rain-making clouds, this basic research on meteors will help scientists more fully understand the chemistry of Earth’s atmosphere. What’s more, meteor composition helps astronomers characterize the space environment of our solar system.

Meteors play an important role in upper atmospheric science, not just for the Earth but for other planets as well. They allow scientists to diagnose what’s in the air using pulsed laser remote sensing lidar, which bounces off meteor dust to reveal the temperature, density, and winds of the upper atmosphere.

Scientists also track with radar the plasma generated by meteors, determining how fast winds are moving in the upper atmosphere by how fast the plasma is pushed around. It’s a region that’s impossible to study with satellites, as atmospheric drag at these altitudes would cause a spacecraft to re-enter the atmosphere.

The meteor research was published in June 2021 in the American Geophysical Union’s Journal of Geophysical Research: Space Physics.

    In it, lead author Glenn Sugar of Johns Hopkins University (US) developed computer simulations to model the physics of what happens when a meteor hits the atmosphere. The meteor heats up and sheds material at hypersonic speeds in a process called ablation. The shed material slams into atmospheric molecules and turns into glowing plasma.

    “What we’re trying to do with the simulations of the meteors is mimic that very complex process of ablation, to see if we understand the physics going on; and to also develop the ability to interpret high-resolution observations of meteors, primarily radar observations of meteors,” said study co-author Meers Oppenheim, professor of Astronomy at Boston University (US).

Large radar dishes, such as the iconic but now defunct Arecibo radar telescope, have recorded multiple meteors per second in a tiny patch of sky.

NAIC Arecibo Observatory (US), operated by University of Central Florida (US), Yang Enterprises (US) and Ana G. Méndez University [Universidad Ana G. Méndez, Recinto de Cupey] (PR), Altitude 497 m (1,631 ft). The telescope has since collapsed.

    According to Oppenheim, this means the Earth is getting hit by millions and millions of meteors every second.

    2
    Representative plasma frequency distributions used in meteor ablation simulations. Credit: Sugar et al.

    “Interpreting those measurements has been tricky,” he said. “Knowing what we’re looking at when we see these measurements is not so easy to understand.”

    The simulations in the paper basically set up a box that represents a chunk of atmosphere. In the middle of the box, a tiny meteor is placed, spewing out atoms. The particle-in-cell, finite-difference time-domain simulations were used to generate density distributions of the plasma generated by meteor atoms as their electrons are stripped off in collisions with air molecules.
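To picture that setup, the toy sketch below (my own illustration, with arbitrary numbers) mimics one ingredient of such a run: particles stream away from a central “meteoroid” and their positions are deposited onto a grid to build a density distribution. It is a cartoon of the particle-to-grid deposition step only; the actual particle-in-cell, finite-difference time-domain simulations also model the ionizing collisions and solve for the electromagnetic fields acting back on the plasma.

```python
import numpy as np

rng = np.random.default_rng(42)
N, NG, L, dt, steps = 5000, 64, 1.0, 1e-3, 200

# All atoms start at the "meteoroid" in the box center with random outward velocities.
pos = np.full((N, 2), L / 2)
vel = rng.normal(scale=1.0, size=(N, 2))

density = np.zeros((NG, NG))          # accumulated deposits over the whole run
for _ in range(steps):
    pos += vel * dt                                   # ballistic streaming (no fields in this toy)
    inside = np.all((pos >= 0) & (pos < L), axis=1)   # ignore atoms that left the box
    ix = (pos[inside] / L * NG).astype(int)           # nearest-grid-point deposition
    np.add.at(density, (ix[:, 0], ix[:, 1]), 1.0)

print("peak cell count:", density.max(), " total deposits:", density.sum())
```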

    “Radars are really sensitive to free electrons,” Oppenheim explained. “You make a big, conical plasma that develops immediately in front of the meteoroid and then gets swept out behind the meteoroid. That then is what the radar observes. We want to be able to go from what the radar has observed back to how big that meteoroid is. The simulations allow us to reverse engineer that.”

    The goal is to be able to look at the signal strength of radar observations and be able to get physical characteristics on the meteor, such as size and composition.

    “Up to now we’ve only had very crude estimates of that. The simulations allow us to go beyond the simple crude estimates,” Oppenheim said.

    “Analytical theory works really well when you can say, ‘Okay, this single phenomenon is happening, independently of these other phenomena.’ But when it’s all happening at once, it becomes so messy. Simulations become the best tool,” Oppenheim said.

    Oppenheim was awarded supercomputer time by the Extreme Science and Engineering Discovery Environment (XSEDE) on TACC’s Stampede2 supercomputer [below] for the meteor simulations.

    “Now we’re really able to use the power of Stampede2—these giant supercomputers—to evaluate meteor ablation in incredible detail,” said Oppenheim. “XSEDE made this research possible by making it easy for me, the students, and research associates to take advantage of the supercomputers.”

    “The systems are well run,” he added. “We use many mathematical packages and data storage packages. They’re all pre-compiled and ready for us to use on XSEDE. They also have good documentation. And the XSEDE staff has been very good. When we run into a bottleneck or hurdle, they’re very helpful. It’s been a terrific asset to have.”

Astronomers are leaps and bounds ahead of where they were 20 years ago in terms of being able to model meteor ablation. Oppenheim referred to a 2020 study [Journal of Geophysical Research: Space Physics] led by Boston University undergraduate Gabrielle Guttormsen that simulates tiny meteor ablation to see how fast it heats up and how much material bubbles away.

    Meteor ablation physics is very hard to do with pen and paper calculations, because meteors are incredibly inhomogeneous, said Oppenheim. “You’re essentially modeling explosions. All this physics is happening in milliseconds, hundreds of milliseconds for the bigger ones, and for the bolides, the giant fireballs that can last a few seconds, we’re talking seconds. They’re explosive events.”

    Oppenheim’s team models ablation all the way from picoseconds, which is the time scale of the meteor disintegrating and the atoms interacting when the air molecules slam into them. The meteors are often traveling at ferocious speeds of 50 kilometers a second or even up to 70 kilometers a second.

Oppenheim outlined three different types of simulations he’s conducting to attack the meteor ablation problem. First, he uses molecular dynamics, which looks at individual atoms as the air molecules slam into the small particles at picosecond time resolution.

    Next, he uses a different simulator to watch what happens as those molecules then fly away, and then the independent molecules slam into the air molecules and become a plasma with electromagnetic radiation. Finally, he takes that plasma and launches a virtual radar at it, listening for the echoes there.

So far, he hasn’t been able to combine these three simulations into one. It’s what he describes as a ‘stiff problem,’ with too many timescales for today’s technology to handle in one self-consistent simulation.

    Oppenheim said he plans to apply for supercomputer time on TACC’s NSF-funded Frontera supercomputer [below], the fastest academic supercomputer on the planet.

    “Stampede2 is good for lots of smaller test runs, but if you have something really massive, Frontera is meant for that,” he said.

    Said Oppenheim: “Supercomputers give scientists the power to investigate in detail the real physical processes, not simplified toy models. They’re ultimately a tool for numerically testing ideas and coming to a better understanding of the nature of meteor physics and everything in the universe.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Texas Advanced Computing Center (TACC) (US) designs and operates some of the world’s most powerful computing resources. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies.

    TACC Frontera Dell EMC supercomputer fastest at any university.

     
  • richardmitnick 9:48 am on October 8, 2021 Permalink | Reply
    Tags: "Modernizing Workflow Analysis to Assist in Supercomputer Procurements", , , Supercomputing   

    From DOE’s Exascale Computing Project (US): “Modernizing Workflow Analysis to Assist in Supercomputer Procurements” 

    From DOE’s Exascale Computing Project (US)

    October 6, 2021
    Rob Farber

    It is well known in the high-performance computing (HPC) community that many (perhaps most) HPC workloads exhibit dynamic performance envelopes that can stress the memory, compute, network, and storage capabilities of modern supercomputers. Optimizing HPC workloads to run efficiently on existing hardware is challenging, but quantifying their performance envelopes well enough to predict how they will perform on new system architectures is even more challenging, albeit essential. This predictive analysis is beneficial because it helps each data center’s procurement team identify the machines and system architectures that will deliver the most performance for the production workloads at their data center. However, once a supercomputer is installed, configured, made available to users, and benchmarked, it is too late to consider fundamental architectural changes.

    The goal of the Exascale Computing Project (ECP) hardware evaluation (HE) group is to modernize the metrics and predictive analysis to guide US Department of Energy (DOE) supercomputer procurements. Scott Pakin, the ECP HE lead at DOE’s Los Alamos National Laboratory (US), notes, “Our main customer is the DOE facilities, who consider our work to be very valuable in determining the types of machines to be procured and configured. Our work can also be used by application developers seeking to understand the performance characteristics of their codes.”

    Addressing the Complexity of Modern System Procurements

    Many modern supercomputers now contain both CPUs and GPUs, which have their own separate memory systems and run according to different computational models. CPUs, for example, are general-purpose multiple instruction, multiple data (MIMD) processing elements in which each processor core can run a separate task or instruction stream. GPUs, on the other hand, use a single instruction, multiple thread (SIMT) execution model in which groups of threads execute in lockstep, which favors regular, data-parallel work. Some applications require fine-grained MIMD processing, which means they can run efficiently only on CPUs, whereas others can run efficiently on both CPUs and GPUs, and procurement teams must account for this. However, future systems could contain devices that run according to non-von Neumann execution models. Potential devices include coarse-grained reconfigurable arrays and future artificial intelligence accelerators.
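    The distinction is easiest to see in code. The Python sketch below is only an analogy (process-based task parallelism standing in for MIMD, a lockstep NumPy array operation standing in for SIMT); it is not how DOE applications are written, but it illustrates why divergent, branch-heavy work maps naturally onto CPU cores while regular data-parallel work maps onto GPUs.

```python
"""Toy contrast between MIMD-style task parallelism (independent CPU processes)
and a SIMT-like, lockstep data-parallel operation (here a NumPy array op).
An analogy for illustration only, not a model of any DOE application."""
from concurrent.futures import ProcessPoolExecutor

import numpy as np


def independent_task(seed: int) -> float:
    """Each worker follows its own instruction stream and may branch
    differently from its neighbors -- the hallmark of MIMD execution."""
    rng = np.random.default_rng(seed)
    data = rng.random(100_000)
    return float(data.mean()) if seed % 2 else float(data.std())


def main() -> None:
    # MIMD analogy: separate processes, each running potentially divergent control flow.
    with ProcessPoolExecutor(max_workers=4) as pool:
        mimd_results = list(pool.map(independent_task, range(8)))

    # SIMT analogy: one operation applied in lockstep across a large, regular array,
    # the kind of branch-free, data-parallel work that maps well onto GPUs.
    matrix = np.arange(1_000_000, dtype=np.float64).reshape(1000, 1000)
    row_sums = matrix.sum(axis=1)

    print(mimd_results[:3], row_sums[:3])


if __name__ == "__main__":
    main()
```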

    The ECP must address these and other complexities. For this reason, the HE portfolio focuses on integration with facilities to answer the following questions.

    How well can ECP applications expect to perform on future hardware?
    Where should facilities focus their efforts in terms of helping applications exploit upcoming hardware?
    What hardware alternatives (e.g., node architectures, memory layout, network topology) would most benefit DOE applications?

    Experience has shown that HE often serves as a bridge between application development (AD) and the facilities: AD is focused more on “here and now” performance, whereas HE takes a more future-looking approach that can provide advance information to facilities. Nick Wright, the Advanced Technology group lead and chief architect at DOE’s National Energy Research Scientific Computing Center (NERSC) (US), notes, “Along with helping us define the characteristics of our production workload in a manner that is useful for future hardware procurements, the ECP HE effort has also provided useful information to our application teams to help optimize our workloads.”

    Evaluating CPU/GPU Data Movement

    Modern GPUs support two mechanisms for moving data between CPU memory and GPU memory. Programmer-managed memory requires that the programmer manually allocate memory and explicitly specify each data movement in terms of contiguous blocks of data on each device. The relatively recent addition of hardware-supported unified memory means that pages of data can be moved between devices automatically on demand, without explicit programmer intervention. In this way, unified memory can support the movement of complex and sparse data structures. (For more information see https://developer.nvidia.com/blog/unified-memory-cuda-beginners/.)

    Programmer-managed memory is more difficult to use but, employed carefully, can deliver superior performance because the programmer can explicitly specify fast bulk data transfers and even choreograph the asynchronous overlap of computation and data movement. In contrast, unified memory relies on “smarts” in the hardware (implemented via the memory management unit, or MMU) to determine when pages of data are needed by either the CPU or the GPU. Sometimes the hardware cannot minimize time-consuming data movement as well as a knowledgeable programmer can, but this is not always the case. Overall, the ease of use and benefits of unified memory are hard to deny, especially because it is well suited to complicated data structures and access patterns for which even the programmer cannot determine the best way to transfer data between devices.

    Programmer-managed and unified memory are not mutually exclusive. In fact, it is reasonable for an application to use programmer-managed memory to transfer regular data structures (like a matrix) and unified memory to transfer irregular data structures (like an in-memory database or a graph). Scott Pakin notes that the HE team investigated this use case and found that it can lead to excessive data movement, as shown in Figure 3. Part of the reason is that on current GPUs, memory explicitly allocated on the GPU takes precedence over unified memory that can migrate between the CPU and GPU, even if the latter is more critical to an application’s performance. As a result, unified memory can be repeatedly evicted from the GPU to the CPU to make room, then brought back as soon as it is needed again.

    Figure 3: Identification of excess data movement between the CPU and GPU memory systems. (Source: the ECP.)
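    A back-of-the-envelope model makes the thrashing easy to see. The sketch below is a toy accounting exercise under the assumptions stated in its comments (capacities, working-set sizes, and the eviction policy are invented for illustration); it is not one of the HE team’s tools, but it shows how quickly repeated evictions turn into avoidable CPU/GPU traffic.

```python
"""Toy accounting of unified-memory eviction when explicitly allocated GPU
buffers take precedence over managed pages. All sizes, the access pattern,
and the eviction policy are invented for illustration."""
GPU_CAPACITY_GB = 16          # total GPU memory (assumed)
EXPLICIT_GB = 12              # programmer-managed buffers resident on the GPU
MANAGED_WORKING_SET_GB = 8    # unified-memory data the kernels also touch each pass
PASSES = 100                  # number of sweeps the application makes over its data

room_for_managed = max(GPU_CAPACITY_GB - EXPLICIT_GB, 0)
resident = min(MANAGED_WORKING_SET_GB, room_for_managed)
evicted_per_pass = MANAGED_WORKING_SET_GB - resident  # pages pushed back to the CPU

# Every pass re-migrates the evicted pages CPU -> GPU and evicts them again
# afterwards, so the same bytes cross the interconnect over and over.
excess_traffic_gb = 2 * evicted_per_pass * PASSES
print(f"{excess_traffic_gb} GB of avoidable CPU/GPU traffic over {PASSES} passes")
```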

    Investigations like this provide three main benefits:

    - inform facilities, AD, and software technology about the causes of excess data movement in CPU/GPU interactions;
    - provide new and enhanced tools that relate excess data movement to application data structures and access patterns; and
    - identify potential ways to reduce excess data movement.

    Instruction Mix Analysis

    CPUs execute a variety of instruction types. Pakin notes that integer instructions typically execute faster than floating-point and branch instructions, which execute faster than memory instructions. Consequently, the mix of instructions an application executes correlates with application performance. This mix is not normally fixed but can depend on how the application is run. As an experiment, the HE team kept an application—in this case, the SW4Lite proxy application—and its inputs fixed while they varied the level of hardware thread parallelism the application was allowed to exploit and observed how the instruction mix varied by kernel routine.

    Figure 4 presents the resulting measurements. Of the sixteen kernels, nine are dominated by various types of floating-point operations (FPOPS, DPOPS, SPOPS, and FMAINS), and the ratio changes little with increased parallelism. This indicates that those kernels are likely to observe good performance even with increased parallelism. However, five of the sixteen kernels are dominated by memory operations (the blue LSTINS bars), and a few of those see an increase in the fraction of memory operations with increased parallelism. The implication is that the performance of those kernels will become dominated by the memory subsystem as parallelism increases and therefore run slower than an application developer might hope.

    Figure 4: Categorization of instruction mix. In the color key on the right, LSTINS are the load-store instructions, INTINS are the integer instructions, BRINS are the branch instructions, FPOPS are the floating-point operation instructions, DPOPS are the double-precision operations, SPOPS are the single-precision operations, and FMAINS are the fused multiply-add instructions.
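    The bookkeeping behind a plot like Figure 4 is straightforward once the hardware counters have been collected. The Python sketch below normalizes per-kernel counter totals into fractions and flags kernels whose mix is dominated by load-store instructions; the counter values are invented placeholders, not SW4Lite measurements, and only the category names are reused from the figure.

```python
"""Normalize raw per-kernel hardware-counter totals into an instruction mix and
flag kernels dominated by load-store instructions. The counter values are
invented placeholders, not SW4Lite measurements."""
CATEGORIES = ["LSTINS", "INTINS", "BRINS", "FPOPS", "DPOPS", "SPOPS", "FMAINS"]

# (kernel, hardware threads) -> raw instruction counts per category (hypothetical).
counts = {
    ("kernel_a", 1):  [4.0e9, 1.0e9, 0.5e9, 6.0e9, 2.0e9, 1.0e9, 3.0e9],
    ("kernel_a", 16): [4.2e9, 1.0e9, 0.5e9, 6.0e9, 2.0e9, 1.0e9, 3.0e9],
    ("kernel_b", 1):  [7.0e9, 0.8e9, 0.4e9, 2.0e9, 0.5e9, 0.3e9, 0.6e9],
    ("kernel_b", 16): [9.5e9, 0.8e9, 0.4e9, 2.0e9, 0.5e9, 0.3e9, 0.6e9],
}

for (kernel, threads), raw in sorted(counts.items()):
    total = sum(raw)
    mix = dict(zip(CATEGORIES, (n / total for n in raw)))
    memory_bound = mix["LSTINS"] > 0.4  # crude threshold: memory ops dominate the mix
    print(f"{kernel:9s} {threads:3d} threads  LSTINS = {mix['LSTINS']:.0%}  "
          f"likely memory-bound: {memory_bound}")
```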

    The team believes that this analysis capability will help guide the Facilities in future procurement, as well as guide AD in development and tuning activities. It can guide Facilities by indicating where performance gains from increased core counts may begin to peter out. At such a point, Facilities may favor CPUs with fewer hardware threads per node but more performance per thread. Similarly, AD can utilize this capability to help identify the sources of reduced efficiency as thread counts increase.

    Understanding the Effectiveness of Network Congestion Control

    On HPC systems, competition for network bandwidth can arise from multiple sources, both internal to the application and from competing jobs running on other parts of the system. Some HPC systems are also designed so that I/O traffic is routed across the supercomputer fabric rather than through a separate storage fabric, which means that file and storage I/O can also cause network congestion that slows application performance.

    The HE team recently reported progress in understanding the effectiveness of network congestion management and quality of service (QoS) (i.e., priority levels) in the presence of I/O and other many-to-one (N:1) communication patterns.

    The findings shown in Figure 5, based on large-scale network simulation, indicate that QoS and congestion management can effectively mitigate interference of N:1 traffic with other applications. The orange line represents an N:1 workload that saturates an I/O server in a multi-job environment. The blue line represents an application, in this case a “Halo3D” microbenchmark that alternates between computing and communicating with its nearest neighbors on a 3‑D grid; the spikes in the graph are caused by these periodic bursts of communication. Without congestion management, the N:1 workload would consume all of the network bandwidth, flattening the spikes and delaying the application’s execution until the N:1 workload completes. With congestion management (the case represented in the figure), the bursts of communication continue largely on schedule, while the N:1 workload still receives substantial bandwidth when the application does not require it and only slightly degraded bandwidth when both traffic patterns communicate simultaneously. In short, congestion management helps reduce congestion on the network fabric and allocate network bandwidth more fairly across jobs, which could lead to less variation in application run times as well as higher performance.

    Figure 5: Example results for a congestion management benchmark. (Source: the ECP.)
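    The effect can be illustrated with a toy bandwidth-sharing model of the same scenario: a steady N:1 flow and a bursty application contending for one link. The numbers in the sketch below (link rate, burst size, and the share left to the application when congestion goes unmanaged) are assumptions chosen for illustration, not results from the team’s network simulator.

```python
"""Toy bandwidth-sharing model of the Figure 5 scenario: a steady N:1 I/O flow
shares one link with an application that communicates in periodic bursts.
All numbers are illustrative assumptions, not simulator output."""
LINK_GBPS = 200.0          # link capacity in gigabits per second (assumed)
BURST_GB = 50.0            # gigabytes the application moves in each halo-exchange burst
UNMANAGED_APP_SHARE = 0.1  # fraction of the link left to the app without congestion control


def burst_seconds(managed: bool) -> float:
    """Time to complete one communication burst while the N:1 flow is active."""
    # With QoS/congestion management the link is shared fairly during the burst;
    # without it the application is squeezed to a small leftover share.
    app_gbps = LINK_GBPS / 2 if managed else UNMANAGED_APP_SHARE * LINK_GBPS
    return 8.0 * BURST_GB / app_gbps  # factor of 8 converts gigabytes to gigabits


print(f"burst with congestion management:    {burst_seconds(True):6.1f} s")
print(f"burst without congestion management: {burst_seconds(False):6.1f} s")
```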

    The team thinks this will help in the selection and configuration of supercomputer networks and storage infrastructure, as well as help avoid the worst-case network congestion scenarios.

    Other Tools: Roofline Model

    The HE team also uses the well-regarded roofline model, which can expose several computational limitations in both CPUs and GPUs. The roofline model plots computation rate (floating-point operations per second) against computational intensity (floating-point operations per byte transferred from the memory subsystem) and represents the maximum performance achievable for a given computational intensity. The name derives from the shape of the curve, which looks like a roofline: performance increases linearly until some threshold computational intensity is reached, then flattens into a horizontal line of constant performance once the processor has hit its peak computation rate. An application is plotted as a point on that graph based on its observed computation rate and computational intensity. The distance between that point and the roofline quantifies the application’s inefficiency and how much additional performance can be gained. The graph also indicates whether a performance benefit could be realized by increasing the application’s computational intensity.
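    The model itself is only a couple of lines of arithmetic. The Python sketch below computes the roofline ceiling for a kernel from assumed peak compute and bandwidth figures and a hypothetical measured kernel; all of the numbers are placeholders for illustration rather than data from any DOE system.

```python
"""Minimal roofline calculation: attainable performance is the lesser of the
machine's peak compute rate and arithmetic intensity times peak memory
bandwidth. Peak figures and the sample kernel are assumed placeholders."""
PEAK_GFLOPS = 7000.0   # peak double-precision rate, GFLOP/s (assumed)
PEAK_BW_GBS = 900.0    # peak memory bandwidth, GB/s (assumed)


def roofline_gflops(arithmetic_intensity: float) -> float:
    """Performance ceiling for a given arithmetic intensity (FLOPs per byte)."""
    return min(PEAK_GFLOPS, arithmetic_intensity * PEAK_BW_GBS)


# A hypothetical kernel: floating-point operations, bytes moved, and run time.
flops, bytes_moved, seconds = 2.0e12, 8.0e11, 1.5
intensity = flops / bytes_moved          # FLOPs per byte
achieved = flops / seconds / 1.0e9       # GFLOP/s actually delivered
ceiling = roofline_gflops(intensity)

print(f"arithmetic intensity: {intensity:.2f} FLOP/byte")
print(f"achieved {achieved:.0f} GFLOP/s of a {ceiling:.0f} GFLOP/s ceiling "
      f"({achieved / ceiling:.0%} of the roofline)")
```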

    In some recent work, the HE team used the roofline model to develop roofline scaling trajectories for two sparse numerical kernels, SpTRSV (sparse triangular solve) and SpTRSM (sparse triangular solve with multiple right-hand sides), running on GPUs. This means they analyzed how changes to cache and memory access locality, warp efficiency, and streaming-multiprocessor and GPU occupancy relate to where the corresponding point appears on the roofline graph. The challenge in performing this analysis is the kernels’ data dependencies. To address this challenge, the team constructed directed acyclic graphs of these dependencies and used these to produce trend lines of application performance and arithmetic intensity for different amounts of concurrency.
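    The dependency analysis can be sketched compactly as well. In a sparse lower-triangular solve, a row cannot be computed until every earlier row it references has been solved; grouping rows into levels of that dependency DAG exposes how much concurrency each step offers. The Python sketch below applies this standard level-scheduling idea to a made-up 5x5 sparsity pattern; it illustrates the general technique, not the HE team’s analysis code.

```python
"""Level scheduling for a sparse lower-triangular solve: row i depends on every
earlier row j that appears as a nonzero column in row i, and rows in the same
level of the resulting DAG can be solved in parallel. The 5x5 sparsity pattern
below is a made-up example of the general technique."""
# Nonzero off-diagonal column indices per row of a lower-triangular matrix.
lower_pattern = {0: [], 1: [0], 2: [0], 3: [1, 2], 4: [3]}

level = {}
for row in sorted(lower_pattern):
    deps = lower_pattern[row]
    level[row] = 0 if not deps else 1 + max(level[j] for j in deps)

levels = {}
for row, lvl in level.items():
    levels.setdefault(lvl, []).append(row)

for lvl in sorted(levels):
    print(f"level {lvl}: rows {levels[lvl]} can be solved concurrently")
```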

    Summary

    The ECP HE team was formed to provide the ECP and DOE facilities with hardware knowledge and analysis capabilities. New hardware architectures, tiered memory systems, and other advances in computer architectures have required modernizing project metrics and the predictive analysis used in system procurements.

    Once a supercomputer is installed and prepared for production runs, it is too late to consider fundamental architectural changes. The ECP predictive analysis effort is beneficial because it guides DOE supercomputer procurements toward the systems that deliver the most performance to the extreme-scale applications of interest to DOE.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About DOE’s Exascale Computing Project (US)
    The ECP is a collaborative effort of two DOE organizations – the DOE’s Office of Science and the DOE’s National Nuclear Security Administration. As part of the National Strategic Computing initiative, ECP was established to accelerate delivery of a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development to meet the scientific and national security mission needs of DOE in the early-2020s time frame.

    About the Office of Science

    DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit https://science.energy.gov/.

    About NNSA

    Established by Congress in 2000, NNSA is a semi-autonomous agency within the DOE responsible for enhancing national security through the military application of nuclear science. NNSA maintains and enhances the safety, security, and effectiveness of the U.S. nuclear weapons stockpile without nuclear explosive testing; works to reduce the global danger from weapons of mass destruction; provides the U.S. Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the United States and abroad. https://nnsa.energy.gov

    The Goal of ECP’s Application Development focus area is to deliver a broad array of comprehensive science-based computational applications that effectively utilize exascale HPC technology to provide breakthrough simulation and data analytic solutions for scientific discovery, energy assurance, economic competitiveness, health enhancement, and national security.

    Awareness of ECP and its mission is growing and resonating, and for good reason. ECP is an incredible effort focused on advancing areas of key importance to our country: economic competitiveness, breakthrough science and technology, and national security. And, fortunately, ECP has a foundation that bodes extremely well for the prospects of its success, with the demonstrably strong commitment of the US Department of Energy (DOE) and the talent of some of America’s best and brightest researchers.

    ECP is composed of about 100 small teams of domain, computer, and computational scientists and mathematicians from DOE labs, universities, and industry. We are tasked with building applications that will execute well on exascale systems, enabled by a robust exascale software stack, and with supporting the necessary vendor R&D to ensure the compute nodes and hardware infrastructure are adept and able to do the science that needs to be done with the first exascale platforms.

     
  • richardmitnick 7:41 am on August 31, 2021 Permalink | Reply
    Tags: "AARNet with RMIT and AWS collaborate to establish Australia’s first cloud supercomputing facility", , Amazon Web Services, , Cloud supercomputing, Supercomputing   

    From AARNet (AU) : “AARNet with RMIT and AWS collaborate to establish Australia’s first cloud supercomputing facility” 


    From AARNet (AU)

    July 29, 2021


    This collaboration will provide Royal Melbourne Institute of Technology (AU) researchers and students with the ability to access cloud supercomputing at scale on Amazon Web Services (AWS) to accelerate research outcomes for advanced manufacturing, space, fintech, digital health, and creative technologies.

    Supercomputing in the cloud will help RMIT researchers address some of the world’s most complex challenges in far less time, from disease prevention and extreme weather forecasting to citizen safety.

    RMIT will leverage AWS Direct Connect to establish low-latency, secure, private connections to AWS for workloads that require higher speed or lower latency than the public internet provides. The increased bandwidth will give researchers, students, staff, and industry partners the ability to experiment and test new ideas and discoveries involving large data sets at speed, fast-tracking the time between a concept and products that RMIT is ready to take to market.

    AARNet will provide the high-speed internet and telecommunications services required for the facility. Intel will contribute advanced technology solutions to process, optimise, store, and move large, complicated data sets.

    RMIT Deputy Vice-Chancellor (STEM College) and Vice President Digital Innovation, Professor Aleksandar Subic said the facility, supported by the Victorian Government Higher Education Investment Fund, is a pioneering example of innovation in the university sector.

    “Our collaboration with AWS, Intel, and AARNet to establish Australia’s first cloud supercomputing facility represents a step change in how universities and industries access HPC capabilities for advanced data processing and computing,” Subic said.

    “By leveraging AWS Direct Connect, RMIT is set to access tremendous HPC processing power using a unique service model that provides seamless access to all our staff, researchers, and students.

    “Our industry partners will also have access to the new cloud supercomputing facility through joint projects and programs.

    “The facility will be operated by our researchers and students in another example that shows how industry engagement and work integrated learning are in our DNA.”

    AARNet CEO Chris Hancock said AARNet had provided RMIT and other Australian universities with leading-edge telecommunications services to enable transformational research outcomes for decades.

    “We’ve also been connecting researchers to the cloud for many years, but nothing on this scale,” he said.

    “We’re excited to be partnering with RMIT on this project that uses our ultra-fast network to remove the barrier of geography and distance for research across Australia and beyond.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    AARNet (AU) provides critical infrastructure for driving innovation in today’s knowledge-based economy
    AARNet is a national resource – a National Research and Education Network (NREN). AARNet provides unique information communications technology capabilities to enable Australian education and research institutions to collaborate with each other and with their international peer communities.

     