Tagged: Supercomputing Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 4:14 pm on September 18, 2020 Permalink | Reply
    Tags: , , , Supercomputing, The comparison tests for tiny differences between matter and antimatter that could with even more computing power and other refinements point to physics phenomena not explained by the Standard Model., Theorists publish improved prediction for the tiny difference in kaon decays observed by experiments., This was an international collaboration of theoretical physicists—including scientists from Brookhaven National Laboratory and the RIKEN-BNL Research Center.   

    From Brookhaven National Lab: “New Calculation Refines Comparison of Matter with Antimatter” 

    From Brookhaven National Lab

    September 17, 2020
    Karen McNulty Walsh,
    kmcnulty@bnl.gov
    (631) 344-8350

    Peter Genzer,
    genzer@bnl.gov
    (631) 344-3174

    Theorists publish improved prediction for the tiny difference in kaon decays observed by experiments.

    A new calculation performed using the world’s fastest supercomputers allows scientists to more accurately predict the likelihood of two kaon decay pathways, and compare those predictions with experimental measurements. The comparison tests for tiny differences between matter and antimatter that could, with even more computing power and other refinements, point to physics phenomena not explained by the Standard Model.

    An international collaboration of theoretical physicists—including scientists from the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory (BNL) and the RIKEN-BNL Research Center (RBRC)—has published a new calculation relevant to the search for an explanation of the predominance of matter over antimatter in our universe. The collaboration, known as RBC-UKQCD, also includes scientists from CERN (the European particle physics laboratory), Columbia University, the University of Connecticut, the University of Edinburgh, the Massachusetts Institute of Technology, the University of Regensburg, and the University of Southampton. They describe their result in a paper to be published in the journal Physical Review D, where it has been highlighted as an “editor’s suggestion.”

    Scientists first observed a slight difference in the behavior of matter and antimatter—known as a violation of “CP symmetry”—while studying the decays of subatomic particles called kaons in a Nobel Prize-winning experiment at Brookhaven Lab in 1964. While the Standard Model of particle physics was pieced together soon after that, understanding whether the observed CP violation in kaon decays agreed with the Standard Model has proved elusive due to the complexity of the required calculations.

    Standard Model of Particle Physics from Symmetry Magazine.

    The new calculation gives a more accurate prediction for the likelihood with which kaons decay into a pair of electrically charged pions vs. a pair of neutral pions. Understanding these decays and comparing the prediction with more recent state-of-the-art experimental measurements made at CERN and DOE’s Fermi National Accelerator Laboratory gives scientists a way to test for tiny differences between matter and antimatter, and search for effects that cannot be explained by the Standard Model.

    The new calculation represents a significant improvement over the group’s previous result, published in Physical Review Letters in 2015. Based on the Standard Model, it gives a range of values for what is called “direct CP symmetry violation” in kaon decays that is consistent with the experimentally measured results. That means the observed CP violation is, to the best of our knowledge, explained by the Standard Model, but the uncertainty in the prediction still needs to be reduced, because a sharper comparison could yet reveal sources of matter/antimatter asymmetry lying beyond the current theory’s description of our world.
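    For readers who want to see exactly what quantity is being compared, the standard parametrization of direct CP violation in K → ππ decays is sketched below in conventional textbook notation (the press release itself does not spell this out; A0, A2 and δ0, δ2 are the usual isospin-0 and isospin-2 decay amplitudes and strong phases):

    ```latex
    % Standard isospin decomposition of K -> pi pi amplitudes (conventional notation):
    %   A(K \to \pi\pi(I)) = A_I\, e^{i\delta_I}, \qquad I = 0, 2
    % Direct CP violation is characterized by epsilon':
    \begin{equation}
      \varepsilon' \;=\; \frac{i\, e^{\,i(\delta_2-\delta_0)}}{\sqrt{2}}\,\omega
      \left(\frac{\operatorname{Im} A_2}{\operatorname{Re} A_2}
          - \frac{\operatorname{Im} A_0}{\operatorname{Re} A_0}\right),
      \qquad \omega \equiv \frac{\operatorname{Re} A_2}{\operatorname{Re} A_0}.
    \end{equation}
    % Experiments report Re(epsilon'/epsilon), a number of order 10^-3; the lattice
    % calculation described above predicts the same ratio from the Standard Model,
    % which is what makes the direct comparison possible.
    ```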

    “An even more accurate theoretical calculation of the Standard Model may yet lie outside of the experimentally measured range. It is therefore of great importance that we continue our progress, and refine our calculations, so that we can provide an even stronger test of our fundamental understanding,” said Brookhaven Lab theorist Amarjit Soni.

    Matter/antimatter imbalance

    “The need for a difference between matter and antimatter is built into the modern theory of the cosmos,” said Norman Christ of Columbia University. “Our current understanding is that the present universe was created with nearly equal amounts of matter and antimatter. Except for the tiny effects being studied here, matter and antimatter should be identical in every way, beyond conventional choices such as assigning negative charge to one particle and positive charge to its anti-particle. Some difference in how these two types of particles operate must have tipped the balance to favor matter over antimatter,” he said.

    “Any differences in matter and antimatter that have been observed to date are far too weak to explain the predominance of matter found in our current universe,” he continued. “Finding a significant discrepancy between an experimental observation and predictions based on the Standard Model would potentially point the way to new mechanisms of particle interactions that lie beyond our current understanding—and which we hope to find to help to explain this imbalance.”

    Modeling quark interactions

    All of the experiments that show a difference between matter and antimatter involve particles made of quarks, the subatomic building blocks that bind through the strong force to form protons, neutrons, and atomic nuclei—and also less-familiar particles like kaons and pions.

    “Each kaon and pion is made of a quark and an antiquark, surrounded by a cloud of virtual quark-antiquark pairs, and bound together by force carriers called gluons,” explained Christopher Kelly, of Brookhaven National Laboratory.

    The Standard Model-based calculations of how these particles behave must therefore include all the possible interactions of the quarks and gluons, as described by the modern theory of strong interactions, known as quantum chromodynamics (QCD).

    In addition, these bound particles move at close to the speed of light. That means the calculations must also include the principles of relativity and quantum theory, which govern such near-light-speed particle interactions.

    “Because of the huge number of variables involved, these are some of the most complicated calculations in all of physics,” noted Tianle Wang, of Columbia University.

    Computational challenge

    To conquer the challenge, the theorists used a computing approach called lattice QCD, which “places” the particles on a four-dimensional space-time lattice (three spatial dimensions plus time). This box-like lattice allows them to map out all the possible quantum paths for the initial kaon to decay to the final two pions. The result becomes more accurate as the number of lattice points increases. Wang noted that the “Feynman integral” for the calculation reported here involved integrating 67 million variables!
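    As a rough illustration of where a number like 67 million comes from, here is a minimal Python sketch that counts the gauge-field degrees of freedom on a hypothetical 32^3 × 64 lattice. The lattice size is an assumption chosen purely to show the order of magnitude; the article does not state the collaboration’s actual lattice parameters.

    ```python
    # Purely illustrative: count gauge-field degrees of freedom on a hypothetical
    # 4D space-time lattice. The 32^3 x 64 size is an assumed example, not the
    # collaboration's published setup.

    L_spatial, L_time = 32, 64      # assumed lattice extents (sites per direction)
    sites = L_spatial**3 * L_time   # total space-time lattice sites
    links_per_site = 4              # one gauge link per site per space-time direction
    dof_per_link = 8                # an SU(3) gauge link carries 8 real parameters

    gauge_dof = sites * links_per_site * dof_per_link
    print(f"lattice sites:             {sites:,}")
    print(f"gauge degrees of freedom:  {gauge_dof:,}")
    # 2,097,152 sites and 67,108,864 gauge degrees of freedom for this assumed
    # lattice: the same order of magnitude as the 67 million variables quoted above.
    ```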

    These complex calculations were carried out on cutting-edge supercomputers. The first part of the work, generating samples or snapshots of the most likely quark and gluon fields, was performed on supercomputers located in the US, Japan, and the UK. The second and most complex step, extracting the actual kaon decay amplitudes, was performed at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science user facility at DOE’s Lawrence Berkeley National Laboratory.

    But using the fastest computers is not enough: even on these machines, the calculations are only feasible with highly optimized computer codes developed by the authors specifically for this work.

    “The precision of our results cannot be increased significantly by simply performing more calculations,” Kelly said. “Instead, in order to tighten our test of the Standard Model we must now overcome a number of more fundamental theoretical challenges. Our collaboration has already made significant strides in resolving these issues and coupled with improvements in computational techniques and the power of near-future DOE supercomputers, we expect to achieve much improved results within the next three to five years.”

    The authors of this paper are, in alphabetical order: Ryan Abbott (Columbia), Thomas Blum (UConn), Peter Boyle (BNL & U of Edinburgh), Mattia Bruno (CERN), Norman Christ (Columbia), Daniel Hoying (UConn), Chulwoo Jung (BNL), Christopher Kelly (BNL), Christoph Lehner (BNL & U of Regensburg), Robert Mawhinney (Columbia), David Murphy (MIT), Christopher Sachrajda (U of Southampton), Amarjit Soni (BNL), Masaaki Tomii (UConn), and Tianle Wang (Columbia).

    The majority of the measurements and analysis for this work were performed using the Cori supercomputer at NERSC, with additional contributions from the Hokusai machine at the Advanced Center for Computing and Communication at Japan’s RIKEN Laboratory and the IBM BlueGene/Q (BG/Q) installation at Brookhaven Lab (supported by the RIKEN BNL Research Center and Brookhaven Lab’s prime operating contract from DOE’s Office of Science).

    At NERSC at LBNL

    NERSC Cray Cori II supercomputer, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    NERSC is a DOE Office of Science User Facility.

    Riken HOKUSAI Big-Waterfall supercomputer built on the Fujitsu PRIMEHPC FX100 platform based on the SPARC64 processor.

    Additional supercomputing resources used to develop the lattice configurations included: the BG/Q installation at Brookhaven Lab, the Mira supercomputer at the Argonne Leadership Class Computing Facility (ALCF) at Argonne National Laboratory, Japan’s KEKSC 1540 computer, the UK Science and Technology Facilities Council DiRAC machine at the University of Edinburgh, and the National Center for Supercomputing Applications Blue Waters machine at the University of Illinois (funded by the U.S. National Science Foundation). NERSC and ALCF are DOE Office of Science user facilities. Individual researchers received support from various grants issued by the DOE Office of Science and other sources in the U.S. and abroad.

    BNL BGQ IBM BlueGene/Q (BG/Q) Linux supercomputer.

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility.

    DiRAC BlueGene/Q EPCC at The University of Edinburgh.

    NCSA U Illinois Urbana-Champaign Blue Waters Cray Linux XE/XK hybrid machine supercomputer,
    at the National Center for Supercomputing Applications.

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    BNL Campus


    BNL Center for Functional Nanomaterials.

    BNL NSLS-II


    BNL RHIC Campus

    BNL/RHIC Star Detector

    BNL Phenix Detector

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 5:52 pm on September 15, 2020 Permalink | Reply
    Tags: "U of T and AMD launch supercomputing program dedicated to big-data health research", , Supercomputing,   

    From University of Toronto: “U of T and AMD launch supercomputing program dedicated to big-data health research” 

    U Toronto Bloc

    From University of Toronto

    September 14, 2020
    Rahul Kalvapalle

    (Photo courtesy of SciNet.)

    The University of Toronto is teaming up with processor giant AMD to launch a supercomputing platform that will power the university’s health research – including on global threats such as COVID-19.

    The initiative, dubbed SciNet4Health, will allow researchers and clinician scientists at U of T and its partner hospitals to access and analyze massive databases of patient health information – in a secure way that protects patients’ privacy – using technologies such as machine learning.

    SciNet4Health is made possible by AMD’s donation of one petaflop of dedicated processing power, capable of a quadrillion calculations per second. It promises to lead to advancements in vaccine development, drug discovery, genomics research and mathematical modelling.

    “The new resources that we are receiving from AMD are going to allow us to set up the computing infrastructure that our health researchers need, especially right now during the time of COVID-19 when many of our faculty are working towards various solutions and positive outcomes for the pandemic,” said Alex Mihailidis, U of T’s associate vice-president, international partnerships.

    “Until today, U of T did not have a dedicated computing infrastructure for health researchers that can support patient data, so this is going to have a significant impact on our research.”

    SciNet4Health will operate out of the facilities of SciNet, the U of T-based supercomputing consortium and home to Canada’s most powerful research supercomputer: Niagara.

    U Toronto Lenovo SciNet Niagara supercomputer.

    The program will allow SciNet, which has enabled advancements in fields ranging from astrophysics to climate science, to bring its capacity for cutting-edge data science to health research.

    Daniel Gruner, chief technology officer at SciNet, said high-performance computing allows for complex calculations that regular computers simply can’t manage.

    “If you’re thinking of using AI and machine learning to try and make sense of huge and diverse data, you need these big computers because it can’t be done on a small machine – it requires a lot of math, a lot of computation, so you need computers that are specially geared towards that,” he said.

    “The resources we’re receiving from AMD happen to be very heavy on GPUs [graphics processing units] that can run deep learning calculations a lot faster than a regular CPU can.”

    The donation by AMD, based in Santa Clara, Calif., consists of 20 “compute nodes” – individual computers that comprise a high-performance computing cluster – each with eight GPUs.

    “That’s a whole lot of power,” Gruner said.
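    As a back-of-the-envelope check on “a whole lot of power,” the figures quoted above (one petaflop spread across 20 nodes of eight GPUs each) imply the per-GPU throughput sketched below. This assumes the one-petaflop figure refers to the aggregate of these 160 GPUs; the article does not say at what numerical precision the petaflop is counted.

    ```python
    # Back-of-the-envelope arithmetic from the numbers quoted in the article.
    # Assumption: the "one petaflop" donation is the aggregate of 20 nodes x 8 GPUs.

    total_flops = 1e15      # one petaflop = 10^15 floating-point operations per second
    nodes = 20
    gpus_per_node = 8

    gpus = nodes * gpus_per_node
    per_gpu_tflops = total_flops / gpus / 1e12

    print(f"GPUs in the cluster:   {gpus}")
    print(f"implied per-GPU rate:  {per_gpu_tflops:.2f} TFLOPS")
    # 160 GPUs and about 6.25 TFLOPS each under this (assumed) accounting.
    ```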

    It’s also power that will be completely in-house. Until now, U of T health researchers in need of supercomputing worked through partner initiatives such as HPC4Health, a high-performance computing network established by the University Health Network (UHN) and the Hospital for Sick Children. SciNet4Health drew on HPC4Health’s experience using patient data to establish its procedures and protocols. The two organizations plan to work together to meet the needs of the health sciences research community in and around Toronto.

    “This is helping catalyse our ability to do more private health information research inside the university,” said Gruner.

    For his part, Mihailidis says the machine learning and deep learning capabilities that will be provided by SciNet4Health will enable researchers to work with patient data to a degree that wasn’t previously possible due to security and privacy considerations. A professor in the department of occupational science and occupational therapy in the Faculty of Medicine, Mihailidis cited his research on aging and geriatrics as just one example of the kind of work that stands to benefit.

    “We’ve been doing a lot of work around collecting data about what older people are doing in their homes and communities, and using machine learning, deep learning and other predictive analytics to determine changes in their health,” he said.

    “The problem we’ve had to date is that because we haven’t had secure servers that have allowed us to securely use patient data, we’ve had to scrub the data to the point where the personal attributes are being removed – and because of that, our predictive models on their health aren’t as accurate as they could be if we were able to include the patient health data itself.

    “Having this type of resource at the university will allow us to take that type of research to the next level.”

    U of T is among a small group of universities to receive the supercomputing systems from AMD. Others include Stanford University and the University of California, Los Angeles.

    “AMD is proud to be working with leading global research institutions to bring the power of high-performance computing technology to the fight against the coronavirus pandemic,” said Mark Papermaster, AMD’s executive vice-president and chief technology officer.

    “These donations of AMD EPYC and Radeon Instinct processors will help researchers not only deepen their understanding of COVID-19, but also help improve our ability to respond to future potential threats to global health.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded in 1827, the University of Toronto has evolved into Canada’s leading institution of learning, discovery and knowledge creation. We are proud to be one of the world’s top research-intensive universities, driven to invent and innovate.

    Our students have the opportunity to learn from and work with preeminent thought leaders through our multidisciplinary network of teaching and research faculty, alumni and partners.

    The ideas, innovations and actions of more than 560,000 graduates continue to have a positive impact on the world.

     
  • richardmitnick 4:10 pm on September 7, 2020 Permalink | Reply
    Tags: "Scientists 'Zoom In' On Dark Matter Revealing The Invisible Skeleton Of The Universe", , , , , , , Supercomputing   

    From Harvard-Smithsonian Center for Astrophysics: “Scientists ‘Zoom In’ On Dark Matter Revealing The Invisible Skeleton Of The Universe” 

    Harvard Smithsonian Center for Astrophysics


    From Harvard-Smithsonian Center for Astrophysics

    9.2.20

    Center for Astrophysics | Harvard & Smithsonian
    Fred Lawrence Whipple Observatory
    Amy Oliver, Public Affairs
    amy.oliver@cfa.harvard.edu
    520-879-4406

    Using the power of supercomputers, an international team of researchers has zoomed in on the smallest clumps of dark matter in a virtual universe. Published today in Nature, the study reveals dark matter haloes as active regions of the sky, teeming with not only galaxies, but also radiation-emitting collisions that could make it possible to find these haloes in the real sky.

    Dark matter—which makes up roughly 83% of the matter in the universe—is an important player in cosmic evolution, including in the formation of galaxies, which grew as gas cooled and condensed at the center of enormous clumps of dark matter. Over time, haloes formed as some dark matter clumps pulled away from the expansion of the universe due to their own enormous gravity. The largest dark matter haloes contain huge galaxy clusters—collections of hundreds of galaxies—and while their properties can be inferred by studying those galaxies within them, the smallest dark matter haloes, which typically lack even a single star, have remained a mystery until now.

    “Amongst the things we’ve learned from our simulations is that gravity leads to dark matter particles ‘clumping’ in overly dense regions of the universe, settling into what’s known as dark matter haloes. These can essentially be thought of as big wells of gravity filled with dark matter particles,” said Sownak Bose, a postdoc at the Center for Astrophysics | Harvard & Smithsonian, and one of the lead authors on the research. “We think that every galaxy in the cosmos is surrounded by an extended distribution of dark matter, which outweighs the luminous material of the galaxy by between a factor of 10-100, depending on the type of galaxy. Because this dark matter surrounds every galaxy in all directions, we refer to it as a ‘halo.’”

    Using a simulated universe, researchers were able to zoom in with the precision required to recognize a flea on the surface of the full Moon—with magnification up to 10 to the power of seven, or a 1 followed by seven zeroes—and create highly detailed images of hundreds of virtual dark matter haloes, from the largest known to the smallest expected.

    “Simulations are helpful because they help us quantify not just the overall distribution of dark matter in the universe, but also the detailed internal structure of these dark matter haloes,” said Bose. “Establishing the abundance and the internal structure of the entire range of dark matter haloes that can be formed in the cold dark matter model is of interest because this enables us to calculate how easy it may be to detect dark matter in the real universe.”

    While studying the structure of the haloes, researchers were met with a surprise: all dark matter haloes, whether large or small, have very similar internal structures which are dense at the center and become increasingly diffuse moving outward. Without a scale-bar, it is almost impossible to tell the difference between the dark matter halo of a massive galaxy—up to 10^15 solar masses—and that of a halo with less than a solar mass—down to 10^-6 solar masses.

    “Several previous studies suggested that the density profiles for super-mini haloes would be quite different from their massive counterparts,” said Jie Wang, astronomer at the National Astronomical Observatories (NAOC) in Beijing, and a lead author on the research. “Our simulations show that they look similar across a huge mass range of dark haloes and that is really surprising.”

    Bose added that even in the smallest haloes which do not surround galaxies, “Our simulations enabled us to visualize the so-called ‘cosmic web.’ Where filaments of dark matter intersect, one sees the tiny, near spherical blobs of dark matter, which are the haloes themselves, and they are so universal in structure that I could show you a picture of a galaxy cluster with a million billion times the mass of the Sun, and an Earth-mass halo at a million times smaller than the Sun, and you would not be able to tell which is which.”
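    The “dense at the center, increasingly diffuse outward” structure described above is conventionally summarized by a universal density profile. The article does not name the fitting form used in the study, but the standard reference point for cold dark matter haloes is the Navarro-Frenk-White (NFW) profile:

    ```latex
    % Navarro-Frenk-White (NFW) profile: the standard universal form for
    % cold-dark-matter halo densities (given here as background; the paper's
    % specific fit is not quoted in the article).
    \begin{equation}
      \rho(r) \;=\; \frac{\rho_s}{\left(r/r_s\right)\left(1 + r/r_s\right)^{2}},
    \end{equation}
    % where r_s is a scale radius and rho_s a characteristic density. The finding
    % reported above is that the same shape describes haloes across roughly 20
    % orders of magnitude in mass, with only r_s and rho_s changing.
    ```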

    Although the images of dark matter haloes from this study are the result of simulations, the simulations themselves are informed by real observational data. For astronomers, that means the study could be replicated against the real night sky given the right technology. “The initial conditions that went into our simulation are based on actual observational data from the cosmic microwave background radiation measurements of the Planck satellite, which tells us what the composition of the Universe is and how much dark matter to put in,” said Bose.

    During the study researchers tested a feature of dark matter haloes that may make them easier to find in the real night sky: particle collisions. Current theory suggests that dark matter particles that collide near the center of haloes may explode in a violent burst of high-energy gamma radiation, potentially making the dark matter haloes detectable by gamma-ray and other telescopes.

    “Exactly how the radiation would be detected depends on the precise properties of the dark matter particle. In the case of weakly interacting massive particles (WIMPs), which are amongst the leading candidates in the standard cold dark matter picture, gamma radiation is typically produced in the GeV range. There have been claims of a galactic center excess of GeV-scale gamma radiation in Fermi data, which could be due to dark matter or perhaps due to pulsars,” said Bose. “Ground-based telescopes like the Very Energetic Radiation Imaging Telescope Array System (VERITAS) can be used for this purpose, too.

    Veritas: four Čerenkov telescopes at the Fred Lawrence Whipple Observatory, Mount Hopkins, Arizona, US; altitude 2,606 m (8,550 ft).

    And, pointing telescopes at galaxies other than our own could also help, as this radiation should be produced in all dark matter haloes.” Wang added, “With the knowledge from our simulation, we can evaluate many different tools to detect haloes—gamma-ray, gravitational lensing, dynamics. These methods are all promising in the work to shed light on the nature of dark matter particles.”

    The results of the study provide a pathway both for current and future researchers to better understand what’s out there, whether we can see it or not. “Understanding the nature of dark matter is one of the Holy Grails of cosmology. While we know that it dominates the gravity of the universe, we know very little about its fundamental properties: how heavy an individual particle is, what sorts of interactions, if any, it has with ordinary matter, etcetera,” said Bose. “Through computer simulations we have come to learn about its fundamental role in the formation of the structure in our universe. In particular, we have come to realize that without dark matter, our universe would look nothing like the way it does now. There would be no galaxies, no stars, no planets, and therefore, no life. This is because dark matter acts as the invisible skeletal structure that holds up the visible universe around us.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Center for Astrophysics combines the resources and research facilities of the Harvard College Observatory and the Smithsonian Astrophysical Observatory under a single director to pursue studies of those basic physical processes that determine the nature and evolution of the universe. The Smithsonian Astrophysical Observatory (SAO) is a bureau of the Smithsonian Institution, founded in 1890. The Harvard College Observatory (HCO), founded in 1839, is a research institution of the Faculty of Arts and Sciences, Harvard University, and provides facilities and substantial other support for teaching activities of the Department of Astronomy.

     
  • richardmitnick 1:36 pm on September 3, 2020 Permalink | Reply
    Tags: "Zooming in on Dark Matter", , , , , , , Supercomputing   

    From MPG Institute for Astrophysics, Garching: “Zooming in on Dark Matter” 


    From MPG Institute for Astrophysics, Garching

    September 02, 2020

    Prof. Dr. Simon D.M. White
    Max Planck Institute for Astrophysics, Garching
    Tel +49 89 30000-2211
    swhite@mpa-garching.mpg.de

    Prof. Dr. Volker Springel
    Max Planck Institute for Astrophysics, Garching
    Tel +49 89 30000-2195
    vspringel@mpa-garching.mpg.de

    Dr. Hannelore Hämmerle
    Press Officer Max Planck Institute for Astrophysics, Garching
    Tel +49 89 30000-3980
    hhaemmerle@mpa-garching.mpg.de

    Computer simulation reveals similar structures for large and small dark matter halos.

    Most of the matter in the Universe is dark and thus not directly observable. In results just published in the journal Nature, an international research team harnessed supercomputers in China and Europe to zoom into a typical region of a virtual universe by a totally unprecedented factor, equivalent to that needed to recognise a flea on the surface of the full Moon.

    The supercomputers involved were Cobra @ Garching, Germany; COSMA5, COSMA6, COSMA7 @ Durham, UK; and Tianhe-II @ Guangzhou, China.

    The MPG Supercomputer COBRA.

    U Durham HPC DiRAC COSMA7 supercomputer.

    China’s Tianhe-2 Kylin Linux supercomputer at National Supercomputer Center, Guangzhou, China.

    This allowed the team to make detailed pictures of hundreds of virtual dark matter haloes from the very largest to the very smallest expected in our Universe.

    This image shows a slice through the main simulation which is more than two billion light years on a side. The two insets are successive zooms into regions which are 700 thousand and then just 600 light-years on a side. The largest individual lumps in the main image correspond to clusters of galaxies, while the smallest lumps in the second zoom are similar in mass to the Earth. © MPA .

    Dark matter plays an important role in cosmic evolution. Galaxies grew as gas cooled and condensed at the centre of enormous clumps of dark matter, so-called dark matter haloes. The haloes themselves separated from the overall expansion of the universe as a result of the gravitational pull of their own dark matter. Astronomers can infer the structure of big dark matter haloes from the properties of the galaxies and gas within them, but they have no information about haloes that might be too small to contain a galaxy.

    The biggest dark matter haloes in today’s universe contain huge galaxy clusters, collections of hundreds of bright galaxies. Their properties are well studied, and they weigh over a quadrillion (10^15) times as much as our Sun. On the other hand, the masses of the smallest dark matter haloes are unknown. The theory of dark matter that underlies the new supercomputer zoom suggests that they may be similar in mass to the Earth. Such small haloes would be extremely numerous, containing a substantial fraction of all the dark matter in the universe, but they would remain dark throughout cosmic history because stars and galaxies grow only in haloes at least a million times more massive than the Sun.

    The research team, based in China, Germany, the UK and the USA took five years to develop, test and carry out their cosmic zoom. It enabled them to study the structure of dark matter haloes of all masses between that of the Earth and that of a big galaxy cluster. In numbers: The zoom covers a mass range of 10 to the power 30 (that is a 1 followed by 30 zeroes), which is equivalent to the number of kilograms in the Sun.

    Relevance for the detection of radiation from small haloes

    Surprisingly, the astrophysicists found all haloes to have very similar internal structures: They are very dense at the centre, becoming increasingly diffuse outwards, with smaller clumps orbiting in their outer regions. Without a scale-bar, it is almost impossible to tell an image of the dark matter halo of a massive galaxy from one of a halo with less than a solar mass. “We were really surprised by our results,” says Simon White from the Max-Planck-Institut for Astrophysics. “Everyone had guessed that the smallest clumps of dark matter would look quite different from the big ones we are more familiar with. But when we were finally able to calculate their properties, they looked just the same.”

    The result has a potential practical application. Particles of dark matter can collide near the centres of haloes, and may – according to some theories – annihilate in a burst of energetic (gamma) radiation. The new zoom simulation allows the scientists to calculate the expected amount of radiation for haloes of differing mass. Much of this radiation could come from dark matter haloes too small to contain stars. Future gamma-ray observatories might be able to detect this emission, making the small objects individually or collectively “visible”. This would confirm the hypothesised nature of the dark matter, which may not be entirely dark after all!
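    The reason halo structure translates into a prediction for this radiation is that the annihilation rate scales with the square of the local dark matter density. In the standard formulation (conventional notation, not written out in the press release), the gamma-ray flux from a halo factorizes into a particle-physics term and an astrophysical “J-factor” that simulations like this one can supply:

    ```latex
    % Standard expression for the gamma-ray flux from dark matter annihilation
    % (conventional notation; "l.o.s." = line of sight).
    \begin{equation}
      \frac{d\Phi_\gamma}{dE} \;=\;
      \underbrace{\frac{\langle \sigma v \rangle}{8\pi m_\chi^{2}}\,
                  \frac{dN_\gamma}{dE}}_{\text{particle physics}}
      \;\times\;
      \underbrace{\int_{\Delta\Omega} d\Omega \int_{\mathrm{l.o.s.}}
                  \rho^{2}\big(r(\ell)\big)\, d\ell}_{\text{J-factor from the halo density profile}}
    \end{equation}
    % Because the J-factor involves rho^2, dense halo centres dominate the signal,
    % which is why resolving the inner structure of even starless haloes matters
    % for detectability.
    ```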

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    MPG Institute for Astrophysics Campus

    The Max-Planck-Institut für Astrophysik, usually called the MPA for short, is one of about 80 autonomous research institutes within the Max-Planck Society. These institutes are primarily devoted to fundamental research. Most of them carry out work in several distinct areas, each led by a senior scientist who is a “Scientific Member” of the Max-Planck Society.

    The MPA was founded in 1958 under the direction of Ludwig Biermann. It was an offshoot of the MPI für Physik which at that time had just moved from Göttingen to Munich. In 1979 the headquarters of the European Southern Observatory (ESO) came to Munich from Geneva, and as part of the resulting reorganisation the MPA (then under its second director, Rudolf Kippenhahn) moved to a new site in Garching, just north of the Munich city limits.

    The new building lies in a research park barely 50 metres from ESO headquarters and is physically connected to the buildings which house the MPI für Extraterrestrische Physik (MPE). This park also contains two other large research institutes, the MPI für Plasmaphysik (IPP) and the MPI für Quantenoptik (MPQ), as well as many of the scientific and engineering departments of the Technische Universität München (TUM). The MPA is currently led by a Board of four directors, Guinevere Kauffmann, Eiichiro Komatsu, Volker Springel, and Simon White.

     
  • richardmitnick 8:49 am on August 15, 2020 Permalink | Reply
    Tags: , ATOS JADE 2 AI NVIDIA DGX SuperPOD Supercomputer at Hartree Center at STFC’s Daresbury Laboratory UK, , , Supercomputing, The largest supercomputer dedicated to artificial intelligence (AI) in the UK.   

    From Science and Technology Facilities Council: “New milestone brings the UK’s largest AI supercomputer one step closer” 


    From Science and Technology Facilities Council

    5 August 2020

    The largest supercomputer dedicated to artificial intelligence (AI) in the UK will enable academics and industry to drive cutting edge innovation, boosting the UK’s ability to make world-changing scientific breakthroughs.

    ATOS JADE 2 AI NVIDIA DGX SuperPOD Supercomputer at Hartree Center at STFC’s Daresbury Laboratory UK

    The JADE-2 supercomputer, which will be hosted at the Hartree Centre at STFC’s Daresbury Laboratory [below], will give UK researchers and industry invaluable access to extremely powerful systems that will support groundbreaking work in areas such as energy storage and supply and improved drug design. Funded by the Engineering and Physical Sciences Research Council (EPSRC), it follows the highly successful JADE (Joint Academic Data Science Endeavour), also hosted at the Hartree Centre at Sci-Tech Daresbury.

    Now, in an exciting new milestone, Atos, a global leader in digital transformation, has signed a four-year contract worth £5 million with the University of Oxford to deliver JADE-2. This high-performance system will house at least triple the current capacity of JADE, providing computing capabilities that will help meet the increasing demand for AI-focused facilities in the UK.

    Alison Kennedy, Director of STFC’s Hartree Centre, said: “I’m thrilled at this latest announcement between Atos and the University of Oxford. JADE-2 will significantly amplify the UK’s ability to address AI and machine learning challenges, supporting the ambition to be a world-leader in these areas. I believe that this milestone, which builds on an already strong, well-established partnership between Atos and the Hartree Centre, will help UK research and industry future-proof itself for the growing demand for AI and deep learning technologies from across a whole range of scientific fields and industry sectors.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    STFC-Science and Technology Facilities Council

    STFC Rutherford Appleton Laboratory at Harwell in Oxfordshire, UK


    STFC Hartree Center


    Helping build a globally competitive, knowledge-based UK economy

    We are a world-leading multi-disciplinary science organisation, and our goal is to deliver economic, societal, scientific and international benefits to the UK and its people – and more broadly to the world. Our strength comes from our distinct but interrelated functions:

    Universities: we support university-based research, innovation and skills development in astronomy, particle physics, nuclear physics, and space science
    Scientific Facilities: we provide access to world-leading, large-scale facilities across a range of physical and life sciences, enabling research, innovation and skills training in these areas
    National Campuses: we work with partners to build National Science and Innovation Campuses based around our National Laboratories to promote academic and industrial collaboration and translation of our research to market through direct interaction with industry
    Inspiring and Involving: we help ensure a future pipeline of skilled and enthusiastic young people by using the excitement of our sciences to encourage wider take-up of STEM subjects in school and future life (science, technology, engineering and mathematics)

    We support an academic community of around 1,700 in particle physics, nuclear physics, and astronomy including space science, who work at more than 50 universities and research institutes in the UK, Europe, Japan and the United States, including a rolling cohort of more than 900 PhD students.

    STFC-funded universities produce physics postgraduates with outstanding high-end scientific, analytic and technical skills who on graduation enjoy almost full employment. Roughly half of our PhD students continue in research, sustaining national capability and creating the bedrock of the UK’s scientific excellence. The remainder – much valued for their numerical, problem solving and project management skills – choose equally important industrial, commercial or government careers.

    Our large-scale scientific facilities in the UK and Europe are used by more than 3,500 users each year, carrying out more than 2,000 experiments and generating around 900 publications. The facilities provide a range of research techniques using neutrons, muons, lasers and x-rays, and high performance computing and complex analysis of large data sets.

    They are used by scientists across a huge variety of science disciplines ranging from the physical and heritage sciences to medicine, biosciences, the environment, energy, and more. These facilities provide a massive productivity boost for UK science, as well as unique capabilities for UK industry.

    Our two Campuses are based around our Rutherford Appleton Laboratory at Harwell in Oxfordshire, and our Daresbury Laboratory in Cheshire – each of which offers a different cluster of technological expertise that underpins and ties together diverse research fields.

    Daresbury Laboratory at Sci-Tech Daresbury in the Liverpool City Region.

    The combination of access to world-class research facilities and scientists, office and laboratory space, business support, and an environment which encourages innovation has proven a compelling combination, attracting start-ups, SMEs and large blue chips such as IBM and Unilever.

    We think our science is awesome – and we know students, teachers and parents think so too. That’s why we run an extensive Public Engagement and science communication programme, ranging from loans to schools of Moon Rocks, funding support for academics to inspire more young people, embedding public engagement in our funded grant programme, and running a series of lectures, travelling exhibitions and visits to our sites across the year.

    Ninety per cent of physics undergraduates say that they were attracted to the course by our sciences, and applications for physics courses are up – despite an overall decline in university enrolment.

     
  • richardmitnick 9:46 pm on August 13, 2020 Permalink | Reply
    Tags: "Simulating crash into asteroid reveals its heavy metal psyche", , , , , , Forthcoming asteroid mission- Psyche: Journey to a Metal World that launches in 2022., , Supercomputing   

    From Los Alamos National Laboratory: “Simulating crash into asteroid reveals its heavy metal psyche” 

    LANL bloc

    From Los Alamos National Laboratory

    August 10, 2020

    Nancy Ambrosiano
    (505) 699-1149
    nwa@lanl.gov

    New study of Psyche, the largest metal-rich (M-type) Main Belt asteroid, finds it might be the remnant of a planet that never fully formed.

    Artist’s conception of asteroid Psyche, whose composition has been proposed as a porous metallic body hurtling through space, thanks to computer modeling of its largest crater. (Image courtesy of Peter Rubin and Arizona State University.)

    New 2D and 3D computer modeling of impacts on the asteroid Psyche, the largest metal-rich (M-type) asteroid in the Main Belt, indicates it is probably metallic and porous in composition, something like a flying cosmic rubble pile. Knowing this will be critical to NASA’s forthcoming asteroid mission, Psyche: Journey to a Metal World, which launches in 2022.

    NASA Psyche spacecraft depiction

    “This mission will be the first to visit a metallic asteroid, and the more we, the scientific community, know about Psyche prior to launch, the more likely the mission will have the most appropriate tools for examining Psyche and collecting data,” said Wendy K. Caldwell, Los Alamos National Laboratory Chick Keller Postdoctoral Fellow and lead author on a paper published recently in the journal Icarus. “Psyche is an interesting body to study because it is likely the remnant of a planetary core that was disrupted during the accretion stage, and we can learn a lot about planetary formation from Psyche if it is indeed primarily metallic.”

    Modeling impact structures on Psyche contributes to our understanding of metallic bodies and how cratering processes on large metal objects differ from those on rocky and icy bodies, she noted.

    The team provides the first 3D models of the formation of Psyche’s largest impact crater, and it is the first work to use impact crater models to inform asteroid composition. The 2D and 3D models indicate an oblique impact angle where an incoming object would have struck the asteroid’s surface, deforming Psyche in a very specific and predictable manner, given the likely materials involved.

    Metals deform differently from other common asteroid materials, such as silicates, and impacts into targets of similar composition to Psyche should result in craters similar to those observed on Psyche.


    Simulating an impact crater on an asteroid. An animation video using the team’s simulation output shows a theoretical impact scenario that could have led to Psyche’s largest crater. The simulation shows how some material is ejected into space after impact and reveals the crater modification stage, where the impact area shows the resulting damaged material.

    “Our ability to model the impact through the modification stage is essential to understanding how craters form on metallic bodies,” Caldwell said. “In early stages of crater formation, the target material behaves like a fluid. In the modification stage, however, the strength of the target material plays a key role in how material that isn’t ejected ‘settles’ into the crater.”

    The researchers’ results corroborate estimates of Psyche’s composition based on observational measuring techniques. Of particular interest is the material that provided the best match, Monel. Monel is a nickel-copper alloy originally produced from ore mined at the Sudbury impact structure in Canada. The ore is thought to have come from the impactor that formed the crater, meaning the ore itself is likely to have extraterrestrial origins. The modeling successes using Monel demonstrate that Psyche’s material composition behaves similarly under shock conditions to extraterrestrial metals.

    The modeling tool used in the work, run on a Los Alamos supercomputer, was the FLAG hydrocode, previously shown to be effective in modeling impact craters and an ideal choice to model crater formation on Psyche.

    LANL Cray XC40 Trinity supercomputer

    Based upon the probable impact velocity, local gravity, and bulk density estimates, the formation of Psyche’s largest crater likely was dominated by strength rather than gravity, Caldwell said.
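    To see why strength plausibly wins over gravity on a body like Psyche, here is an illustrative comparison of the lithostatic (overburden) pressure at crater scale with a representative metal strength. Every input below (surface gravity, bulk density, crater depth, strength) is a round assumed value for order-of-magnitude illustration, not a figure from Caldwell’s paper.

    ```python
    # Illustrative comparison: lithostatic pressure (rho * g * d) at crater depth
    # versus a representative material strength. Strength-dominated cratering is
    # expected when the material strength greatly exceeds rho * g * d.
    # All inputs are assumed round values, chosen only for illustration.

    rho = 4000.0        # assumed bulk density of Psyche, kg/m^3
    g = 0.13            # assumed surface gravity of Psyche, m/s^2
    depth = 10e3        # assumed depth scale of the largest crater, m (~10 km)
    strength = 200e6    # assumed effective strength of a porous metallic target, Pa

    lithostatic = rho * g * depth       # overburden pressure at that depth
    ratio = strength / lithostatic

    print(f"lithostatic pressure at {depth/1e3:.0f} km depth: {lithostatic/1e6:.1f} MPa")
    print(f"assumed target strength:             {strength/1e6:.0f} MPa")
    print(f"strength / lithostatic:              {ratio:.0f}x")
    # With these assumptions the strength term is a few tens of times larger than
    # the lithostatic term, consistent with the strength-dominated regime
    # described above.
    ```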

    “It’s incredible what we can accomplish with the laboratory’s resources,” Caldwell noted. “Our supercomputers are some of the most powerful in the world, and for large problems like asteroid impacts, we really rely on our numerical modeling tools to supplement observational data.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Los Alamos National Laboratory’s mission is to solve national security challenges through scientific excellence.

    LANL campus
    Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, The Babcock & Wilcox Company, and URS for the Department of Energy’s National Nuclear Security Administration.
    Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.

    Operated by Los Alamos National Security, LLC for the U.S. Dept. of Energy’s NNSA

     
  • richardmitnick 5:13 pm on August 3, 2020 Permalink | Reply
    Tags: "Unequal neutron-star mergers create unique 'bang' in simulations", , , , , , , , Supercomputing   

    From Pennsylvania State University: “Unequal neutron-star mergers create unique ‘bang’ in simulations” Updated to include the full list of supercomputers used in this project 

    Penn State Bloc

    From Pennsylvania State University

    8.3.20
    David Radice
    david.radice@psu.edu

    Gail McCormick
    gailmccormick@psu.edu
    Work Phone: 814-863-0901

    Through a series of simulations, an international team of researchers has determined that some mergers of neutron stars produce radiation that should be detectable from Earth. When neutron stars of unequal mass merge, the smaller star is ripped apart by tidal forces from its massive companion (left). Most of the smaller partner’s mass falls onto the massive star, causing it to collapse and to form a black hole (middle). But some of the material is ejected into space; the rest falls back to form a massive accretion disk around the black hole (right). Image: Adapted from Bernuzzi et al. 2020, Monthly Notices of the Royal Astronomical Society.

    When two neutron stars slam together, the result is sometimes a black hole that swallows all but the gravitational evidence of the collision. However, in a series of simulations, an international team of researchers including a Penn State scientist determined that these typically quiet — at least in terms of radiation we can detect on Earth — collisions can sometimes be far noisier.

    “When two incredibly dense collapsed neutron stars combine to form a black hole, strong gravitational waves emerge from the impact,” said David Radice, assistant professor of physics and of astronomy and astrophysics at Penn State and a member of the research team. “We can now pick up these waves using detectors like LIGO in the United States and Virgo in Italy.

    MIT/Caltech Advanced aLigo

    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    VIRGO Gravitational Wave interferometer, near Pisa, Italy


    A black hole typically swallows any other radiation that could have come out of the merger that we would be able to detect on Earth, but through our simulations, we found that this may not always be the case.”

    The research team found that when the masses of the two colliding neutron stars are different enough, the larger companion tears the smaller apart. This causes a slower merger that allows an electromagnetic “bang” to escape. Astronomers should be able to detect this electromagnetic signal, and the simulations provide signatures of these noisy collisions that astronomers could look for from Earth.

    The research team, which includes members of the international collaboration CoRe (Computational Relativity), describe their findings in a paper appearing online in the Monthly Notices of the Royal Astronomical Society.

    “Recently, LIGO announced the discovery of a merger event in which the two stars have possibly very different masses,” said Radice. “The main consequence in this scenario is that we expect this very characteristic electromagnetic counterpart to the gravitational wave signal.”

    After reporting the first detection of a neutron-star merger in 2017, in 2019 the LIGO team reported the second, which they named GW190425. The result of the 2017 collision was about what astronomers expected, with a total mass of about 2.7 times the mass of our sun and the two neutron stars roughly equal in mass. But GW190425 was much heavier, with a combined mass of around 3.5 solar masses, and the ratio of the two components’ masses was more unequal — possibly as high as 2 to 1.
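    To make those round numbers concrete, a 2-to-1 mass ratio with a 3.5-solar-mass total corresponds to roughly the component masses below. This is simple arithmetic on the figures quoted in this article, not the LIGO collaboration’s detailed parameter estimates.

    ```latex
    % Illustrative arithmetic only, using the round numbers quoted above.
    \begin{gather}
      m_1 + m_2 = 3.5\,M_\odot, \qquad \frac{m_1}{m_2} = 2 \\
      \Longrightarrow\quad m_2 = \tfrac{3.5}{3}\,M_\odot \approx 1.2\,M_\odot,
      \qquad m_1 = 2\,m_2 \approx 2.3\,M_\odot .
    \end{gather}
    % A secondary near the lower edge of the neutron-star mass range and a primary
    % near the upper end, which is what makes tidal disruption of the smaller star
    % plausible in such an event.
    ```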

    “While a 2 to 1 difference in mass may not seem like a large difference, only a small range of masses is possible for neutron stars,” said Radice.

    Neutron stars can exist only in a narrow range of masses between about 1.2 and 3 times the mass of our sun. Lighter stellar remnants don’t collapse to form neutron stars and instead form white dwarfs, while heavier objects collapse directly to form black holes. When the difference between the merging stars gets as large as in GW190425, scientists suspected that the merger could be messier — and louder in electromagnetic radiation. Astronomers had detected no such signal from GW190425’s location, but coverage of that area of the sky by conventional telescopes that day wasn’t good enough to rule it out.

    To understand the phenomenon of unequal neutron stars colliding, and to predict signatures of such collisions that astronomers could look for, the research team ran a series of simulations using Pittsburgh Supercomputing Center’s Bridges platform and the San Diego Supercomputer Center’s Comet platform — both in the National Science Foundation’s XSEDE network of supercomputing centers and computers — and other supercomputers.

    Bridges HPE Apollo 2000 XSEDE-allocated supercomputer at Pittsburgh Supercomputing Center

    SDSC Dell Comet supercomputer at San Diego Supercomputer Center (SDSC)

    The supercomputer Lenovo SuperMUC at at Leibniz Supercomputing Centre, Munich

    MARCONI, CINECA, Lenovo NeXtScale supercomputer Italy

    Dell PowerEdge U Texas Austin Stampede Supercomputer. Texas Advanced Computing Center 9.6 PF

    NCSA U Illinois Urbana-Champaign Blue Waters Cray Linux XE/XK hybrid machine supercomputer,
    at the National Center for Supercomputing Applications

    Resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy, used in this work; specific assets not named:

    NERSC at LBNL

    NERSC Cray Cori II supercomputer, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    NERSC Hopper Cray XE6 supercomputer, named after Grace Hopper, One of the first programmers of the Harvard Mark I computer

    NERSC Cray XC30 Edison supercomputer

    NERSC GPFS for Life Sciences


    The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

    NERSC PDSF computer cluster in 2003.

    PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

    Future:

    Cray Shasta Perlmutter SC18 AMD Epyc Nvidia pre-exascale supercomputer

    NERSC is a DOE Office of Science User Facility.

    The researchers found that as the two simulated neutron stars spiraled in toward each other, the gravity of the larger star tore its partner apart. That meant that the smaller neutron star didn’t hit its more massive companion all at once. The initial dump of the smaller star’s matter turned the larger into a black hole. But the rest of its matter was too far away for the black hole to capture immediately. Instead, the slower rain of matter into the black hole created a flash of electromagnetic radiation.

    The research team hopes that the simulated signature they found can help astronomers using a combination of gravitational-wave detectors and conventional telescopes to detect the paired signals that would herald the breakup of a smaller neutron star merging with a larger.

    The simulations required an unusual combination of computing speed, massive amounts of memory, and flexibility in moving data between memory and computation. The team used about 500 computing cores, running for weeks at a time, over about 20 separate instances. The many physical quantities that had to be accounted for in each calculation required about 100 times as much memory as a typical astrophysical simulation.
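    A rough sense of the total computing cost follows from those figures. The run length is described only as “weeks at a time,” so the three-week value below is an assumption used purely to get an order of magnitude.

    ```python
    # Order-of-magnitude estimate of total core-hours from the figures quoted above.
    # Assumption: "weeks at a time" is taken to be ~3 weeks per instance.

    cores = 500                # cores per simulation (from the article)
    instances = 20             # separate simulation instances (from the article)
    weeks_per_instance = 3     # ASSUMED duration; the article only says "weeks"

    hours_per_instance = weeks_per_instance * 7 * 24
    core_hours = cores * instances * hours_per_instance
    print(f"~{core_hours / 1e6:.1f} million core-hours")  # ~5 million under these assumptions
    ```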

    “There is a lot of uncertainty surrounding the properties of neutron stars,” said Radice. “In order to understand them, we have to simulate many possible models to see which ones are compatible with astronomical observations. A single simulation of one model would not tell us much; we need to perform a large number of fairly computationally intensive simulations. We need a combination of high capacity and high capability that only machines like Bridges can offer. This work would not have been possible without access to such national supercomputing resources.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Penn State Campus

    About Penn State

    WHAT WE DO BEST

    We teach students that the real measure of success is what you do to improve the lives of others, and they learn to be hard-working leaders with a global perspective. We conduct research to improve lives. We add millions to the economy through projects in our state and beyond. We help communities by sharing our faculty expertise and research.

    Penn State lives close by no matter where you are. Our campuses are located from one side of Pennsylvania to the other. Through Penn State World Campus, students can take courses and work toward degrees online from anywhere on the globe that has Internet service.

    We support students in many ways, including advising and counseling services for school and life; diversity and inclusion services; social media sites; safety services; and emergency assistance.

    Our network of more than a half-million alumni is accessible to students when they want advice and to learn about job networking and mentor opportunities as well as what to expect in the future. Through our alumni, Penn State lives all over the world.

    The best part of Penn State is our people. Our students, faculty, staff, alumni, and friends in communities near our campuses and across the globe are dedicated to education and fostering a diverse and inclusive environment.

     
  • richardmitnick 2:04 pm on July 15, 2020 Permalink | Reply
    Tags: "Biggest Cosmic Nuclear Bombs - See the First Supernovae in Cutting-edge Supercomputer Simulations", ASIAA ACADEMIA SINICA Institute of Astronomy and Astrophysics, , , , , Hypernovae, Supercomputing   

    From ASIAA ACADEMIA SINICA Institute of Astronomy and Astrophysics: “Biggest Cosmic Nuclear Bombs – See the First Supernovae in Cutting-edge Supercomputer Simulations”


    From ASIAA ACADEMIA SINICA Institute of Astronomy and Astrophysics

    July 15th, 2020

    A hypernova is a type of supernova roughly 100 times more energetic than an ordinary one. Astronomers think these biggest cosmic bombs hold the key to understanding the moment when the first supernovae were born. However, hypernovae are extremely rare in observations. The ASIAA team led by Ke-Jung (Ken) Chen has therefore used the NAOJ’s CfCA supercomputer to complete high-resolution simulations that tackle this issue. By exploring deep into the core, they discovered what hypernovae would look like 300 days after their explosion. The work offers a striking conclusion: the effect that gas motion has on luminosity estimates has long been overlooked in previous theoretical models. This result improves our understanding of hypernova formation and may prove instrumental in future hypernova observations.

    Right after the Big Bang, the only elements in the universe were hydrogen and helium; all of the other natural elements appeared only after the first stars were born and evolved. Understanding how the first stars and the first heavy elements formed therefore requires research on supernovae. Nearly 50 years of supernova research has shown that this is not an easy task, and many mysteries remain open. Believing that hypernovae play a key role in any breakthrough, the ASIAA team led by Ken Chen looked into the heart of a hypernova with numerical simulations. Although a hypernova releases about 100 times more energy than an ordinary supernova, it is extremely rare observationally, and that is where astronomers turn to good theoretical models and supercomputer simulations for help.

    There are currently two theoretical models of how hypernovae form, and Chen’s team chose to build their simulations on the pair-instability supernova model, the one that is most anticipated and relatively more robust (the other is the core-collapse, or “black hole,” model). The difference is that the latter leaves a black hole behind, while the former leaves nothing at all when the star completely blows itself up. Usually when massive stars explode, they leave something behind: either a dense core called a neutron star, or a black hole. But the very first, most massive stars in the universe were made only of hydrogen and helium, with no trace of other elements. Near the end of their evolution, such very massive stars can begin producing electron-positron pairs, causing a runaway effect in which the pressure in the star’s core drops, triggering a collapse and an enormous explosion that completely disrupts the star, leaving nothing behind, not even a black hole.
    ________________________________________________________
    “A star must be 140-260 times the mass of the Sun to die in such a manner,” Chen said. Astronomers call stars that explode in this way “pair-instability supernovae” (the “pair” refers to the electron-positron pairs).

    Such an explosion produces a large amount of the radioactive isotope nickel-56 (Ni-56), which, according to Chen, is “the most important element in a supernova, because its decay energy accounts for most of the visible light of a supernova, and without it, many supernovae would have been too dark to observe.”
    _________________________________________________________
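    For an order-of-magnitude feel for when pair production matters (a standard back-of-envelope estimate, not a number taken from the paper): electron-positron pairs become easy to make once typical thermal photon energies approach the electron rest-mass energy. In a few lines of Python:

```python
# Back-of-envelope: temperature scale at which thermal photons can make e+/e- pairs.
# kT ~ m_e c^2 marks where pair production becomes copious; in very massive stellar
# cores the instability actually sets in somewhat below this, because the high-energy
# tail of the photon distribution is already enough to start producing pairs.
M_E_C2_EV = 0.511e6          # electron rest-mass energy in eV
K_B_EV_PER_K = 8.617e-5      # Boltzmann constant in eV/K

T_pair = M_E_C2_EV / K_B_EV_PER_K
print(f"kT = m_e c^2 at T ~ {T_pair:.2e} K")   # roughly 6e9 K
```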

    The international team led by Ken Chen used the NAOJ’s CfCA supercomputer to run their high-resolution hydrodynamical simulations of a hypernova. Describing the code and the runs as “extremely challenging,” Chen explains: “The larger the simulation scale, the harder it becomes to keep the resolution high; the entire calculation becomes very difficult and demands much more computational power, not to mention that the physics involved is also complicated.” To cope with this, Chen said, their best advantage is their “well-crafted code and a robust program structure.”

    While previous simulations of the pair-instability supernova model had followed only the first 30 days after the explosion, Chen’s team ran theirs out to 300 days, which let them study the entire decay of nickel-56: the energy release of its decay chain, through cobalt-56, unfolds over months, so the simulation had to run long enough to capture it. They are the first team to do so. Drawing on extensive experience in simulating large-scale supernovae, the team probed the relationship between gas motion and energy radiation inside the supernova. They found that during the initial decay of nickel-56, the heated gas expanded and formed thin-shell structures.
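    To see why the run length matters, it helps to look at the radioactive chain that powers the light: nickel-56 decays to cobalt-56, which decays to stable iron-56. The sketch below is a minimal, generic illustration of that two-step chain using the standard laboratory half-lives; it is not the team’s radiation-hydrodynamics code.

```python
import numpy as np

# Two-step decay chain Ni-56 -> Co-56 -> Fe-56 (Bateman solution).
# After the first few weeks the energy release is dominated by the slower
# cobalt-56 step, which is why a ~300-day simulation is needed to follow
# the whole radioactive energy input.
T_HALF_NI = 6.08    # days, Ni-56
T_HALF_CO = 77.2    # days, Co-56
lam_ni = np.log(2) / T_HALF_NI
lam_co = np.log(2) / T_HALF_CO

def abundances(t_days, n_ni0=1.0):
    """Ni-56 and Co-56 nuclei (per initial Ni-56 nucleus) at time t."""
    n_ni = n_ni0 * np.exp(-lam_ni * t_days)
    n_co = n_ni0 * lam_ni / (lam_co - lam_ni) * (
        np.exp(-lam_ni * t_days) - np.exp(-lam_co * t_days)
    )
    return n_ni, n_co

for t in (30, 100, 300):
    n_ni, n_co = abundances(t)
    print(f"day {t:3d}: Ni-56 left {n_ni:.3f}, Co-56 present {n_co:.3f}")
```

    At day 30 most of the original nuclei are still sitting in the cobalt-56 stage, so a 30-day simulation captures only the beginning of the radioactively powered light curve; even at day 300 a few percent of the nuclei are still decaying.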

    3
    A 2-D snapshot of a pair-instability supernova as the explosion wave is about to break through the star’s surface. The small ripples mark fluid instabilities in a region where different elements interact and mix. Image Credit: ASIAA/Ken Chen

    3
    A 3-D profile of a pair-instability supernova. The blue cube shows the entire simulated volume; the orange region is where nickel-56 decays.
    Image Credit: ASIAA/Ken Chen

    Chen said, “The temperature inside the gas shell is extremely high. From the calculation we understand that roughly 30% of the energy goes into gas motion, so only the remaining ~70% can become the supernova’s luminosity. Earlier models ignored these gas-dynamic effects, so their supernova luminosities were all overestimated.”
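    Read literally, that energy budget implies a simple correction: if a fraction f ≈ 0.3 of the decay energy goes into driving gas motion, the escaping light is roughly L ≈ (1 − f) × L_decay, so a model that assumes f = 0 overpredicts the luminosity by a factor of about 1/(1 − 0.3) ≈ 1.4. (This is a simplified reading of the quoted split, not a figure from the paper.)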

    In the field of pair-instability supernova studies, these results will contribute to a better understanding of the radiation mechanism and observational characteristics of these explosions.

    Several studies suggest that the first stars in the universe had masses of 100 to 300 solar masses, hinting that the chance of the first supernovae being pair-instability supernovae could be high. Moreover, the first stars may be detectable by the James Webb Space Telescope (JWST) – the successor to the Hubble Space Telescope – making the observation and theory of pair-instability supernovae an important subject in the near future.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:44 am on July 15, 2020 Permalink | Reply
    Tags: "Revealing the atmospheric impact of planetary collisions", , , , , Supercomputing, U Durham   

    From Durham University: “Revealing the atmospheric impact of planetary collisions” 

    Durham U bloc

    From Durham University

    15 July 2020

    Giant impacts have a wide range of consequences for young planets and their atmospheres, according to research led by our scientists.

    These huge collisions dominate the late stages of planet formation.

    Using 3D supercomputer simulations the researchers have found a way of revealing how much atmosphere is lost during these events.


    The atmospheric impact of gigantic planetary collisions

    Earth-like planets

    Their simulations show how Earth-like planets with thin atmospheres might have evolved in an early solar system depending on how they were impacted by other objects.

    They ran more than 100 detailed simulations of different giant impacts, altering the speed and angle of the impact on each occasion.

    They found that grazing impacts – like the one thought to have formed our Moon 4.5 billion years ago – led to much less atmospheric loss than a direct hit.

    Giant impacts

    Head-on collisions and higher speeds led to much greater erosion, sometimes obliterating the atmosphere completely along with some of the mantle, the layer that sits under a planet’s crust.

    The research tells us more about what happens during these giant impacts, which scientists know are common and important events in the evolution of planets both in our solar system and beyond.

    This will help us to understand both the Earth’s history as a habitable planet and the evolution of exoplanets around other stars.

    The researchers are carrying out hundreds more simulations to test the effects that the different masses and compositions of colliding objects might have.
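    For a sense of how such a campaign is organized, here is a hypothetical sketch of laying out the grid of impact scenarios that would each become one simulation; the parameter names and values are illustrative only, not those used in the published SWIFT runs.

```python
import itertools

# Hypothetical parameter grid for a giant-impact sweep: each combination of
# impact angle, speed, and (in follow-up work) impactor mass defines one
# simulation to be run with an SPH code such as SWIFT.
angles_deg = [0, 15, 30, 45, 60]            # 0 = head-on, larger = more grazing
speeds_vesc = [1.0, 1.5, 2.0, 2.5, 3.0]     # impact speed in units of the mutual escape speed
mass_ratios = [0.1, 0.5, 1.0]               # impactor mass / target mass

runs = [
    {"angle_deg": a, "speed_vesc": v, "mass_ratio": m}
    for a, v, m in itertools.product(angles_deg, speeds_vesc, mass_ratios)
]
print(f"{len(runs)} simulations in the sweep")   # 5 * 5 * 3 = 75 in this toy grid
for run in runs[:3]:
    print(run)
```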

    Find out more

    The findings are published in The Astrophysical Journal.

    The simulations were conducted on the COSMA supercomputer, part of the DiRAC High-Performance Computing facility based in Durham. The research used the SWIFT open-source code, largely developed and maintained at Durham, to enable the running of these high-resolution supercomputer simulations.

    U Durham HPC DiRAC COSMA7 supercomputer

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Durham U campus

    Durham University is distinctive – a residential collegiate university with long traditions and modern values. We seek the highest distinction in research and scholarship and are committed to excellence in all aspects of education and transmission of knowledge. Our research and scholarship affect every continent. We are proud to be an international scholarly community which reflects the ambitions of cultures from around the world. We promote individual participation, providing a rounded education in which students, staff and alumni gain both the academic and the personal skills required to flourish.

     
  • richardmitnick 9:50 am on July 9, 2020 Permalink | Reply
    Tags: "Designing the perfect bridge", , , , Supercomputing   

    From Science Node: “Designing the perfect bridge” 

    Science Node bloc
    From Science Node

    Reinventing bridge design enables longer spans and reduces building material.

    1
    The Golden Gate Bridge and San Francisco, CA at sunset. This photo was taken from the Marin Headlands.
    Rich Niewiroski Jr.

    From the Union Bridge—the oldest suspension bridge still in use—built on the border between England and Scotland in 1820, to San Francisco’s iconic Golden Gate Bridge [above] and the Great Belt Link that opened in 1997 in Denmark, many suspension bridges have become globally recognized landmarks. They also play a key role in connecting people and civil infrastructure.

    2
    Hutton, River Tweed, Union Bridge. Outwivcamera

    3
    Storebæltsbroen as seen from ‘Bro og naturcenter’, Sjælland, Denmark. Sendelbach

    Now, ever longer bridges are being envisaged. The Strait of Messina Bridge would connect the Italian mainland to Sicily. In Norway, a project to replace the ferries that are part of the north-south European route E39 will involve some of the longest proposed bridge spans worldwide.

    5
    Strait of Messina Bridge depiction. Video e foto del Ponte

    However, the main spans (the length of the suspended roadway between the two lofty bridge pylons) of these planned bridges are fast approaching the limit of what is possible using the conventional design of suspension bridges originating in the 1950s.

    In addition, the construction of bridges and infrastructure consumes a lot of energy and produces considerable CO2 emissions. According to the 2019 Global Status Report of the UN Environment Programme, the construction industry is responsible for nearly 40 percent of total global CO2 emissions.

    A large portion of these emissions arises from the production and transport of building materials, primarily steel and concrete. Consequently, a way to reduce environmental impact is to find methods that use less of those materials.

    Supercomputing reveals possibilities for super-long bridges.

    These pressing problems are why Mads Baandrup and his colleagues in the group of professor Ole Sigmund and associate professor Niels Aage at the Technical University of Denmark (DTU) have reinvented the design of the bridge deck, the traffic-bearing element of suspension bridges [Nature Communications].

    6
    Conventional bridge girders consist of straight steel plates placed orthogonally to stabilize the bridge deck. Though easy to build, this concept doesn’t provide the most efficient transfer of loads on the bridge. Courtesy Nature Communications, Baandrup, et al.

    To ensure industrial applicability, the research was done in close collaboration with Technical Director Henrik Polk from COWI. The goal was to maximize the load carrying capacity of the bridge deck to enable a longer main span, while at the same time minimizing material consumption.

    To achieve this, the scientists used topology optimization, a computational method already used extensively in the car and aircraft industries to optimize combustion engines or wing shapes.

    “With the recently increased power of supercomputers we could adjust the method to apply it to large-scale structures,” says Baandrup.

    Using the PRACE Joliot-Curie supercomputer at GENCI in France, Baandrup and his colleagues analyzed a bridge element measuring 30 x 5 x 75 meters—a repetitive section that represents the whole bridge deck.

    4
    PRACE Joliot Atos Curie supercomputer. Prace.

    7
    The ideal girder design, determined by topology optimization after 400 iterations, offers a more direct and therefore more efficient load transfer than the conventional design. From this ideal, scientists derived an interpreted layout of curved steel diaphragms (red panels). Courtesy Nature Communications, Baandrup, et al.

    This element was divided into 2 billion voxels (the 3D analogue of pixels), each no bigger than a few centimeters. Existing components were stripped out to remove any trace of conventional design. The topology optimization then determined whether each individual voxel should consist of air or steel.

    “In this way, the optimized structure is calculated from scratch, without any assumptions about what it should look like,” Baandrup explains.
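    Density-based topology optimization of this kind typically works as follows: every voxel gets a density between 0 (air) and 1 (steel), the optimizer minimizes the structure’s compliance (flexibility) under the applied loads, and a constraint caps the total amount of material. Below is a minimal, generic sketch of that update loop with a toy load model standing in for the finite-element analysis; it is not the solver or the problem setup used in the study.

```python
import numpy as np

# Generic density-based topology optimization loop (heavily simplified sketch).
# Each cell carries a density x in [0, 1] (0 = air, 1 = steel). In the real
# method a finite-element solve gives the compliance sensitivities every
# iteration; here each cell simply carries a fixed load f, its stiffness
# scales as x**p, and its compliance contribution is f**2 / x**p, so the
# sensitivities are analytic and the loop is runnable on its own.

n_cells = 1000          # the real bridge-deck model used ~2 billion voxels
vol_frac = 0.3          # fraction of the domain allowed to be steel
p = 3.0                 # SIMP-style penalization: intermediate densities are inefficient
move = 0.2              # move limit per iteration (standard optimality-criteria safeguard)

rng = np.random.default_rng(0)
load = rng.random(n_cells)              # stand-in for how much load each cell carries

x = np.full(n_cells, vol_frac)          # start from a uniform "grey" design
for it in range(100):
    dc = -p * load**2 * x**(-(p + 1))   # d(compliance)/d(density), always negative

    # Optimality-criteria update: bisect on the Lagrange multiplier of the
    # volume constraint so the updated design uses exactly vol_frac material.
    lo, hi = 1e-12, 1e12
    for _ in range(80):
        lam = np.sqrt(lo * hi)
        x_trial = x * np.sqrt(-dc / lam)
        x_new = np.clip(x_trial, np.maximum(x - move, 0.01), np.minimum(x + move, 1.0))
        if x_new.mean() > vol_frac:
            lo = lam
        else:
            hi = lam
    x = x_new

order = np.argsort(load)
print(f"mean density: {x.mean():.3f}")
print(f"density of 10 least-loaded cells: {x[order[:10]].mean():.2f}")
print(f"density of 10 most-loaded  cells: {x[order[-10:]].mean():.2f}")
```

    The toy run shows the basic behavior: material concentrates where the load is carried, subject to the material budget. In the full method the expensive step is the finite-element solve inside every iteration, which is what turns runs with billions of voxels into a supercomputing problem.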

    To make the calculation work, the scientists modified an algorithm previously used to find the optimized shape of an aircraft wing, to instead impose the symmetry inherent to all bridge decks.

    “In a process working towards optimization in iterations that were parallelized on thousands of nodes, this was not trivial,” says Baandrup. The symmetry constraint provided the advantage of reducing computational time. The complete calculation would have taken 155 years on an ordinary computer but took only 85 hours using 16,000 nodes on Joliot-Curie.
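    As a rough consistency check on those figures: 155 years is about 1.36 million hours of single-machine time, and 1.36 million hours divided by 85 hours is a speedup of roughly 16,000, which lines up with the 16,000 nodes used if the parallelization scales close to linearly (taking “an ordinary computer” to mean roughly one node).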

    Less material means more sustainable construction.

    9
    Design concept applied to the 2692 meter Osman Gazi bridge in Turkey. From the organic-looking and highly complex optimization result in the upper right, a simplified novel design was identified (shown in red). Compared to the conventional design (in blue), the thin curved steel diaphragms lead to a 28 percent weight reduction for the bridge girder. Courtesy Mads Baandrup, Niels Aage.

    The topology optimization resulted in what looks like an organically grown bridge. Instead of the traditional girder of straight steel diaphragms placed inside the bridge deck to reinforce and provide stability, the algorithm came up with a net of curved steel elements.

    “The software identifies the optimal structure but does not take into account if the structure is actually buildable,” Baandrup explains.

    Out of that ideal design, however, he and his colleagues extracted a concept which is constructible—and at a reasonable cost. This interpreted design consists of a girder made of bundles of curved steel plates that are thinner than the plates constituting the conventional design.

    The curved plates transfer the loads on the bridge deck much more directly into the hangers (vertical cables that absorb the loads of a suspension bridge deck) than traditional steel girders. That’s why bridges designed in this way can be constructed to span a longer distance than conventional bridges while requiring less material. In fact, the new design reduces steel consumption by 28 percent, resulting in a reduction of CO2 emissions of a similar magnitude.

    In principle, a similar topology optimization could be applied to other large building structures, such as high-rises or stadiums, in order to reduce the consumption of steel and concrete and thereby work towards a more sustainable construction.

    “Our results reveal a huge potential in rendering construction more ecological,” says Baandrup. “In the future, the construction industry should not only think about how to reduce cost but also how to reduce energy consumption and CO2 emissions. With our results, we believe we can initiate this discussion.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     