Tagged: Supercomputing

  • richardmitnick 9:48 am on October 8, 2021 Permalink | Reply
    Tags: "Modernizing Workflow Analysis to Assist in Supercomputer Procurements", , , Supercomputing   

    From DOE’s Exascale Computing Project (US): “Modernizing Workflow Analysis to Assist in Supercomputer Procurements” 

    From DOE’s Exascale Computing Project (US)

    October 6, 2021
    Rob Farber

    It is well known in the high-performance computing (HPC) community that many (perhaps most) HPC workloads exhibit dynamic performance envelopes that can stress the memory, compute, network, and storage capabilities of modern supercomputers. Optimizing HPC workloads to run efficiently on existing hardware is challenging, but quantifying their performance envelopes to extrapolate how they will perform on new system architectures is even more challenging, albeit essential. This predictive analysis is beneficial because it helps each data center’s procurement team identify the machines and system architectures that will deliver the most performance for their production workloads. However, once a supercomputer is installed, configured, made available to users, and benchmarked, it is too late to consider fundamental architectural changes.

    The goal of the Exascale Computing Project (ECP) hardware evaluation (HE) group is to modernize the metrics and predictive analysis to guide US Department of Energy (DOE) supercomputer procurements. Scott Pakin, the ECP HE lead at DOE’s Los Alamos National Laboratory (US), notes, “Our main customer is the DOE facilities, who consider our work to be very valuable in determining the types of machines to be procured and configured. Our work can also be used by application developers seeking to understand the performance characteristics of their codes.”

    Addressing the Complexity of Modern System Procurements

    Many modern supercomputers now contain both CPUs and GPUs, which have their own separate memory systems and run according to different computational models. CPUs, for example, are general-purpose multiple instruction, multiple data (MIMD) processing elements in which each processor core can run a separate task or instruction stream. On the other hand, GPUs use a single instruction, multiple thread (SIMT) execution model that provides a form of fine-grained data parallelism. Some applications require fine-grained MIMD processing, which means they can only run efficiently on CPUs, whereas others can run efficiently on both CPUs and GPUs, and procurement teams must account for this. However, future systems could contain devices that run according to a non-von Neumann execution model. Potential devices include coarse-grained reconfigurable arrays and future artificial intelligence accelerators.

    The ECP must address these and other complexities. For this reason, the HE portfolio focuses on integration with facilities to answer the following questions.

    How well can ECP applications expect to perform on future hardware?
    Where should facilities focus their efforts in terms of helping applications exploit upcoming hardware?
    What hardware alternatives (e.g., node architectures, memory layout, network topology) would most benefit DOE applications?

    Experience has shown that HE often serves as a bridge between application development (AD), which is focused more on “here and now” performance, and facilities. HE takes a more future-looking approach that can provide advance information to facilities. Nick Wright, the Advanced Technology group lead and chief architect at The DOE’s NERSC National Energy Research Scientific Computing Center (US), notes, “Along with helping us define the characteristics of our production workload in a manner that is useful for future hardware procurements, the ECP HE effort has also provided useful information to our application teams to help optimize our workloads.”

    Evaluating CPU/GPU Data Movement

    Modern GPUs support two mechanisms for moving data between CPU memory and GPU memory. Programmer-managed memory requires that the programmer manually allocate memory and explicitly specify each data movement in terms of contiguous blocks of data on each device. The relatively recent addition of GPU unified memory with hardware support means that pages of data can be moved between devices automatically on demand, without explicit programmer intervention. In this way, unified memory can support the movement of complex and sparse data structures. (For more information see https://developer.nvidia.com/blog/unified-memory-cuda-beginners/).

    Programmer-managed memory is more difficult to use but can lead to superior performance if employed carefully because the programmer has the ability to explicitly specify fast, bulk data transfers and even choreograph the asynchronous overlapping of computation and data movement. In contrast, unified memory relies on “smarts” in the hardware (implemented via the hardware memory management unit, or MMU) to determine when pages of data are needed by either the CPU or the GPU. Sometimes, the hardware cannot minimize time-consuming data movement as well as a knowledgeable programmer can, but this is not always the case. Overall, the ease of use and benefits of unified memory are hard to deny – especially as it is well suited for complicated data structures and access patterns for which even the programmer cannot determine the best way to transfer data between devices.
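
    The distinction is easiest to see in code. Below is a minimal CUDA sketch, not taken from the ECP work; the kernel, sizes, and names are illustrative assumptions. It contrasts explicit, programmer-managed transfers with unified (managed) memory.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Trivial kernel used only to touch the data on the GPU.
__global__ void scale(double *x, int n, double a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(double);

    // 1) Programmer-managed memory: explicit allocation and bulk copies.
    double *h = (double *)malloc(bytes);               // host buffer
    double *d = nullptr;
    cudaMalloc(&d, bytes);                             // device buffer
    for (int i = 0; i < n; ++i) h[i] = 1.0;
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // explicit, contiguous transfer
    scale<<<(n + 255) / 256, 256>>>(d, n, 2.0);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // explicit copy back

    // 2) Unified (managed) memory: pages migrate on demand, no explicit copies.
    double *u = nullptr;
    cudaMallocManaged(&u, bytes);
    for (int i = 0; i < n; ++i) u[i] = 1.0;            // pages touched on the CPU
    scale<<<(n + 255) / 256, 256>>>(u, n, 2.0);        // pages fault over to the GPU
    cudaDeviceSynchronize();
    printf("%f %f\n", h[0], u[0]);                     // u is touched on the CPU again

    cudaFree(d); cudaFree(u); free(h);
    return 0;
}
```

    In the explicit version the programmer controls exactly when the two bulk copies happen and could overlap them with computation using CUDA streams; in the managed version the memory management hardware migrates pages whenever the CPU or GPU touches them.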

    Programmer-managed and unified memory are not mutually exclusive. In fact, it is reasonable for an application to use programmer-managed memory to transfer regular data structures (like a matrix) and unified memory to transfer irregular data structures (like an in-memory database or a graph). Scott Pakin notes that the HE team investigated this use case and found that it can lead to excessive data movement, as shown in Figure 3. Part of the reason is that on current GPUs, memory explicitly allocated on the GPU takes precedence over unified memory that can migrate between the CPU and GPU—even if the latter memory is more critical to an application’s performance. As a result, unified memory can be repeatedly evicted from the GPU to the CPU to make room, then brought back as soon as it is again required.

    Figure 3: Identification of excess data movement between the CPU and GPU memory systems. (Source: the ECP.)

    Investigations like this provide three main benefits:

    - inform facilities, AD, and software technology of the causes of excess data movement in CPU/GPU interactions;
    - provide new and enhanced tools that relate excess data movement to application data structures and access patterns; and
    - identify potential ways to reduce excess data movement.

    Instruction Mix Analysis

    CPUs execute a variety of instruction types. Pakin notes that integer instructions typically execute faster than floating-point and branch instructions, which execute faster than memory instructions. Consequently, the mix of instructions an application executes correlates with application performance. This mix is not normally fixed but can depend on how the application is run. As an experiment, the HE team kept an application—in this case, the SW4Lite proxy application—and its inputs fixed while they varied the level of hardware thread parallelism the application was allowed to exploit and observed how the instruction mix varied by kernel routine.

    Figure 4 presents the resulting measurements. Of the sixteen kernels, nine are dominated by various types of floating-point operations (FPOPS, DPOPS, SPOPS, and FMAINS), and the ratio changes little with increased parallelism. This indicates that those kernels are likely to observe good performance even with increased parallelism. However, five of the sixteen kernels are dominated by memory operations (the blue LSTINS bars), and a few of those see an increase in the fraction of memory operations with increased parallelism. The implication is that the performance of those kernels will become dominated by the memory subsystem as parallelism increases and therefore run slower than an application developer might hope.

    Figure 4: Categorization of instruction mix. In the color key on the right, LSTINS are the load-store instructions, INTINS are the integer instructions, BRINS are the branch instructions, FPOPS are the floating-point operation instructions, DPOPS are the double-precision operations, SPOPS are the single-precision operations, and FMAINS are the Fused Multiply-Add instructions.
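
    As a rough illustration of the kind of post-processing behind a plot like Figure 4 (this is not the HE team’s tooling; the category names follow the figure’s key and the counter values are invented), the sketch below turns raw per-kernel instruction counters into the fractions shown in the stacked bars.

```cpp
#include <cstdio>
#include <map>
#include <string>

// Per-kernel instruction counters for one run, e.g. gathered from a profiler
// at a fixed input but a given level of hardware thread parallelism.
using Counters = std::map<std::string, double>;   // category -> instruction count

// Convert raw counts into the fractions plotted in a stacked-bar instruction-mix chart.
Counters mix_fractions(const Counters &c) {
    double total = 0.0;
    for (const auto &kv : c) total += kv.second;
    Counters f;
    for (const auto &kv : c) f[kv.first] = (total > 0.0) ? kv.second / total : 0.0;
    return f;
}

int main() {
    // Hypothetical counters for one kernel at 4 and at 64 hardware threads.
    Counters threads4  = {{"LSTINS", 4.0e9}, {"INTINS", 2.0e9}, {"BRINS", 0.5e9}, {"DPOPS", 6.0e9}};
    Counters threads64 = {{"LSTINS", 7.0e9}, {"INTINS", 2.0e9}, {"BRINS", 0.5e9}, {"DPOPS", 6.0e9}};
    for (const auto &kv : mix_fractions(threads4))
        printf("4 threads   %-6s %.2f\n", kv.first.c_str(), kv.second);
    for (const auto &kv : mix_fractions(threads64))
        printf("64 threads  %-6s %.2f\n", kv.first.c_str(), kv.second);
    // A load-store (LSTINS) fraction that grows with the thread count flags a kernel
    // whose performance will become dominated by the memory subsystem.
    return 0;
}
```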

    The team believes that this analysis capability will help guide the Facilities in future procurement, as well as guide AD in development and tuning activities. It can guide Facilities by indicating where performance gains from increased core counts may begin to peter out. At such a point, Facilities may favor CPUs with fewer hardware threads per node but more performance per thread. Similarly, AD can utilize this capability to help identify the sources of reduced efficiency as thread counts increase.

    Understanding the Effectiveness of Network Congestion Control

    On HPC systems, competition for network bandwidth can arise from multiple sources, both internal to the application and from competing jobs running on other parts of the system. Some HPC systems are also designed so that I/O traffic is routed across the supercomputer fabric rather than through a separate storage fabric, which means that file and storage I/O can also cause network congestion that slows application performance.

    The HE team recently reported progress in understanding the effectiveness of network congestion management and quality of service (QoS) (i.e., priority levels) in the presence of I/O and other many-to-one (N:1) communication patterns.

    The findings shown in Figure 5, based on large-scale network simulation, indicate that QoS and congestion management can effectively mitigate interference of N:1 traffic with other applications. The orange line represents an N:1 workload that saturates an I/O server in a multi-job environment. The blue line represents an application, in this case a “Halo3D” microbenchmark that alternates computing and communicating with its nearest neighbors on a 3‑D grid. The spikes in the graph are caused by these periodic bursts of communication. Without congestion management, the N:1 workload would consume all the network bandwidth, flattening the spikes and delaying the application’s execution until the N:1 workload completes. With congestion management, the case represented in the figure, the bursts of communication continue largely on schedule while the N:1 workload still receives substantial bandwidth when the application does not require it and only slightly degraded bandwidth when both traffic patterns communicate simultaneously. In short, congestion management helps reduce congestion on the network fabric and more fairly allocate network bandwidth across jobs, which could lead to less variation in run time for applications as well as higher performance.

    Figure 5: Example results for a congestion management benchmark. (Source: the ECP.)
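
    For readers unfamiliar with the traffic pattern being protected, here is a minimal MPI sketch in the spirit of a Halo3D-style microbenchmark (not the actual benchmark code; buffer sizes, iteration counts, and the placeholder compute step are assumptions). It alternates local computation with nearest-neighbor halo exchanges on a 3-D process grid.

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int nprocs, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Arrange the ranks in a 3-D Cartesian grid.
    int dims[3] = {0, 0, 0}, periods[3] = {0, 0, 0};
    MPI_Dims_create(nprocs, 3, dims);
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart);

    const int halo = 1 << 16;                            // arbitrary face size (in doubles)
    std::vector<double> send(halo, rank), recv(halo);

    for (int step = 0; step < 100; ++step) {
        // "Compute" phase (placeholder work between communication bursts).
        for (double &v : send) v = 0.5 * v + 1.0;

        // Communication burst: exchange a face with the neighbor in each dimension.
        for (int dim = 0; dim < 3; ++dim) {
            int lo, hi;
            MPI_Cart_shift(cart, dim, 1, &lo, &hi);      // neighbors in this dimension
            MPI_Sendrecv(send.data(), halo, MPI_DOUBLE, hi, 0,
                         recv.data(), halo, MPI_DOUBLE, lo, 0,
                         cart, MPI_STATUS_IGNORE);
        }
    }
    MPI_Finalize();
    return 0;
}
```

    Each iteration’s burst of neighbor exchanges corresponds to one spike on the blue line in Figure 5; the congestion-management results describe how well those bursts stay on schedule when an unrelated N:1 I/O flow shares the fabric.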

    The team thinks this will help in the selection and configuration of supercomputer networks and storage infrastructure, as well as help avoid the worst-case network congestion scenarios.

    Other Tools: Roofline Model

    The HE team also uses the well-regarded roofline model, which can expose several computational limitations in both CPUs and GPUs. The roofline model plots computation rate (floating-point operations per second) against computational intensity (floating-point operations per byte transferred from the memory subsystem) and represents the maximal performance achievable for a given computational intensity. The name derives from the shape of the curve, which looks like a roofline: a linear increase in performance until some threshold computational intensity is reached, followed by a horizontal line of constant performance once the processor has reached its peak computation rate. An application is plotted as a point on that graph based on its observed computation rate and computational intensity. The distance between that point and the roofline quantifies the amount of inefficiency in the application and how much additional performance can be gained. The graph also indicates whether a performance benefit can be gained by increasing the application’s computational intensity.
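
    In formula form, the roofline bound is simply the minimum of the bandwidth-limited rate and the peak compute rate. The short sketch below evaluates it for one kernel; the peak numbers and the measured point are illustrative assumptions, not data from the HE team’s study.

```cpp
#include <algorithm>
#include <cstdio>

// Roofline bound: attainable GFLOP/s = min(peak GFLOP/s, arithmetic intensity * peak GB/s).
double roofline(double intensity_flop_per_byte, double peak_gflops, double peak_gbps) {
    return std::min(peak_gflops, intensity_flop_per_byte * peak_gbps);
}

int main() {
    const double peak_gflops = 7000.0;   // assumed peak double-precision rate
    const double peak_gbps   = 900.0;    // assumed peak memory bandwidth
    const double intensity   = 0.25;     // assumed kernel: 0.25 flop per byte moved
    const double measured    = 150.0;    // assumed measured performance, GFLOP/s

    const double bound = roofline(intensity, peak_gflops, peak_gbps);   // 225 GFLOP/s here
    printf("bound = %.1f GFLOP/s, headroom = %.1fx\n", bound, bound / measured);
    // bound / measured is the "distance to the roofline" discussed in the text; a kernel
    // sitting on the sloped part of the roof can only gain by raising its intensity.
    return 0;
}
```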

    In some recent work, the HE team used the roofline model to develop roofline scaling trajectories for two sparse numerical kernels, SpTRSV (sparse triangular solve) and SpTRSM (sparse triangular solve with multiple right-hand sides), running on GPUs. This means they analyzed how changes to cache and memory access locality, warp efficiency, and streaming-multiprocessor and GPU occupancy relate to where the corresponding point appears on the roofline graph. The challenge in performing this analysis is the kernels’ data dependencies. To address this challenge, the team constructed directed acyclic graphs of these dependencies and used these to produce trend lines of application performance and arithmetic intensity for different amounts of concurrency.

    Summary

    The ECP HE team was formed to provide the ECP and DOE facilities with hardware knowledge and analysis capabilities. New hardware architectures, tiered memory systems, and other advances in computer architectures have required modernizing project metrics and the predictive analysis used in system procurements.

    Once a supercomputer is installed and prepared for production runs, it is too late to consider fundamental architectural changes. The ECP predictive analysis effort is beneficial because it guides DOE supercomputer procurements toward the systems that deliver the most performance to the extreme-scale applications of interest to DOE.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About DOE’s Exascale Computing Project (US)
    The ECP is a collaborative effort of two DOE organizations – the DOE’s Office of Science and the DOE’s National Nuclear Security Administration. As part of the National Strategic Computing Initiative, ECP was established to accelerate delivery of a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development to meet the scientific and national security mission needs of DOE in the early-2020s time frame.

    About the Office of Science

    DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit https://science.energy.gov/.

    About NNSA

    Established by Congress in 2000, NNSA is a semi-autonomous agency within the DOE responsible for enhancing national security through the military application of nuclear science. NNSA maintains and enhances the safety, security, and effectiveness of the U.S. nuclear weapons stockpile without nuclear explosive testing; works to reduce the global danger from weapons of mass destruction; provides the U.S. Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the United States and abroad. https://nnsa.energy.gov

    The Goal of ECP’s Application Development focus area is to deliver a broad array of comprehensive science-based computational applications that effectively utilize exascale HPC technology to provide breakthrough simulation and data analytic solutions for scientific discovery, energy assurance, economic competitiveness, health enhancement, and national security.

    Awareness of ECP and its mission is growing and resonating—and for good reason. ECP is an incredible effort focused on advancing areas of key importance to our country: economic competitiveness, breakthrough science and technology, and national security. And, fortunately, ECP has a foundation that bodes extremely well for the prospects of its success, with the demonstrably strong commitment of the US Department of Energy (DOE) and the talent of some of America’s best and brightest researchers.

    ECP is composed of about 100 small teams of domain, computer, and computational scientists, and mathematicians from DOE labs, universities, and industry. We are tasked with building applications that will execute well on exascale systems, enabled by a robust exascale software stack, and supporting necessary vendor R&D to ensure the compute nodes and hardware infrastructure are adept and able to do the science that needs to be done with the first exascale platforms.

     
  • richardmitnick 7:41 am on August 31, 2021 Permalink | Reply
    Tags: "AARNet with RMIT and AWS collaborate to establish Australia’s first cloud supercomputing facility", AARNet (AU), Amazon Web Services, , Cloud supercomputing, Supercomputing   

    From AARNet (AU) : “AARNet with RMIT and AWS collaborate to establish Australia’s first cloud supercomputing facility” 


    From AARNet (AU)

    July 29, 2021


    This collaboration will provide Royal Melbourne Institute of Technology (AU) researchers and students with the ability to access cloud supercomputing at scale on Amazon Web Services (AWS) to accelerate research outcomes for advanced manufacturing, space, fintech, digital health, and creative technologies.

    Supercomputing in the cloud will help RMIT researchers address some of the world’s most complex challenges in far less time – from disease prevention and extreme weather forecasting to citizen safety.

    RMIT will leverage AWS Direct Connect to establish the low-latency, secure, and private connections to AWS needed for workloads that require higher speed or lower latency than the public internet. The increased bandwidth will give researchers, students, staff, and industry partners the ability to experiment and test new ideas and discoveries involving large data sets at speed, fast-tracking the time between concept and products that RMIT is ready to take to market.

    AARNet will provide the high-speed internet and telecommunications services required for the facility. Intel will contribute advanced technology solutions to process, optimise, store, and move large, complicated data sets.

    RMIT Deputy Vice-Chancellor (STEM College) and Vice President Digital Innovation, Professor Aleksandar Subic said the facility, supported by the Victorian Government Higher Education Investment Fund, is a pioneering example of innovation in the university sector.

    “Our collaboration with AWS, Intel, and AARNet to establish Australia’s first cloud supercomputing facility represents a step change in how universities and industries access HPC capabilities for advanced data processing and computing,” Subic said.

    “By leveraging AWS Direct Connect, RMIT is set to access tremendous HPC processing power using a unique service model that provides seamless access to all our staff, researchers, and students.

    “Our industry partners will also have access to the new cloud supercomputing facility through joint projects and programs.

    “The facility will be operated by our researchers and students in another example that shows how industry engagement and work integrated learning are in our DNA.”

    AARNet CEO Chris Hancock said AARNet had provided RMIT and other Australian universities with leading-edge telecommunications services to enable transformational research outcomes for decades.

    “We’ve also been connecting researchers to the cloud for many years, but nothing on this scale,” he said.

    “We’re excited to be partnering with RMIT on this project that uses our ultra-fast network to remove the barrier of geography and distance for research across Australia and beyond.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    AARNet (AU) provides critical infrastructure for driving innovation in today’s knowledge-based economy.
    AARNet is a national resource – a National Research and Education Network (NREN). AARNet provides unique information communications technology capabilities to enable Australian education and research institutions to collaborate with each other and their international peer communities.

     
  • richardmitnick 4:43 pm on August 27, 2021 Permalink | Reply
    Tags: "How disorderly young galaxies grow up and mature", , , , , , Supercomputing   

    From Lund University [Lunds universitet] (SE) : “How disorderly young galaxies grow up and mature” 

    From Lund University [Lunds universitet] (SE)

    27 August 2021
    Oscar Agertz, associate senior lecturer
    Department of Astronomy and Theoretical Physics,
    Lund University [Lunds universitet] (SE)
    +46 700 45 22 20
    oscar.agertz@astro.lu.se

    Using a supercomputer, the researchers created a high-resolution simulation.

    Using a supercomputer [below] simulation, a research team at Lund University in Sweden has succeeded in following the development of a galaxy over a span of 13.8 billion years. The study shows how, due to interstellar frontal collisions, young and chaotic galaxies over time mature into spiral galaxies such as the Milky Way.

    Soon after the Big Bang 13.8 billion years ago, the Universe was an unruly place. Galaxies constantly collided. Stars formed at an enormous rate inside gigantic gas clouds. However, after a few billion years of intergalactic chaos, the unruly, embryonic galaxies became more stable and over time matured into well-ordered spiral galaxies. The exact course of these developments has long been a mystery to the world’s astronomers. However, in a new study published in MNRAS, VINTERGATAN – I. The origins of chemically, kinematically, and structurally distinct discs in a simulated Milky Way-mass galaxy, researchers have been able to provide some clarity on the matter.

    Also in MNRAS VINTERGATAN – II. The history of the Milky Way told by its mergers.

    and MNRAS VINTERGATAN – III. How to reset the metallicity of the Milky Way.

    “Using a supercomputer, we have created a high-resolution simulation that provides a detailed picture of a galaxy’s development since the Big Bang, and how young chaotic galaxies transition into well-ordered spirals,” says Oscar Agertz, astronomy researcher at Lund University.

    In the study, the astronomers, led by Oscar Agertz and Florent Renaud, use the Milky Way’s stars as a starting point. The stars act as time capsules that divulge secrets about distant epochs and the environment in which they were formed. Their positions, speeds and amounts of various chemical elements can therefore, with the assistance of computer simulations, help us understand how our own galaxy was formed.

    “We have discovered that when two large galaxies collide, a new disc can be created around the old one due to the enormous inflows of star-forming gas. Our simulation shows that the old and new discs slowly merged over a period of several billion years. This is something that not only resulted in a stable spiral galaxy, but also in populations of stars that are similar to those in the Milky Way”, says Florent Renaud, astronomy researcher at Lund University.

    A compact group of interacting galaxies, similar to the chaos of the early days of the Universe. Credit: National Aeronautics Space Agency (US)/European Space Agency [Agence spatiale européenne][Europäische Weltraumorganisation] (EU), and the Hubble SM4 ERO Team.

    The new findings will help astronomers to interpret current and future mappings of the Milky Way. The study points to a new direction for research in which the main focus will be on the interaction between large galaxy collisions and how spiral galaxies’ discs are formed. The research team in Lund has already started new supercomputer simulations in cooperation with the research infrastructure PRACE (Partnership for Advanced Computing in Europe).

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Lund University [Lunds universitet] (SE) is a prestigious university in Sweden and one of northern Europe’s oldest universities. The university is located in the city of Lund in the province of Scania, Sweden. It traces its roots back to 1425, when a Franciscan studium generale was founded in Lund. After Sweden won Scania from Denmark in the 1658 Treaty of Roskilde, the university was officially founded in 1666 on the location of the old studium generale next to Lund Cathedral.

    Lund University has nine faculties with additional campuses in the cities of Malmö and Helsingborg, with around 44,000 students in 270 different programmes and 1,400 freestanding courses. The university has 640 partner universities in nearly 70 countries and it belongs to the League of European Research Universities (EU) as well as the global Universitas 21 network. Lund University is consistently ranked among the world’s top 100 universities.

    Two major facilities for materials research are located at Lund University: MAX IV, a synchrotron radiation laboratory inaugurated in June 2016, and the European Spallation Source (ESS), a new European facility that will provide neutron beams up to 100 times brighter than those of existing facilities, scheduled to begin producing neutrons in 2023.

    The university centers on the Lundagård park adjacent to the Lund Cathedral, with various departments spread in different locations in town, but mostly concentrated in a belt stretching north from the park connecting to the university hospital area and continuing out to the northeastern periphery of the town, where one finds the large campus of the Faculty of Engineering.

    Research centres

    The university is organised into more than 20 institutes and research centres, such as:

    Lund University Centre for Sustainability Studies (LUCSUS)
    Biomedical Centre
    Centre for Biomechanics
    Centre for Chemistry and Chemical Engineering – Kemicentrum
    Centre for East and South-East Asian Studies
    Centre for European Studies
    Centre for Geographical Information Systems (GIS Centrum)
    Centre for Innovation, Research and Competence in the Learning Economy (CIRCLE)
    Center for Middle Eastern Studies at Lund University
    Centre for Molecular Protein Science
    Centre for Risk Analysis and Management (LUCRAM)
    International Institute for Industrial Environmental Economics at Lund University (IIIEE)
    Lund Functional Food Science Centre
    Lund University Diabetes Centre (LUDC)
    MAX lab – Accelerator physics, synchrotron radiation and nuclear physics research
    Pufendorf Institute
    Raoul Wallenberg Institute of Human Rights and Humanitarian Law
    Swedish South Asian Studies Network

     
  • richardmitnick 4:03 pm on August 25, 2021 Permalink | Reply
    Tags: "NVIDIA and HPE to Deliver 2240-GPU Polaris Supercomputer for DOE's Argonne National Laboratory", Supercomputing, The era of exascale AI will enable scientific breakthroughs with massive scale to bring incredible benefits for society.

    From insideHPC : “NVIDIA and HPE to Deliver 2240-GPU Polaris Supercomputer for DOE’s Argonne National Laboratory” 

    From insideHPC

    August 25, 2021

    NVIDIA and DOE’s Argonne National Laboratory (US) this morning announced Polaris, a GPU-based supercomputer with 2,240 NVIDIA A100 Tensor Core GPUs delivering 1.4 exaflops of theoretical AI performance and about 44 petaflops of peak double-precision performance.

    The Polaris system will be hosted at the laboratory’s Argonne Leadership Computing Facility (ALCF) in support of extreme-scale R&D for users’ algorithms and science. Polaris, to be built by Hewlett Packard Enterprise, will combine simulation and machine learning by tackling data-intensive and AI high performance computing workloads, powered by 560 total nodes, each with four A100 GPUs, the organizations said.

    ANL ALCF HPE NVIDIA Polaris supercomputer depiction.

    “The era of exascale AI will enable scientific breakthroughs with massive scale to bring incredible benefits for society,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “NVIDIA’s GPU-accelerated computing platform provides pioneers like the ALCF breakthrough performance for next-generation supercomputers such as Polaris that let researchers push the boundaries of scientific exploration.”

    “Polaris is a powerful platform that will allow our users to enter the era of exascale AI,” said ALCF Director Michael E. Papka. “Harnessing the huge number of NVIDIA A100 GPUs will have an immediate impact on our data-intensive and AI HPC workloads, allowing Polaris to tackle some of the world’s most complex scientific problems.”

    The system will accelerate transformative scientific exploration, such as advancing cancer treatments, exploring clean energy and propelling particle collision research to discover new approaches to physics. And it will transport the ALCF into the era of exascale AI by enabling researchers to update their scientific workloads for Aurora, Argonne’s forthcoming exascale system.

    Polaris will also be available to researchers from academia, government agencies and industry through the ALCF’s peer-reviewed allocation and application programs. These programs provide the scientific community with access to the nation’s fastest supercomputers to address “grand challenges” in science and engineering.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 10:29 am on August 23, 2021 Permalink | Reply
    Tags: "A Huge Number of Rogue Supermassive Black Holes Are Wandering The Universe", , , , , Supercomputing   

    From Harvard-Smithsonian Center for Astrophysics (US) via Science Alert (US) : “A Huge Number of Rogue Supermassive Black Holes Are Wandering The Universe” 

    From Harvard-Smithsonian Center for Astrophysics (US)

    via

    Science Alert (US)

    23 AUGUST 2021
    MICHELLE STARR

    Artist’s impression of a supermassive black hole. (Naeblys/iStock/Getty Images Plus)

    Messier 87*, the first image of the event horizon of a black hole. This is the supermassive black hole at the center of the galaxy Messier 87. Image via The Event Horizon Telescope Collaboration, released on 10 April 2019 via the National Science Foundation (US).

    Supermassive black holes tend to sit, more or less stationary, at the centers of galaxies. But not all of these awesome cosmic objects stay put; some may be knocked askew, wobbling around galaxies like cosmic nomads.

    We call such black holes ‘wanderers’, and they’re largely theoretical, because they are difficult (but not impossible) to observe, and therefore quantify. But a new set of simulations has allowed a team of scientists to work out how many wanderers there should be, and whereabouts – which in turn could help us identify them out there in the Universe.

    This could have important implications for our understanding of how supermassive black holes – monsters millions to billions of times the mass of our Sun – form and grow, a process that is shrouded in mystery.

    Cosmologists think that supermassive black holes (SMBHs) reside at the nuclei of all – or at least most – galaxies in the Universe. These objects’ masses are usually roughly proportional to the mass of the central galactic bulge around them, which suggests that the evolution of the black hole and its galaxy are somehow linked.

    But the formation pathways of supermassive black holes are unclear. We know that stellar-mass black holes form from the core collapse of massive stars, but that mechanism doesn’t work for black holes over about 55 times the mass of the Sun.

    Astronomers think that SMBHs grow via the accretion of stars and gas and dust, and mergers with other black holes (very chunky ones at nuclei of other galaxies, when those galaxies collide).

    But cosmological timescales are very different from our human timescales, and the process of two galaxies colliding can take a very long time. This makes the potential window for the merger to be disrupted quite large, and the process could be delayed or even prevented entirely, resulting in these black hole ‘wanderers’.

    A team of astronomers led by Angelo Ricarte of the Harvard & Smithsonian Center for Astrophysics has used the Romulus cosmological simulations to estimate how frequently this ought to have occurred in the past, and how many black holes would still be wandering today.

    This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications.

    NCSA University of Illinois Urbana-Champaign Blue Waters Cray Linux XE/XK hybrid machine supercomputer, at the National Center for Supercomputing Applications.

    These simulations self-consistently track the orbital evolution of pairs of supermassive black holes, which means they are able to predict which black holes are likely to make it to the center of their new galactic home, and how long this process should take – as well as how many never get there.

    “Romulus predicts that many supermassive black hole binaries form after several billions of years of orbital evolution, while some SMBHs will never make it to the center,” the researchers wrote in their paper.

    “As a result, Milky Way-mass galaxies in Romulus are found to host an average of 12 supermassive black holes, which typically wander the halo far from the galactic center.”

    In the early Universe, before about 2 billion years after the Big Bang, the team found, wanderers both outnumber and outshine the supermassive black holes in galactic nuclei. This means they would produce most of the light we would expect to see shining from the material around active SMBHs, glowing brightly as it orbits and accretes onto the black hole.

    They remain close to their seed mass – that is, the mass at which they formed – and probably originate in smaller satellite galaxies that orbit larger ones.

    And some wanderers should still be around today, according to the simulations. In the local Universe, there should actually be quite a few hanging around.

    “We find that the number of wandering black holes scales roughly linearly with the halo mass, such that we expect thousands of wandering black holes in galaxy cluster halos,” the researchers wrote.

    “Locally, these wanderers account for around 10 percent of the local black hole mass budget once seed masses are accounted for.”

    These black holes may not necessarily be active, and therefore would be very difficult to spot. In an upcoming paper, the team will be exploring in detail the possible ways we could observe these lost wanderers.

    Then all we have to do is find the lost stellar-mass and intermediate-mass black holes…

    The research has been published in the MNRAS.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    The Harvard-Smithsonian Center for Astrophysics (US) combines the resources and research facilities of the Harvard College Observatory(US) and the Smithsonian Astrophysical Observatory(US) under a single director to pursue studies of those basic physical processes that determine the nature and evolution of the universe. The Smithsonian Astrophysical Observatory(US) is a bureau of the Smithsonian Institution(US), founded in 1890. The Harvard College Observatory, founded in 1839, is a research institution of the Faculty of Arts and Sciences, Harvard University(US), and provides facilities and substantial other support for teaching activities of the Department of Astronomy.

    Founded in 1973 and headquartered in Cambridge, Massachusetts, the CfA leads a broad program of research in astronomy, astrophysics, Earth and space sciences, as well as science education. The CfA either leads or participates in the development and operations of more than fifteen ground- and space-based astronomical research observatories across the electromagnetic spectrum, including the forthcoming Giant Magellan Telescope(CL) and the Chandra X-ray Observatory(US), one of NASA’s Great Observatories.

    Hosting more than 850 scientists, engineers, and support staff, the CfA is among the largest astronomical research institutes in the world. Its projects have included Nobel Prize-winning advances in cosmology and high energy astrophysics, the discovery of many exoplanets, and the first image of a black hole. The CfA also serves a major role in the global astrophysics research community: the CfA’s Astrophysics Data System(ADS)(US), for example, has been universally adopted as the world’s online database of astronomy and physics papers. Known for most of its history as the “Harvard-Smithsonian Center for Astrophysics”, the CfA rebranded in 2018 to its current name in an effort to reflect its unique status as a joint collaboration between Harvard University and the Smithsonian Institution. The CfA’s current Director (since 2004) is Charles R. Alcock, who succeeds Irwin I. Shapiro (Director from 1982 to 2004) and George B. Field (Director from 1973 to 1982).

    The Center for Astrophysics | Harvard & Smithsonian is not formally an independent legal organization, but rather an institutional entity operated under a Memorandum of Understanding between Harvard University and the Smithsonian Institution. This collaboration was formalized on July 1, 1973, with the goal of coordinating the related research activities of the Harvard College Observatory (HCO) and the Smithsonian Astrophysical Observatory (SAO) under the leadership of a single Director, and housed within the same complex of buildings on the Harvard campus in Cambridge, Massachusetts. The CfA’s history is therefore also that of the two fully independent organizations that comprise it. With a combined lifetime of more than 300 years, HCO and SAO have been host to major milestones in astronomical history that predate the CfA’s founding.

    History of the Smithsonian Astrophysical Observatory (SAO)

    Samuel Pierpont Langley, the third Secretary of the Smithsonian, founded the Smithsonian Astrophysical Observatory on the south yard of the Smithsonian Castle (on the U.S. National Mall) on March 1, 1890. The Astrophysical Observatory’s initial, primary purpose was to “record the amount and character of the Sun’s heat”. Charles Greeley Abbot was named SAO’s first director, and the observatory operated solar telescopes to take daily measurements of the Sun’s intensity in different regions of the optical electromagnetic spectrum. In doing so, the observatory enabled Abbot to make critical refinements to the Solar constant, as well as to serendipitously discover Solar variability. It is likely that SAO’s early history as a solar observatory was part of the inspiration behind the Smithsonian’s “sunburst” logo, designed in 1965 by Crimilda Pontes.

    In 1955, the scientific headquarters of SAO moved from Washington, D.C. to Cambridge, Massachusetts to affiliate with the Harvard College Observatory (HCO). Fred Lawrence Whipple, then the chairman of the Harvard Astronomy Department, was named the new director of SAO. The collaborative relationship between SAO and HCO therefore predates the official creation of the CfA by 18 years. SAO’s move to Harvard’s campus also resulted in a rapid expansion of its research program. Following the launch of Sputnik (the world’s first human-made satellite) in 1957, SAO accepted a national challenge to create a worldwide satellite-tracking network, collaborating with the United States Air Force on Project Space Track.

    With the creation of National Aeronautics and Space Administration(US) the following year and throughout the space race, SAO led major efforts in the development of orbiting observatories and large ground-based telescopes, laboratory and theoretical astrophysics, as well as the application of computers to astrophysical problems.

    History of Harvard College Observatory (HCO)

    Partly in response to renewed public interest in astronomy following the 1835 return of Halley’s Comet, the Harvard College Observatory was founded in 1839, when the Harvard Corporation appointed William Cranch Bond as an “Astronomical Observer to the University”. For its first four years of operation, the observatory was situated at the Dana-Palmer House (where Bond also resided) near Harvard Yard, and consisted of little more than three small telescopes and an astronomical clock. In his 1840 book recounting the history of the college, then Harvard President Josiah Quincy III noted that “…there is wanted a reflecting telescope equatorially mounted…”. This telescope, the 15-inch “Great Refractor”, opened seven years later (in 1847) at the top of Observatory Hill in Cambridge (where it still exists today, housed in the oldest of the CfA’s complex of buildings). The telescope was the largest in the United States from 1847 until 1867. William Bond and pioneer photographer John Adams Whipple used the Great Refractor to produce the first clear daguerreotypes of the Moon (winning them an award at the 1851 Great Exhibition in London). Bond and his son, George Phillips Bond (the second Director of HCO), used it to discover Saturn’s 8th moon, Hyperion (which was also independently discovered by William Lassell).

    Under the directorship of Edward Charles Pickering from 1877 to 1919, the observatory became the world’s major producer of stellar spectra and magnitudes, established an observing station in Peru, and applied mass-production methods to the analysis of data. It was during this time that HCO became host to a series of major discoveries in astronomical history, powered by the Observatory’s so-called “Computers” (women hired by Pickering as skilled workers to process astronomical data). These “Computers” included Williamina Fleming; Annie Jump Cannon; Henrietta Swan Leavitt; Florence Cushman; and Antonia Maury, all widely recognized today as major figures in scientific history. Henrietta Swan Leavitt, for example, discovered the so-called period-luminosity relation for Classical Cepheid variable stars, establishing the first major “standard candle” with which to measure the distance to galaxies. Now called “Leavitt’s Law”, the discovery is regarded as one of the most foundational and important in the history of astronomy; astronomers like Edwin Hubble, for example, would later use Leavitt’s Law to establish that the Universe is expanding, the primary piece of evidence for the Big Bang model.

    Upon Pickering’s retirement in 1921, the Directorship of HCO fell to Harlow Shapley (a major participant in the so-called “Great Debate” of 1920). This era of the observatory was made famous by the work of Cecilia Payne-Gaposchkin, who became the first woman to earn a Ph.D. in astronomy from Radcliffe College (a short walk from the Observatory). Payne-Gaposchkin’s 1925 thesis proposed that stars were composed primarily of hydrogen and helium, an idea thought ridiculous at the time. Between Shapley’s tenure and the formation of the CfA, the observatory was directed by Donald H. Menzel and then Leo Goldberg, both of whom maintained widely recognized programs in solar and stellar astrophysics. Menzel played a major role in encouraging the Smithsonian Astrophysical Observatory to move to Cambridge and collaborate more closely with HCO.

    Joint history as the Center for Astrophysics (CfA)

    The collaborative foundation for what would ultimately give rise to the Center for Astrophysics began with SAO’s move to Cambridge in 1955. Fred Whipple, who was already chair of the Harvard Astronomy Department (housed within HCO since 1931), was named SAO’s new director at the start of this new era; an early test of the model for a unified Directorship across HCO and SAO. The following 18 years would see the two independent entities merge ever closer together, operating effectively (but informally) as one large research center.

    This joint relationship was formalized as the new Harvard–Smithsonian Center for Astrophysics on July 1, 1973. George B. Field, then affiliated with UC Berkeley(US), was appointed as its first Director. That same year, a new astronomical journal, the CfA Preprint Series was created, and a CfA/SAO instrument flying aboard Skylab discovered coronal holes on the Sun. The founding of the CfA also coincided with the birth of X-ray astronomy as a new, major field that was largely dominated by CfA scientists in its early years. Riccardo Giacconi, regarded as the “father of X-ray astronomy”, founded the High Energy Astrophysics Division within the new CfA by moving most of his research group (then at American Sciences and Engineering) to SAO in 1973. That group would later go on to launch the Einstein Observatory (the first imaging X-ray telescope) in 1976, and ultimately lead the proposals and development of what would become the Chandra X-ray Observatory. Chandra, the second of NASA’s Great Observatories and still the most powerful X-ray telescope in history, continues operations today as part of the CfA’s Chandra X-ray Center. Giacconi would later win the 2002 Nobel Prize in Physics for his foundational work in X-ray astronomy.

    Shortly after the launch of the Einstein Observatory, the CfA’s Steven Weinberg won the 1979 Nobel Prize in Physics for his work on electroweak unification. The following decade saw the start of the landmark CfA Redshift Survey (the first attempt to map the large scale structure of the Universe), as well as the release of the Field Report, a highly influential Astronomy & Astrophysics Decadal Survey chaired by the outgoing CfA Director George Field. He would be replaced in 1982 by Irwin Shapiro, who during his tenure as Director (1982 to 2004) oversaw the expansion of the CfA’s observing facilities around the world.

    CfA-led discoveries throughout this period include canonical work on Supernova 1987A, the “CfA2 Great Wall” (then the largest known coherent structure in the Universe), the best-yet evidence for supermassive black holes, and the first convincing evidence for an extrasolar planet.

    The 1990s also saw the CfA unwittingly play a major role in the history of computer science and the internet: in 1990, SAO developed SAOImage, one of the world’s first X11-based applications made publicly available (its successor, DS9, remains the most widely used astronomical FITS image viewer worldwide). During this time, scientists at the CfA also began work on what would become the Astrophysics Data System (ADS), one of the world’s first online databases of research papers. By 1993, the ADS was running the first routine transatlantic queries between databases, a foundational aspect of the internet today.

    The CfA Today

    Research at the CfA

    Charles Alcock, known for a number of major works related to massive compact halo objects, was named the third director of the CfA in 2004. Today Alcock oversees one of the largest and most productive astronomical institutes in the world, with more than 850 staff and an annual budget in excess of $100M. The Harvard Department of Astronomy, housed within the CfA, maintains a continual complement of approximately 60 Ph.D. students, more than 100 postdoctoral researchers, and roughly 25 undergraduate majors in astronomy and astrophysics from Harvard College. SAO, meanwhile, hosts a long-running and highly rated REU Summer Intern program as well as many visiting graduate students. The CfA estimates that roughly 10% of the professional astrophysics community in the United States spent at least a portion of their career or education there.

    The CfA is either a lead or major partner in the operations of the Fred Lawrence Whipple Observatory, the Submillimeter Array, MMT Observatory, the South Pole Telescope, VERITAS, and a number of other smaller ground-based telescopes. The CfA’s 2019-2024 Strategic Plan includes the construction of the Giant Magellan Telescope as a driving priority for the Center.

    CFA Harvard Smithsonian Submillimeter Array on MaunaKea, Hawaii, USA, Altitude 4,205 m (13,796 ft).

    South Pole Telescope SPTPOL. The SPT collaboration is made up of over a dozen (mostly North American) institutions, including The University of Chicago (US); The University of California Berkeley (US); Case Western Reserve University (US); Harvard/Smithsonian Astrophysical Observatory (US); The University of Colorado, Boulder; McGill(CA) University; The University of Illinois, Urbana-Champaign; The University of California, Davis; Ludwig Maximilians Universität München(DE); DOE’s Argonne National Laboratory; and The National Institute for Standards and Technology. It is funded by the National Science Foundation(US).

    Along with the Chandra X-ray Observatory, the CfA plays a central role in a number of space-based observing facilities, including the recently launched Parker Solar Probe, Kepler Space Telescope, the Solar Dynamics Observatory (SDO), and HINODE. The CfA, via the Smithsonian Astrophysical Observatory, recently played a major role in the Lynx X-ray Observatory, a NASA-Funded Large Mission Concept Study commissioned as part of the 2020 Decadal Survey on Astronomy and Astrophysics (“Astro2020”). If launched, Lynx would be the most powerful X-ray observatory constructed to date, enabling order-of-magnitude advances in capability over Chandra.

    NASA Parker Solar Probe Plus named to honor Pioneering Physicist Eugene Parker.

    SAO is one of the 13 stakeholder institutes for the Event Horizon Telescope Board, and the CfA hosts its Array Operations Center. In 2019, the project revealed the first direct image of a black hole.

    The result is widely regarded as a triumph not only of observational radio astronomy, but of its intersection with theoretical astrophysics. Union of the observational and theoretical subfields of astrophysics has been a major focus of the CfA since its founding.

    In 2018, the CfA rebranded, changing its official name to the “Center for Astrophysics | Harvard & Smithsonian” in an effort to reflect its unique status as a joint collaboration between Harvard University and the Smithsonian Institution. Today, the CfA receives roughly 70% of its funding from NASA, 22% from Smithsonian federal funds, and 4% from the National Science Foundation. The remaining 4% comes from contributors including the United States Department of Energy, the Annenberg Foundation, as well as other gifts and endowments.

     
  • richardmitnick 10:22 am on August 12, 2021 Permalink | Reply
    Tags: "Protecting Earth from Space Storms", Supercomputing, TACC Frontera Dell EMC supercomputer fastest at any university.,   

    From Texas Advanced Computing Center (US) : “Protecting Earth from Space Storms” 

    From Texas Advanced Computing Center (US)

    August 11, 2021
    Aaron Dubrow

    Space weather modeling framework simulation of the Sept 10, 2014 coronal mass ejection during solar maximum. The radial magnetic field is shown on the surface of the Sun in gray scale. The magnetic field lines on the flux rope are colored with the velocity. The background is colored with the electron number density. [Credit: Gabor Toth, University of Michigan (US)]

    “There are only two natural disasters that could impact the entire U.S.,” according to Gabor Toth, professor of Climate and Space Sciences and Engineering at the University of Michigan. “One is a pandemic and the other is an extreme space weather event.”

    We’re currently seeing the effects of the first in real-time.

    The last major space weather event struck the Earth in 1859 [Carrington Event]. Smaller, but still significant, space weather events occur regularly. These fry electronics and power grids, disrupt global positioning systems, cause shifts in the range of the Aurora Borealis, and raise the risk of radiation to astronauts or passengers on planes crossing over the poles.

    “We have all these technological assets that are at risk,” Toth said. “If an extreme event like the one in 1859 happened again, it would completely destroy the power grid and satellite and communications systems — the stakes are much higher.”

    Motivated by the White House National Space Weather Strategy and Action Plan and the National Strategic Computing Initiative, in 2020 the National Science Foundation (US) and National Aeronautics Space Agency (US) created the Space Weather with Quantified Uncertainties (SWQU) program. It brings together research teams from across scientific disciplines to advance the latest statistical analysis and high performance computing methods within the field of space weather modeling.

    “We are very proud to have launched the SWQU projects by bringing together expertise and support across multiple scientific domains in a joint effort between NSF and NASA,” said Vyacheslav (Slava) Lukin, the Program Director for Plasma Physics at NSF. “The need has been recognized for some time, and the portfolio of six projects, Gabor Toth’s among them, engages not only the leading university groups, but also NASA Centers, Department of Defense (US) and National Laboratories | Department of Energy (US), as well as the private sector.”

    Meridional cut from an advanced three-dimensional magnetosphere simulation. The Earth is at the center of the black circle that is the inner boundary at 2.5 Earth radii. The white lines are magnetic field lines. The colors show density. The blue rectangle indicates where the kinetic model is used, which is coupled with the global magnetohydrodynamic model. Credit: Chen, Yuxi & Toth, Gabor & Hietala, Heli & Vines, Sarah & Zou, Ying & Nishimura, Yukitoshi & Silveira, Marcos & Guo, Zhifang & Lin, Yu & Markidis, Stefano.

    Toth helped develop today’s preeminent space weather prediction model, which is used for operational forecasting by the National Oceanic and Atmospheric Administration (NOAA). On February 3, 2021, NOAA began using the Geospace Model Version 2.0, which is part of the University of Michigan’s Space Weather Modeling Framework, to predict geomagnetic disturbances.

    “We’re constantly improving our models,” Toth said. The new model replaces version 1.5, which had been in operation since November 2017. “The main change in version 2 was the refinement of the numerical grid in the magnetosphere, several improvements in the algorithms, and a recalibration of the empirical parameters.”

    The Geospace Model is based on a global representation of Earth’s Geospace environment that includes magnetohydrodynamics — the properties and behavior of electrically conducting fluids like plasma interacting with magnetic fields, which plays a key role in the dynamics of space weather.

    The Geospace model predicts magnetic disturbances on the ground resulting from geospace interactions with solar wind. Such magnetic disturbances induce a geoelectric field that can damage large-scale electrical conductors, such as the power grid.

    Short-term advanced warning from the model provides forecasters and power grid operators with situational awareness about harmful currents and allows time to mitigate the problem and maintain the integrity of the electric power grid, NOAA announced at the time of the launch.

    As advanced as the Geospace Model is, it provides only about 30 minutes of advance warning. Toth’s team is one of several groups working to increase lead time to one to three days. Doing so means understanding how activity on the surface of the Sun leads to events that can impact the Earth.

    “We’re currently using data from a satellite measuring plasma parameters one million miles away from the Earth,” Toth explained. Researchers hope instead to start from the Sun, using remote observations of its surface — in particular, of coronal mass ejections and the flares associated with them, which are visible in X-rays and UV light. “That happens early on the Sun. From that point, we can run a model and predict the arrival time and impact of magnetic events.”
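    A rough sense of why starting at the Sun matters so much comes from simple travel-time arithmetic. The sketch below is illustrative only: the distances are standard values and the speeds are typical published ranges for the solar wind and for coronal mass ejections, not numbers taken from Toth’s models.

```python
# Illustrative travel-time arithmetic (typical speeds, not model output).

L1_DISTANCE_KM = 1.5e6    # the upstream solar-wind monitor sits ~1 million miles from Earth
SUN_EARTH_KM = 1.496e8    # ~1 astronomical unit

def travel_hours(distance_km, speed_km_per_s):
    """Hours for plasma moving at a constant speed to cover a distance."""
    return distance_km / speed_km_per_s / 3600.0

# Solar wind passing the upstream monitor reaches Earth in roughly half an hour to an hour,
# which is why current forecasts give only ~30 minutes of advance warning.
for v in (400, 800):
    print(f"from the L1 monitor at {v} km/s: {travel_hours(L1_DISTANCE_KM, v):.1f} h")

# A coronal mass ejection leaving the Sun takes on the order of one to a few days to arrive,
# which is the lead time a Sun-to-Earth model could provide.
for v in (500, 1000, 2000):
    print(f"from the Sun at {v} km/s: {travel_hours(SUN_EARTH_KM, v) / 24:.1f} days")
```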

    Improving the lead time of space weather forecasts requires new methods and algorithms that can compute far faster than those used today and can be deployed efficiently on high-performance computers. Toth uses the Frontera supercomputer [below] at the Texas Advanced Computing Center — the fastest academic system in the world and the 10th most powerful overall — to develop and test these new methods.

    “I consider myself really good at developing new algorithms,” Toth said. “I apply these to space physics, but many of the algorithms I develop are more general and not restricted to one application.”

    A key algorithmic improvement made by Toth involved finding a novel way to combine the kinetic and fluid aspects of plasmas in one simulation model. “People tried it before and failed. But we made it work. We go a million times faster than brute-force simulations by inventing smart approximations and algorithms,” Toth said.

    The new algorithm dynamically adapts the location covered by the kinetic model based on the simulation results. The model identifies the regions of interest and concentrates the kinetic model, and the computational resources, on them. This can result in a 10- to 100-fold speedup for space weather models.
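    The following toy sketch illustrates the general idea of that adaptive placement; it is not Toth’s actual implementation. A global fluid simulation flags grid cells where some refinement criterion (here, a hypothetical current-density field) exceeds a threshold, and only those cells, plus a small halo, are handed to the expensive kinetic solver.

```python
import numpy as np

def select_kinetic_region(current_density, threshold, pad=2):
    """Flag cells where |J| exceeds a threshold, plus a halo of `pad` cells,
    as the sub-domain to be covered by the kinetic model; everything else
    stays with the cheaper global fluid (magnetohydrodynamic) model."""
    mask = np.abs(current_density) > threshold
    for _ in range(pad):                 # grow the region so it safely encloses the feature
        grown = mask.copy()
        grown[1:, :] |= mask[:-1, :]
        grown[:-1, :] |= mask[1:, :]
        grown[:, 1:] |= mask[:, :-1]
        grown[:, :-1] |= mask[:, 1:]
        mask = grown
    return mask

# Toy 2-D slice with one localized current sheet standing in for a region of interest.
rng = np.random.default_rng(0)
J = rng.normal(0.0, 0.1, size=(64, 64))
J[30:34, 20:44] += 5.0
kinetic_cells = select_kinetic_region(J, threshold=1.0)
print(f"kinetic model covers {kinetic_cells.mean():.1%} of the domain")
```

    Because the expensive model runs only where it is needed, the cost scales with the size of the flagged region rather than the whole domain, which is where the large speedups come from.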

    As part of the NSF SWQU project, Toth and his team have been working on making the Space Weather Modeling Framework run efficiently on future supercomputers that rely heavily on graphics processing units (GPUs). As a first goal, they set out to port the Geospace model to GPUs using the NVIDIA Fortran compiler with OpenACC directives.

    They recently managed to run the full Geospace model faster than real time on a single GPU, using TACC’s GPU-enabled Longhorn machine to reach this milestone. Running the model at the same speed on a traditional supercomputer requires at least 100 CPU cores.

    “It took a whole year of code development to make this happen,” Toth said. “The goal is to run an ensemble of simulations fast and efficiently to provide a probabilistic space weather forecast.”

    This type of probabilistic forecasting is important for another aspect of Toth’s research: localizing predictions in terms of the impact on the surface of Earth.

    “Should we worry in Michigan or only in Canada? What is the maximum induced current particular transformers will experience? How long will generators need to be shut off? To do this accurately, you need a model you believe in,” he said. “Whatever we predict, there’s always some uncertainty. We want to give predictions with precise probabilities, similar to terrestrial weather forecasts.”

    Toth and his team run their code in parallel on thousands of cores on Frontera for each simulation. They plan to run thousands of simulations over the coming years to see how model parameters affect the results, to find the best parameter settings, and to attach probabilities to the simulation results.
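    In outline, turning such an ensemble into a probabilistic forecast is straightforward. The sketch below is purely schematic: `run_geospace` is a hypothetical stand-in for a full simulation, and every number is invented for the example.

```python
import numpy as np

def run_geospace(solar_wind_speed, imf_bz, rng):
    """Hypothetical stand-in for one Geospace run: returns a peak ground
    magnetic disturbance (nT) for the given solar-wind driving parameters."""
    base = 0.5 * solar_wind_speed - 40.0 * imf_bz    # stronger driving -> bigger storm
    return max(0.0, base + rng.normal(0.0, 50.0))    # plus model/parameter scatter

rng = np.random.default_rng(42)
# Perturb the uncertain inputs to build an ensemble, as in a parameter sweep.
speeds = rng.normal(700.0, 80.0, size=1000)          # solar wind speed, km/s
bz = rng.normal(-10.0, 3.0, size=1000)               # southward interplanetary field, nT
peaks = np.array([run_geospace(v, b, rng) for v, b in zip(speeds, bz)])

threshold = 600.0                                     # an illustrative "harmful" level, nT
print(f"P(peak disturbance > {threshold:.0f} nT) = {(peaks > threshold).mean():.0%}")
```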

    “Without Frontera, I don’t think we could do this research,” Toth said. “When you put together smart people and big computers, great things can happen.”

    The Michigan Sun-to-Earth Model, including the SWMF Geospace and the new GPU port, is available as open-source at https://github.com/MSTEM-QUDA. Toth and his collaborators published a review of recent and in-progress developments to the model in the May issue of EOS.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Texas Advanced Computing Center (TACC) (US) designs and operates some of the world’s most powerful computing resources. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies.

    TACC’s Frontera Dell EMC supercomputer, the fastest at any university.

     
  • richardmitnick 11:04 am on August 11, 2021 Permalink | Reply
    Tags: "Penn Engineering’s ENIAD Sets New World Record for Energy-efficient Supercomputing", , , Supercomputing,   

    From University of Pennsylvania Engineering and Applied Science: “Penn Engineering’s “ENIAD” Sets New World Record for Energy-efficient Supercomputing” 

    From University of Pennsylvania Engineering and Applied Science

    August 4, 2021
    Evan Lerner

    When it comes to supercomputers, raw speed is no longer enough to tackle some of the world’s most pressing problems. When the data being crunched comes in the form of complex networks — such as tracking the billions of contacts between individuals for global COVID contact tracing — supercomputers that excel at a technique known as “graph processing” are necessary.

    It would take the average computer years, if not decades, to fully understand the path of the COVID pandemic as it spread across the world. Worse, processing such a graph would draw hundreds of megawatts of power, roughly the combined draw of all of Philadelphia’s homes.

    As graph analysis becomes integral to more and more applications, such as drug discovery, climate simulation and social science, researchers have the paradoxical task of making these supercomputers faster while using less power.

    Now, researchers at the University of Pennsylvania’s School of Engineering and Applied Science have shown that their supercomputer, “ENIAD”, is among the best in the world when it comes to energy-efficient graph-solving.

    University of Pennsylvania’s School of Engineering and Applied Science “ENIAD” supercomputer.

    “ENIAD’s” performance was certified by Graph500, the de facto standard for ranking the performance and energy efficiency of supercomputers around the world. Running one of Graph500’s benchmark graph analytics applications, “ENIAD” took the top spot among a list of 500 of the most energy-efficient supercomputers reported in the world.

    Despite being designed and constructed by a two-person research team, “ENIAD” had only one close competitor: Tianhe-3, China’s next-generation, exascale supercomputer, which is the product of thousands of scientists and engineers.

    The researchers behind “ENIAD” are Jialiang Zhang, a graduate student in the Computational Intelligence lab, and Jing Li, Eduardo D. Glandt Faculty Fellow and associate professor in the Department of Electrical and Systems Engineering. They named their supercomputer after “ENIAC”, the world’s first general-purpose electronic digital computer, which was developed at Penn 75 years ago in the same building where Li and Zhang currently work.

    “We’re proud to be carrying on “ENIAC’s” legacy by setting this new world record,” Li says.

    According to the Graph500 benchmark specification, the performance of the supercomputers is measured in MTEPS, or millions of traversed edges per second. When comparing energy efficiency, that number is divided by the number of watts of electricity used in the process of solving a standard type of network known as a Kronecker graph.

    For graphs at the scale of 64 million nodes, “ENIAD” performed at 6,028.85 MTEPS/W, besting Tianhe-3’s 4,724.30 MTEPS/W. At this rate, “ENIAD” would reduce the power consumption for processing the world’s COVID contact tracing graph from hundreds of megawatts — on the scale of an entire city’s electricity usage — to several kilowatts, or what it takes to power the HVAC system of a typical household.
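    For concreteness, here is the metric from the preceding paragraphs spelled out as a small calculation. The traversal count and power figures in the first example are hypothetical, chosen only to show the units; the final line simply compares the two scores quoted above.

```python
def mteps_per_watt(edges_traversed, seconds, avg_watts):
    """Graph500-style energy efficiency: millions of traversed edges per
    second, divided by the average power drawn while solving the graph."""
    mteps = edges_traversed / seconds / 1e6
    return mteps / avg_watts

# Hypothetical run: one trillion edge traversals in 10 s at 2 kW average power.
print(f"{mteps_per_watt(1e12, 10.0, 2000.0):,.1f} MTEPS/W")     # -> 50.0

# The reported scores imply ENIAD traverses about 28% more edges per joule than Tianhe-3.
print(f"{6028.85 / 4724.30 - 1:.0%} more efficient")             # -> 28% more efficient
```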

    “Graph solving is a lot more challenging than just the regular computation that you see in most AI and machine learning applications, since you can’t parallelize it well and go faster just by adding more processors,” says Li. “But that opens up new opportunities for small-team academic researchers like us. We can have a bigger impact by exploiting a richer set of ideas than the largest industrial developers, who are likely more constrained and not looking as comprehensively at the design space.”

    Rather than treating this as an obstacle, Li and Zhang credit out-of-the-box thinking beyond the abstraction levels defined by conventional computer systems as part of their recipe for success.

    “We rethought the “ENIAD” design entirely from the ground up for AI and big data,” says Zhang. “Supercomputer hardware and software are often developed in isolation from one another based on a fixed abstraction, so the fact that we were able to co-design the hardware and software through a set of new abstractions was one of our secret ingredients.”

    Li and Zhang also believe that these records are just the beginning for “ENIAD”. They will present new performance results for “ENIAD” at the Hot Chips conference next month.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Penn campus

    Academic life at University of Pennsylvania is unparalleled, with 100 countries and every U.S. state represented in one of the Ivy League’s most diverse student bodies. Consistently ranked among the top 10 universities in the country, Penn enrolls 10,000 undergraduate students and welcomes an additional 10,000 students to our world-renowned graduate and professional schools.

    Penn’s award-winning educators and scholars encourage students to pursue inquiry and discovery, follow their passions, and address the world’s most challenging problems through an interdisciplinary approach.

    The University of Pennsylvania(US) is a private Ivy League research university in Philadelphia, Pennsylvania. The university claims a founding date of 1740 and is one of the nine colonial colleges chartered prior to the U.S. Declaration of Independence. Benjamin Franklin, Penn’s founder and first president, advocated an educational program that trained leaders in commerce, government, and public service, similar to a modern liberal arts curriculum.

    Penn has four undergraduate schools as well as twelve graduate and professional schools. Schools enrolling undergraduates include the College of Arts and Sciences; the School of Engineering and Applied Science; the Wharton School; and the School of Nursing. Penn’s “One University Policy” allows students to enroll in classes in any of Penn’s twelve schools. Among its highly ranked graduate and professional schools are a law school whose first professor wrote the first draft of the United States Constitution, the first school of medicine in North America (Perelman School of Medicine, 1765), and the first collegiate business school (Wharton School, 1881).

    Penn is also home to the first “student union” building and organization (Houston Hall, 1896), the first Catholic student club in North America (Newman Center, 1893), the first double-decker college football stadium (Franklin Field, 1924 when second deck was constructed), and Morris Arboretum, the official arboretum of the Commonwealth of Pennsylvania. The first general-purpose electronic computer (ENIAC) was developed at Penn and formally dedicated in 1946. In 2019, the university had an endowment of $14.65 billion, the sixth-largest endowment of all universities in the United States, as well as a research budget of $1.02 billion. The university’s athletics program, the Quakers, fields varsity teams in 33 sports as a member of the NCAA Division I Ivy League conference.

    As of 2018, distinguished alumni and/or Trustees include three U.S. Supreme Court justices; 32 U.S. senators; 46 U.S. governors; 163 members of the U.S. House of Representatives; eight signers of the Declaration of Independence and seven signers of the U.S. Constitution (four of whom signed both representing two-thirds of the six people who signed both); 24 members of the Continental Congress; 14 foreign heads of state and two presidents of the United States, including Donald Trump. As of October 2019, 36 Nobel laureates; 80 members of the American Academy of Arts and Sciences(US); 64 billionaires; 29 Rhodes Scholars; 15 Marshall Scholars and 16 Pulitzer Prize winners have been affiliated with the university.

    History

    The University of Pennsylvania considers itself the fourth-oldest institution of higher education in the United States, though this is contested by Princeton University(US) and Columbia University(US). The university also considers itself the first university in the United States with both undergraduate and graduate studies.

    In 1740, a group of Philadelphians joined together to erect a great preaching hall for the traveling evangelist George Whitefield, who toured the American colonies delivering open-air sermons. The building was designed and built by Edmund Woolley and was the largest building in the city at the time, drawing thousands of people the first time a sermon was preached there. It was initially planned to serve as a charity school as well, but a lack of funds forced plans for the chapel and school to be suspended. According to Franklin’s autobiography, it was in 1743 when he first had the idea to establish an academy, “thinking the Rev. Richard Peters a fit person to superintend such an institution”. However, Peters declined a casual inquiry from Franklin and nothing further was done for another six years. In the fall of 1749, now more eager to create a school to educate future generations, Benjamin Franklin circulated a pamphlet titled Proposals Relating to the Education of Youth in Pensilvania, his vision for what he called a “Public Academy of Philadelphia”. Unlike the other colonial colleges that existed in 1749—Harvard University(US), William & Mary(US), Yale University(US), and The College of New Jersey(US)—Franklin’s new school would not focus merely on education for the clergy. He advocated an innovative concept of higher education, one which would teach both the ornamental knowledge of the arts and the practical skills necessary for making a living and doing public service. The proposed program of study could have become the nation’s first modern liberal arts curriculum, although it was never implemented because Anglican priest William Smith (1727-1803), who became the first provost, and other trustees strongly preferred the traditional curriculum.

    Franklin assembled a board of trustees from among the leading citizens of Philadelphia, the first such non-sectarian board in America. At the first meeting of the 24 members of the board of trustees on November 13, 1749, the issue of where to locate the school was a prime concern. Although a lot across Sixth Street from the old Pennsylvania State House (later renamed and famously known since 1776 as “Independence Hall”) was offered without cost by James Logan, its owner, the trustees realized that the building erected in 1740, which was still vacant, would be an even better site. The original sponsors of the dormant building still owed considerable construction debts and asked Franklin’s group to assume their debts and, accordingly, their inactive trusts. On February 1, 1750, the new board took over the building and trusts of the old board. On August 13, 1751, the “Academy of Philadelphia”, using the great hall at 4th and Arch Streets, took in its first secondary students. A charity school also was chartered on July 13, 1753, in accordance with the intentions of the original “New Building” donors, although it lasted only a few years. On June 16, 1755, the “College of Philadelphia” was chartered, paving the way for the addition of undergraduate instruction. All three schools shared the same board of trustees and were considered to be part of the same institution. The first commencement exercises were held on May 17, 1757.

    The institution of higher learning was known as the College of Philadelphia from 1755 to 1779. In 1779, not trusting then-provost the Reverend William Smith’s “Loyalist” tendencies, the revolutionary State Legislature created a University of the State of Pennsylvania. The result was a schism, with Smith continuing to operate an attenuated version of the College of Philadelphia. In 1791, the legislature issued a new charter, merging the two institutions into a new University of Pennsylvania with twelve men from each institution on the new board of trustees.

    Penn has three claims to being the first university in the United States, according to university archives director Mark Frazier Lloyd: the 1765 founding of the first medical school in America made Penn the first institution to offer both “undergraduate” and professional education; the 1779 charter made it the first American institution of higher learning to take the name of “University”; and existing colleges were established as seminaries (although, as detailed earlier, Penn adopted a traditional seminary curriculum as well).

    After being located in downtown Philadelphia for more than a century, the campus was moved across the Schuylkill River to property purchased from the Blockley Almshouse in West Philadelphia in 1872, where it has since remained in an area now known as University City. Although Penn began operating as an academy or secondary school in 1751 and obtained its collegiate charter in 1755, it initially designated 1750 as its founding date; this is the year that appears on the first iteration of the university seal. Sometime later in its early history, Penn began to consider 1749 as its founding date and this year was referenced for over a century, including at the centennial celebration in 1849. In 1899, the board of trustees voted to adjust the founding date earlier again, this time to 1740, the date of “the creation of the earliest of the many educational trusts the University has taken upon itself”. The board of trustees voted in response to a three-year campaign by Penn’s General Alumni Society to retroactively revise the university’s founding date to appear older than Princeton University, which had been chartered in 1746.

    Research, innovations and discoveries

    Penn is classified as an “R1” doctoral university: “Highest research activity.” Its economic impact on the Commonwealth of Pennsylvania for 2015 amounted to $14.3 billion. Penn’s research expenditures in the 2018 fiscal year were $1.442 billion, the fourth largest in the U.S. In fiscal year 2019 Penn received $582.3 million in funding from the National Institutes of Health(US).

    In line with its well-known interdisciplinary tradition, Penn’s research centers often span two or more disciplines. In the 2010–2011 academic year alone, five interdisciplinary research centers were created or substantially expanded; these include the Center for Health-care Financing; the Center for Global Women’s Health at the Nursing School; the $13 million Morris Arboretum’s Horticulture Center; the $15 million Jay H. Baker Retailing Center at Wharton; and the $13 million Translational Research Center at Penn Medicine. With these additions, Penn now counts 165 research centers hosting a research community of over 4,300 faculty and over 1,100 postdoctoral fellows, 5,500 academic support staff and graduate student trainees. To further assist the advancement of interdisciplinary research President Amy Gutmann established the “Penn Integrates Knowledge” title awarded to selected Penn professors “whose research and teaching exemplify the integration of knowledge”. These professors hold endowed professorships and joint appointments between Penn’s schools.

    Penn is also among the most prolific producers of doctoral students. With 487 PhDs awarded in 2009, Penn ranks third in the Ivy League, only behind Columbia University(US) and Cornell University(US) (Harvard University(US) did not report data). It also has one of the highest numbers of post-doctoral appointees (933 in number for 2004–2007), ranking third in the Ivy League (behind Harvard and Yale University(US)) and tenth nationally.

    In most disciplines Penn professors’ productivity is among the highest in the nation and first in the fields of epidemiology, business, communication studies, comparative literature, languages, information science, criminal justice and criminology, social sciences and sociology. According to the National Research Council nearly three-quarters of Penn’s 41 assessed programs were placed in ranges including the top 10 rankings in their fields, with more than half of these in ranges including the top five rankings in these fields.

    Penn’s research tradition has historically been complemented by innovations that shaped higher education. In addition to establishing the first medical school; the first university teaching hospital; the first business school; and the first student union, Penn was also the cradle of other significant developments. In 1852, Penn Law was the first law school in the nation to publish a law journal still in existence (then called The American Law Register, now the Penn Law Review, one of the most cited law journals in the world). Under the deanship of William Draper Lewis, the law school was also one of the first schools to emphasize legal teaching by full-time professors instead of practitioners, a system that is still followed today. The Wharton School was home to several pioneering developments in business education. It established the first research center in a business school in 1921 and the first center for entrepreneurship in 1973, and it regularly introduced novel curricula for which BusinessWeek wrote, “Wharton is on the crest of a wave of reinvention and change in management education”.

    Several major scientific discoveries have also taken place at Penn. The university is probably best known as the place where the first general-purpose electronic computer (ENIAC) was born in 1946 at the Moore School of Electrical Engineering. It was here also where the world’s first spelling and grammar checkers were created, as well as the popular COBOL programming language. Penn can also boast some of the most important discoveries in the field of medicine. The dialysis machine used as an artificial replacement for lost kidney function was conceived and devised out of a pressure cooker by William Inouye while he was still a student at Penn Med; the Rubella and Hepatitis B vaccines were developed at Penn; the discovery of cancer’s link with genes; cognitive therapy; Retin-A (the cream used to treat acne), Resistin; the Philadelphia gene (linked to chronic myelogenous leukemia) and the technology behind PET Scans were all discovered by Penn Med researchers. More recent gene research has led to the discovery of the genes for fragile X syndrome, the most common form of inherited mental retardation; spinal and bulbar muscular atrophy, a disorder marked by progressive muscle wasting; and Charcot–Marie–Tooth disease, a progressive neurodegenerative disease that affects the hands, feet and limbs.

    Conductive polymer was also developed at Penn by Alan J. Heeger, Alan MacDiarmid and Hideki Shirakawa, an invention that earned them the Nobel Prize in Chemistry. On faculty since 1965, Ralph L. Brinster developed the scientific basis for in vitro fertilization and the transgenic mouse at Penn and was awarded the National Medal of Science in 2010. The theory of superconductivity was also partly developed at Penn, by then-faculty member John Robert Schrieffer (along with John Bardeen and Leon Cooper). The university has also contributed major advancements in the fields of economics and management. Among the many discoveries are conjoint analysis, widely used as a predictive tool especially in market research; Simon Kuznets’s method of measuring Gross National Product; the Penn effect (the observation that consumer price levels in richer countries are systematically higher than in poorer ones) and the “Wharton Model” developed by Nobel-laureate Lawrence Klein to measure and forecast economic activity. The idea behind Health Maintenance Organizations also belonged to Penn professor Robert Eilers, who put it into practice during then-President Nixon’s health reform in the 1970s.

    International partnerships

    Students can study abroad for a semester or a year at partner institutions such as the London School of Economics(UK), University of Barcelona [Universitat de Barcelona](ES), Paris Institute of Political Studies [Institut d’études politiques de Paris](FR), University of Queensland(AU), University College London(UK), King’s College London(UK), Hebrew University of Jerusalem(IL) and University of Warwick(UK).

     
  • richardmitnick 8:25 pm on July 18, 2021 Permalink | Reply
    Tags: "Curiosity and technology drive quest to reveal fundamental secrets of the universe", A very specific particle called a J/psi might provide a clearer picture of what’s going on inside a proton’s gluonic field., , Argonne-driven technology is part of a broad initiative to answer fundamental questions about the birth of matter in the universe and the building blocks that hold it all together., , , , , , Computational Science, , , , , , Developing and fabricating detectors that search for signatures from the early universe or enhance our understanding of the most fundamental of particles., , Electron-Ion Collider (EIC) at DOE's Brookhaven National Laboratory (US) to be built inside the tunnel that currently houses the Relativistic Heavy Ion Collider [RHIC]., Exploring the hearts of protons and neutrons, , , Neutrinoless double beta decay can only happen if the neutrino is its own anti-particle., , , , , , , SLAC National Accelerator Laboratory(US), , Supercomputing,   

    From DOE’s Argonne National Laboratory (US) : “Curiosity and technology drive quest to reveal fundamental secrets of the universe” 

    Argonne Lab

    From DOE’s Argonne National Laboratory (US)

    July 15, 2021
    John Spizzirri

    Argonne-driven technology is part of a broad initiative to answer fundamental questions about the birth of matter in the universe and the building blocks that hold it all together.

    Imagine the first of our species to lie beneath the glow of an evening sky. An enormous sense of awe, perhaps a little fear, fills them as they wonder at those seemingly infinite points of light and what they might mean. As humans, we evolved the capacity to ask big insightful questions about the world around us and worlds beyond us. We dare, even, to question our own origins.

    “The place of humans in the universe is important to understand,” said physicist and computational scientist Salman Habib. ​“Once you realize that there are billions of galaxies we can detect, each with many billions of stars, you understand the insignificance of being human in some sense. But at the same time, you appreciate being human a lot more.”

    The South Pole Telescope is part of a collaboration between Argonne and a number of national labs and universities to measure the CMB, considered the oldest light in the universe.

    The high altitude and extremely dry conditions of the South Pole keep water vapor from absorbing select light wavelengths.

    With no less a sense of wonder than most of us, Habib and colleagues at the U.S. Department of Energy’s (DOE) Argonne National Laboratory are actively researching these questions through an initiative that investigates the fundamental components of both particle physics and astrophysics.

    The breadth of Argonne’s research in these areas is mind-boggling. It takes us back to the very edge of time itself, to some infinitesimally small portion of a second after the Big Bang when random fluctuations in temperature and density arose, eventually forming the breeding grounds of galaxies and planets.

    It explores the heart of protons and neutrons to understand the most fundamental constructs of the visible universe, particles and energy once free in the early post-Big Bang universe, but later confined forever within a basic atomic structure as that universe began to cool.

    And it addresses slightly newer, more controversial questions about the nature of Dark Matter and Dark Energy, both of which play a dominant role in the makeup and dynamics of the universe but are little understood.
    _____________________________________________________________________________________
    Dark Energy Survey

    Dark Energy Camera [DECam] built at DOE’s Fermi National Accelerator Laboratory(US)

    NOIRLab National Optical Astronomy Observatory(US) Cerro Tololo Inter-American Observatory(CL) Victor M Blanco 4m Telescope which houses the Dark-Energy-Camera – DECam at Cerro Tololo, Chile at an altitude of 7200 feet.

    NOIRLab(US)NSF NOIRLab NOAO (US) Cerro Tololo Inter-American Observatory(CL) approximately 80 km to the East of La Serena, Chile, at an altitude of 2200 meters.

    Timeline of the Inflationary Universe WMAP

    The Dark Energy Survey (DES) is an international, collaborative effort to map hundreds of millions of galaxies, detect thousands of supernovae, and find patterns of cosmic structure that will reveal the nature of the mysterious dark energy that is accelerating the expansion of our Universe. DES began searching the Southern skies on August 31, 2013.

    According to Einstein’s theory of General Relativity, gravity should lead to a slowing of the cosmic expansion. Yet, in 1998, two teams of astronomers studying distant supernovae made the remarkable discovery that the expansion of the universe is speeding up. To explain cosmic acceleration, cosmologists are faced with two possibilities: either 70% of the universe exists in an exotic form, now called dark energy, that exhibits a gravitational force opposite to the attractive gravity of ordinary matter, or General Relativity must be replaced by a new theory of gravity on cosmic scales.

    DES is designed to probe the origin of the accelerating universe and help uncover the nature of dark energy by measuring the 14-billion-year history of cosmic expansion with high precision. More than 400 scientists from over 25 institutions in the United States, Spain, the United Kingdom, Brazil, Germany, Switzerland, and Australia are working on the project. The collaboration built and is using an extremely sensitive 570-Megapixel digital camera, DECam, mounted on the Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory, high in the Chilean Andes, to carry out the project.

    Over six years (2013-2019), the DES collaboration used 758 nights of observation to carry out a deep, wide-area survey to record information from 300 million galaxies that are billions of light-years from Earth. The survey imaged 5000 square degrees of the southern sky in five optical filters to obtain detailed information about each galaxy. A fraction of the survey time is used to observe smaller patches of sky roughly once a week to discover and study thousands of supernovae and other astrophysical transients.
    _____________________________________________________________________________________

    “And this world-class research we’re doing could not happen without advances in technology,” said Argonne Associate Laboratory Director Kawtar Hafidi, who helped define and merge the different aspects of the initiative.

    “We are developing and fabricating detectors that search for signatures from the early universe or enhance our understanding of the most fundamental of particles,” she added. ​“And because all of these detectors create big data that have to be analyzed, we are developing, among other things, artificial intelligence techniques to do that as well.”

    Decoding messages from the universe

    Fleshing out a theory of the universe on cosmic or subatomic scales requires a combination of observations, experiments, theories, simulations and analyses, which in turn requires access to the world’s most sophisticated telescopes, particle colliders, detectors and supercomputers.

    Argonne is uniquely suited to this mission, equipped as it is with many of those tools, the ability to manufacture others and collaborative privileges with other federal laboratories and leading research institutions to access other capabilities and expertise.

    As lead of the initiative’s cosmology component, Habib uses many of these tools in his quest to understand the origins of the universe and what makes it tick.

    And what better way to do that than to observe it, he said.

    “If you look at the universe as a laboratory, then obviously we should study it and try to figure out what it is telling us about foundational science,” noted Habib. ​“So, one part of what we are trying to do is build ever more sensitive probes to decipher what the universe is trying to tell us.”

    To date, Argonne is involved in several significant sky surveys, which use an array of observational platforms, like telescopes and satellites, to map different corners of the universe and collect information that furthers or rejects a specific theory.

    For example, the South Pole Telescope survey, a collaboration between Argonne and a number of national labs and universities, is measuring the cosmic microwave background (CMB) [above], considered the oldest light in the universe. Variations in CMB properties, such as temperature, signal the original fluctuations in density that ultimately led to all the visible structure in the universe.

    Additionally, the Dark Energy Spectroscopic Instrument and the forthcoming Vera C. Rubin Observatory are specially outfitted, ground-based telescopes designed to shed light on dark energy and dark matter, as well as the formation of luminous structure in the universe.

    DOE’s Lawrence Berkeley National Laboratory(US) DESI spectroscopic instrument on the Mayall 4-meter telescope at Kitt Peak National Observatory, in the Quinlan Mountains in the Arizona-Sonoran Desert on the Tohono O’odham Nation, 88 kilometers (55 mi) west-southwest of Tucson, Arizona, Altitude 2,096 m (6,877 ft).

    National Optical Astronomy Observatory (US) Mayall 4 m telescope at NSF NOIRLab NOAO Kitt Peak National Observatory (US) in the Quinlan Mountains in the Arizona-Sonoran Desert on the Tohono O’odham Nation, 88 kilometers (55 mi) west-southwest of Tucson, Arizona, Altitude 2,096 m (6,877 ft).

    National Science Foundation(US) NSF (US) NOIRLab NOAO Kitt Peak National Observatory on the Quinlan Mountains in the Arizona-Sonoran Desert on the Tohono O’odham Nation, 88 kilometers (55 mi) west-southwest of Tucson, Arizona, Altitude 2,096 m (6,877 ft).

    National Science Foundation(US) NOIRLab (US) NOAO Kitt Peak National Observatory (US) on Kitt Peak of the Quinlan Mountains in the Arizona-Sonoran Desert on the Tohono O’odham Nation, 88 kilometers (55 mi) west-southwest of Tucson, Arizona, Altitude 2,096 m (6,877 ft). annotated.

    NSF (US) NOIRLab (US) NOAO (US) Vera C. Rubin Observatory [LSST] Telescope currently under construction on the El Peñón peak at Cerro Pachón Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing NSF (US) NOIRLab (US) NOAO (US) Gemini South Telescope and NSF (US) NOIRLab (US) NOAO (US) Southern Astrophysical Research Telescope.

    Darker matters

    All the data sets derived from these observations are connected to the second component of Argonne’s cosmology push, which revolves around theory and modeling. Cosmologists combine observations, measurements and the prevailing laws of physics to form theories that resolve some of the mysteries of the universe.

    But the universe is complex, and it has an annoying tendency to throw a curve ball just when we thought we had a theory cinched. Discoveries within the past 100 years have revealed that the universe is both expanding and accelerating its expansion — realizations that came as separate but equal surprises.

    Saul Perlmutter (center) [The Supernova Cosmology Project] shared the 2006 Shaw Prize in Astronomy, the 2011 Nobel Prize in Physics, and the 2015 Breakthrough Prize in Fundamental Physics with Brian P. Schmidt (right) and Adam Riess (left) [The High-z Supernova Search Team] for providing evidence that the expansion of the universe is accelerating.

    “To say that we understand the universe would be incorrect. To say that we sort of understand it is fine,” exclaimed Habib. ​“We have a theory that describes what the universe is doing, but each time the universe surprises us, we have to add a new ingredient to that theory.”

    Modeling helps scientists get a clearer picture of whether and how those new ingredients will fit a theory. They make predictions for observations that have not yet been made, telling observers what new measurements to take.

    Habib’s group is applying this same sort of process to gain an ever-so-tentative grasp on the nature of dark energy and dark matter. While scientists can tell us that both exist, that they comprise about 68 and 26% of the universe, respectively, beyond that not much else is known.

    ______________________________________________________________________________________________________________

    Dark Matter Background
    Fritz Zwicky discovered Dark Matter in the 1930s when observing the movement of the Coma Cluster. Vera Rubin, a woman in STEM denied the Nobel Prize, did most of the work on Dark Matter some 30 years later.

    Fritz Zwicky. Credit: http://palomarskies.blogspot.com.


    Coma cluster via NASA/ESA Hubble.


    In modern times, it was astronomer Fritz Zwicky, in the 1930s, who made the first observations of what we now call dark matter. His 1933 observations of the Coma Cluster of galaxies seemed to indicate it had a mass 500 times greater than that previously calculated by Edwin Hubble. Furthermore, this extra mass seemed to be completely invisible. Although Zwicky’s observations were initially met with much skepticism, they were later confirmed by other groups of astronomers.
    Thirty years later, astronomer Vera Rubin provided a huge piece of evidence for the existence of dark matter. She discovered that stars far from the centers of galaxies orbit at roughly the same speed as stars near the center, whereas, if only the visible matter were present, the outer stars should orbit more slowly, just as the outer planets of the solar system orbit the Sun more slowly than the inner ones. The only way to explain this is if each galaxy is embedded in some much larger unseen structure whose mass keeps the rotation speed roughly constant from center to edge.
    Vera Rubin, following Zwicky, postulated that the missing structure in galaxies is dark matter. Her ideas were met with much resistance from the astronomical community, but her observations have been confirmed and are seen today as pivotal proof of the existence of dark matter.

    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science).


    Vera Rubin measuring spectra, worked on Dark Matter (Emilio Segre Visual Archives AIP SPL).


    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970

    Dark Matter Research

    Inside the Axion Dark Matter eXperiment at the University of Washington (US). Credit: Mark Stone, University of Washington.
    _____________________________________________________________________________________

    Observations of cosmological structure — the distribution of galaxies and even of their shapes — provide clues about the nature of dark matter, which in turn feeds simple dark matter models and subsequent predictions. If observations, models and predictions aren’t in agreement, that tells scientists that there may be some missing ingredient in their description of dark matter.

    But there are also experiments that are looking for direct evidence of dark matter particles, which require highly sensitive detectors [above]. Argonne has initiated development of specialized superconducting detector technology for the detection of low-mass dark matter particles.

    This technology requires the ability to control properties of layered materials and adjust the temperature where the material transitions from finite to zero resistance, when it becomes a superconductor. And unlike other applications where scientists would like this temperature to be as high as possible — room temperature, for example — here, the transition needs to be very close to absolute zero.

    Habib refers to these dark matter detectors as traps, like those used for hunting — which, in essence, is what cosmologists are doing. Because it’s possible that dark matter doesn’t come in just one species, they need different types of traps.

    “It’s almost like you’re in a jungle in search of a certain animal, but you don’t quite know what it is — it could be a bird, a snake, a tiger — so you build different kinds of traps,” he said.

    Lab researchers are working on technologies to capture these elusive species through new classes of dark matter searches. Collaborating with other institutions, they are now designing and building a first set of pilot projects aimed at looking for dark matter candidates with low mass.

    Tuning in to the early universe

    Amy Bender is working on a different kind of detector — well, a lot of detectors — which are at the heart of a survey of the cosmic microwave background (CMB).

    “The CMB is radiation that has been around the universe for 13 billion years, and we’re directly measuring that,” said Bender, an assistant physicist at Argonne.

    The Argonne-developed detectors — all 16,000 of them — capture photons, or light particles, from that primordial sky through the aforementioned South Pole Telescope, to help answer questions about the early universe, fundamental physics and the formation of cosmic structures.

    Now, the CMB experimental effort is moving into a new phase, CMB-Stage 4 (CMB-S4).

    CMB-S4 is the next-generation ground-based cosmic microwave background experiment. With 21 telescopes at the South Pole and in the Chilean Atacama desert surveying the sky with 550,000 cryogenically cooled superconducting detectors for 7 years, CMB-S4 will deliver transformative discoveries in fundamental physics, cosmology, astrophysics, and astronomy. CMB-S4 is supported by the Department of Energy Office of Science and the National Science Foundation.

    This larger project tackles even more complex topics like Inflationary Theory, which suggests that the universe expanded faster than the speed of light for a fraction of a second, shortly after the Big Bang.
    _____________________________________________________________________________________
    Inflation

    Alan Guth, from Highland Park High School and M.I.T., who first proposed cosmic inflation

    Lambda Cold Dark Matter accelerated expansion of the universe (via scinotions.com, “The cosmic inflation suggests the existence of parallel universes”). Credit: Alex Mittelmann, Coldcreation.


    Alan Guth’s notes:

    Alan Guth’s original notes on inflation


    _____________________________________________________________________________________

    A section of a detector array with architecture suitable for future CMB experiments, such as the upcoming CMB-S4 project. Fabricated at Argonne’s Center for Nanoscale Materials, 16,000 of these detectors currently drive measurements collected from the South Pole Telescope. (Image by Argonne National Laboratory.)

    While the science is amazing, the technology to get us there is just as fascinating.

    Technically called transition edge sensing (TES) bolometers, the detectors on the telescope are made from superconducting materials fabricated at Argonne’s Center for Nanoscale Materials, a DOE Office of Science User Facility.

    Each of the 16,000 detectors acts as a combination of very sensitive thermometer and camera. As incoming radiation is absorbed on the surface of each detector, measurements are made by supercooling them to a fraction of a degree above absolute zero. (That’s over three times as cold as Antarctica’s lowest recorded temperature.)

    Changes in heat are measured and recorded as changes in electrical resistance and will help inform a map of the CMB’s intensity across the sky.
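    As a toy illustration of that last step, turning a stream of per-sample detector readings into a sky map, the sketch below bins noisy time-ordered data by the pixel each sample was pointed at and averages it. This is only a schematic of the general map-making idea, not the South Pole Telescope or CMB-S4 pipeline, and all numbers are invented.

```python
import numpy as np

def bin_timestream_to_map(pointing_pix, readings, n_pix):
    """Average every detector reading that falls in each sky pixel;
    pixels that were never observed come back as NaN."""
    signal = np.bincount(pointing_pix, weights=readings, minlength=n_pix)
    hits = np.bincount(pointing_pix, minlength=n_pix)
    return np.where(hits > 0, signal / np.maximum(hits, 1), np.nan)

# Toy data: 100,000 samples scanned across a 1,000-pixel strip of sky.
rng = np.random.default_rng(1)
n_pix = 1000
true_sky = rng.normal(0.0, 100.0, size=n_pix)               # "true" fluctuations (microkelvin)
pix = rng.integers(0, n_pix, size=100_000)                   # where each sample pointed
readings = true_sky[pix] + rng.normal(0.0, 500.0, 100_000)   # noisy per-sample detector data
cmb_map = bin_timestream_to_map(pix, readings, n_pix)
print(f"recovered map rms: {np.nanstd(cmb_map):.0f} uK (true rms was ~100 uK)")
```

    Averaging many samples per pixel beats down the per-sample noise, which is one reason adding hundreds of thousands of detectors improves sensitivity to faint signals.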

    CMB-S4 will focus on newer technology that will allow researchers to distinguish very specific patterns in light, or polarized light. In this case, they are looking for what Bender calls the Holy Grail of polarization, a pattern called B-modes.

    Capturing this signal from the early universe — one far fainter than the intensity signal — will help to either confirm or disprove a generic prediction of inflation.

    It will also require the addition of 500,000 detectors distributed among 21 telescopes in two distinct regions of the world, the South Pole and the Chilean desert. There, the high altitude and extremely dry conditions keep water vapor in the atmosphere from absorbing millimeter wavelength light, like that of the CMB.

    While previous experiments have touched on this polarization, the large number of new detectors will improve sensitivity to that polarization and grow our ability to capture it.

    “Literally, we have built these cameras completely from the ground up,” said Bender. ​“Our innovation is in how to make these stacks of superconducting materials work together within this detector, where you have to couple many complex factors and then actually read out the results with the TES. And that is where Argonne has contributed, hugely.”

    Down to the basics

    Argonne’s capabilities in detector technology don’t just stop at the edge of time, nor do the initiative’s investigations just look at the big picture.

    Most of the visible universe, including galaxies, stars, planets and people, is made up of protons and neutrons. Understanding the most fundamental components of those building blocks and how they interact to make atoms and molecules and just about everything else is the realm of physicists like Zein-Eddine Meziani.

    “From the perspective of the future of my field, this initiative is extremely important,” said Meziani, who leads Argonne’s Medium Energy Physics group. ​“It has given us the ability to actually explore new concepts, develop better understanding of the science and a pathway to enter into bigger collaborations and take some leadership.”

    Taking the lead of the initiative’s nuclear physics component, Meziani is steering Argonne toward a significant role in the development of the Electron-Ion Collider, a new U.S. Nuclear Physics Program facility slated for construction at DOE’s Brookhaven National Laboratory (US).

    Argonne’s primary interest in the collider is to elucidate the role that quarks, anti-quarks and gluons play in giving mass and a quantum angular momentum, called spin, to protons and neutrons — nucleons — the particles that comprise the nucleus of an atom.


    EIC Electron Animation, Inner Proton Motion.
    Electrons colliding with ions will exchange virtual photons with the nuclear particles to help scientists ​“see” inside the nuclear particles; the collisions will produce precision 3D snapshots of the internal arrangement of quarks and gluons within ordinary nuclear matter; like a combination CT/MRI scanner for atoms. (Image by Brookhaven National Laboratory.)

    While we once thought nucleons were the finite fundamental particles of an atom, the emergence of powerful particle colliders, like the Stanford Linear Accelerator Center at Stanford University and the former Tevatron at DOE’s Fermilab, proved otherwise.

    It turns out that quarks and gluons were independent of nucleons in the extreme energy densities of the early universe; as the universe expanded and cooled, they transformed into ordinary matter.

    “There was a time when quarks and gluons were free in a big soup, if you will, but we have never seen them free,” explained Meziani. ​“So, we are trying to understand how the universe captured all of this energy that was there and put it into confined systems, like these droplets we call protons and neutrons.”

    Some of that energy is tied up in gluons, which, despite the fact that they have no mass, confer the majority of mass to a proton. So, Meziani is hoping that the Electron-Ion Collider will allow science to explore — among other properties — the origins of mass in the universe through a detailed exploration of gluons.

    And just as Amy Bender is looking for the B-modes polarization in the CMB, Meziani and other researchers are hoping to use a very specific particle called a J/psi to provide a clearer picture of what’s going on inside a proton’s gluonic field.

    But producing and detecting the J/psi particle within the collider — while ensuring that the proton target doesn’t break apart — is a tricky enterprise, which requires new technologies. Again, Argonne is positioning itself at the forefront of this endeavor.

    “We are working on the conceptual designs of technologies that will be extremely important for the detection of these types of particles, as well as for testing concepts for other science that will be conducted at the Electron-Ion Collider,” said Meziani.

    Argonne also is producing detector and related technologies in its quest for a phenomenon called neutrinoless double beta decay. A neutrino is one of the particles emitted during the process of neutron radioactive beta decay and serves as a small but mighty connection between particle physics and astrophysics.

    “Neutrinoless double beta decay can only happen if the neutrino is its own anti-particle,” said Hafidi. ​“If the existence of these very rare decays is confirmed, it would have important consequences in understanding why there is more matter than antimatter in the universe.”

    Argonne scientists from different areas of the lab are working on the Neutrino Experiment with Xenon Time Projection Chamber (NEXT) collaboration to design and prototype key systems for the collaboration’s next big experiment. This includes developing a one-of-a-kind test facility and an R&D program for new, specialized detector systems.

    “We are really working on dramatic new ideas,” said Meziani. ​“We are investing in certain technologies to produce some proof of principle that they will be the ones to pursue later, that the technology breakthroughs that will take us to the highest sensitivity detection of this process will be driven by Argonne.”

    The tools of detection

    Ultimately, fundamental science is science derived from human curiosity. And while we may not always see the reason for pursuing it, more often than not, fundamental science produces results that benefit all of us. Sometimes it’s a gratifying answer to an age-old question, other times it’s a technological breakthrough intended for one science that proves useful in a host of other applications.

    Through their various efforts, Argonne scientists are aiming for both outcomes. But it will take more than curiosity and brain power to solve the questions they are asking. It will take our skills at toolmaking, like the telescopes that peer deep into the heavens and the detectors that capture hints of the earliest light or the most elusive of particles.

    We will need to employ the ultrafast computing power of new supercomputers. Argonne’s forthcoming Aurora exascale machine will analyze mountains of data for help in creating massive models that simulate the dynamics of the universe or subatomic world, which, in turn, might guide new experiments — or introduce new questions.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer, to be built at DOE’s Argonne National Laboratory.

    And we will apply artificial intelligence to recognize patterns in complex observations — on the subatomic and cosmic scales — far more quickly than the human eye can, or use it to optimize machinery and experiments for greater efficiency and faster results.

    “I think we have been given the flexibility to explore new technologies that will allow us to answer the big questions,” said Bender. ​“What we’re developing is so cutting edge, you never know where it will show up in everyday life.”

    Funding for research mentioned in this article was provided by Argonne Laboratory Directed Research and Development; Argonne program development; DOE Office of High Energy Physics: Cosmic Frontier, South Pole Telescope-3G project, Detector R&D; and DOE Office of Nuclear Physics.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    DOE’s Argonne National Laboratory (US) seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems. Argonne is a science and engineering research national laboratory operated by UChicago Argonne LLC for the United States Department of Energy. The facility is located in Lemont, Illinois, outside of Chicago, and is the largest national laboratory by size and scope in the Midwest.

    Argonne had its beginnings in the Metallurgical Laboratory of the University of Chicago, formed in part to carry out Enrico Fermi’s work on nuclear reactors for the Manhattan Project during World War II. After the war, it was designated as the first national laboratory in the United States on July 1, 1946. In the post-war era the lab focused primarily on non-weapon related nuclear physics, designing and building the first power-producing nuclear reactors, helping design the reactors used by the United States’ nuclear navy, and a wide variety of similar projects. In 1994, the lab’s nuclear mission ended, and today it maintains a broad portfolio in basic science research, energy storage and renewable energy, environmental sustainability, supercomputing, and national security.

    UChicago Argonne, LLC, the operator of the laboratory, “brings together the expertise of the University of Chicago (the sole member of the LLC) with Jacobs Engineering Group Inc.” Argonne is a part of the expanding Illinois Technology and Research Corridor. Argonne formerly ran a smaller facility called Argonne National Laboratory-West (or simply Argonne-West) in Idaho next to the Idaho National Engineering and Environmental Laboratory. In 2005, the two Idaho-based laboratories merged to become the DOE’s Idaho National Laboratory.
    What would become Argonne began in 1942 as the Metallurgical Laboratory at the University of Chicago, which had become part of the Manhattan Project. The Met Lab built Chicago Pile-1, the world’s first nuclear reactor, under the stands of the University of Chicago sports stadium. Considered unsafe, in 1943, CP-1 was reconstructed as CP-2, in what is today known as Red Gate Woods but was then the Argonne Forest of the Cook County Forest Preserve District near Palos Hills. The lab was named after the surrounding forest, which in turn was named after the Forest of Argonne in France where U.S. troops fought in World War I. Fermi’s pile was originally going to be constructed in the Argonne forest, and construction plans were set in motion, but a labor dispute brought the project to a halt. Since speed was paramount, the project was moved to the squash court under Stagg Field, the football stadium on the campus of the University of Chicago. Fermi told them that he was sure of his calculations, which said that it would not lead to a runaway reaction, which would have contaminated the city.

    Other activities were added to Argonne over the next five years. On July 1, 1946, the “Metallurgical Laboratory” was formally re-chartered as Argonne National Laboratory for “cooperative research in nucleonics.” At the request of the U.S. Atomic Energy Commission, it began developing nuclear reactors for the nation’s peaceful nuclear energy program. In the late 1940s and early 1950s, the laboratory moved to a larger location in unincorporated DuPage County, Illinois and established a remote location in Idaho, called “Argonne-West,” to conduct further nuclear research.

    In quick succession, the laboratory designed and built Chicago Pile 3 (1944), the world’s first heavy-water moderated reactor, and the Experimental Breeder Reactor I (Chicago Pile 4), built in Idaho, which lit a string of four light bulbs with the world’s first nuclear-generated electricity in 1951. A complete list of the reactors designed and, in most cases, built and operated by Argonne can be viewed on the Reactors Designed by Argonne page. The knowledge gained from the Argonne experiments conducted with these reactors formed the foundation for the designs of most of the commercial reactors currently used throughout the world for electric power generation and continues to inform the evolving designs of liquid-metal reactors for future commercial power stations.

    Conducting classified research, the laboratory was heavily secured; all employees and visitors needed badges to pass a checkpoint, many of the buildings were classified, and the laboratory itself was fenced and guarded. Such alluring secrecy drew visitors both authorized—including King Leopold III of Belgium and Queen Frederica of Greece—and unauthorized. Shortly past 1 a.m. on February 6, 1951, Argonne guards discovered reporter Paul Harvey near the 10-foot (3.0 m) perimeter fence, his coat tangled in the barbed wire. Searching his car, guards found a previously prepared four-page broadcast detailing the saga of his unauthorized entrance into a classified “hot zone”. He was brought before a federal grand jury on charges of conspiracy to obtain information on national security and transmit it to the public, but was not indicted.

    Not all nuclear technology went into developing reactors, however. While designing a scanner for reactor fuel elements in 1957, Argonne physicist William Nelson Beck put his own arm inside the scanner and obtained one of the first ultrasound images of the human body. Remote manipulators designed to handle radioactive materials laid the groundwork for more complex machines used to clean up contaminated areas, sealed laboratories or caves. In 1964, the “Janus” reactor opened to study the effects of neutron radiation on biological life, informing guidelines on safe exposure levels for workers at power plants, laboratories and hospitals. Scientists at Argonne pioneered a technique to analyze the moon’s surface using alpha radiation; it flew aboard Surveyor 5 in 1967 and was later used to analyze lunar samples from the Apollo 11 mission.

    In addition to nuclear work, the laboratory maintained a strong presence in the basic research of physics and chemistry. In 1955, Argonne chemists co-discovered the elements einsteinium and fermium, elements 99 and 100 in the periodic table. In 1962, laboratory chemists produced the first compound of the inert noble gas xenon, opening up a new field of chemical bonding research. In 1963, they discovered the hydrated electron.

    High-energy physics made a leap forward when Argonne was chosen as the site of the 12.5 GeV Zero Gradient Synchrotron, a proton accelerator that opened in 1963. A bubble chamber allowed scientists to track the motions of subatomic particles as they zipped through the chamber; in 1970, they observed the neutrino in a hydrogen bubble chamber for the first time.

    Meanwhile, the laboratory was also helping to design the reactor for the world’s first nuclear-powered submarine, the U.S.S. Nautilus, which steamed for more than 513,550 nautical miles (951,090 km). The next reactor models were the Experimental Boiling Water Reactor, the forerunner of many modern nuclear plants, and the Experimental Breeder Reactor II (EBR-II), which was sodium-cooled and included a fuel recycling facility. EBR-II was later modified to test other reactor designs, including a fast-neutron reactor and, in 1982, the Integral Fast Reactor concept—a revolutionary design that reprocessed its own fuel, reduced its atomic waste and withstood safety tests simulating the same failures that triggered the Chernobyl and Three Mile Island disasters. In 1994, however, the U.S. Congress terminated funding for the bulk of Argonne’s nuclear programs.

    Argonne moved to specialize in other areas, while capitalizing on its experience in physics, chemical sciences and metallurgy. In 1987, the laboratory was the first to successfully demonstrate a pioneering technique called plasma wakefield acceleration, which accelerates particles in much shorter distances than conventional accelerators. It also cultivated a strong battery research program.

    Following a major push by then-director Alan Schriesheim, the laboratory was chosen as the site of the Advanced Photon Source, a major X-ray facility which was completed in 1995 and produced the brightest X-rays in the world at the time of its construction.

    On 19 March 2019, it was reported in the Chicago Tribune that the laboratory was constructing the world’s most powerful supercomputer. Costing $500 million, it will have a processing power of one exaflop, or a quintillion floating-point operations per second. Applications will include the analysis of stars and improvements in the power grid.

    With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About the Advanced Photon Source

    The U. S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.


    Argonne Lab Campus

     
  • richardmitnick 8:38 am on July 18, 2021 Permalink | Reply
    Tags: "Astronomers Find Secret Planet-Making Ingredient- Magnetic Fields", , , , , , , , Supercomputing   

    From Nautilus (US) : “Astronomers Find Secret Planet-Making Ingredient- Magnetic Fields” 

    From Nautilus (US)

    7.17.21
    Robin George Andrews

    Supercomputer simulations that include magnetic fields can readily form midsize planets, seen here as red dots. Credit: Hongping Deng et al.

    Scientists have long struggled to understand how common planets form. A new supercomputer simulation shows that the missing ingredient may be magnetism.

    We like to think of ourselves as unique. That conceit may even be true when it comes to our cosmic neighborhood: Despite the fact that planets between the sizes of Earth and Neptune appear to be the most common in the cosmos, no such intermediate-mass planets can be found in the solar system.

    The problem is, our best theories of planet formation—cast as they are from the molds of what we observe in our own backyard—haven’t been sufficient to truly explain how planets form. One study, however, published in Nature Astronomy in February 2021, demonstrates that by taking magnetism into account, astronomers may be able to explain the striking diversity of planets orbiting alien stars.

    It’s too early to tell if magnetism is the key missing ingredient in our planet-formation models, but the new work is nevertheless “a very cool new result,” said Anders Johansen, a planetary scientist at the University of Copenhagen [Københavns Universitet](DK) who was not involved with the work.

    Until recently, gravity has been the star of the show. In the most commonly cited theory for how planets form, known as core accretion, hefty rocks orbiting a young sun violently collide over and over again, attaching to one another and growing larger over time. They eventually create objects with enough gravity to scoop up ever more material—first becoming a small planetesimal, then a larger protoplanet, then perhaps a full-blown planet.
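
    As a toy illustration of the runaway growth described above (and only that; the study discussed below ran high-resolution magnetohydrodynamic simulations, nothing like this), the sketch below integrates the standard “particle in a box” accretion rate with gravitational focusing. Every disk parameter in it is an assumed round number, not a value from the study.

import math

# Toy "particle in a box" core-accretion sketch -- illustrative only.
# Every disk parameter below is an assumed round number, not a value
# taken from the study discussed in this article.

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
RHO_SWEPT = 1e-7     # assumed density of material being swept up [kg m^-3]
V_REL = 1.0e3        # assumed relative velocity of planetesimals [m s^-1]
RHO_BODY = 3000.0    # assumed bulk density of the growing body [kg m^-3]
M_EARTH = 5.97e24    # Earth mass [kg]

mass = 1e18          # starting planetesimal mass [kg], a body tens of km across
dt = 1e9             # time step [s], roughly 30 years
t = 0.0

while mass < M_EARTH and t < 3e14:              # stop at ~1 Earth mass or ~10 Myr
    radius = (3.0 * mass / (4.0 * math.pi * RHO_BODY)) ** (1.0 / 3.0)
    v_esc_sq = 2.0 * G * mass / radius          # escape speed squared
    focusing = 1.0 + v_esc_sq / V_REL**2        # gravitational focusing factor
    dm_dt = math.pi * radius**2 * RHO_SWEPT * V_REL * focusing
    mass += dm_dt * dt
    t += dt

print(f"after {t / 3.15e7:,.0f} yr: {mass:.2e} kg ({mass / M_EARTH:.2f} Earth masses)")

    With these assumed numbers the body crosses an Earth mass within a few million simulated years; changing the swept-up density or the relative velocity shifts that timescale dramatically, which is part of why the wind-dispersal problem described below is so acute.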

    Yet gravity does not act alone. The star constantly blows out radiation and winds that push material out into space. Rocky materials are harder to expel, so they coalesce nearer the sun into rocky planets. But the radiation blasts more easily vaporized elements and compounds—various ices, hydrogen, helium and other light elements—out into the distant frontiers of the star system, where they form gas giants such as Jupiter and Saturn and ice giants like Uranus and Neptune.

    But a key problem with this idea is that for most would-be planetary systems, the winds spoil the party. The dust and gas needed to make a gas giant get blown out faster than a hefty, gassy world can form. Within just a few million years, this matter either tumbles into the host star or gets pushed out by those stellar winds into deep, inaccessible space.

    For some time now, scientists have suspected that magnetism may also play a role. What, specifically, magnetic fields do has remained unclear, partly because of the difficulty in including magnetic fields alongside gravity in the computer models used to investigate planet formation. In astronomy, said Meredith MacGregor, an astronomer at the University of Colorado-Boulder (US), there’s a common refrain: “We don’t bring up magnetic fields, because they’re difficult.”

    And yet magnetic fields are commonplace around planetesimals and protoplanets, coming either from the star itself or from the movement of starlight-washed gas and dust. In general terms, astronomers know that magnetic fields may be able to protect nascent planets from a star’s wind, or perhaps stir up the disk and move planet-making material about. “We’ve known for a long time that magnetic fields can be used as a shield and be used to disrupt things,” said Zoë Leinhardt, a planetary scientist at the University of Bristol (UK) who was not involved with the work. But details have been lacking, and the physics of magnetic fields at this scale are poorly understood.

    “It’s hard enough to model the gravity of these disks in high enough resolution and to understand what’s going on,” said Ravit Helled, a planetary scientist at the University of Zürich [Universität Zürich] (CH). Adding magnetic fields is a significantly larger challenge.

    In the new work, Helled, along with her Zurich colleague Lucio Mayer and Hongping Deng of the University of Cambridge (UK), used the Piz Daint supercomputer, the fastest in Europe, to run extremely high-resolution simulations that incorporated magnetic fields alongside gravity.

    Magnetism seems to have three key effects. First, magnetic fields shield certain clumps of gas—those that may grow up to be smaller planets—from the destructive influence of stellar radiation. In addition, those magnetic cocoons also slow down the growth of what would have become supermassive planets. The magnetic pressure pushing out into space “stops the infalling of new matter,” said Mayer, “maybe not completely, but it reduces it a lot.”
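
    A rough way to see why (an order-of-magnitude relation, not a calculation from the paper): the outward magnetic pressure of a field of strength $B$ competes with the ram pressure of gas falling in at speed $v$ with density $\rho$,

    $$P_B = \frac{B^2}{2\mu_0}, \qquad P_{\mathrm{ram}} = \rho v^{2},$$

    so infall onto a clump is throttled roughly where $P_B \gtrsim P_{\mathrm{ram}}$, that is, where $B \gtrsim v\sqrt{2\mu_0\rho}$.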

    The third apparent effect is both destructive and creative. Magnetic fields can stir gas up. In some cases, this influence disintegrates protoplanetary clumps. In others, it pushes gas closer together, which encourages clumping.

    Taken together, these influences seem to result in a larger number of smaller worlds, and fewer giants. And while these simulations only examined the formation of gassy worlds, in reality those protoplanets can accrete solid material too, perhaps becoming rocky worlds instead.

    Altogether, these simulations hint that magnetism may be partly responsible for the abundance of intermediate-mass exoplanets out there, whether they are smaller Neptunes or larger Earths.

    “I like their results; I think it shows promise,” said Leinhardt. But even though the researchers had a supercomputer on their side, the resolution of individual worlds remains fuzzy. At this stage, we can’t be totally sure what is happening with magnetic fields on a protoplanetary scale. “This is more a proof of concept, that they can do this, they can marry the gravity and the magnetic fields to do something very interesting that I haven’t seen before.”

    The researchers don’t claim that magnetism is the arbiter of the fate of all worlds. Instead, magnetism is just another ingredient in the planet-forming potpourri. In some cases, it may be important; in others, not so much. Which fits, once you consider the billions upon billions of individual planets out there in our own galaxy alone. “That’s what makes the field so exciting and lively,” said Helled: There is never, nor will there ever be, a lack of astronomical curiosities to explore and understand.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus (US). We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 9:08 am on July 17, 2021 Permalink | Reply
    Tags: "DOE Provides $28 Million To Advance Scientific Discovery Using Supercomputers", , , , Supercomputing   

    From Department of Energy (US) : “DOE Provides $28 Million To Advance Scientific Discovery Using Supercomputers” 

    From Department of Energy (US)

    7.16.21

    The U.S. Department of Energy (DOE) today announced $28 million in funding for five research projects to develop software that will fully unleash the potential of DOE supercomputers to make new leaps in fields such as quantum information science and chemical reactions for clean energy applications.


    Mar 9, 2021
    Today, we announced plans to provide $30 million for Quantum Information Science (QIS) research that helps scientists understand how nature works on an extremely small scale—100,000 times smaller than the diameter of a human hair. QIS can help our nation solve some of the most pressing and complex challenges of the 21st century, from climate change to national security. Watch this video to learn more about Quantum Information Science.
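
    For scale (taking the commonly quoted figure of roughly 100 µm for the diameter of a human hair, an assumption not stated in the announcement), that factor works out to about a nanometer:

    $$\frac{100\ \mu\mathrm{m}}{100{,}000} = \frac{10^{-4}\ \mathrm{m}}{10^{5}} = 10^{-9}\ \mathrm{m} = 1\ \mathrm{nm},$$

    roughly the width of a few atoms, which is the regime where quantum behavior dominates.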

    “DOE’s national labs are home to some of the world’s fastest supercomputers, and with more advanced software programs we can fully harness the power of these supercomputers to make breakthrough discoveries and solve the world’s hardest to crack problems,” said U.S. Secretary of Energy Jennifer M. Granholm. “These investments will help sustain U.S. leadership in science, accelerate basic research in energy, and advance solutions to the nation’s clean energy priorities.”

    Supercomputers are essential in today’s world for addressing scientific topics of national interest, including clean energy, new materials, climate change, the origins of the universe, and the nature of matter.

    These awards, made through DOE’s Scientific Discovery through Advanced Computing (SciDAC) program, bring together experts in key areas of science and energy research, applied mathematics, and computer science to address computing challenges and take maximum advantage of DOE’s supercomputers, allowing them to quicken the pace of scientific discovery.

    The five selected projects will focus on computational methods, algorithms and software to further chemical and materials research, specifically for simulating quantum phenomena and chemical reactions. Teams will partner with one or both of the SciDAC Institutes, FASTMath (US) and RAPIDS2, which are led by DOE’s Lawrence Berkeley National Laboratory (US) and DOE’s Argonne National Laboratory (US). A list of projects can be found here.

    The projects are sponsored by the Offices of Advanced Scientific Computing Research (ASCR)(US) and Basic Energy Sciences (BES)(US) within the Department’s Office of Science (US) through the SciDAC program. Projects were chosen by competitive peer review under a DOE Funding Opportunity Announcement open to universities, national laboratories, and other research organizations. The final details for each project award are subject to negotiations between DOE and the awardees.

    Funding totals approximately $28 million, including $7 million in Fiscal Year 2021 dollars, with outyear funding contingent on congressional appropriations.

    Received via email from US Department of Energy.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Department of Energy (US) is a cabinet-level department of the United States Government concerned with the United States’ policies regarding energy and safety in handling nuclear material. Its responsibilities include the nation’s nuclear weapons program; nuclear reactor production for the United States Navy; energy conservation; energy-related research; radioactive waste disposal; and domestic energy production. It also directs research in genomics; the Human Genome Project originated in a DOE initiative. DOE sponsors more research in the physical sciences than any other U.S. federal agency, the majority of which is conducted through its system of National Laboratories. The agency is led by the United States Secretary of Energy, and its headquarters are located in Southwest Washington, D.C., on Independence Avenue in the James V. Forrestal Building, named for James Forrestal, as well as in Germantown, Maryland.

    Formation and consolidation

    In 1942, during World War II, the United States started the Manhattan Project, a project to develop the atomic bomb, under the eye of the U.S. Army Corps of Engineers. After the war, in 1946, the Atomic Energy Commission (AEC) was created to control the future of the project. The Atomic Energy Act of 1946 also created the framework for the first National Laboratories. Among other nuclear projects, the AEC produced fabricated uranium fuel cores at locations such as Fernald Feed Materials Production Center in Cincinnati, Ohio. In 1974, the AEC gave way to the Nuclear Regulatory Commission, which was tasked with regulating the nuclear power industry, and to the Energy Research and Development Administration, which was tasked with managing the nuclear weapons, naval reactor, and energy development programs.

    The 1973 oil crisis called attention to the need to consolidate energy policy. On August 4, 1977, President Jimmy Carter signed into law The Department of Energy Organization Act of 1977 (Pub.L. 95–91, 91 Stat. 565, enacted August 4, 1977), which created the Department of Energy (US). The new agency, which began operations on October 1, 1977, consolidated the Federal Energy Administration; the Energy Research and Development Administration; the Federal Power Commission; and programs of various other agencies. Former Secretary of Defense James Schlesinger, who served under Presidents Nixon and Ford during the Vietnam War, was appointed as the first secretary.

    President Carter created the Department of Energy to promote energy conservation and to develop alternative sources of energy, with the aim of reducing U.S. dependence on foreign oil and the use of fossil fuels. With the nation’s energy future uncertain, Carter moved quickly to bring the department into operation during the first year of his presidency, at a time when the oil crisis was causing shortages and inflation. After the Three Mile Island accident, Carter was able to intervene with the department’s help, making changes within the Nuclear Regulatory Commission to fix its management and procedures; this was possible because nuclear energy and weapons are the responsibility of the Department of Energy.

    Recent

    On March 28, 2017, a supervisor in the Office of International Climate and Clean Energy asked staff to avoid the phrases “climate change,” “emissions reduction,” or “Paris Agreement” in written memos, briefings or other written communication. A DOE spokesperson denied that the phrases had been banned.

    In a May 2019 press release concerning natural gas exports from a Texas facility, the DOE used the term ‘freedom gas’ to refer to natural gas. The phrase originated from a speech made by Secretary Rick Perry in Brussels earlier that month. Washington Governor Jay Inslee decried the term as “a joke”.

    Facilities

    The Department of Energy operates a system of national laboratories and technical facilities for research and development, as follows:

    Ames Laboratory
    Argonne National Laboratory
    Brookhaven National Laboratory
    Fermi National Accelerator Laboratory
    Idaho National Laboratory
    Lawrence Berkeley National Laboratory
    Lawrence Livermore National Laboratory
    Los Alamos National Laboratory
    National Energy Technology Laboratory
    National Renewable Energy Laboratory
    Oak Ridge National Laboratory
    Pacific Northwest National Laboratory
    Princeton Plasma Physics Laboratory
    Sandia National Laboratories
    Savannah River National Laboratory
    SLAC National Accelerator Laboratory
    Thomas Jefferson National Accelerator Facility

    Other major DOE facilities include:
    Albany Research Center
    Bannister Federal Complex
    Bettis Atomic Power Laboratory – focuses on the design and development of nuclear power for the U.S. Navy
    Kansas City Plant
    Knolls Atomic Power Laboratory – operates for Naval Reactors Program Research under the DOE (not a National Laboratory)
    National Petroleum Technology Office
    Nevada Test Site
    New Brunswick Laboratory
    Office of Fossil Energy
    Office of River Protection
    Pantex
    Radiological and Environmental Sciences Laboratory
    Y-12 National Security Complex
    Yucca Mountain nuclear waste repository
    Other:

    Pahute Mesa Airstrip – Nye County, Nevada, in supporting Nevada National Security Site

     