Tagged: Supercomputing

  • richardmitnick 7:58 pm on June 30, 2022 Permalink | Reply
    Tags: "ExaSMR Models Small Modular Reactors Throughout Their Operational Lifetime", , , Current advanced reactor design approaches leverage decades of experimental and operational experience with the US nuclear fleet., , Exascale supercomputers give us a tool to model SMRs with higher resolution than possible on smaller supercomputers., ExaSMR integrates the most reliable and high-confidence numerical methods for modeling operational reactors., Investing in computer design capability means we can better evaluate and refine the designs to come up with the most efficacious solutions., Many different designs are being studied for next-generation reactors., Supercomputing, The DOE’s Exascale Computing Project, The ExaSMR team has adapted their algorithms and code to run on GPUs to realize an orders-of-magnitude increase in performance., The proposed SMR designs are generally simpler and require no human intervention or external power or the application of external force to shut down., We are already seeing significant improvements now on pre-exascale systems.   

    From The DOE’s Exascale Computing Project: “ExaSMR Models Small Modular Reactors Throughout Their Operational Lifetime” 

    From The DOE’s Exascale Computing Project

    June 8, 2022 [Just now in social media.]
    Rob Farber

    Technical Introduction

    Small modular reactors (SMRs) are advanced nuclear reactors that can be incrementally added to a power grid to provide carbon-free energy generation to match increasing energy demand.[1],[2] Their small size and modular design make them a more affordable option because they can be factory assembled and transported to an installation site as prefabricated units.

    Compared to existing nuclear reactors, proposed SMR designs are generally simpler and require no human intervention or external power or the application of external force to shut down. SMRs are designed to rely on passive systems that utilize physical phenomena, such as natural circulation, convection, gravity, and self-pressurization to eliminate or significantly lower the potential for unsafe releases of radioactivity in case of an accident.[3] Computer models are used to ensure that the SMR passive systems can safely operate the reactor regardless of the reactor’s operational mode—be it at idle, during startup, or running at full power.

    Current advanced reactor design approaches leverage decades of experimental and operational experience with the US nuclear fleet and are informed by calibrated numerical models of reactor phenomena. The exascale SMR (ExaSMR) project generates datasets of virtual reactor design simulations based on high-fidelity, coupled physics models for reactor phenomena that are truly predictive and reflect as much ground truth as experimental and operational reactor data.[4]

    An Integrated Toolkit

The Exascale Computing Project’s (ECP’s) ExaSMR team is working to build a highly accurate, exascale-capable integrated toolkit that couples high-fidelity neutronics and computational fluid dynamics (CFD) codes to model the operational behavior of SMRs over the complete reactor lifetime. This includes accurately modeling the full-core multiphase thermal hydraulics and the fuel depletion. Even with exascale performance, reduced-order mesh numerical methodologies are required to make these simulations tractable at sufficient accuracy and with reasonable runtimes.

    According to Steven Hamilton (Figure 2), a senior researcher at The DOE’s Oak Ridge National Laboratory (ORNL) and PI of the project, ExaSMR integrates the most reliable and high-confidence numerical methods for modeling operational reactors.

    Specifically, ExaSMR is designed to leverage exascale systems to accurately and efficiently model the reactor’s neutron state with Monte Carlo (MC) neutronics and the reactor’s thermal fluid heat transfer efficiency with high-resolution CFD.[5] The ExaSMR team’s goal is to achieve very high spatial accuracy using models that contain 40 million spatial elements and exhibit 22 billion degrees of freedom.[6]
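As a back-of-envelope check (my own arithmetic, not a figure published by the team), dividing the quoted degrees of freedom by the element count gives a few hundred unknowns per element, consistent with the high-order spectral-element discretizations used by CFD codes such as NekRS; the polynomial order in the sketch below is an assumption.

```python
# Back-of-envelope check of the quoted ExaSMR model size (illustrative only).
elements = 40e6   # spatial elements quoted in the article
dofs = 22e9       # degrees of freedom quoted in the article

print(f"Unknowns per element: {dofs / elements:.0f}")  # ~550

# A 7th-order spectral element carries (7 + 1)**3 = 512 grid points, so a few
# hundred unknowns per element is the right range for a high-order method.
# The specific polynomial order here is an assumption, not a published figure.
print(f"Grid points in a 7th-order hex element: {(7 + 1) ** 3}")
```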

    Hamilton notes that high-resolution models are essential because they are used to reflect the presence of spacer grids and the complex mixing promoted by mixing vanes (or the equivalent) in the reactor. The complex fluid flows around these regions in the reactor (Figure 1) require high spatial resolution so engineers can understand the neutron distribution and the reactor’s thermal heat transfer efficiency. Of particular interest is the behavior of the reactor during low-power conditions as well as the initiation of coolant flow circulation through the SMR reactor core and its primary heat exchanger during startup.

    1
    Figure 1. Complex fluid flows and momentum cause swirling.

To make the simulations run in reasonable times even when using an exascale supercomputer, the results of the high-accuracy model are adapted so they can be utilized in a reduced-order methodology. This methodology is based on momentum sources that can mimic the mixing caused by the vanes in the reactor.[7] Hamilton notes, “Essentially, we use the full core simulation on a small model that is replicated over the reactor by mapping to a coarser mesh. This coarser mesh eliminates the time-consuming complexity of the mixing vane calculations while still providing an accurate-enough representation for the overall model.” The data from the resulting virtual reactor simulations are used to fill in critical gaps in experimental and operational reactor data. These results give engineers the ability to accelerate the currently cumbersome advanced reactor concept-to-design-to-build cycle that has constrained the nuclear energy industry for decades. ExaSMR can also provide an avenue for validating existing industry design and regulatory tools.[8]
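A minimal sketch of the kind of fine-to-coarse mapping Hamilton describes is shown below. The structured 2D arrays and the averaging factor are illustrative assumptions only; ExaSMR’s actual meshes are unstructured, three-dimensional spectral-element meshes.

```python
import numpy as np

def restrict_to_coarse(fine_field: np.ndarray, factor: int) -> np.ndarray:
    """Block-average a structured fine-mesh field onto a coarser mesh.

    Conceptual stand-in only: ExaSMR's real meshes are unstructured and 3D.
    """
    nx, ny = fine_field.shape
    assert nx % factor == 0 and ny % factor == 0, "mesh sizes must divide evenly"
    return fine_field.reshape(nx // factor, factor, ny // factor, factor).mean(axis=(1, 3))

# Example: a hypothetical 64x64 fine field mapped onto a 16x16 coarse mesh.
fine = np.random.rand(64, 64)
coarse = restrict_to_coarse(fine, factor=4)
print(coarse.shape)  # (16, 16)
```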

    2
    Figure 2. Steven Hamilton, PI of the ExaSMR project and Senior researcher at ORNL.

    “The importance,” Hamilton states, “is that many different designs are being studied for next-generation reactors. Investing in computer design capability means we can better evaluate and refine the designs to come up with the most efficacious solutions. Exascale supercomputers give us a tool to model SMRs with higher resolution than possible on smaller supercomputers. These resolution improvements make our simulations more predictive of the phenomena we are modeling. We are already seeing significant improvements now on pre-exascale systems and expect a similar jump in performance once we are running on the actual exascale hardware.” He concludes by noting, “Many scientists believe that nuclear is the only carbon-free energy source that is suitable for bulk deployment to meet primary energy needs with a climate-friendly technology.”

    The First Full-Core, Pin-Resolved CFD Simulations

    To achieve their goal of generating high-fidelity, coupled-physics models for truly predictive reactor models, the team must overcome limitations in computing power that have constrained past efforts to modeling only specific regions of a reactor core.[9] To this end, the ExaSMR team has adapted their algorithms and code to run on GPUs to realize an orders-of-magnitude increase in performance when running a challenge problem on the pre-exascale Summit supercomputer.

Hamilton explains, “We were able to perform the simulations between 170× and 200× faster on the Summit supercomputer compared to the previous Titan ORNL supercomputer. Much of this is owed to ECP’s investment in the ExaSMR project and the Center for Efficient Exascale Discretizations (CEED) along with larger, higher performance GPU hardware. The CEED project has been instrumental for improving the algorithms we used in this simulation.”

    In demonstrating this new high watermark in performance, the team also performed (to their knowledge) the first ever full-core, pin-resolved CFD simulation that modeled coolant flow around the fuel pins in a light water reactor core. These fluid flows play a critical role in determining the reactor’s safety and performance. Hamilton notes, “This full core spacer grids and the mixing vanes (SGMV) simulation provides a high degree of spatial resolution that allows simultaneous capture of local and global effects. Capturing the effect of mixing vanes on flow and heat transfer is vital to predictive simulations.”

    The complexity of these flows can be seen in streamlines in Figure 1. Note the transition from parallel to rotating flows caused by simulation of the CFD momentum sources.

    A Two-Step Approach to Large-Scale Simulations

A two-step approach was taken to implement a GPU-oriented CFD code using Reynolds-Averaged Navier-Stokes (RANS) equations to model the behavior in this SGMV challenge problem (a minimal sketch of the idea follows these steps):

First, small simulations are performed using the more accurate yet computationally expensive large eddy simulation (LES) code. Hamilton notes these are comparatively small and do not need to be performed on the supercomputer.
Second, the accurate LES results are imposed on a coarser mesh, which is used for modeling the turbulent flow at scale on the supercomputer’s GPUs. The RANS approach is needed because the Reynolds number in the core is expected to be high.[10]
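Here is a minimal sketch of this two-step idea. The synthetic profile data and the simple exponential source model are both assumptions for illustration, not the calibration actually published in the cited paper.

```python
import numpy as np

# Step 1 (offline, small domain): take a time-averaged swirl profile from an
# LES of a single spacer-grid span. Synthetic placeholder data is used here.
z = np.linspace(0.0, 1.0, 50)        # axial position downstream of the grid
les_swirl = 0.3 * np.exp(-4.0 * z)   # hypothetical decaying swirl profile

# Step 2 (at scale): fit the amplitude A of a simple model source A*exp(-k*z)
# so that it best matches the LES data in a least-squares sense. The fitted
# source is then applied on the coarse RANS mesh in every assembly span.
k = 4.0                              # assumed decay rate
basis = np.exp(-k * z)
amplitude = float(basis @ les_swirl / (basis @ basis))

print(f"Calibrated source amplitude: {amplitude:.3f}")  # ~0.300 by construction
```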

    Jun Fang, an author of the study in which these results were published, reflects on the importance of these pre-exascale results by observing, “As we advance toward exascale computing, we will see more opportunities to reveal large-scale dynamics of these complex structures in regimes that were previously inaccessible, thereby giving us real information that can reshape how we approach the challenges in reactor designs.”[11]

The basis for this optimism is reflected in the strong scaling behavior of NekRS, a GPU-enabled branch of the Nek5000 CFD code contributed by the ExaSMR team.[12] NekRS utilizes optimized finite-element flow solver kernels from the libParanumal library developed by CEED. The ExaSMR code is portable owing in part to the team’s use of the ECP-supported, exascale-capable OCCA performance portability library. The OCCA library provides programmers with the ability to write portable kernels that can run on a variety of hardware platforms or be translated to backend-specific code such as OpenCL and CUDA.
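Figure 3 reports NekRS strong scaling on Summit. Strong-scaling efficiency is conventionally computed as in the sketch below; the node counts and timings here are made-up placeholders, not the published measurements.

```python
def strong_scaling_efficiency(nodes, times, base=0):
    """Parallel efficiency of a fixed-size problem relative to the smallest run."""
    n0, t0 = nodes[base], times[base]
    return [(t0 / t) / (n / n0) for n, t in zip(nodes, times)]

# Hypothetical wall-clock times per step (seconds) -- NOT the published data.
nodes = [87, 174, 348, 696]
times = [4.0, 2.1, 1.15, 0.66]

for n, eff in zip(nodes, strong_scaling_efficiency(nodes, times)):
    print(f"{n:4d} nodes: efficiency = {eff:.2f}")
```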

    3
    Figure 3. NekRS strong scaling on Summit.

    Development of Novel Momentum Sources to Model Auxiliary Structures in the Core

    Even with the considerable computational capability of exascale hardware, the team was forced to develop a reduced-order methodology that mimics the mixing of the vanes to make the full core simulation tractable. “This methodology,” Hamilton notes, “allows the impact of mixing vanes on flow to be captured without requiring an explicit model of vanes. The objective is to model the fluid flow without the need of an expensive body-fitted mesh.” Instead, as noted in the paper, “The effects of spacer grid, mixing vanes, springs, dimples, and guidance/maintaining vanes are taken into account in the form of momentum sources and pressure drop.”[13]
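To illustrate what a momentum source means in practice, the toy sketch below adds a localized body-force term to a one-dimensional discretized momentum equation. The grid, the Gaussian source shape, and all coefficients are assumptions for illustration; they are not the calibrated MSM sources from the paper.

```python
import numpy as np

# Toy 1D axial-momentum update with an added body force standing in for the
# momentum source method: du/dt = -(1/rho) dp/dx + nu d2u/dx2 + S(x).
# Grid, source shape, and coefficients are all illustrative assumptions.
nx, dx, dt = 200, 0.01, 1e-4
nu, rho, dpdx = 1e-3, 1.0, -1.0
x = np.arange(nx) * dx
u = np.zeros(nx)

# Localized source mimicking the swirl/pressure-drop effect of a spacer grid
# near x = 1.0 (hypothetical Gaussian shape and amplitude).
source = 5.0 * np.exp(-((x - 1.0) / 0.05) ** 2)

for _ in range(2000):
    d2u = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2  # periodic Laplacian
    u += dt * (-dpdx / rho + nu * d2u + source)

print(f"Peak velocity near the source region: {u.max():.3f}")
```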

    Validation of the Challenge Results

To ensure adequate accuracy of the reduced-order methodology, the momentum sources are carefully calibrated by the team against detailed LES of spacer grids performed with Nek5000.[14] Nek5000 was used because it is a trusted reference code in the literature.

    “The combination of RANS (full core) and LES,” the team wrote in their paper, “forms a flexible strategy that balances both efficiency and the accuracy.” Furthermore, “Continuous validation and verification studies have been conducted over years for Nek5000 for various geometries of interest to nuclear engineers, including the rod bundles with spacer grid and mixing vanes.”[15]

    Expanding on the text in the paper, Hamilton points out that “the momentum source method (MSM) was implemented in NekRS using the same approach developed in Nek5000, thereby leveraging as much as possible the same routines.”

    Validation of the simulation results includes the demonstration of the momentum sources shown in Figure 1 as well as validation of the pressure drop. Both are discussed in detail in the team’s peer-reviewed paper, which includes a numerical quantification of results by various figures of merit. Based on the success reflected in the validation metrics, the team concludes that they “clearly demonstrated that the RANS momentum sources developed can successfully reproduce the time-averaged macroscale flow physics revealed by the high-fidelity LES reference.”[16]
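One generic example of such a figure of merit (not necessarily the specific metrics used in the paper) is a relative L2 error between reduced-order and reference profiles, sketched here with placeholder data.

```python
import numpy as np

def relative_l2_error(reduced_order, reference):
    """Relative L2 difference between a reduced-order result and a reference.

    A generic figure of merit; the paper's specific metrics may differ.
    """
    return float(np.linalg.norm(reduced_order - reference) / np.linalg.norm(reference))

# Hypothetical time-averaged velocity profiles (placeholders, not paper data).
z = np.linspace(0.0, 1.0, 100)
les_profile = 1.0 + 0.20 * np.sin(2.0 * np.pi * z)
rans_profile = 1.0 + 0.19 * np.sin(2.0 * np.pi * z)

print(f"Relative L2 error: {relative_l2_error(rans_profile, les_profile):.3%}")
```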

    The Groundwork has been Laid to Expand the Computational Domain

Improved software, GPU acceleration, and reduced-order mesh numerical methodologies have laid the groundwork for further development of the integrated ExaSMR toolkit. In combination with operational exascale hardware, the ExaSMR team can expand their capabilities to simulate and study the coupled neutronics and thermal–hydraulics behavior of these small reactors.

The implications are significant because the passive design and ease of installation mean that SMRs offer a way for the United States and the world to meet essential carbon-neutral climate goals while also addressing the need to augment existing electricity generation capacity.

    This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the US Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.

    [1] https://www.iaea.org/newscenter/news/what-are-small-modular-reactors-smrs

    [2] https://www.energy.gov/ne/articles/4-key-benefits-advanced-small-modular-reactors

    [3] https://www.iaea.org/newscenter/news/what-are-small-modular-reactors-smrs

    [4] https://www.ornl.gov/project/exasmr-coupled-monte-carlo-neutronics-and-fluid-flow-simulation-small-modular-reactors

    [5] https://www.ornl.gov/project/exasmr-coupled-monte-carlo-neutronics-and-fluid-flow-simulation-small-modular-reactors

    [6] https://www.exascaleproject.org/research-project/exasmr/

    [7] https://www.sciencedirect.com/science/article/abs/pii/S0029549321000959?via%3Dihub

    [8] https://www.exascaleproject.org/research-project/exasmr/

    [9] https://www.ans.org/news/article-2968/argonneled-team-models-fluid-dynamics-of-entire-smr-core/

    [10] https://www.sciencedirect.com/science/article/abs/pii/S0029549321000959?via%3Dihub

    [11] https://www.ans.org/news/article-2968/argonneled-team-models-fluid-dynamics-of-entire-smr-core/

    [12] https://www.exascaleproject.org/research-project/exasmr/

    [13] https://www.sciencedirect.com/science/article/abs/pii/S0029549321000959?via%3Dihub

    [14] https://www.sciencedirect.com/science/article/abs/pii/S0029549321000959?via%3Dihub

    [15] https://www.sciencedirect.com/science/article/abs/pii/S0029549321000959?via%3Dihub

    [16] https://www.osti.gov/biblio/1837194-feasibility-full-core-pin-resolved-cfd-simulations-small-modular-reactor-momentum-sources

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About The DOE’s Exascale Computing Project
The ECP is a collaborative effort of two DOE organizations – The DOE’s Office of Science and The DOE’s National Nuclear Security Administration. As part of the National Strategic Computing Initiative, ECP was established to accelerate delivery of a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development to meet the scientific and national security mission needs of DOE in the early-2020s time frame.

    About the Office of Science

    The DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit https://science.energy.gov/.

    About The NNSA

    Established by Congress in 2000, NNSA is a semi-autonomous agency within the DOE responsible for enhancing national security through the military application of nuclear science. NNSA maintains and enhances the safety, security, and effectiveness of the U.S. nuclear weapons stockpile without nuclear explosive testing; works to reduce the global danger from weapons of mass destruction; provides the U.S. Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the United States and abroad. https://nnsa.energy.gov

    The Goal of ECP’s Application Development focus area is to deliver a broad array of comprehensive science-based computational applications that effectively utilize exascale HPC technology to provide breakthrough simulation and data analytic solutions for scientific discovery, energy assurance, economic competitiveness, health enhancement, and national security.

Awareness of ECP and its mission is growing and resonating—and for good reason. ECP is an incredible effort focused on advancing areas of key importance to our country: economic competitiveness, breakthrough science and technology, and national security. And, fortunately, ECP has a foundation that bodes extremely well for the prospects of its success, with the demonstrably strong commitment of the US Department of Energy (DOE) and the talent of some of America’s best and brightest researchers.

ECP is composed of about 100 small teams of domain, computer, and computational scientists, and mathematicians from DOE labs, universities, and industry. We are tasked with building applications that will execute well on exascale systems, enabled by a robust exascale software stack, and supporting necessary vendor R&D to ensure the compute nodes and hardware infrastructure are adept and able to do the science that needs to be done with the first exascale platforms.

     
  • richardmitnick 8:54 am on June 18, 2022 Permalink | Reply
    Tags: "Harnessing a supercomputer for ATLAS", , , , , , IZUM ATOS BullSequana Vega supercomputer in Slovenia, , , , Supercomputing   

    From CERN (CH) ATLAS: “Harnessing a supercomputer for ATLAS” 

    European Organization for Nuclear Research [Organisation européenne pour la recherche nucléaire] [Europäische Organisation für Kernforschung](CH) [CERN]

Iconic view of the European Organization for Nuclear Research [Organisation européenne pour la recherche nucléaire] (CH) CERN ATLAS detector.

CERN ATLAS, another view. Image: Claudia Marcelloni, ATLAS/CERN.

    From CERN (CH) ATLAS

    16 June, 2022

    IZUM ATOS BullSequana Vega supercomputer in Slovenia

    1
    Andrej Filipčič (left) and Jan Jona Javoršek (right) from the Jožef Stefan Institute in Ljubljana, Slovenia, next to Vega at the Institute of Information Science in Maribor. (Image: CERN)

    The ATLAS collaboration uses a global network of data centres – the Worldwide LHC Computing Grid – to perform data processing and analysis. These data centres are generally built from commodity hardware to run the whole spectrum of ATLAS data crunching, from reducing the raw data coming out of the detector down to a manageable size to producing plots for publication.

    While the Grid’s distributed approach has proven very successful, the computing needs of the LHC experiments keep expanding, so the ATLAS collaboration has been exploring the potential of integrating high-performance computing (HPC) centres in the Grid’s distributed environment. HPC harnesses the power of purpose-built supercomputers constructed from specialised hardware, and is used widely in other scientific disciplines.

However, HPC poses significant challenges for ATLAS data processing. Access to supercomputer installations is typically subject to more restrictions than at Grid sites, and their CPU architectures may not be suitable for ATLAS software. Their scheduling mechanisms favour very large jobs using many thousands of nodes, which is atypical of an ATLAS workflow. Finally, the supercomputer installation may be geographically distant from storage hosting ATLAS data, which may pose network problems.

    Despite these challenges, ATLAS collaborators have been able to successfully exploit HPC over the last few years, including several near the top of the famous Top500 list of supercomputers. Technological barriers were overcome by isolating the main computation from the parts requiring network access, such as data transfer. Software issues were resolved by using container technology, which allows ATLAS software to run on any operating system, and the development of “edge services”, which enables computations to run in an offline mode without the need to contact external services.

The most recent HPC centre to process ATLAS data is Vega – the first new petascale EuroHPC JU machine, hosted in the Institute of Information Science in Maribor, Slovenia. Vega started operation in April 2021 and consists of 960 nodes, each of which contains 128 physical CPU cores, for a total of 122 880 physical or 245 760 logical cores. To put this in perspective, the total number of cores provided to ATLAS from Grid resources is around 300 000.
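The quoted core counts can be checked directly; the only assumption below is two hardware threads per physical core, which is what the physical-to-logical ratio implies.

```python
# Core counts quoted for Vega (arithmetic check only).
nodes = 960
cores_per_node = 128                    # physical cores per node
physical_cores = nodes * cores_per_node
logical_cores = 2 * physical_cores      # two hardware threads per physical core

print(physical_cores)  # 122880
print(logical_cores)   # 245760

# For comparison, the article puts ATLAS's total Grid allocation at ~300,000
# cores, so Vega alone is of the same order as the entire Grid contribution.
```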

    Due to close connections with the community of ATLAS physicists in Slovenia, some of whom were heavily involved in the design and commissioning of Vega, the ATLAS collaboration was one of the first users to be granted official time allocations. This was to the benefit of both the ATLAS collaboration, which could take advantage of a significant extra resource, and Vega, which was supplied with a steady, well-understood stream of jobs to assist in the commissioning phase.

    Vega was almost continually occupied with ATLAS jobs from the moment it was turned on, and the periods where fewer jobs were running were due to either other users on Vega or a lack of ATLAS jobs to submit. This huge additional computing power – essentially doubling ATLAS’s available resources – was invaluable, allowing several large-scale data-processing campaigns to run in parallel. As such, the ATLAS collaboration heads towards the restart of the LHC with a fully refreshed Run 2 data set and corresponding simulations, many of which have been significantly extended in statistics thanks to the additional resources provided by Vega.

    It is a testament to the robustness of ATLAS’s distributed computing systems that they could be scaled up to a single site equivalent in size to the entire Grid. While Vega will eventually be given over to other science projects, some fraction will continue to be dedicated to ATLAS. What’s more, the successful experience shows that ATLAS members (and their data) are ready to jump on the next available HPC centre and fully exploit its potential.

See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 5:41 pm on June 17, 2022 Permalink | Reply
    Tags: "NASA Telescope to Help Untangle Galaxy Growth and Dark Matter Makeup", , , , , , Nancy Grace Roman Infrared Space Telescope, , Supercomputing,   

    From The NASA Goddard Space Flight Center: “NASA Telescope to Help Untangle Galaxy Growth and Dark Matter Makeup” 

    NASA Goddard Banner

    From The NASA Goddard Space Flight Center

    Jun 14, 2022

    Ashley Balzer
    NASA’s Goddard Space Flight Center, Greenbelt, Md.

    Media contact:
    Claire Andreoli
    NASA’s Goddard Space Flight Center, Greenbelt, Md.
    301-286-1940

    NASA’s Nancy Grace Roman Space Telescope will study wispy streams of stars that extend far beyond the apparent edges of many galaxies. Missions like the Hubble and James Webb space telescopes would have to patch together hundreds of small images to see these structures around nearby galaxies in full. Roman will do so in a single snapshot. Astronomers will use these observations to explore how galaxies grow and the nature of Dark Matter*.

    1
This animation shows simulated stellar streams amid a realistic background of stars in the Andromeda galaxy (M31). Current observatories can’t see faint individual stars in and around galaxies, so we can only see the biggest stellar streams, and only when selecting the stellar stream-like stars in the image. Roman will be able to image individual stars in nearby galaxies, and with similar processing, stellar streams will appear even more prominently.
    Credits: NASA’s Goddard Space Flight Center, based on data from Pearson et al. (2019)

    Stellar streams look like ethereal strands of hair extending outward from some galaxies, peacefully drifting through space as part of the halo – a spherical region surrounding a galaxy. But these stellar flyaways are signs of an ancient cosmic-scale drama that serve as fossil records of a galaxy’s past. Studying them transforms astronomers into galactic archaeologists.

    “Halos are mostly made from stars that were stripped away from other galaxies,” said Tjitske Starkenburg, a postdoctoral fellow at Northwestern University in Evanston, Illinois, who examined Roman’s potential in this area. “Roman’s wide, deep images will be sharp enough that we can resolve individual stars in other galaxies’ halos, making it possible to study stellar streams in a large number of galaxies for the first time.”

    The team, led by Starkenburg, will share their results at the American Astronomical Society’s 240th meeting in Pasadena, California, today.

    Galactic Cannibalism, Stolen Stars

    Simulations support the theory that galaxies grow in part by gobbling up smaller groups of stars.

    Supercomputing Reveals “Fossil Record” of Galaxy Collisions and Mergers

    3
    4
    This pair of images show two simulated galaxies in the early stages of a collision that will ultimately throw many stars from both galaxies into wide orbits, creating a faint stellar halo around the larger galaxy. The bottom image features stars and interstellar dust visible to the human eye; gases are largely invisible. The top image features interstellar low-density gases in blue to high-density gases in orange.
    Credits: Space Telescope Science Institute and Johns Hopkins University/Molly Peeples; NASA Ames/Chris Henze.

    How do galaxies evolve into the starry spirals famously seen by NASA’s Hubble Space Telescope and others?

    Researchers sought answers about the faint stars and gases surrounding galaxies like our own. Using the Pleiades supercomputer at the NASA Advanced Supercomputing facility at the agency’s Ames Research Center in California’s Silicon Valley and data from Hubble, they simulated a Milky Way-like galaxy in the early stages of a collision with another smaller galaxy.


    The visualization revealed a detailed “fossil record” of information about the simulated galaxy’s history. Basically, when we gaze at the halo of stars and luminous clouds of interstellar dust surrounding a galaxy’s milky multitude, we’re seeing remnants of smaller, neighboring galaxies that were shredded by galactic mergers.

    These cosmic crashes throw many faint stars into enormous wide orbits, ultimately landing them out in the galaxies’ far-flung fringes. Researchers also found the outlier stars sometimes form streams that wrap around a larger galaxy and last billions of years.

    The data from these simulations is now helping scientists make predictions about how to detect and trace the histories of stellar streams to figure out what they can tell us about the galaxies that made them, including the streams around our own Milky Way. For example, NASA’s upcoming James Webb and Nancy Grace Roman space telescopes are expected to give us a detailed look at the stellar halos of dozens of nearby galaxies for the first time.


    A dwarf galaxy captured into orbit by a larger one becomes distorted by gravity. Its stars drizzle out, tracing arcs and loops around the larger galaxy until they ultimately become its newest members.

    “As individual stars leak out of the dwarf galaxy and fall into the more massive one, they form long, thin streams that remain intact for billions of years,” said Sarah Pearson, a Hubble postdoctoral fellow at New York University in New York City and the lead author of a separate study about the mission’s projected observations in this area. “So stellar streams hold secrets from the past and can illuminate billions of years of evolution.”

    Astronomers have caught this cannibalistic process in the act using telescopes like ESA’s (European Space Agency’s) Gaia satellite, which is fine-tuned to measure the positions and motions of stars in our Milky Way galaxy.

    Roman will extend these observations by making similar measurements of stars in both the Milky Way and other galaxies.

    The Milky Way is home to at least 70 stellar streams, meaning it has likely eaten at least 70 dwarf galaxies or globular star clusters – groups of hundreds of thousands of gravitationally bound stars.

    Roman’s Milky Way images could allow astronomers to string together snapshots in time to show stars’ movement. That will help us learn about what Dark Matter – invisible matter that we can only detect via its gravitational effects on visible objects – is made of.

    One theory suggests Dark Matter is “cold,” or made up of heavy, sluggish particles. If so, it should clump together within galaxy halos, which would disturb stellar streams in ways Roman could see.

    By either detecting or ruling out these distortions, Roman could narrow down the candidates for what dark matter could be made of.

    Astronomers are also looking forward to studying stellar streams in several of the Milky Way’s neighboring galaxies. They aren’t well studied in other galaxies because they’re so faint and far away. They’re also so vast that they can wrap around an entire galaxy. It takes an unrivaled panoramic view like Roman’s to capture images that are both large and detailed enough to see them.

    3
    This series of images shows how astronomers find stellar streams by reversing the light and dark, similar to negative images. Color images of each of the nearby galaxies featured are included for context. Galaxies are surrounded by enormous halos of hot gas sprinkled with sporadic stars, seen as the shadowy regions that encase each galaxy here. Roman could improve on these observations by resolving individual stars to understand each stream’s stellar populations and see stellar streams of various sizes in even more galaxies. Credits: Carlin et al. (2016), based on images from Martínez-Delgado et al. (2008, 2010).

    Especially elusive stellar streams that formed when the Milky Way siphoned stars from globular star clusters have been detected before, but they’ve never been found in other galaxies. They’re fainter because they contain fewer stars, which makes them much more difficult to spot in other, more distant galaxies.

    Roman may detect them in several of our neighboring galaxies for the first time ever. The mission’s wide, sharp, deep vision should even reveal individual stars in these enormous, dim structures. In a previous study, Pearson led the development of an algorithm to systematically search for stellar streams originating from globular clusters in neighboring galaxies.

    Starkenburg’s new study adds to the picture by predicting that Roman should be able to detect dozens of streams in other galaxies that originated from dwarf galaxies, offering unprecedented insight into the way galaxies grow.

    “It’s exciting to learn more about our Milky Way, but if we truly want to understand galaxy formation and dark matter we need a larger sample size,” Starkenburg said. “Studying stellar streams in other galaxies with Roman will help us see the bigger picture.”

    The Nancy Grace Roman Space Telescope is managed at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, with participation by NASA’s Jet Propulsion Laboratory and Caltech/IPAC in Southern California, the Space Telescope Science Institute in Baltimore, and a science team comprising scientists from various research institutions. The primary industrial partners are Ball Aerospace and Technologies Corporation in Boulder, Colorado; L3Harris Technologies in Melbourne, Florida; and Teledyne Scientific & Imaging in Thousand Oaks, California.

    __________________________________
    *Dark Matter Background
Fritz Zwicky discovered Dark Matter in the 1930s when observing the movement of the Coma Cluster. Vera Rubin, a woman in STEM denied the Nobel Prize, did most of the work on Dark Matter some 30 years later.

    Fritz Zwicky.
    Coma cluster via NASA/ESA Hubble, the original example of Dark Matter discovered during observations by Fritz Zwicky and confirmed 30 years later by Vera Rubin.
In modern times, it was astronomer Fritz Zwicky, in the 1930s, who made the first observations of what we now call dark matter. His 1933 observations of the Coma Cluster of galaxies seemed to indicate it had a mass 500 times greater than that previously calculated by Edwin Hubble. Furthermore, this extra mass seemed to be completely invisible. Although Zwicky’s observations were initially met with much skepticism, they were later confirmed by other groups of astronomers.

Thirty years later, astronomer Vera Rubin provided a huge piece of evidence for the existence of dark matter. She discovered that stars in the outskirts of galaxies orbit at roughly the same speed as those near the centers, whereas they should move more slowly, since most of the visible mass is concentrated toward the center. Think of the Solar System: the outer planets orbit the Sun more slowly than the inner ones, and logic dictates we should see the same fall-off in galaxies. But we do not. The only way to explain this is if the visible galaxy sits at the center of some much larger structure – like the label at the middle of a vinyl LP – whose unseen mass keeps the rotation speed roughly constant from center to edge.
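The rotation-curve argument can be made concrete with a toy calculation: a central point-like mass gives a circular velocity falling as 1/sqrt(r), whereas a flat curve implies enclosed mass growing roughly linearly with radius. The masses and velocities below are round illustrative numbers, not measurements.

```python
import numpy as np

G = 6.674e-11                    # gravitational constant, SI units
M_visible = 1.0e41               # hypothetical central luminous mass, kg

r = np.linspace(5e19, 5e20, 5)   # radii, m (roughly a few kpc to tens of kpc)

# Keplerian expectation if essentially all mass sat at the centre:
v_keplerian = np.sqrt(G * M_visible / r)          # falls off as 1/sqrt(r)

# A flat rotation curve (v roughly constant) instead implies enclosed mass
# M(r) = v^2 r / G growing linearly with radius -- the dark matter inference.
v_flat = 2.0e5                                    # ~200 km/s, a typical value
M_enclosed = v_flat**2 * r / G

for ri, vk, Mr in zip(r, v_keplerian, M_enclosed):
    print(f"r = {ri:.1e} m: v_Kepler = {vk/1e3:5.0f} km/s, "
          f"M(r) for flat curve = {Mr:.2e} kg")
```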

    Vera Rubin, following Zwicky, postulated that the missing structure in galaxies is dark matter. Her ideas were met with much resistance from the astronomical community, but her observations have been confirmed and are seen today as pivotal proof of the existence of dark matter.
    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science).

    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970.

Vera Rubin measuring spectra, worked on Dark Matter (Emilio Segre Visual Archives, AIP, SPL).
    Dark Matter Research

    Super Cryogenic Dark Matter Search from DOE’s SLAC National Accelerator Laboratory (US) at Stanford University (US) at SNOLAB (Vale Inco Mine, Sudbury, Canada).

    LBNL LZ Dark Matter Experiment (US) xenon detector at Sanford Underground Research Facility(US) Credit: Matt Kapust.

Lambda Cold Dark Matter accelerated expansion of the universe. Credit: Alex Mittelmann.

    DAMA at Gran Sasso uses sodium iodide housed in copper to hunt for dark matter LNGS-INFN.

    Yale HAYSTAC axion dark matter experiment at Yale’s Wright Lab.

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB (CA) deep in Sudbury’s Creighton Mine.

    The LBNL LZ Dark Matter Experiment (US) Dark Matter project at SURF, Lead, SD, USA.

    DAMA-LIBRA Dark Matter experiment at the Italian National Institute for Nuclear Physics’ (INFN’s) Gran Sasso National Laboratories (LNGS) located in the Abruzzo region of central Italy.

    DARWIN Dark Matter experiment. A design study for a next-generation, multi-ton dark matter detector in Europe at The University of Zurich [Universität Zürich](CH).

    PandaX II Dark Matter experiment at Jin-ping Underground Laboratory (CJPL) in Sichuan, China.

Inside the Axion Dark Matter eXperiment at the University of Washington (US). Credit: Mark Stone, University of Washington.
    __________________________________

    See the full article here.



    Please help promote STEM in your local schools.


    Stem Education Coalition


    NASA/Goddard Campus

    NASA’s Goddard Space Flight Center, Greenbelt, MD is home to the nation’s largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

    Named for American rocketry pioneer Dr. Robert H. Goddard, the center was established in 1959 as NASA’s first space flight complex. Goddard and its several facilities are critical in carrying out NASA’s missions of space exploration and scientific discovery.

GSFC also operates two spaceflight tracking and data acquisition networks (the NASA Deep Space Network and the Near Earth Network); develops and maintains advanced space and Earth science data information systems; and develops satellite systems for the National Oceanic and Atmospheric Administration.

GSFC manages operations for many NASA and international missions including the NASA/ESA Hubble Space Telescope; the Explorers Program; the Discovery Program; the Earth Observing System; INTEGRAL; MAVEN; OSIRIS-REx; the Solar and Heliospheric Observatory; the Solar Dynamics Observatory; the Tracking and Data Relay Satellite System; Fermi; and Swift. Past missions managed by GSFC include the Rossi X-ray Timing Explorer (RXTE), Compton Gamma Ray Observatory, SMM, COBE, IUE, and ROSAT. Typically, unmanned Earth observation missions and observatories in Earth orbit are managed by GSFC, while unmanned planetary missions are managed by the Jet Propulsion Laboratory (JPL) in Pasadena, California.

    Goddard is one of four centers built by NASA since its founding on July 29, 1958. It is NASA’s first, and oldest, space center. Its original charter was to perform five major functions on behalf of NASA: technology development and fabrication; planning; scientific research; technical operations; and project management. The center is organized into several directorates, each charged with one of these key functions.

    Until May 1, 1959, NASA’s presence in Greenbelt, MD was known as the Beltsville Space Center. It was then renamed the Goddard Space Flight Center (GSFC), after Robert H. Goddard. Its first 157 employees transferred from the United States Navy’s Project Vanguard missile program, but continued their work at the Naval Research Laboratory in Washington, D.C., while the center was under construction.

    Goddard Space Flight Center contributed to Project Mercury, America’s first manned space flight program. The Center assumed a lead role for the project in its early days and managed the first 250 employees involved in the effort, who were stationed at Langley Research Center in Hampton, Virginia. However, the size and scope of Project Mercury soon prompted NASA to build a new Manned Spacecraft Center, now the Johnson Space Center, in Houston, Texas. Project Mercury’s personnel and activities were transferred there in 1961.

    The Goddard network tracked many early manned and unmanned spacecraft.

    Goddard Space Flight Center remained involved in the manned space flight program, providing computer support and radar tracking of flights through a worldwide network of ground stations called the Spacecraft Tracking and Data Acquisition Network (STDN). However, the Center focused primarily on designing unmanned satellites and spacecraft for scientific research missions. Goddard pioneered several fields of spacecraft development, including modular spacecraft design, which reduced costs and made it possible to repair satellites in orbit. Goddard’s Solar Max satellite, launched in 1980, was repaired by astronauts on the Space Shuttle Challenger in 1984. The Hubble Space Telescope, launched in 1990, remains in service and continues to grow in capability thanks to its modular design and multiple servicing missions by the Space Shuttle.

    Today, the center remains involved in each of NASA’s key programs. Goddard has developed more instruments for planetary exploration than any other organization, among them scientific instruments sent to every planet in the Solar System. The center’s contribution to the Earth Science Enterprise includes several spacecraft in the Earth Observing System fleet as well as EOSDIS, a science data collection, processing, and distribution system. For the manned space flight program, Goddard develops tools for use by astronauts during extra-vehicular activity, and operates the Lunar Reconnaissance Orbiter, a spacecraft designed to study the Moon in preparation for future manned exploration.

    The National Aeronautics and Space Administration (NASA) is the agency of the United States government that is responsible for the nation’s civilian space program and for aeronautics and aerospace research.

    President Dwight D. Eisenhower established the National Aeronautics and Space Administration (NASA) in 1958 with a distinctly civilian (rather than military) orientation encouraging peaceful applications in space science. The National Aeronautics and Space Act was passed on July 29, 1958, disestablishing NASA’s predecessor, the National Advisory Committee for Aeronautics (NACA). The new agency became operational on October 1, 1958.

    Since that time, most U.S. space exploration efforts have been led by NASA, including the Apollo moon-landing missions, the Skylab space station, and later the Space Shuttle. Currently, NASA is supporting the International Space Station and is overseeing the development of the Orion Multi-Purpose Crew Vehicle and Commercial Crew vehicles. The agency is also responsible for the Launch Services Program (LSP) which provides oversight of launch operations and countdown management for unmanned NASA launches. Most recently, NASA announced a new Space Launch System that it said would take the agency’s astronauts farther into space than ever before and lay the cornerstone for future human space exploration efforts by the U.S.

NASA science is focused on better understanding Earth through the Earth Observing System, advancing heliophysics through the efforts of the Science Mission Directorate’s Heliophysics Research Program, exploring bodies throughout the Solar System with advanced robotic missions such as New Horizons, and researching astrophysics topics, such as the Big Bang, through the Great Observatories [Hubble, Chandra, Spitzer, and associated programs]. NASA shares data with various national and international organizations, such as data from [JAXA]’s Greenhouse Gases Observing Satellite.

     
  • richardmitnick 9:59 am on June 14, 2022 Permalink | Reply
    Tags: "EDL" is a mission’s shortest and most intense phase bringing spacecraft down from more than 12000 miles an hour to a safe and soft landing in about seven minutes., "EDL": Entry descent and landing, "Landing patterns", , , NASA is eager to land people and big spacecraft on Mars., Supercomputing, With Summit supercomputer power a NASA team parses approaches to putting people on Mars.   

    From DOE’s ASCR Discovery: “Landing patterns” 

    From DOE’s ASCR Discovery

    With Summit supercomputer power, a NASA team parses approaches to putting people on Mars.

    1
    Volume rendering of mass fraction of water at Mach 0.8 flight condition. Vehicle is traveling from right to left. Image courtesy of Patrick Moran/NASA Ames Research Center and Rajko Yasui-Schoeffel/NVIDIA.

    NASA aims to land humans on Mars by 2040, overcoming enormous engineering obstacles – in particular, designing a vehicle that can land on the red planet safely.

Mars is surrounded by an atmosphere of mostly carbon dioxide, about a hundredth as dense as Earth’s nitrogen- and oxygen-rich air. As such, spacecraft would fly much differently on Mars than on Earth, and there’s no way to fully test here how a lander would perform there.

    To carry people and cargo to the surface of Mars, a lander will be the size of a two-story house or bigger, so researchers working on lander designs need the scale that only emerging high-performance computing systems can offer. “As our problems get even bigger and more challenging, we’re becoming increasingly reliant on computations,” says Eric Nielsen, part of a NASA-led team applying supercomputing to the task.

    2
    Static snapshot of Mars lander at Mach 1.4 flight condition. Direction of travel is from right to left. The plumes represent isosurfaces of the mass fraction of water colored with vorticity magnitude. Grid lines indicate 10-meter spacing intervals with the bow shock shown as a transparent green surface about 100 meters upstream of the vehicle. Image courtesy of Patrick Moran/NASA Ames Research Center.

    Nielsen has been interested in space from childhood, growing up near NASA Langley Research Center in Hampton, Virginia. For the past two decades, he, colleague Ashley Korzun and others have advanced simulation capability to understand how Mars spacecraft would handle entry, descent and landing – known collectively as EDL. The team includes aerospace engineers, computer scientists, mathematicians and scientific visualization experts from Langley and NASA’s Ames Research Center, plus chipmaker NVIDIA, the National Institute of Aerospace, and partners at Old Dominion University and Georgia Tech.

EDL is a mission’s shortest and most intense phase, bringing a spacecraft down from more than 12,000 miles an hour to a safe and soft landing in about seven minutes. Most successful Mars probes, including the Perseverance rover that landed in February 2021, based their EDL approaches on improvements to technologies initially developed for the 1970s’ Viking Mars landers.

    With NASA eager to land people and big spacecraft on Mars, the team has realized that incremental improvements to old methods won’t do.

    Using Summit [below], the flagship supercomputer at the Department of Energy’s Oak Ridge Leadership Computing Facility, Nielsen and his team have modeled EDL for spacecraft large enough to carry Mars explorers.

    So far, Nielsen says, they’ve logged the equivalent of billions of CPU core-hours in just a few years.

    Most of the simulations focus on spacecraft descent, says Korzun, the team’s science lead. The project aims to understand what physics are most important for models to capture, how these future vehicles will fly and how to steer and control them in a way that’s trustworthy enough to carry humans. The team’s simulations explore the environments and forces the vehicle will encounter.

    For example, larger spacecraft require big rocket engines to land, but the drag surfaces that slow a ship distort exhaust plumes and affect vehicle behavior. The spacecraft will fly through a mess of rocket exhaust that is unlike the clean, uniform atmosphere an Earth airliner cruises through at 35,000 feet. “The presence of the atmosphere on Mars really changes the way these rocket exhaust plumes behave,” Korzun says.

The researchers were surprised when the simulations showed that the chemistry outside the engines matters more than previously believed. Besides measuring the effects of carbon dioxide, nitrogen, water vapor and other atmospheric components, the team found that hydroxyl radicals – molecules consisting of single oxygen and hydrogen atoms – were present in larger than expected amounts. The gas, originally believed to be a minor chemical species in the Martian atmosphere, appeared to alter the chemical composition of the gas the vehicle flies through, changing pressure on its aerodynamic surfaces and thus the forces it experienced.

    The team has partially validated the Summit simulations with less complex data gathered from wind-tunnel tests on water-bottle-sized models, Korzun says. But there is no dataset that’s directly comparable with the simulations – and the hypothetical model spacecraft can’t be tested on Earth.

    Summit has given the team unprecedented capability. Before gaining access to it and other supercomputers that employ GPUs (graphics processing units) to accelerate calculations, the researchers tapped machines that yoked thousands of CPUs (conventional central processing units) for their simulations. That became untenable and computationally costly as the researchers added more detail in each iteration.

Aaron Walden and Gabriel Nastac of NASA Langley have been leading the migration effort, Nielsen says.

    Migrating from CPUs to GPUs was a huge challenge, he notes, forcing the team “to basically go back to the drawing board and learn how to port our codes to an entirely different class of architectures.”

    But it’s been worth it, Nielsen says. “If we want to continue to gain new understanding, the system and technologies have to (fundamentally) change. Incremental improvements were not going to get us there; we were looking to completely change how we do things.”

Received via email. Subscribe here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCRDiscovery is a publication of The U.S. Department of Energy

The United States Department of Energy (DOE) is a cabinet-level department of the United States Government concerned with the United States’ policies regarding energy and safety in handling nuclear material. Its responsibilities include the nation’s nuclear weapons program; nuclear reactor production for the United States Navy; energy conservation; energy-related research; radioactive waste disposal; and domestic energy production. It also directs research in genomics; the Human Genome Project originated in a DOE initiative. DOE sponsors more research in the physical sciences than any other U.S. federal agency, the majority of which is conducted through its system of National Laboratories. The agency is led by the United States Secretary of Energy, and its headquarters are located in Southwest Washington, D.C., on Independence Avenue in the James V. Forrestal Building, named for James Forrestal, as well as in Germantown, Maryland.

    Formation and consolidation

In 1942, during World War II, the United States started the Manhattan Project, a project to develop the atomic bomb, under the eye of the U.S. Army Corps of Engineers. After the war, in 1946, the Atomic Energy Commission (AEC) was created to control the future of the project. The Atomic Energy Act of 1946 also created the framework for the first National Laboratories. Among other nuclear projects, the AEC produced fabricated uranium fuel cores at locations such as Fernald Feed Materials Production Center in Cincinnati, Ohio. In 1974, the AEC gave way to the Nuclear Regulatory Commission, which was tasked with regulating the nuclear power industry, and the Energy Research and Development Administration, which was tasked with managing the nuclear weapon, naval reactor, and energy development programs.

    The 1973 oil crisis called attention to the need to consolidate energy policy. On August 4, 1977, President Jimmy Carter signed into law The Department of Energy Organization Act of 1977 (Pub.L. 95–91, 91 Stat. 565, enacted August 4, 1977), which created the Department of Energy. The new agency, which began operations on October 1, 1977, consolidated the Federal Energy Administration; the Energy Research and Development Administration; the Federal Power Commission; and programs of various other agencies. Former Secretary of Defense James Schlesinger, who served under Presidents Nixon and Ford during the Vietnam War, was appointed as the first secretary.

President Carter created the Department of Energy with the goal of promoting energy conservation and developing alternative sources of energy. He wanted the United States not to be dependent on foreign oil and to reduce the use of fossil fuels. With international energy’s future uncertain for America, Carter acted quickly to have the department come into action in the first year of his presidency. This was an extremely important issue of the time, as the oil crisis was causing shortages and inflation. With the Three Mile Island accident, Carter was able to intervene with the help of the department. Carter made changes within the Nuclear Regulatory Commission in this case to fix its management and procedures. This was possible because nuclear energy and weapons are the responsibility of the Department of Energy.

    Recent

    On March 28, 2017, a supervisor in the Office of International Climate and Clean Energy asked staff to avoid the phrases “climate change,” “emissions reduction,” or “Paris Agreement” in written memos, briefings or other written communication. A DOE spokesperson denied that phrases had been banned.

In a May 2019 press release concerning natural gas exports from a Texas facility, the DOE used the term ‘freedom gas’ to refer to natural gas. The phrase originated from a speech made by Secretary Rick Perry in Brussels earlier that month. Washington Governor Jay Inslee decried the term as “a joke”.

    Facilities

    Supercomputing

    The Department of Energy operates a system of national laboratories and technical facilities for research and development, as follows:

    Ames Laboratory
    Argonne National Laboratory
    Brookhaven National Laboratory
    Fermi National Accelerator Laboratory
    Idaho National Laboratory
    Lawrence Berkeley National Laboratory
    Lawrence Livermore National Laboratory
    Los Alamos National Laboratory
    National Renewable Energy Laboratory
    Oak Ridge National Laboratory
    Pacific Northwest National Laboratory
    Princeton Plasma Physics Laboratory
    Sandia National Laboratories
    Savannah River National Laboratory
    SLAC National Accelerator Laboratory
    Thomas Jefferson National Accelerator Facility
    Other major DOE facilities include:
    Albany Research Center
    Bannister Federal Complex
    Bettis Atomic Power Laboratory – focuses on the design and development of nuclear power for the U.S. Navy
    Kansas City Plant
    Knolls Atomic Power Laboratory – operates for Naval Reactors Program Research under the DOE (not a National Laboratory)
    National Petroleum Technology Office
    Nevada Test Site
    New Brunswick Laboratory
Office of Fossil Energy
Office of River Protection
    Pantex
    Radiological and Environmental Sciences Laboratory
    Y-12 National Security Complex
    Yucca Mountain nuclear waste repository
    Other:

    Pahute Mesa Airstrip – Nye County, Nevada, in supporting Nevada National Security Site

     
  • richardmitnick 8:46 am on June 4, 2022 Permalink | Reply
    Tags: "Great timing and supercomputer upgrade lead to successful forecast of volcanic eruption", , , , Supercomputing, ,   

    From The University of Illinois-Urbana–Champaign: “Great timing and supercomputer upgrade lead to successful forecast of volcanic eruption” 

    From The University of Illinois-Urbana–Champaign

    Jun 3, 2022
    Lois Yoksoulian
    leyok@illinois.edu

    1
    Former Illinois graduate student Yan Zhan, left, professor Patricia Gregg and research professor Seid Koric led a team that produced the fortuitous forecast of the 2018 Sierra Negra volcanic eruption five months before it occurred.
    Photo by Michelle Hassel.

    2
    Sierra Negra Volcano Eruption in June 2018 Credit: Detour Destinations

    In the fall of 2017, geology professor Patricia Gregg and her team had just set up a new volcanic forecasting modeling program on the Blue Waters and iForge supercomputers.

    NCSA National Center for Supercomputing Applications

    3
    NCSA iForge supercomputer.

    Simultaneously, another team was monitoring activity at the Sierra Negra volcano in the Galapagos Islands, Ecuador. One of the scientists on the Ecuador project, Dennis Geist of Colgate University, contacted Gregg, and what happened next was the fortuitous forecast of the June 2018 Sierra Negra eruption five months before it occurred.

    Initially developed on an iMac computer, the new modeling approach had already garnered attention for successfully recreating the unexpected eruption of Alaska’s Okmok volcano in 2008. Gregg’s team, based out of the University of Illinois Urbana-Champaign and the National Center for Supercomputing Applications, wanted to test the model’s new high-performance computing upgrade, and Geist’s Sierra Negra observations showed signs of an imminent eruption.

    “Sierra Negra is a well-behaved volcano,” said Gregg, the lead author of a new report of the successful effort. “Meaning that, before eruptions in the past, the volcano has shown all the telltale signs of an eruption that we would expect to see like groundswell, gas release and increased seismic activity. This characteristic made Sierra Negra a great test case for our upgraded model.”

    However, many volcanoes don’t follow these neatly established patterns, the researchers said. Forecasting eruptions is one of the grand challenges in volcanology, and the development of quantitative models to help with these trickier scenarios is the focus of Gregg and her team’s work.

    Over the winter break of 2017-18, Gregg and her colleagues ran the Sierra Negra data through the new supercomputing-powered model. They completed the run in January 2018 and, even though it was intended as a test, it ended up providing a framework for understanding Sierra Negra’s eruption cycles and evaluating the potential and timing of future eruptions – though no one realized it at the time.

    “Our model forecasted that the strength of the rocks that contain Sierra Negra’s magma chamber would become very unstable sometime between June 25 and July 5, and possibly result in a mechanical failure and subsequent eruption,” said Gregg, who also is an NCSA faculty fellow. “We presented this conclusion at a scientific conference in March 2018. After that, we became busy with other work and did not look at our models again until Dennis texted me on June 26, asking me to confirm the date we had forecasted. Sierra Negra erupted one day after our earliest forecasted mechanical failure date. We were floored.”

    Though it represents an ideal scenario, the researchers said, the study shows the power of incorporating high-performance supercomputing into practical research. “The advantage of this upgraded model is its ability to constantly assimilate multidisciplinary, real-time data and process it rapidly to provide a daily forecast, similar to weather forecasting,” said Yan Zhan, a former Illinois graduate student and co-author of the study. “This takes an incredible amount of computing power previously unavailable to the volcanic forecasting community.”
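
    The “weather-style” forecasting Zhan describes rests on sequential data assimilation: run the model forward, then nudge an ensemble of model states toward each new observation. The sketch below shows that general idea in Python with a basic ensemble Kalman filter update; it is only an illustration, not the team’s code, and the toy dynamics, observation values and error sizes are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    n_ens = 50                                      # ensemble size
    state = rng.normal(10.0, 2.0, size=n_ens)       # hypothetical magma-chamber overpressure (MPa)

    def forecast(x):
        """Toy dynamics: pressure creeps up each day, with model noise."""
        return x + 0.3 + rng.normal(0.0, 0.1, size=x.shape)

    def enkf_update(ensemble, obs, obs_err):
        """Standard ensemble Kalman filter update for one scalar observation."""
        perturbed_obs = obs + rng.normal(0.0, obs_err, size=ensemble.shape)
        gain = ensemble.var(ddof=1) / (ensemble.var(ddof=1) + obs_err**2)
        return ensemble + gain * (perturbed_obs - ensemble)

    # Assimilate one synthetic observation per "day" (e.g. ground deformation converted to pressure).
    for day, obs in enumerate([10.6, 11.1, 11.3, 12.0, 12.4], start=1):
        state = forecast(state)
        state = enkf_update(state, obs, obs_err=0.5)
        print(f"day {day}: mean pressure {state.mean():.2f} +/- {state.std(ddof=1):.2f} MPa")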

    Bringing the moving parts into place to produce a modeling program of this strength requires a highly multidisciplinary approach that Gregg’s team did not have access to until working with NCSA.

    “We all speak the same language when it comes to the numerical multiphysics analysis and high-performance computing needed to forecast mechanical failure – in this case of a volcanic magma chamber,” said Seid Koric, the technical assistant director at NCSA, a research professor of mechanical sciences and engineering and a co-author of the study.

    With Koric’s expertise, the team said they hope to incorporate artificial intelligence and machine learning into the forecasting model to help make this computing power available to researchers working from standard laptop and desktop computers.

    The results of the study are published in the journal Science Advances.

    Geist is a program director at the National Science Foundation and a professor of geology at Colgate University. Falk Amelung of the University of Miami; Patricia Mothes of Instituto Geofísico Escuela Politecnica Nacional, Ecuador; and Zhang Yunjun of the California Institute of Technology also contributed to this research.

    The National Science Foundation, NASA and NCSA supported this study.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The University of Illinois-Urbana-Champaign community of students, scholars, and alumni is changing the world.

    The University of Illinois at Urbana–Champaign is a public land-grant research university in Illinois in the twin cities of Champaign and Urbana. It is the flagship institution of the University of Illinois system and was founded in 1867.

    The University of Illinois at Urbana–Champaign is a member of the Association of American Universities and is classified among “R1: Doctoral Universities – Very high research activity”, and has been listed as a “Public Ivy” in The Public Ivies: America’s Flagship Public Universities (2001) by Howard and Matthew Greene. In fiscal year 2019, research expenditures at Illinois totaled $652 million. The campus library system possesses the second-largest university library in the United States by holdings after Harvard University (US). The university also hosts the National Center for Supercomputing Applications (NCSA).

    The university contains 16 schools and colleges and offers more than 150 undergraduate and over 100 graduate programs of study. The university holds 651 buildings on 6,370 acres (2,578 ha). The University of Illinois at Urbana–Champaign also operates a Research Park home to innovation centers for over 90 start-up companies and multinational corporations, including Abbott, AbbVie, Caterpillar, Capital One, Dow, State Farm, and Yahoo, among others.

    As of August 2020, the alumni, faculty members, or researchers of the university include 30 Nobel laureates; 27 Pulitzer Prize winners; 2 Turing Award winners and 1 Fields medalist. Illinois athletic teams compete in Division I of the NCAA and are collectively known as the Fighting Illini. They are members of the Big Ten Conference and have won the second-most conference titles. The Fighting Illini football team won the Rose Bowl Game in 1947, 1952 and 1964 and has claimed a total of five national championships. Illinois athletes have won 29 medals in Olympic events, ranking it among the top 40 American universities with Olympic medals.

    Illinois Industrial University

    The original University Hall stood until 1938, when it was replaced by Gregory Hall and the Illini Union. Pieces of it were used in the erection of the Hallene Gateway, dedicated in 1998.

    The University of Illinois, originally named “Illinois Industrial University”, was one of the 37 universities created under the first Morrill Land-Grant Act, which provided public land for the creation of agricultural and industrial colleges and universities across the United States. Among several cities, Urbana was selected in 1867 as the site for the new school. From the beginning, President John Milton Gregory’s desire to establish an institution firmly grounded in the liberal arts tradition was at odds with many state residents and lawmakers who wanted the university to offer classes based solely around “industrial education”. The university opened for classes on March 2, 1868 and had two faculty members and 77 students.

    The library, which opened with the school in 1868, started with 1,039 volumes. President Edmund J. James, in a speech to the board of trustees in 1912, proposed the creation of a research library; it is now one of the world’s largest public academic collections. In 1870 the Mumford House was constructed as a model farmhouse for the school’s experimental farm; it remains the oldest structure on campus. The original University Hall (1871) was the fourth building built. It stood where the Illini Union stands today.

    University of Illinois

    In 1885, the Illinois Industrial University officially changed its name to the “University of Illinois”, reflecting its agricultural, mechanical, and liberal arts curriculum.

    During his presidency (1904–1920), Edmund J. James is credited with building the foundation for the large Chinese international student population on campus. James established ties with China through Wu Ting-Fang, the Chinese Minister to the United States. In addition, during James’s presidency, class rivalries and Bob Zuppke’s winning football teams contributed to campus morale.
    As at many universities, the Great Depression slowed construction and expansion on the campus. The university replaced the original University Hall with Gregory Hall and the Illini Union. After World War II the university experienced rapid growth: enrollment doubled and the academic standing improved. This period was also marked by large growth in the Graduate College and increased federal support of scientific and technological research. During the 1950s and 1960s the university experienced the turmoil common on many American campuses, including the water fights of the fifties and sixties.

    University of Illinois at Urbana–Champaign

    By 1967 the University of Illinois system consisted of a main campus in Champaign–Urbana and two Chicago campuses: Chicago Circle (UICC) and the Medical Center (UIMC). People began using “Urbana–Champaign”, or the reverse, to refer to the main campus specifically. The university’s name officially changed to the “University of Illinois at Urbana–Champaign” around 1982. While this reversed the commonly used designation for the metropolitan area, “Champaign–Urbana”, most of the campus is located in Urbana. The name change established a separate identity for the main campus within the University of Illinois system, which today includes campuses in Springfield (UIS) and Chicago (UIC) (formed by the merger of UICC and UIMC).

    In 1998 the Hallene Gateway Plaza was dedicated. The plaza features the original sandstone portal of University Hall, which was the fourth building on campus. In recent years state support has declined from 4.5% of the state’s tax appropriations in 1980 to 2.28% in 2011, a nearly 50% decline. As a result, the university’s budget has shifted away from relying on state support, with nearly 84% of the budget now coming from other sources.

    On March 12, 2015, the Board of Trustees approved the creation of a medical school, the first college created at Urbana–Champaign in 60 years. The Carle-Illinois College of Medicine began classes in 2018.

    Research

    The University of Illinois at Urbana–Champaign is often regarded as a world-leading magnet for engineering and the sciences (both applied and basic). Classified by The Carnegie Foundation for the Advancement of Teaching as a comprehensive doctoral university with medical/veterinary programs and very high research activity, Illinois offers a wide range of disciplines in undergraduate and postgraduate programs.

    According to the National Science Foundation, the university spent $625 million on research and development in 2018, ranking it 37th in the nation. It is also listed as one of the Top 25 American Research Universities by The Center for Measuring University Performance. Besides an annual influx of grants and sponsored projects, the university manages an extensive modern research infrastructure. The university has been a leader in computer-based education and hosted the PLATO project, a precursor to the internet that led to the development of the plasma display. Illinois was a second-generation ARPANET site in 1971 and was the first institution to license the UNIX operating system from Bell Labs.

     
  • richardmitnick 4:10 pm on May 30, 2022 Permalink | Reply
    Tags: "Frontier supercomputer debuts as world’s fastest-breaking exascale barrier", , , , , , , Supercomputing,   

    From The DOE’s Oak Ridge National Laboratory: “Frontier supercomputer debuts as world’s fastest, breaking exascale barrier” 

    From The DOE’s Oak Ridge National Laboratory

    May 30, 2022

    Media Contacts:

    Sara Shoemaker
    shoemakerms@ornl.gov,
    865.576.9219

    Secondary Media Contact
    Katie Bethea
    Oak Ridge Leadership Computing Facility
    betheakl@ornl.gov
    757.817.2832


    Frontier: The World’s First Exascale Supercomputer Has Arrived

    The Frontier supercomputer [below] at the Department of Energy’s Oak Ridge National Laboratory earned the top ranking today as the world’s fastest on the 59th TOP500 list, with 1.1 exaflops of performance. The system is the first to achieve an unprecedented level of computing performance known as exascale, a threshold of a quintillion calculations per second.

    Frontier features a theoretical peak performance of 2 exaflops, or two quintillion calculations per second, making it ten times more powerful than ORNL’s Summit system [below]. The system leverages ORNL’s extensive expertise in accelerated computing and will enable scientists to develop critically needed technologies for the country’s energy, economic and national security, helping researchers address problems of national importance that were impossible to solve just five years ago.

    “Frontier is ushering in a new era of exascale computing to solve the world’s biggest scientific challenges,” ORNL Director Thomas Zacharia said. “This milestone offers just a preview of Frontier’s unmatched capability as a tool for scientific discovery. It is the result of more than a decade of collaboration among the national laboratories, academia and private industry, including DOE’s Exascale Computing Project, which is deploying the applications, software technologies, hardware and integration necessary to ensure impact at the exascale.”

    Rankings were announced at the International Supercomputing Conference 2022 in Hamburg, Germany, which gathers leaders from around the world in the field of high-performance computing, or HPC. Frontier’s speeds surpassed those of any other supercomputer in the world, including ORNL’s Summit, which is also housed at ORNL’s Oak Ridge Leadership Computing Facility, a DOE Office of Science user facility.

    Frontier, an HPE Cray EX supercomputer, also claimed the number one spot on the Green500 list, which rates energy use and efficiency by commercially available supercomputing systems, with 62.68 gigaflops of performance per watt. Frontier rounded out the twice-yearly rankings with the top spot in a newer category, mixed-precision computing, which rates performance in formats commonly used for artificial intelligence, with a performance of 6.88 exaflops.

    The work to deliver, install and test Frontier began during the COVID-19 pandemic, as shutdowns around the world strained international supply chains. More than 100 members of a public-private team worked around the clock, from sourcing millions of components to ensuring deliveries of system parts on deadline to carefully installing and testing 74 HPE Cray EX supercomputer cabinets, which include more than 9,400 AMD-powered nodes and 90 miles of networking cables.

    “When researchers gain access to the fully operational Frontier system later this year, it will mark the culmination of work that began over three years ago involving hundreds of talented people across the Department of Energy and our industry partners at HPE and AMD,” ORNL Associate Lab Director for computing and computational sciences Jeff Nichols said. “Scientists and engineers from around the world will put these extraordinary computing speeds to work to solve some of the most challenging questions of our era, and many will begin their exploration on Day One.”


    Frontier’s overall performance of 1.1 exaflops translates to more than one quintillion floating point operations per second, or flops, as measured by the High-Performance Linpack Benchmark test. Each flop represents a possible calculation, such as addition, subtraction, multiplication or division.

    Frontier’s early performance on the Linpack benchmark amounts to more than seven times that of Summit at 148.6 petaflops. Summit continues as an impressive, highly ranked workhorse machine for open science, listed at number four on the TOP500.
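
    A quick back-of-the-envelope check of the quoted figures (not from the article; the snippet below simply divides the two published Linpack numbers):

    frontier_hpl = 1.1e18              # Frontier's Linpack result: ~1.1 exaflops
    summit_hpl = 148.6e15              # Summit's Linpack result: 148.6 petaflops
    print(frontier_hpl / summit_hpl)   # ~7.4, i.e. "more than seven times" Summit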

    Frontier’s mixed-precision computing performance clocked in at roughly 6.88 exaflops, or more than 6.8 quintillion flops per second, as measured by the High-Performance Linpack-Accelerator Introspection, or HPL-AI, test. The HPL-AI test measures calculation speeds in the computing formats typically used by the machine-learning methods that drive advances in artificial intelligence.

    Detailed simulations relied on by traditional HPC users to model such phenomena as cancer cells, supernovas, the coronavirus or the atomic structure of elements require 64-bit precision, a computationally demanding form of computing accuracy. Machine-learning algorithms typically require much less precision — sometimes as little as 32-, 24- or 16-bit accuracy — and can take advantage of special hardware in the graphic processing units, or GPUs, relied on by machines like Frontier to reach even faster speeds.
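
    The trade-off between precision and speed is easy to see even on a laptop. The short NumPy demonstration below is purely illustrative and unrelated to Frontier’s actual software: it sums the same numbers in 64-bit and 16-bit floating point, and the low-precision accumulation drifts badly, which is why detailed simulations demand 64-bit arithmetic while many machine-learning kernels tolerate 16-bit.

    import numpy as np

    values = np.full(10_000, 0.1)                  # ten thousand copies of 0.1; exact sum is 1000.0

    sum64 = values.astype(np.float64).sum()        # 64-bit ("double") precision
    sum16 = np.float16(0.0)
    for v in values.astype(np.float16):            # accumulate in 16-bit ("half") precision
        sum16 = np.float16(sum16 + v)

    print(f"float64 sum: {sum64:.4f}")             # ~1000.0000
    print(f"float16 sum: {float(sum16):.4f}")      # far smaller: the running sum stalls once
                                                   # 0.1 falls below half-precision resolution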

    ORNL and its partners continue to execute the bring-up of Frontier on schedule. Next steps include continued testing and validation of the system, which remains on track for final acceptance and early science access later in 2022 and open for full science at the beginning of 2023.

    Credit: Laddy Fields/ORNL, U.S. Dept. of Energy.

    FACTS ABOUT FRONTIER

    The Frontier supercomputer’s exascale performance is enabled by some of the world’s most advanced pieces of technology from HPE and AMD:

    Frontier has 74 HPE Cray EX supercomputer cabinets, which are purpose-built to support next-generation supercomputing performance and scale, once open for early science access.

    Each node contains one optimized EPYC™ processor and four AMD Instinct™ accelerators, for a total of more than 9,400 CPUs and more than 37,000 GPUs in the entire system. The memory coherency between the EPYC processors and the Instinct accelerators makes these nodes easier for developers to program.

    HPE Slingshot, the world’s only high-performance Ethernet fabric designed for next-generation HPC and AI solutions, including larger, data-intensive workloads, provides the higher speed and congestion control needed for applications to run smoothly and perform well.

    An I/O subsystem from HPE that will come online this year to support Frontier and the OLCF. The I/O subsystem features an in-system storage layer and Orion, a Lustre-based enhanced center-wide file system that is also the world’s largest and fastest single parallel file system, based on the Cray ClusterStor E1000 storage system. The in-system storage layer will employ compute-node local storage devices connected via PCIe Gen4 links to provide peak read speeds of more than 75 terabytes per second, peak write speeds of more than 35 terabytes per second, and more than 15 billion random-read input/output operations per second. The Orion center-wide file system will provide around 700 petabytes of storage capacity and peak write speeds of 5 terabytes per second.

    As a next-generation supercomputing system and the world’s fastest for open science, Frontier is also energy-efficient thanks to its liquid cooling. This cooling system promotes a quieter data center by removing the need for noisier air cooling.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


    Established in 1942, The DOE’s Oak Ridge National Laboratory is the largest science and energy national laboratory in the Department of Energy system (by size) and third largest by annual budget. It is located in the Roane County section of Oak Ridge, Tennessee. Its scientific programs focus on materials, neutron science, energy, high-performance computing, systems biology and national security, sometimes in partnership with the state of Tennessee, universities and other industries.

    ORNL has several of the world’s top supercomputers, including Summit [below], now listed fourth on the TOP500.

    ORNL OLCF IBM AC922 Summit supercomputer, formerly No. 1 on the TOP500.

    The lab is a leading neutron and nuclear power research facility that includes the Spallation Neutron Source and High Flux Isotope Reactor.

    ORNL Spallation Neutron Source annotated.

    It hosts the Center for Nanophase Materials Sciences, the BioEnergy Science Center, and the Consortium for Advanced Simulation of Light Water Nuclear Reactors.

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

    Areas of research

    ORNL conducts research and development activities that span a wide range of scientific disciplines. Many research areas have a significant overlap with each other; researchers often work in two or more of the fields listed here. The laboratory’s major research areas are described briefly below.

    Chemical sciences – ORNL conducts both fundamental and applied research in a number of areas, including catalysis, surface science and interfacial chemistry; molecular transformations and fuel chemistry; heavy element chemistry and radioactive materials characterization; aqueous solution chemistry and geochemistry; mass spectrometry and laser spectroscopy; separations chemistry; materials chemistry including synthesis and characterization of polymers and other soft materials; chemical biosciences; and neutron science.
    Electron microscopy – ORNL’s electron microscopy program investigates key issues in condensed matter, materials, chemical and nanosciences.
    Nuclear medicine – The laboratory’s nuclear medicine research is focused on the development of improved reactor production and processing methods to provide medical radioisotopes, the development of new radionuclide generator systems, the design and evaluation of new radiopharmaceuticals for applications in nuclear medicine and oncology.
    Physics – Physics research at ORNL is focused primarily on studies of the fundamental properties of matter at the atomic, nuclear, and subnuclear levels and the development of experimental devices in support of these studies.
    Population – ORNL provides federal, state and international organizations with a gridded population database, called LandScan, for estimating ambient population. LandScan is a raster image, or grid, of population counts that provides human population estimates for every 30 x 30 arc-second cell, which translates roughly to 1-kilometer-square grid cells at the equator, with cell width decreasing at higher latitudes. Though many population datasets exist, LandScan is regarded as the best spatial population dataset that also covers the globe. Updated annually (although data releases are generally one year behind the current year), it offers continuous, updated values of population based on the most recent information. LandScan data are accessible through GIS applications and a USAID public-domain application called Population Explorer.
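
    A rough sense of why a 30 x 30 arc-second cell is about a kilometer across at the equator, and narrower toward the poles, comes from a simple spherical-Earth estimate (illustrative only; this is not LandScan’s own processing code):

    import math

    EARTH_RADIUS_KM = 6371.0
    cell_angle_rad = math.radians(30.0 / 3600.0)    # 30 arc-seconds in radians

    def cell_size_km(latitude_deg):
        """Approximate north-south and east-west extent of one grid cell."""
        north_south = EARTH_RADIUS_KM * cell_angle_rad
        east_west = EARTH_RADIUS_KM * cell_angle_rad * math.cos(math.radians(latitude_deg))
        return north_south, east_west

    for lat in (0, 30, 60):
        ns, ew = cell_size_km(lat)
        print(f"latitude {lat:2d} deg: {ns:.2f} km x {ew:.2f} km")   # ~0.93 km x 0.93 km at the equator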

     
  • richardmitnick 8:39 pm on May 27, 2022 Permalink | Reply
    Tags: "HPE and Cerebras to Install AI Supercomputer at Leibniz Supercomputing Centre", AI compute demand is doubling every three to four months for the system users., , , LRZ provides researchers with advanced and reliable IT services for their science., Powered by the largest processor ever built the Cerebras Wafer-Scale Engine 2 (WSE-2) the CS-2 delivers greater AI-optimization than any other deep learning processor in existence., Supercomputing, The new system is an additional resource to Germany’s national supercomputing computing center.   

    From InsideHPC : “HPE and Cerebras to Install AI Supercomputer at Leibniz Supercomputing Centre” 

    From InsideHPC

    May 25, 2022

    The Leibniz Supercomputing Centre (LRZ), Cerebras Systems, and Hewlett Packard Enterprise (HPE), today announced the joint development of a system designed to accelerate scientific research and innovation in AI at Leibniz Supercomputing Centre (LRZ), an institute of the Bavarian Academy of Sciences and Humanities (BAdW).

    The system is purpose-built for scientific research and comprises the HPE Superdome Flex server and the Cerebras CS-2 system, which makes it the first solution in Europe to leverage the Cerebras CS-2 system, Cerebras said. The HPE Superdome Flex server delivers a modular, scale-out solution to meet computing demands and features specialized capabilities for the in-memory processing required for high volumes of data.

    Additionally, the HPE Superdome Flex server’s specific pre- and post-data-processing capability for AI model training and inference “is ideal to support the Cerebras CS-2 system, which delivers the deep learning performance of 100s of graphics processing units (GPUs), with the programming ease of a single node,” Cerebras said. “Powered by the largest processor ever built – the Cerebras Wafer-Scale Engine 2 (WSE-2), which is 56 times larger than the nearest competitor – the CS-2 delivers greater AI-optimized compute cores, faster memory, and more fabric bandwidth than any other deep learning processor in existence.”

    The system will be used by local scientists and engineers for research use cases. Applications include Natural Language Processing (NLP), medical image processing involving innovative algorithms to analyze medical images, or computer-aided capabilities to accelerate diagnoses and prognosis, and computational fluid dynamics (CFD) to advance understanding in areas such as aerospace engineering and manufacturing.

    “Currently, we observe that AI compute demand is doubling every three to four months with our users. With the high integration of processors, memory and on-board networks on a single chip, Cerebras enables high performance and speed. This promises significantly more efficiency in data processing and thus faster breakthrough of scientific findings,” said Prof. Dr. Dieter Kranzlmüller, Director of the LRZ. “As an academic computing and national supercomputing centre, we provide researchers with advanced and reliable IT services for their science. To ensure optimal use of the system, we will work closely with our users and our partners Cerebras and HPE to identify ideal use cases in the community and to help achieve groundbreaking results.”

    The new system is funded by the Free State of Bavaria through the Hightech Agenda, a program dedicated to strengthening the tech ecosystem in Bavaria to fuel the region’s mission of becoming an international AI hotspot. The new system is also an additional resource for Germany’s national supercomputing center, and part of LRZ’s Future Computing Program, which represents a portfolio of heterogeneous computing architectures across CPUs, GPUs, FPGAs and ASICs.

    Cerebras CS2-HPE Superdome Flex

    Cerebras said the WSE-2 is 46,225 square millimeters of silicon, housing 2.6 trillion transistors and 850,000 AI-optimized computational cores, as well as evenly distributed memory that holds up to 40 gigabytes of data and fast interconnects that transport data across the chip at 220 petabytes per second. This allows the WSE-2 to keep all the parameters of multi-layered neural networks on one chip during execution, which in turn reduces computation time and data processing. To date, the CS-2 system is being used in a number of U.S. research facilities and enterprises and is proving particularly effective in image and pattern recognition and natural language processing (NLP). Additional efficiency is also provided by water cooling, which reduces power consumption.

    To support the Cerebras CS-2 system, the HPE Superdome Flex server provides large-memory capabilities and scalability to process the massive, data-intensive machine learning projects that the Cerebras CS-2 system targets, Cerebras said. The HPE Superdome Flex server also manages and schedules jobs according to AI application needs, enables cloud access, and stages larger research datasets. In addition, the HPE Superdome Flex server includes a software stack with programs to build AI procedures and models.

    In addition to AI workloads, the combined technologies from HPE and Cerebras will also be considered for more traditional HPC workloads in support of larger, memory-intensive modeling and simulation needs, the companies said.

    “The future of computing is becoming more complex, with systems becoming more heterogeneous and tuned to specific applications. We should stop thinking in terms of HPC or AI systems,” says Laura Schulz, Head of Strategy at LRZ. “AI methods work on CPU-based systems like SuperMUC-NG, and conversely, high-performance computing algorithms can achieve performance gains on systems like Cerebras. We’re working towards a future where the underlying compute is complex, but doesn’t impact the user; that the technology–whether HPC, AI or quantum–is available and approachable for our researchers in pursuit of their scientific discovery.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, InsideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    InsideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239
    Phone: (503) 877-5048

     
  • richardmitnick 7:23 pm on May 25, 2022 Permalink | Reply
    Tags: "Finding Superconductivity in Nickelates", , , , Supercomputing, ,   

    From The Texas Advanced Computing Center: “Finding Superconductivity in Nickelates” 

    From The Texas Advanced Computing Center

    at

    The University of Texas-Austin

    May 25, 2022
    Aaron Dubrow

    The quantum phenomena that Antia Botana studies occur at the smallest scales known and can only be probed obliquely by physical experiment. Botana uses computational simulations to make predictions, help interpret experiments, and deduce the behavior and dynamics of materials like infinite-layer nickelate.

    The study of superconductivity is littered with disappointments, dead-ends, and serendipitous discoveries, according to Antia Botana, professor of physics at Arizona State University.

    “As theorists, we generally fail in predicting new superconductors,” she said.

    However, in 2021, she experienced the highlight of her early career. Working with experimentalist Julia Mundy at Harvard University, she discovered a new superconducting material —a quintuple-layer nickelate. They reported their findings in Nature Materials in September 2021.

    “It was one of the best moments of my life,” Botana recalled. “I was flying back from Spain, and I received a message from my collaborator Julia Mundy during my layover. When I saw the resistivity drop to zero — there’s nothing better than that.”

    Electronic phase diagram and structural description of the layered nickelates. A: Schematic phase diagram for the electronic phases of the cuprates (top) and nickelates (bottom). B: Crystal structures of the quintuple-layer nickelates in the Nd6Ni5O16 Ruddlesden–Popper phase (left) and Nd6Ni5O12 reduced square-planar phase (right), depicted at the same scale. [Credit: Botana et al.]

    Botana was chosen as a 2022 Sloan Research Fellow. Her research is supported by a CAREER award from the National Science Foundation (NSF).

    “Prof. Botana is one of the most influential theorists in the field of unconventional superconductivity, particularly in layered nickelates that have received tremendous attention from the materials and condensed matter physics communities,” said Serdar Ogut, Program Director in the Division of Materials Research at the National Science Foundation. “I expect that her pioneering theoretical studies, in collaboration with leading experimentalists in the US, will continue to push the boundaries, result in the discovery of new superconducting materials, and uncover fundamental mechanisms that could one day pave the way to room temperature superconductivity.”

    Superconductivity is a phenomenon that occurs when electrons form pairs rather than travelling in isolation; the material expels magnetic fields and allows electrons to travel without losing energy. Developing room-temperature superconductors would allow loss-free electricity transmission and faster, cheaper quantum computers. Studying these materials is the domain of condensed matter theory.

    “We try to understand what are called quantum materials — materials where everything classical that we learned in our undergraduate studies falls apart and no one understands why they do the fun things they do,” Botana joked.

    She began investigating nickelates, largely, to better understand cuprates — copper-oxide based superconductors first discovered in 1986. Thirty years on, the mechanism that produces superconductivity in these materials is still hotly contested.

    Botana approaches the problem by looking at materials that look like cuprates. “Copper and nickel are right next to each other on the periodic table,” she said. “This was an obvious thing to do, so people had been looking at nickelates for a long time without success.”

    But then, in 2019, a team from Stanford discovered superconductivity in a nickelate [Nature], albeit one that had been ‘doped,’ or chemically-altered to improve its electronic characteristics. “The material that they found in 2019 is part of a larger family, which is what we want, because it lets us do comparisons to cuprates in a better way,” she said.

    Botana’s discovery in 2021 built on that foundation, using a form of undoped nickelate with a unique, square-planar, layered structure. She decided to investigate this specific form of nickelate — a rare-earth, quintuple-layer, square-planar nickelate — based on intuition.

    “Having played with many different materials for years, it’s the type of intuition that people who study electronic structure develop,” she said. “I have seen that over the years with my mentors.”

    Identifying another form of superconducting nickelate lets researchers tease out similarities and differences among nickelates and between nickelates and cuprates. So far, the more nickelates that are studied, the more like cuprates they look.

    “The phase diagram seems quite similar. The electron pairing mechanism seems to be the same,” Botana says, “but this is a question yet to be settled.”

    Conventional superconductors exhibit s-wave pairing — electrons can pair in any direction and can sit on top of each other, so the wave is a sphere. Nickelates, on the other hand, likely display d-wave pairing, meaning that the cloudlike quantum wave that describes the paired electrons is shaped like a four-leaf clover. Another key difference is how strongly oxygen and transition metals overlap in these materials. Cuprates exhibit a large ‘super-exchange’ — the material trades electrons in copper atoms through a pathway that contains oxygen, rather than directly.
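
    In symbols, the two pairing symmetries contrasted above are often written with the standard textbook gap functions below (given here for orientation; the article itself does not use these formulas). Here \(\Delta_0\) sets the gap scale and \(a\) is the lattice spacing:

    \begin{align*}
      \Delta_{s}(\mathbf{k}) &= \Delta_{0}
        && \text{isotropic $s$-wave: the same gap in every direction}\\
      \Delta_{d_{x^{2}-y^{2}}}(\mathbf{k}) &= \tfrac{1}{2}\,\Delta_{0}\left[\cos(k_{x}a) - \cos(k_{y}a)\right]
        && \text{$d$-wave: four-lobed, changing sign and vanishing along the diagonals}
    \end{align*}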

    “We think that may be one of the factors that governs superconductivity and causes the lower critical temperature of the nickelates,” she said. “We can look for ways of optimizing that characteristic.”

    Botana and colleagues Kwan-Woo Lee, Michael R. Norman, Victor Pardo and Warren E. Pickett described some of these differences in a review article for Frontiers in Physics in February 2022.

    Searching for Root Causes of Superconductivity

    Writing in Physical Review X in March 2022, Botana and collaborators from Brookhaven National Laboratory and Argonne National Laboratory delved deeper into the role of oxygen states in the low-valence nickelate La4Ni3O8. Using computational and experimental methods, they compared the material to a prototypical cuprate with a similar electron filling. The work was unique in that it directly measured the energy of the nickel–oxygen hybridized states.

    They found that despite requiring more energy to transfer charges, nickelates retained a sizable capacity for superexchange. They conclude that both the “Coulomb interactions” (the attraction or repulsion of particles or objects because of their electric charge) and charge-transfer processes need to be considered when interpreting the properties of nickelates.

    The quantum phenomena that Botana studies occur at the smallest scales known and can only be probed obliquely by physical experiment (as in the Physical Review X paper). Botana uses computational simulations to make predictions, help interpret experiments, and deduce the behavior and dynamics of materials like infinite-layer nickelate.

    Her research uses Density Functional Theory, or DFT — a means of computationally solving the Schrödinger equation that describes the wave function of a quantum-mechanical system — as well as a newer, more precise offshoot known as dynamical mean field theory that can treat electrons that are strongly correlated.
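
    For orientation, DFT in its usual Kohn–Sham form reduces the interacting many-electron problem to a set of single-particle equations solved self-consistently (a standard textbook statement, not reproduced from the article):

    \begin{align*}
      \left[ -\frac{\hbar^{2}}{2m}\nabla^{2} + v_{\mathrm{ext}}(\mathbf{r})
             + \int \frac{e^{2}\,n(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^{3}r'
             + v_{\mathrm{xc}}[n](\mathbf{r}) \right]\psi_{i}(\mathbf{r})
        &= \varepsilon_{i}\,\psi_{i}(\mathbf{r}),\\
      n(\mathbf{r}) &= \sum_{i\in\mathrm{occ}} \left|\psi_{i}(\mathbf{r})\right|^{2}.
    \end{align*}

    Dynamical mean field theory builds on this by treating the strongly correlated electrons on each site through an effective impurity problem solved self-consistently, which captures correlation effects that plain DFT misses.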

    To conduct her research, Botana uses the Stampede2 supercomputer of the Texas Advanced Computing Center (TACC) — the second fastest at any university in the U.S. — as well as machines at Arizona State University. Even on the fastest supercomputers in the world, studying quantum materials is no simple matter.

    “If I see a problem with too many atoms, I say, ‘I can’t study that,'” Botana said. “Twenty years ago, a few atoms might have looked like too much.” But more powerful supercomputers are allowing physicists to study larger, more complicated systems — like nickelates — and add tools, like dynamical mean field theory, that can better capture quantum behavior.

    Despite living in a Golden Age of Discovery, the field of condensed matter physics still doesn’t have the reputation it deserves, Botana says.

    “Your phone or computer would not be possible without research in condensed matter physics — from the screen, to the battery, to the little camera. It’s important for the public to understand that even if it’s fundamental research, and even if the researchers don’t know how it will be used later, this type of research in materials is critical.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Texas Advanced Computing Center (TACC) at the University of Texas at Austin, United States, is an advanced computing research center that provides comprehensive advanced computing resources and support services to researchers in Texas and across the USA. The mission of TACC is to enable discoveries that advance science and society through the application of advanced computing technologies. Specializing in high performance computing, scientific visualization, data analysis & storage systems, software, research & development and portal interfaces, TACC deploys and operates advanced computational infrastructure to enable computational research activities of faculty, staff, and students of UT Austin. TACC also provides consulting, technical documentation, and training to support researchers who use these resources. TACC staff members conduct research and development in applications and algorithms, computing systems design/architecture, and programming tools and environments.

    Founded in 2001, TACC is one of the centers of computational excellence in the United States. Through the National Science Foundation (NSF) Extreme Science and Engineering Discovery Environment (XSEDE) project, TACC’s resources and services are made available to the national academic research community. TACC is located on UT’s J. J. Pickle Research Campus.

    TACC collaborators include researchers in other UT Austin departments and centers, at Texas universities in the High Performance Computing Across Texas Consortium, and at other U.S. universities and government laboratories.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell PowerEdge Stampede supercomputer at the Texas Advanced Computing Center, U Texas Austin (9.6 PF)

    TACC HPE Apollo 8000 Hikari supercomputer

    TACC Ranch long-term mass data storage system

    TACC DELL EMC Stampede2 supercomputer


    Stampede2 Arrives!

    TACC Frontera Dell EMC supercomputer fastest at any university

    University Texas at Austin

    U Texas Austin campus

    The University of Texas-Austin is a public research university in Austin, Texas and the flagship institution of the University of Texas System. Founded in 1883, the University of Texas was inducted into the Association of American Universities in 1929, becoming only the third university in the American South to be elected. The institution has the nation’s seventh-largest single-campus enrollment, with over 50,000 undergraduate and graduate students and over 24,000 faculty and staff.

    A Public Ivy, it is a major center for academic research. The university houses seven museums and seventeen libraries, including the LBJ Presidential Library and the Blanton Museum of Art, and operates various auxiliary research facilities, such as the J. J. Pickle Research Campus and the McDonald Observatory. As of November 2020, 13 Nobel Prize winners, four Pulitzer Prize winners, two Turing Award winners, two Fields medalists, two Wolf Prize winners, and two Abel prize winners have been affiliated with the school as alumni, faculty members or researchers. The university has also been affiliated with three Primetime Emmy Award winners, and has produced a total of 143 Olympic medalists.

    Student-athletes compete as the Texas Longhorns and are members of the Big 12 Conference. Its Longhorn Network is the only sports network featuring the college sports of a single university. The Longhorns have won four NCAA Division I National Football Championships, six NCAA Division I National Baseball Championships, thirteen NCAA Division I National Men’s Swimming and Diving Championships, and has claimed more titles in men’s and women’s sports than any other school in the Big 12 since the league was founded in 1996.

    Establishment

    The first mention of a public university in Texas can be traced to the 1827 constitution for the Mexican state of Coahuila y Tejas. Although Title 6, Article 217 of the Constitution promised to establish public education in the arts and sciences, no action was taken by the Mexican government. After Texas obtained its independence from Mexico in 1836, the Texas Congress adopted the Constitution of the Republic, which, under Section 5 of its General Provisions, stated “It shall be the duty of Congress, as soon as circumstances will permit, to provide, by law, a general system of education.”

    On April 18, 1838, “An Act to Establish the University of Texas” was referred to a special committee of the Texas Congress, but was not reported back for further action. On January 26, 1839, the Texas Congress agreed to set aside fifty leagues of land—approximately 288,000 acres (117,000 ha)—towards the establishment of a publicly funded university. In addition, 40 acres (16 ha) in the new capital of Austin were reserved and designated “College Hill”. (The term “Forty Acres” is colloquially used to refer to the University as a whole. The original 40 acres is the area from Guadalupe to Speedway and 21st Street to 24th Street.)

    In 1845, Texas was annexed into the United States. The state’s Constitution of 1845 failed to mention higher education. On February 11, 1858, the Seventh Texas Legislature approved O.B. 102, an act to establish the University of Texas, which set aside $100,000 in United States bonds toward construction of the state’s first publicly funded university (the $100,000 was an allocation from the $10 million the state received pursuant to the Compromise of 1850 and Texas’s relinquishing claims to lands outside its present boundaries). The legislature also designated land reserved for the encouragement of railroad construction toward the university’s endowment. On January 31, 1860, the state legislature, wanting to avoid raising taxes, passed an act authorizing the money set aside for the University of Texas to be used for frontier defense in west Texas to protect settlers from Indian attacks.

    Texas’s secession from the Union and the American Civil War delayed repayment of the borrowed monies. At the end of the Civil War in 1865, The University of Texas’s endowment was just over $16,000 in warrants and nothing substantive had been done to organize the university’s operations. This effort to establish a university was again mandated by Article 7, Section 10 of the Texas Constitution of 1876, which directed the legislature to “establish, organize and provide for the maintenance, support and direction of a university of the first class, to be located by a vote of the people of this State, and styled ‘The University of Texas’.”

    Additionally, Article 7, Section 11 of the 1876 Constitution established the Permanent University Fund, a sovereign wealth fund managed by the Board of Regents of the University of Texas and dedicated to the maintenance of the university. Because some state legislators perceived an extravagance in the construction of academic buildings of other universities, Article 7, Section 14 of the Constitution expressly prohibited the legislature from using the state’s general revenue to fund construction of university buildings. Funds for constructing university buildings had to come from the university’s endowment or from private gifts to the university, but the university’s operating expenses could come from the state’s general revenues.

    The 1876 Constitution also revoked the endowment of the railroad lands of the Act of 1858, but dedicated 1,000,000 acres (400,000 ha) of land, along with other property appropriated for the university, to the Permanent University Fund. This was greatly to the detriment of the university as the lands the Constitution of 1876 granted the university represented less than 5% of the value of the lands granted to the university under the Act of 1858 (the lands close to the railroads were quite valuable, while the lands granted the university were in far west Texas, distant from sources of transportation and water). The more valuable lands reverted to the fund to support general education in the state (the Special School Fund).

    On April 10, 1883, the legislature supplemented the Permanent University Fund with another 1,000,000 acres (400,000 ha) of land in west Texas granted to the Texas and Pacific Railroad but returned to the state as seemingly too worthless to even survey. The legislature additionally appropriated $256,272.57 to repay the funds taken from the university in 1860 to pay for frontier defense and for transfers to the state’s General Fund in 1861 and 1862. The 1883 grant of land increased the land in the Permanent University Fund to almost 2.2 million acres. Under the Act of 1858, the university was entitled to just over 1,000 acres (400 ha) of land for every mile of railroad built in the state. Had the 1876 Constitution not revoked the original 1858 grant of land, by 1883, the university lands would have totaled 3.2 million acres, so the 1883 grant was to restore lands taken from the university by the 1876 Constitution, not an act of munificence.

    On March 30, 1881, the legislature set forth the university’s structure and organization and called for an election to establish its location. By popular election on September 6, 1881, Austin (with 30,913 votes) was chosen as the site. Galveston, having come in second in the election (with 20,741 votes), was designated the location of the medical department (Houston was third with 12,586 votes). On November 17, 1882, on the original “College Hill,” an official ceremony commemorated the laying of the cornerstone of the Old Main building. University President Ashbel Smith, presiding over the ceremony, prophetically proclaimed “Texas holds embedded in its earth rocks and minerals which now lie idle because unknown, resources of incalculable industrial utility, of wealth and power. Smite the earth, smite the rocks with the rod of knowledge and fountains of unstinted wealth will gush forth.” The University of Texas officially opened its doors on September 15, 1883.

    Expansion and growth

    In 1890, George Washington Brackenridge donated $18,000 for the construction of a three-story brick mess hall known as Brackenridge Hall (affectionately known as “B.Hall”), one of the university’s most storied buildings and one that played an important place in university life until its demolition in 1952.

    The old Victorian-Gothic Main Building served as the central point of the campus’s 40-acre (16 ha) site, and was used for nearly all purposes. But by the 1930s, discussions arose about the need for new library space, and the Main Building was razed in 1934 over the objections of many students and faculty. The modern-day tower and Main Building were constructed in its place.

    In 1910, George Washington Brackenridge again displayed his philanthropy, this time donating 500 acres (200 ha) on the Colorado River to the university. A vote by the regents to move the campus to the donated land was met with outrage, and the land has only been used for auxiliary purposes such as graduate student housing. Part of the tract was sold in the late-1990s for luxury housing, and there are controversial proposals to sell the remainder of the tract. The Brackenridge Field Laboratory was established on 82 acres (33 ha) of the land in 1967.

    In 1916, Gov. James E. Ferguson became involved in a serious quarrel with the University of Texas. The controversy grew out of the board of regents’ refusal to remove certain faculty members whom the governor found objectionable. When Ferguson found he could not have his way, he vetoed practically the entire appropriation for the university. Without sufficient funding, the university would have been forced to close its doors. In the middle of the controversy, Ferguson’s critics brought to light a number of irregularities on the part of the governor. Eventually, the Texas House of Representatives prepared 21 charges against Ferguson, and the Senate convicted him on 10 of them, including misapplication of public funds and receiving $156,000 from an unnamed source. The Texas Senate removed Ferguson as governor and declared him ineligible to hold office.

    In 1921, the legislature appropriated $1.35 million for the purchase of land next to the main campus. However, expansion was hampered by the restriction against using state revenues to fund construction of university buildings as set forth in Article 7, Section 14 of the Constitution. With the completion of Santa Rita No. 1 well and the discovery of oil on university-owned lands in 1923, the university added significantly to its Permanent University Fund. The additional income from Permanent University Fund investments allowed for bond issues in 1931 and 1947, which allowed the legislature to address funding for the university along with the Agricultural and Mechanical College (now known as Texas A&M University). With sufficient funds to finance construction on both campuses, on April 8, 1931, the Forty-second Legislature passed H.B. 368, which dedicated to the Agricultural and Mechanical College a one-third interest in the Available University Fund, the annual income from Permanent University Fund investments.

    The University of Texas was inducted into The Association of American Universities in 1929. During World War II, the University of Texas was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program which offered students a path to a Navy commission.

    In 1950, following Sweatt v. Painter, the University of Texas was the first major university in the South to accept an African-American student. John S. Chase went on to become the first licensed African-American architect in Texas.

    In the fall of 1956, the first black students entered the university’s undergraduate class. Black students were permitted to live in campus dorms, but were barred from campus cafeterias. The University of Texas integrated its facilities and desegregated its dorms in 1965. UT, which had had an open admissions policy, adopted standardized testing for admissions in the mid-1950s at least in part as a conscious strategy to minimize the number of Black undergraduates, given that they were no longer able to simply bar their entry after the Brown decision.

    Following growth in enrollment after World War II, the university unveiled an ambitious master plan in 1960 designed for “10 years of growth” that was intended to “boost the University of Texas into the ranks of the top state universities in the nation.” In 1965, the Texas Legislature granted the university’s Board of Regents the authority to use eminent domain to purchase additional properties surrounding the original 40 acres (160,000 m²). The university began buying parcels of land to the north, south, and east of the existing campus, particularly in the Blackland neighborhood to the east and the Brackenridge tract to the southeast, in hopes of using the land to relocate the university’s intramural fields, baseball field, tennis courts, and parking lots.

    On March 6, 1967, the Sixtieth Texas Legislature changed the university’s official name from “The University of Texas” to “The University of Texas at Austin” to reflect the growth of the University of Texas System.

    Recent history

    The first presidential library on a university campus was dedicated on May 22, 1971, with former President Johnson, Lady Bird Johnson and then-President Richard Nixon in attendance. Constructed on the eastern side of the main campus, the Lyndon Baines Johnson Library and Museum is one of 13 presidential libraries administered by the National Archives and Records Administration.

    A statue of Martin Luther King Jr. was unveiled on campus in 1999 and subsequently vandalized. By 2004, John Butler, a professor at the McCombs School of Business suggested moving it to Morehouse College, a historically black college, “a place where he is loved”.

    The University of Texas at Austin has experienced a wave of new construction recently with several significant buildings. On April 30, 2006, the school opened the Blanton Museum of Art. In August 2008, the AT&T Executive Education and Conference Center opened, with the hotel and conference center forming part of a new gateway to the university. Also in 2008, Darrell K Royal-Texas Memorial Stadium was expanded to a seating capacity of 100,119, making it the largest stadium (by capacity) in the state of Texas at the time.

    On January 19, 2011, the university announced the creation of a 24-hour television network in partnership with ESPN, dubbed the Longhorn Network. ESPN agreed to pay a $300 million guaranteed rights fee over 20 years to the university and to IMG College, the school’s multimedia rights partner. The network covers the university’s intercollegiate athletics, music, cultural arts, and academics programs. The channel first aired in September 2011.

     
  • richardmitnick 4:35 pm on May 22, 2022 Permalink | Reply
    Tags: "CDWs": charge density waves-ripples in the density of electrons in the material, "Superconductivity and charge density waves caught intertwining at the nanoscale", "YBCO": yttrium barium copper oxide, , , , , , , Supercomputing, These LCLS experiments generated terabytes of data-a challenge for processing.,   

    From DOE’s SLAC National Accelerator Laboratory : “Superconductivity and charge density waves caught intertwining at the nanoscale” 

    From DOE’s SLAC National Accelerator Laboratory

    May 20, 2022
    Jennifer Huber


    Credit: Greg Stewart/SLAC National Accelerator Laboratory.

    Scientists discover superconductivity and charge density waves are intrinsically interconnected at the nanoscopic level, a new understanding that could help lead to the next generation of electronics and computers.

    Room-temperature superconductors could transform everything from electrical grids to particle accelerators to computers – but before they can be realized, researchers need to better understand how existing high-temperature superconductors work.

    Now, researchers from the Department of Energy’s SLAC National Accelerator Laboratory, the University of British Columbia, Yale University and others have taken a step in that direction by studying the fast dynamics of a material called yttrium barium copper oxide, or YBCO.

    The team reports May 20 in Science that YBCO’s superconductivity is intertwined in unexpected ways with another phenomenon known as charge density waves (CDWs), or ripples in the density of electrons in the material. As the researchers expected, the CDWs got stronger when they turned off YBCO’s superconductivity. However, they were surprised to find the CDWs also suddenly became more spatially organized, suggesting superconductivity somehow fundamentally shapes the form of the CDWs at the nanoscale.

    “A big part of what we don’t know is the relationship between charge density waves and superconductivity,” said Giacomo Coslovich, a staff scientist at the Department of Energy’s SLAC National Accelerator Laboratory, who led the study. “As one of the cleanest high-temperature superconductors that can be grown, YBCO offers us the opportunity to understand this physics in a very direct way, minimizing the effects of disorder.”

    He added, “If we can better understand these materials, we can make new superconductors that work at higher temperatures, enabling many more applications and potentially addressing a lot of societal challenges – from climate change to energy efficiency to availability of fresh water.”


    The team aimed infrared laser pulses at the YBCO sample to switch off its superconducting state, then used X-ray laser pulses to illuminate the sample and examined the X-ray light scattered from it. Their results revealed that regions of superconductivity and charge density waves were arranged in unexpected ways. (Courtesy Giacomo Coslovich/SLAC National Accelerator Laboratory)

    Observing fast dynamics

    The researchers studied YBCO’s dynamics at SLAC’s Linac Coherent Light Source (LCLS) X-ray laser [below]. They switched off superconductivity in the YBCO samples with infrared laser pulses, and then bounced X-ray pulses off those samples. For each shot of X-rays, the team pieced together a kind of snapshot of the CDWs’ electron ripples. By pasting those together, they recreated the CDWs’ rapid evolution.

    “We did these experiments at the LCLS because we needed ultrashort pulses of X-rays, which can be made at very few places in the world. And we also needed soft X-rays, which have longer wavelengths than typical X-rays, to directly detect the CDWs,” said staff scientist and study co-author Joshua Turner, who is also a researcher at the Stanford Institute for Materials and Energy Sciences. “Plus, the people at LCLS are really great to work with.”

    These LCLS experiments generated terabytes of data, a challenge to process. “Using many hours of supercomputing time, LCLS beamline scientists binned our huge amounts of data into a more manageable form so our algorithms could extract the feature characteristics,” said MengXing (Ketty) Na, a University of British Columbia graduate student and co-author on the project.
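
    As a rough illustration of what that binning step involves, the sketch below groups detector images by pump-probe delay and averages each group. It is only a schematic written for this post, not the LCLS beamline code; the array layout, the picosecond bin width, and the function name are all illustrative assumptions.

    # Minimal sketch (assumed data layout, not the actual LCLS pipeline):
    # average detector images that fall into the same pump-probe delay bin.
    import numpy as np

    def bin_shots_by_delay(images, delays, bin_width_ps=0.1):
        """images: (n_shots, ny, nx) detector frames; delays: (n_shots,) in picoseconds.
        Returns bin centers and the mean image for each non-empty delay bin."""
        images = np.asarray(images, dtype=float)
        delays = np.asarray(delays, dtype=float)
        edges = np.arange(delays.min(), delays.max() + bin_width_ps, bin_width_ps)
        idx = np.clip(np.digitize(delays, edges) - 1, 0, len(edges) - 2)
        centers, binned = [], []
        for i in range(len(edges) - 1):
            mask = idx == i
            if mask.any():                      # skip delay bins with no shots
                centers.append(0.5 * (edges[i] + edges[i + 1]))
                binned.append(images[mask].mean(axis=0))
        return np.array(centers), np.array(binned)

    Each averaged frame then stands in for one “snapshot” of the charge density waves at that delay, which is the kind of reduced form the feature-extraction algorithms described above would work from.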

    The team found that charge density waves within the YBCO samples became more correlated – that is, more electron ripples were periodic or spatially synchronized – after lasers switched off the superconductivity.

    “Doubling the number of waves that are correlated with just a flash of light is quite remarkable, because light typically would produce the opposite effect. We can use light to completely disorder the charge density waves if we push too hard,” Coslovich said.
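
    One standard way to put a number on how “correlated” the ripples are is to fit the charge density wave scattering peak and convert its width into a correlation length; for a Lorentzian peak, the correlation length is roughly the inverse of the half width at half maximum. The sketch below does this on synthetic data only, as an illustration of the general technique rather than the analysis used in the Science paper.

    # Illustrative correlation-length estimate from a CDW scattering peak,
    # assuming a Lorentzian line shape. Synthetic data only.
    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(q, amplitude, q0, hwhm, offset):
        return amplitude / (1.0 + ((q - q0) / hwhm) ** 2) + offset

    def correlation_length(q, intensity):
        """Fit a Lorentzian to I(q); return xi ~ 1/HWHM (in units of 1/q)."""
        p0 = [intensity.max(), q[np.argmax(intensity)],
              0.1 * (q.max() - q.min()), intensity.min()]
        popt, _ = curve_fit(lorentzian, q, intensity, p0=p0)
        return 1.0 / abs(popt[2])

    # A narrower peak after the pump pulse means a longer correlation length,
    # i.e. more spatially synchronized charge density waves.
    q = np.linspace(0.25, 0.37, 200)                  # reciprocal lattice units
    before = lorentzian(q, 1.0, 0.31, 0.020, 0.05)    # broader peak
    after = lorentzian(q, 1.3, 0.31, 0.010, 0.05)     # sharper peak
    print(correlation_length(q, before), correlation_length(q, after))

    In this toy example the correlation length doubles when the peak’s half width halves, mirroring the kind of change Coslovich describes above.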


    Blue areas are superconducting regions, and yellow areas represent charge density waves. After a laser pulse (red), the superconducting regions are rapidly turned off and the charge density waves react by rearranging their pattern, becoming more orderly and coherent. (Greg Stewart/SLAC National Accelerator Laboratory)

    To explain these experimental observations, the researchers then modeled how regions of CDWs and superconductivity ought to interact given a variety of underlying assumptions about how YBCO works. For example, their initial model assumed that a uniform region of superconductivity, when shut off with light, would become a uniform CDW region, but that did not agree with their results.

    “The model that best fits our data so far indicates that superconductivity is acting like a defect within a pattern of the waves. This suggests that superconductivity and charge density waves like to be arranged in a very specific, nanoscopic way,” explained Coslovich. “They are intertwined orders at the length scale of the waves themselves.”

    Illuminating the future

    Coslovich said that being able to turn superconductivity off with light pulses was a significant advance, enabling observations on the time scale of less than a trillionth of a second, with major advantages over previous approaches.

    “When you use other methods, like applying a high magnetic field, you have to wait a long time before making measurements, so CDWs rearrange around disorder and other phenomena can take place in the sample,” he said. “Using light allowed us to show this is an intrinsic effect, a real connection between superconductivity and charge density waves.”

    The research team is excited to expand on this pivotal work, Turner said. First, they want to study how the CDWs become more organized when the superconductivity is shut off with light. They are also planning to tune the laser’s wavelength or polarization in future LCLS experiments in hopes of using light to enhance, rather than quench, the superconducting state, so that they could readily switch it off and on.

    “There is an overall interest in trying to do this with pulses of light on very fast time scales, because that can potentially lead to the development of superconducting, light-controlled devices for the new generation of electronics and computing,” said Coslovich. “Ultimately, this work can also help guide people who are trying to build room-temperature superconductors.”

    This research is part of a collaboration between researchers from LCLS, SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL), UBC, Yale University, the Institut National de la Recherche Scientifique in Canada, North Carolina State University, Universita Cattolica di Brescia and other institutions. This work was funded in part by the DOE Office of Science. LCLS and SSRL are DOE Office of Science user facilities.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    SLAC National Accelerator Laboratory, originally named the Stanford Linear Accelerator Center, is a United States Department of Energy National Laboratory operated by Stanford University under the programmatic direction of the U.S. Department of Energy Office of Science and located in Menlo Park, California. It is the site of the Stanford Linear Accelerator, a 3.2 kilometer (2-mile) linear accelerator constructed in 1966 and shut down in the 2000s, which could accelerate electrons to energies of 50 GeV.

    Today SLAC research centers on a broad program in atomic and solid-state physics, chemistry, biology, and medicine using X-rays from synchrotron radiation and a free-electron laser as well as experimental and theoretical research in elementary particle physics, astroparticle physics, and cosmology.

    Founded in 1962 as the Stanford Linear Accelerator Center, the facility is located on 172 hectares (426 acres) of Stanford University-owned land on Sand Hill Road in Menlo Park, California—just west of the University’s main campus. The main accelerator is 3.2 kilometers (2 mi) long—the longest linear accelerator in the world—and has been operational since 1966.

    Research at SLAC has produced three Nobel Prizes in Physics

    1976: The charm quark—see J/ψ meson
    1990: Quark structure inside protons and neutrons
    1995: The tau lepton

    SLAC’s meeting facilities also provided a venue for the Homebrew Computer Club and other pioneers of the home computer revolution of the late 1970s and early 1980s.

    In 1984 the laboratory was named an ASME National Historic Engineering Landmark and an IEEE Milestone.

    SLAC developed and, in December 1991, began hosting the first World Wide Web server outside of Europe.

    In the early-to-mid 1990s, the Stanford Linear Collider (SLC) investigated the properties of the Z boson using the Stanford Large Detector.

    As of 2005, SLAC employed over 1,000 people, some 150 of whom were physicists with doctorate degrees, and served over 3,000 visiting researchers yearly, operating particle accelerators for high-energy physics and the Stanford Synchrotron Radiation Laboratory (SSRL) for synchrotron light radiation research, which was “indispensable” in the research leading to the 2006 Nobel Prize in Chemistry awarded to Stanford Professor Roger D. Kornberg.

    In October 2008, the Department of Energy announced that the center’s name would be changed to SLAC National Accelerator Laboratory. The reasons given include a better representation of the new direction of the lab and the ability to trademark the laboratory’s name. Stanford University had legally opposed the Department of Energy’s attempt to trademark “Stanford Linear Accelerator Center”.

    In March 2009, it was announced that the SLAC National Accelerator Laboratory was to receive $68.3 million in Recovery Act Funding to be disbursed by Department of Energy’s Office of Science.

    In October 2016, Bits and Watts launched as a collaboration between SLAC and Stanford University to design “better, greener electric grids”. SLAC later pulled out over concerns about an industry partner, the state-owned Chinese electric utility.

    Accelerator

    The main accelerator was an RF linear accelerator that accelerated electrons and positrons up to 50 GeV. At 3.2 km (2.0 mi) long, the accelerator was the longest linear accelerator in the world and was claimed to be “the world’s most straight object” until 2017, when the European X-ray free-electron laser opened. The main accelerator is buried 9 m (30 ft) below ground and passes underneath Interstate Highway 280. The above-ground klystron gallery atop the beamline was the longest building in the United States until the LIGO project’s twin interferometers were completed in 1999. It is easily distinguishable from the air and is marked as a visual waypoint on aeronautical charts.

    A portion of the original linear accelerator is now part of the Linac Coherent Light Source [below].

    Stanford Linear Collider

    The Stanford Linear Collider was a linear accelerator that collided electrons and positrons at SLAC. The center of mass energy was about 90 GeV, equal to the mass of the Z boson, which the accelerator was designed to study. Grad student Barrett D. Milliken discovered the first Z event on 12 April 1989 while poring over the previous day’s computer data from the Mark II detector. The bulk of the data was collected by the SLAC Large Detector, which came online in 1991. Although largely overshadowed by the Large Electron–Positron Collider at CERN, which began running in 1989, the highly polarized electron beam at SLC (close to 80%) made certain unique measurements possible, such as parity violation in Z Boson-b quark coupling.

    At present, no beam enters the south and north arcs of the machine, which lead to the Final Focus; this section is therefore mothballed so that beam can be run from the beam switchyard into the PEP2 section.

    The SLAC Large Detector (SLD) was the main detector for the Stanford Linear Collider. It was designed primarily to detect Z bosons produced by the accelerator’s electron-positron collisions. Built in 1991, the SLD operated from 1992 to 1998.

    SLAC National Accelerator Laboratory Large Detector

    PEP

    PEP (Positron-Electron Project) began operation in 1980, with center-of-mass energies up to 29 GeV. At its apex, PEP had five large particle detectors in operation, as well as a sixth smaller detector. About 300 researchers made use of PEP. PEP stopped operating in 1990, and PEP-II began construction in 1994.

    PEP-II

    From 1999 to 2008, the main purpose of the linear accelerator was to inject electrons and positrons into the PEP-II accelerator, an electron-positron collider with a pair of storage rings 2.2 km (1.4 mi) in circumference. PEP-II was host to the BaBar experiment, one of the so-called B-Factory experiments studying charge-parity symmetry.

    SLAC National Accelerator Laboratory BaBar

    Fermi Gamma-ray Space Telescope

    SLAC plays a primary role in the mission and operation of the Fermi Gamma-ray Space Telescope, launched in August 2008. The principal scientific objectives of this mission are:

    To understand the mechanisms of particle acceleration in AGNs, pulsars, and SNRs.
    To resolve the gamma-ray sky: unidentified sources and diffuse emission.
    To determine the high-energy behavior of gamma-ray bursts and transients.
    To probe dark matter and fundamental physics.


    KIPAC

    The Stanford PULSE Institute (PULSE) is a Stanford Independent Laboratory located in the Central Laboratory at SLAC. PULSE was created by Stanford in 2005 to help Stanford faculty and SLAC scientists develop ultrafast x-ray research at LCLS.

    The Linac Coherent Light Source (LCLS)[below] is a free electron laser facility located at SLAC. The LCLS is partially a reconstruction of the last 1/3 of the original linear accelerator at SLAC, and can deliver extremely intense x-ray radiation for research in a number of areas. It achieved first lasing in April 2009.

    The laser produces hard X-rays with about 10^9 times the brightness of traditional synchrotron sources and is the most powerful X-ray source in the world. LCLS enables a variety of new experiments and provides enhancements for existing experimental methods. Often, X-rays are used to take “snapshots” of objects at the atomic level before the samples are obliterated. The laser’s wavelength, ranging from 6.2 to 0.13 nm (200 to 9500 electron volts (eV)), is similar to the width of an atom, providing extremely detailed information that was previously unattainable. Additionally, the laser can capture images with a “shutter speed” measured in femtoseconds, or million-billionths of a second, which is necessary because the intensity of the beam is often high enough that the sample explodes on the femtosecond timescale.
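
    The quoted wavelength and photon-energy ranges are tied together by the usual relation E = hc/λ. As a quick consistency check (an illustrative calculation added for this post, not taken from the article):

    # Convert the quoted LCLS wavelength range to photon energy using
    # E[eV] ~ 1239.84 / wavelength[nm] (hc expressed in eV*nm).
    HC_EV_NM = 1239.84

    for wavelength_nm in (6.2, 0.13):
        energy_ev = HC_EV_NM / wavelength_nm
        print(f"{wavelength_nm} nm -> {energy_ev:.0f} eV")
    # 6.2 nm -> 200 eV; 0.13 nm -> about 9540 eV, i.e. roughly the 9500 eV quoted above.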

    The LCLS-II [below] project will provide a major upgrade to LCLS by adding two new X-ray laser beams. The new system will utilize the 500 m (1,600 ft) of existing tunnel to add a new superconducting accelerator at 4 GeV and two new sets of undulators that will increase the available energy range of LCLS. Advances from discoveries enabled by these new capabilities may include new drugs, next-generation computers, and new materials.

    FACET

    In 2012, the first two-thirds (~2 km) of the original SLAC LINAC were recommissioned for a new user facility, the Facility for Advanced Accelerator Experimental Tests (FACET). This facility was capable of delivering 20 GeV, 3 nC electron (and positron) beams with short bunch lengths and small spot sizes, ideal for beam-driven plasma acceleration studies. The facility ended operations in 2016 for the construction of LCLS-II, which will occupy the first third of the SLAC LINAC. The FACET-II project will re-establish electron and positron beams in the middle third of the LINAC for the continuation of beam-driven plasma acceleration studies in 2019.

    The Next Linear Collider Test Accelerator (NLCTA) is a 60-120 MeV high-brightness electron beam linear accelerator used for experiments on advanced beam manipulation and acceleration techniques. It is located at SLAC’s end station B.

    SSRL and LCLS are DOE Office of Science user facilities.

    Stanford University

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

    Stanford University, officially Leland Stanford Junior University, is a private research university located in Stanford, California. Stanford was founded in 1885 by Leland and Jane Stanford in memory of their only child, Leland Stanford Jr., who had died of typhoid fever at age 15 the previous year. Stanford is consistently ranked as among the most prestigious and top universities in the world by major education publications. It is also one of the top fundraising institutions in the country, becoming the first school to raise more than a billion dollars in a year.

    Leland Stanford was a U.S. senator and former governor of California who made his fortune as a railroad tycoon. The school admitted its first students on October 1, 1891, as a coeducational and non-denominational institution. Stanford University struggled financially after the death of Leland Stanford in 1893 and again after much of the campus was damaged by the 1906 San Francisco earthquake. Following World War II, provost Frederick Terman supported faculty and graduates’ entrepreneurialism to build self-sufficient local industry in what would later be known as Silicon Valley.

    The university is organized around seven schools: three schools consisting of 40 academic departments at the undergraduate level as well as four professional schools that focus on graduate programs in law, medicine, education, and business. All schools are on the same campus. Students compete in 36 varsity sports, and the university is one of two private institutions in the Division I FBS Pac-12 Conference. It has won 126 NCAA team championships, and Stanford has won the NACDA Directors’ Cup for 24 consecutive years, beginning in 1994–1995. In addition, Stanford students and alumni have won 270 Olympic medals including 139 gold medals.

    As of October 2020, 84 Nobel laureates, 28 Turing Award laureates, and eight Fields Medalists have been affiliated with Stanford as students, alumni, faculty, or staff. In addition, Stanford is particularly noted for its entrepreneurship and is one of the most successful universities in attracting funding for start-ups. Stanford alumni have founded numerous companies, which combined produce more than $2.7 trillion in annual revenue, roughly equivalent to the 7th largest economy in the world (as of 2020). Stanford is the alma mater of one president of the United States (Herbert Hoover), 74 living billionaires, and 17 astronauts. It is also one of the leading producers of Fulbright Scholars, Marshall Scholars, Rhodes Scholars, and members of the United States Congress.

    Stanford University was founded in 1885 by Leland and Jane Stanford, dedicated to Leland Stanford Jr, their only child. The institution opened in 1891 on Stanford’s previous Palo Alto farm.

    Jane and Leland Stanford modeled their university after the great eastern universities, most specifically Cornell University. When Stanford opened in 1891, it was called the “Cornell of the West” because much of its faculty had Cornell ties (as professors, alumni, or both), including its first president, David Starr Jordan, and second president, John Casper Branner. Both Cornell and Stanford were among the first to make higher education accessible, nonsectarian, and open to women as well as to men. Cornell is credited as one of the first American universities to adopt this radical departure from traditional education, and Stanford became an early adopter as well.

    The campus was damaged by earthquakes in both 1906 and 1989 but was rebuilt each time. In 1919, The Hoover Institution on War, Revolution and Peace was started by Herbert Hoover to preserve artifacts related to World War I. The Stanford Medical Center, completed in 1959, is a teaching hospital with over 800 beds. The DOE’s SLAC National Accelerator Laboratory (originally named the Stanford Linear Accelerator Center), established in 1962, performs research in particle physics.

    Land

    Most of Stanford is on an 8,180-acre (12.8 sq mi; 33.1 km^2) campus, one of the largest in the United States. It is located on the San Francisco Peninsula, in the northwest part of the Santa Clara Valley (Silicon Valley) approximately 37 miles (60 km) southeast of San Francisco and approximately 20 miles (30 km) northwest of San Jose. In 2008, 60% of this land remained undeveloped.

    Stanford’s main campus includes a census-designated place within unincorporated Santa Clara County, although some of the university land (such as the Stanford Shopping Center and the Stanford Research Park) is within the city limits of Palo Alto. The campus also includes much land in unincorporated San Mateo County (including the SLAC National Accelerator Laboratory and the Jasper Ridge Biological Preserve), as well as in the city limits of Menlo Park (Stanford Hills neighborhood), Woodside, and Portola Valley.

    Non-central campus

    Stanford currently operates in various locations outside of its central campus.

    On the founding grant:

    Jasper Ridge Biological Preserve is a 1,200-acre (490 ha) natural reserve south of the central campus owned by the university and used by wildlife biologists for research.
    SLAC National Accelerator Laboratory is a facility west of the central campus operated by the university for the Department of Energy. It contains the longest linear particle accelerator in the world, 2 miles (3.2 km) on 426 acres (172 ha) of land.
    Golf course and a seasonal lake: The university also has its own golf course and a seasonal lake (Lake Lagunita, actually an irrigation reservoir), both home to the vulnerable California tiger salamander. As of 2012 Lake Lagunita was often dry and the university had no plans to artificially fill it.

    Off the founding grant:

    Hopkins Marine Station, in Pacific Grove, California, is a marine biology research center owned by the university since 1892.
    Study abroad locations: unlike typical study abroad programs, Stanford itself operates in several locations around the world; thus, each location has Stanford faculty-in-residence and staff in addition to students, creating a “mini-Stanford”.

    Redwood City campus for many of the university’s administrative offices located in Redwood City, California, a few miles north of the main campus. In 2005, the university purchased a small, 35-acre (14 ha) campus in Midpoint Technology Park intended for staff offices; development was delayed by The Great Recession. In 2015 the university announced a development plan and the Redwood City campus opened in March 2019.

    The Bass Center in Washington, DC provides a base, including housing, for the Stanford in Washington program for undergraduates. It includes a small art gallery open to the public.

    China: Stanford Center at Peking University, housed in the Lee Jung Sen Building, is a small center for researchers and students in collaboration with Peking University [北京大学] (CN) and its Kavli Institute for Astronomy and Astrophysics (KIAA-PKU).

    Administration and organization

    Stanford is a private, non-profit university that is administered as a corporate trust governed by a privately appointed board of trustees with a maximum membership of 38. Trustees serve five-year terms (not more than two consecutive terms) and meet five times annually.[83] A new trustee is chosen by the current trustees by ballot. The Stanford trustees also oversee the Stanford Research Park, the Stanford Shopping Center, the Cantor Center for Visual Arts, Stanford University Medical Center, and many associated medical facilities (including the Lucile Packard Children’s Hospital).

    The board appoints a president to serve as the chief executive officer of the university, to prescribe the duties of professors and course of study, to manage financial and business affairs, and to appoint nine vice presidents. The provost is the chief academic and budget officer, to whom the deans of each of the seven schools report. Persis Drell became the 13th provost in February 2017.

    As of 2018, the university was organized into seven academic schools. The schools of Humanities and Sciences (27 departments), Engineering (nine departments), and Earth, Energy & Environmental Sciences (four departments) have both graduate and undergraduate programs while the Schools of Law, Medicine, Education and Business have graduate programs only. The powers and authority of the faculty are vested in the Academic Council, which is made up of tenure and non-tenure line faculty, research faculty, senior fellows in some policy centers and institutes, the president of the university, and some other academic administrators, but most matters are handled by the Faculty Senate, made up of 55 elected representatives of the faculty.

    The Associated Students of Stanford University (ASSU) is the student government for Stanford and all registered students are members. Its elected leadership consists of the Undergraduate Senate elected by the undergraduate students, the Graduate Student Council elected by the graduate students, and the President and Vice President elected as a ticket by the entire student body.

    Stanford is the beneficiary of a special clause in the California Constitution, which explicitly exempts Stanford property from taxation so long as the property is used for educational purposes.

    Endowment and donations

    The university’s endowment, managed by the Stanford Management Company, was valued at $27.7 billion as of August 31, 2019. Payouts from the Stanford endowment covered approximately 21.8% of university expenses in the 2019 fiscal year. In the 2018 NACUBO-TIAA survey of colleges and universities in the United States and Canada, only Harvard University, the University of Texas System, and Yale University had larger endowments than Stanford.

    In 2006, President John L. Hennessy launched a five-year campaign called the Stanford Challenge, which reached its $4.3 billion fundraising goal in 2009, two years ahead of time, but continued fundraising for the duration of the campaign. It concluded on December 31, 2011, having raised a total of $6.23 billion and breaking the previous campaign fundraising record of $3.88 billion held by Yale. Specifically, the campaign raised $253.7 million for undergraduate financial aid, as well as $2.33 billion for its initiative in “Seeking Solutions” to global problems, $1.61 billion for “Educating Leaders” by improving K-12 education, and $2.11 billion for “Foundation of Excellence” aimed at providing academic support for Stanford students and faculty. Funds supported 366 new fellowships for graduate students, 139 new endowed chairs for faculty, and 38 new or renovated buildings. The new funding also enabled the construction of a facility for stem cell research; a new campus for the business school; an expansion of the law school; a new Engineering Quad; a new art and art history building; an on-campus concert hall; a new art museum; and a planned expansion of the medical school, among other things. In 2012, the university raised $1.035 billion, becoming the first school to raise more than a billion dollars in a year.

    Research centers and institutes

    DOE’s SLAC National Accelerator Laboratory
    Stanford Research Institute, a center of innovation to support economic development in the region.
    Hoover Institution, a conservative American public policy and research institution that promotes personal and economic liberty, free enterprise, and limited government.
    Hasso Plattner Institute of Design, a multidisciplinary design school in cooperation with the Hasso Plattner Institute of University of Potsdam [Universität Potsdam](DE) that integrates product design, engineering, and business management education.
    Martin Luther King Jr. Research and Education Institute, which grew out of and still contains the Martin Luther King Jr. Papers Project.
    John S. Knight Fellowship for Professional Journalists
    Center for Ocean Solutions
    Together with The University of California- Berkeley and UC San Francisco, Stanford is part of the Biohub, a new medical science research center founded in 2016 by a $600 million commitment from Facebook CEO and founder Mark Zuckerberg and pediatrician Priscilla Chan.

    Discoveries and innovation

    Natural sciences

    Biological synthesis of deoxyribonucleic acid (DNA) – Arthur Kornberg synthesized DNA material and won the Nobel Prize in Physiology or Medicine 1959 for his work at Stanford.
    First Transgenic organism – Stanley Cohen and Herbert Boyer were the first scientists to transplant genes from one living organism to another, a fundamental discovery for genetic engineering. Thousands of products have been developed on the basis of their work, including human growth hormone and hepatitis B vaccine.
    Laser – Arthur Leonard Schawlow shared the 1981 Nobel Prize in Physics with Nicolaas Bloembergen and Kai Siegbahn for his work on lasers.
    Nuclear magnetic resonance – Felix Bloch developed new methods for nuclear magnetic precision measurements, which are the underlying principles of the MRI.

    Computer and applied sciences

    ARPANET – Stanford Research Institute, formerly part of Stanford but on a separate campus, was the site of one of the four original ARPANET nodes.

    Internet—Stanford was the site where the original design of the Internet was undertaken. Vint Cerf led a research group to elaborate the design of the Transmission Control Protocol (TCP/IP) that he originally co-created with Robert E. Kahn (Bob Kahn) in 1973 and which formed the basis for the architecture of the Internet.

    Frequency modulation synthesis – John Chowning of the Music department invented the FM music synthesis algorithm in 1967, and Stanford later licensed it to Yamaha Corporation.

    Google – Google began in January 1996 as a research project by Larry Page and Sergey Brin when they were both PhD students at Stanford. They were working on the Stanford Digital Library Project (SDLP). The SDLP’s goal was “to develop the enabling technologies for a single, integrated and universal digital library” and it was funded through the National Science Foundation, among other federal agencies.

    Klystron tube – invented by the brothers Russell and Sigurd Varian at Stanford. Their prototype was completed and demonstrated successfully on August 30, 1937. Upon publication in 1939, news of the klystron immediately influenced the work of U.S. and UK researchers working on radar equipment.

    RISC – ARPA-funded VLSI project of microprocessor design. Stanford and UC Berkeley are most associated with the popularization of this concept. The Stanford MIPS would go on to be commercialized as the successful MIPS architecture, while Berkeley RISC gave its name to the entire concept, commercialized as the SPARC. Another success from this era was IBM’s effort that eventually led to the IBM POWER instruction set architecture, PowerPC, and Power ISA. As these projects matured, a wide variety of similar designs flourished in the late 1980s and especially the early 1990s, representing a major force in the Unix workstation market as well as in embedded processors in laser printers, routers, and similar products.
    SUN workstation – Andy Bechtolsheim designed the SUN workstation for the Stanford University Network communications project as a personal CAD workstation, which led to Sun Microsystems.

    Businesses and entrepreneurship

    Stanford is one of the most successful universities in creating companies and licensing its inventions to existing companies; it is often held up as a model for technology transfer. Stanford’s Office of Technology Licensing is responsible for commercializing university research, intellectual property, and university-developed projects.

    The university is described as having a strong venture culture in which students are encouraged, and often funded, to launch their own companies.

    Companies founded by Stanford alumni generate more than $2.7 trillion in annual revenue, equivalent to the 10th-largest economy in the world.

    Some companies closely associated with Stanford and their connections include:

    Hewlett-Packard, 1939, co-founders William R. Hewlett (B.S, PhD) and David Packard (M.S).
    Silicon Graphics, 1981, co-founders James H. Clark (Associate Professor) and several of his grad students.
    Sun Microsystems, 1982, co-founders Vinod Khosla (M.B.A), Andy Bechtolsheim (PhD) and Scott McNealy (M.B.A).
    Cisco, 1984, founders Leonard Bosack (M.S) and Sandy Lerner (M.S) who were in charge of Stanford Computer Science and Graduate School of Business computer operations groups respectively when the hardware was developed.[163]
    Yahoo!, 1994, co-founders Jerry Yang (B.S, M.S) and David Filo (M.S).
    Google, 1998, co-founders Larry Page (M.S) and Sergey Brin (M.S).
    LinkedIn, 2002, co-founders Reid Hoffman (B.S), Konstantin Guericke (B.S, M.S), Eric Lee (B.S), and Alan Liu (B.S).
    Instagram, 2010, co-founders Kevin Systrom (B.S) and Mike Krieger (B.S).
    Snapchat, 2011, co-founders Evan Spiegel and Bobby Murphy (B.S).
    Coursera, 2012, co-founders Andrew Ng (Associate Professor) and Daphne Koller (Professor, PhD).

    Student body

    Stanford enrolled 6,996 undergraduate and 10,253 graduate students as of the 2019–2020 school year. Women comprised 50.4% of undergraduates and 41.5% of graduate students. In the same academic year, the freshman retention rate was 99%.

    Stanford awarded 1,819 undergraduate degrees, 2,393 master’s degrees, 770 doctoral degrees, and 3270 professional degrees in the 2018–2019 school year. The four-year graduation rate for the class of 2017 cohort was 72.9%, and the six-year rate was 94.4%. The relatively low four-year graduation rate is a function of the university’s coterminal degree (or “coterm”) program, which allows students to earn a master’s degree as a 1-to-2-year extension of their undergraduate program.

    As of 2010, fifteen percent of undergraduates were first-generation students.

    Athletics

    As of 2016 Stanford had 16 male varsity sports and 20 female varsity sports, 19 club sports and about 27 intramural sports. In 1930, following a unanimous vote by the Executive Committee for the Associated Students, the athletic department adopted the mascot “Indian.” The Indian symbol and name were dropped by President Richard Lyman in 1972, after objections from Native American students and a vote by the student senate. The sports teams are now officially referred to as the “Stanford Cardinal,” referring to the deep red color, not the cardinal bird. Stanford is a member of the Pac-12 Conference in most sports, the Mountain Pacific Sports Federation in several other sports, and the America East Conference in field hockey, and it participates in the NCAA’s Division I FBS.

    Its traditional sports rival is The University of California-Berkeley, the neighbor to the north in the East Bay. The winner of the annual “Big Game” between the Cal and Cardinal football teams gains custody of the Stanford Axe.

    Stanford has had at least one NCAA team champion every year since the 1976–77 school year and has earned 126 NCAA national team titles since its establishment, the most among universities, and Stanford has won 522 individual national championships, the most by any university. Stanford has won the award for the top-ranked Division 1 athletic program—the NACDA Directors’ Cup, formerly known as the Sears Cup—annually for the past twenty-four straight years. Stanford athletes have won medals in every Olympic Games since 1912, winning 270 Olympic medals total, 139 of them gold. In the 2008 Summer Olympics, and 2016 Summer Olympics, Stanford won more Olympic medals than any other university in the United States. Stanford athletes won 16 medals at the 2012 Summer Olympics (12 gold, two silver and two bronze), and 27 medals at the 2016 Summer Olympics.

    Traditions

    The unofficial motto of Stanford, selected by President Jordan, is Die Luft der Freiheit weht. Translated from the German language, this quotation from Ulrich von Hutten means, “The wind of freedom blows.” The motto was controversial during World War I, when anything in German was suspect; at that time the university disavowed that this motto was official.
    Hail, Stanford, Hail! is the Stanford Hymn sometimes sung at ceremonies or adapted by the various University singing groups. It was written in 1892 by mechanical engineering professor Albert W. Smith and his wife, Mary Roberts Smith (in 1896 she earned the first Stanford doctorate in Economics and later became associate professor of Sociology), but was not officially adopted until after a performance on campus in March 1902 by the Mormon Tabernacle Choir.
    “Uncommon Man/Uncommon Woman”: Stanford does not award honorary degrees, but in 1953 the degree of “Uncommon Man/Uncommon Woman” was created to recognize individuals who give rare and extraordinary service to the University. Technically, this degree is awarded by the Stanford Associates, a voluntary group that is part of the university’s alumni association. As Stanford’s highest honor, it is not conferred at prescribed intervals, but only when appropriate to recognize extraordinary service. Recipients include Herbert Hoover, Bill Hewlett, Dave Packard, Lucile Packard, and John Gardner.
    Big Game events: The events in the week leading up to the Big Game vs. UC Berkeley, including Gaieties (a musical written, composed, produced, and performed by the students of Ram’s Head Theatrical Society).
    “Viennese Ball”: a formal ball with waltzes that was initially started in the 1970s by students returning from the now-closed Stanford in Vienna overseas program. It is now open to all students.
    “Full Moon on the Quad”: An annual event at Main Quad, where students gather to kiss one another starting at midnight. Typically organized by the Junior class cabinet, the festivities include live entertainment, such as music and dance performances.
    “Band Run”: An annual festivity at the beginning of the school year, where the band picks up freshmen from dorms across campus while stopping to perform at each location, culminating in a finale performance at Main Quad.
    “Mausoleum Party”: An annual Halloween Party at the Stanford Mausoleum, the final resting place of Leland Stanford Jr. and his parents. A 20-year tradition, the “Mausoleum Party” was on hiatus from 2002 to 2005 due to a lack of funding, but was revived in 2006. In 2008, it was hosted in Old Union rather than at the actual Mausoleum, because rain prohibited generators from being rented. In 2009, after fundraising efforts by the Junior Class Presidents and the ASSU Executive, the event was able to return to the Mausoleum despite facing budget cuts earlier in the year.
    Former campus traditions include the “Big Game bonfire” on Lake Lagunita (a seasonal lake usually dry in the fall), which was formally ended in 1997 because of the presence of endangered salamanders in the lake bed.

    Award laureates and scholars

    Stanford’s current community of scholars includes:

    19 Nobel Prize laureates (as of October 2020, 85 affiliates in total)
    171 members of the National Academy of Sciences
    109 members of National Academy of Engineering
    76 members of National Academy of Medicine
    288 members of the American Academy of Arts and Sciences
    19 recipients of the National Medal of Science
    1 recipient of the National Medal of Technology
    4 recipients of the National Humanities Medal
    49 members of American Philosophical Society
    56 fellows of the American Physical Society (since 1995)
    4 Pulitzer Prize winners
    31 MacArthur Fellows
    4 Wolf Foundation Prize winners
    2 ACL Lifetime Achievement Award winners
    14 AAAI fellows
    2 Presidential Medal of Freedom winners

    Stanford University Seal

     
  • richardmitnick 11:42 am on May 20, 2022 Permalink | Reply
    Tags: "Is it topological? A new materials database has the answer", , Electron band structure, , Researchers at MIT and elsewhere have discovered that-in fact-topological materials are everywhere if you know how to look for them., Scientists found that 90 percent of all known crystalline structures contain at least one topological property., Supercomputing, , The new study was motivated by a desire to speed up the traditional search for topological materials., Topological quantum chemistry, Topology stems from a branch of mathematics that studies shapes that can be manipulated or deformed without losing certain core properties., What will it take to make our electronics smarter faster and more resilient? One idea is to build them from materials that are topological.   

    From The Massachusetts Institute of Technology: “Is it topological? A new materials database has the answer” 

    From The Massachusetts Institute of Technology

    May 19, 2022
    Jennifer Chu

    Searchable tool reveals more than 90,000 known materials with electronic properties that remain unperturbed in the face of disruption. Image: Christine Daniloff, MIT.

    What will it take to make our electronics smarter, faster, and more resilient? One idea is to build them from materials that are topological.

    Topology stems from a branch of mathematics that studies shapes that can be manipulated or deformed without losing certain core properties. A donut is a common example: If it were made of rubber, a donut could be twisted and squeezed into a completely new shape, such as a coffee mug, while retaining a key trait — namely, its center hole, which takes the form of the cup’s handle. The hole, in this case, is a topological trait, robust against certain deformations.

    In recent years, scientists have applied concepts of topology to the discovery of materials with similarly robust electronic properties. In 2007, researchers predicted the first electronic topological insulators: materials in which electrons behave in ways that are “topologically protected,” or persistent in the face of certain disruptions.

    Since then, scientists have searched for more topological materials with the aim of building better, more robust electronic devices. Until recently, only a handful of such materials had been identified, and topological materials were therefore assumed to be a rarity.

    Now researchers at MIT and elsewhere have discovered that, in fact, topological materials are everywhere, if you know how to look for them.

    In a paper published today in Science, the team, led by Nicolas Regnault of Princeton University and the École Normale Supérieure Paris, reports harnessing the power of multiple supercomputers to map the electronic structure of more than 96,000 natural and synthetic crystalline materials. They applied sophisticated filters to determine whether and what kind of topological traits exist in each structure.

    Overall, they found that 90 percent of all known crystalline structures contain at least one topological property, and more than 50 percent of all naturally occurring materials exhibit some sort of topological behavior.

    “We found there’s a ubiquity — topology is everywhere,” says Benjamin Wieder, the study’s co-lead author and a postdoc in MIT’s Department of Physics.

    The team has compiled the newly identified materials into a new, freely accessible Topological Materials Database resembling a periodic table of topology. With this new library, scientists can quickly search materials of interest for any topological properties they might hold, and harness them to build ultra-low-power transistors, new magnetic memory storage, and other devices with robust electronic properties.

    The paper’s co-authors include co-lead author Maia Vergniory of the Donostia International Physics Center, Luis Elcoro of the University of the Basque Country, Stuart Parkin and Claudia Felser of the Max Planck Institute, and Andrei Bernevig of Princeton University.

    Beyond intuition

    The new study was motivated by a desire to speed up the traditional search for topological materials.

    “The way the original materials were found was through chemical intuition,” Wieder says. “That approach had a lot of early successes. But as we theoretically predicted more kinds of topological phases, it seemed intuition wasn’t getting us very far.”

    Wieder and his colleagues instead utilized an efficient and systematic method to root out signs of topology, or robust electronic behavior, in all known crystalline structures, also known as inorganic solid-state materials.

    For their study, the researchers looked to the Inorganic Crystal Structure Database, or ICSD, a repository into which researchers enter the atomic and chemical structures of crystalline materials that they have studied. The database includes materials found in nature, as well as those that have been synthesized and manipulated in the lab. The ICSD is currently the largest materials database in the world, containing over 193,000 crystals whose structures have been mapped and characterized.

    The team downloaded the entire ICSD, and after performing some data cleaning to weed out structures with corrupted files or incomplete data, the researchers were left with just over 96,000 processable structures. For each of these structures, they performed a set of calculations based on fundamental knowledge of the relation between chemical constituents, to produce a map of the material’s electronic structure, also known as the electron band structure.
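
    The data-cleaning step described above is essentially a filter over the downloaded records: keep an entry only if it carries a complete, well-formed crystal structure, and only then hand it to the expensive band-structure calculation. The sketch below illustrates that idea with a made-up record format and validity check; it is not the project’s actual pipeline, and the class and field names are hypothetical.

    # Schematic of the "weed out corrupted or incomplete entries" step
    # (illustrative only; the record format and checks are assumptions).
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class CrystalEntry:
        identifier: str
        lattice: Optional[List[List[float]]]        # 3x3 lattice vectors, or None if missing
        species: List[str] = field(default_factory=list)
        positions: List[List[float]] = field(default_factory=list)

    def is_processable(entry: CrystalEntry) -> bool:
        """Keep a record only if it has a lattice and matching species/positions."""
        return (
            entry.lattice is not None
            and len(entry.lattice) == 3
            and len(entry.species) > 0
            and len(entry.species) == len(entry.positions)
        )

    entries = [
        CrystalEntry("NaCl-toy", [[5.6, 0, 0], [0, 5.6, 0], [0, 0, 5.6]],
                     ["Na", "Cl"], [[0, 0, 0], [0.5, 0.5, 0.5]]),
        CrystalEntry("broken-record", None),
    ]
    kept = [e for e in entries if is_processable(e)]
    print(len(kept), "of", len(entries), "entries kept")   # 1 of 2 entries kept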

    The team was able to efficiently carry out the complicated calculations for each structure using multiple supercomputers, which they then employed to perform a second set of operations, this time to screen for various known topological phases, or persistent electrical behavior in each crystal material.

    “We’re looking for signatures in the electronic structure in which certain robust phenomena should occur in this material,” explains Wieder, whose previous work involved refining and expanding the screening technique, known as topological quantum chemistry.
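
    To give a flavor of what such a symmetry-based screen looks like, the sketch below implements one classic, simplified diagnostic: the Fu-Kane parity criterion, which decides whether a 3D inversion-symmetric insulator is a strong topological insulator from the inversion parities of its occupied bands at the eight time-reversal-invariant momenta. The full topological quantum chemistry screening used in the study is far more general; this toy version is included only to illustrate the kind of filter that runs on each computed band structure.

    # Simplified symmetry-indicator check (Fu-Kane parity criterion), included
    # as an illustration of band-structure "filters" for topology. Not the
    # actual screening code used in the study.

    def z2_strong_invariant(parities_at_trim):
        """parities_at_trim maps each of the 8 TRIM points to a list of +1/-1
        inversion parities, one per occupied Kramers pair. Returns nu = 0
        (trivial) or nu = 1 (strong topological insulator)."""
        product = 1
        for parities in parities_at_trim.values():
            for p in parities:
                product *= p
        return 0 if product == 1 else 1

    # Toy usage: two occupied Kramers pairs at each TRIM point. Flipping a
    # single parity at one TRIM (a "band inversion") makes the material a
    # strong topological insulator.
    trivial = {k: [+1, -1] for k in range(8)}
    inverted = {**trivial, 0: [+1, +1]}
    print(z2_strong_invariant(trivial), z2_strong_invariant(inverted))   # 0 1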

    From their high-throughput analysis, the team quickly discovered a surprisingly large number of materials that are naturally topological, without any experimental manipulation, as well as materials that can be manipulated, for instance with light or chemical doping, to exhibit some sort of robust electronic behavior. They also discovered a handful of materials that contained more than one topological state when exposed to certain conditions.

    “Topological phases of matter in 3D solid-state materials have been proposed as venues for observing and manipulating exotic effects, including the interconversion of electrical current and electron spin, the tabletop simulation of exotic theories from high-energy physics, and even, under the right conditions, the storage and manipulation of quantum information,” Wieder notes.

    For experimentalists who are studying such effects, Wieder says the team’s new database now reveals a menagerie of new materials to explore.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    MIT Seal

    USPS “Forever” postage stamps celebrating Innovation at MIT.

    MIT Campus

    The Massachusetts Institute of Technology is a private land-grant research university in Cambridge, Massachusetts. The institute has an urban campus that extends more than a mile (1.6 km) alongside the Charles River. The institute also encompasses a number of major off-campus facilities such as the MIT Lincoln Laboratory, the MIT Bates Research and Engineering Center, and the Haystack Observatory, as well as affiliated laboratories such as the Broad Institute of MIT and Harvard and the Whitehead Institute.

    Massachusetts Institute of Technology-Haystack Observatory, Westford, Massachusetts, USA, altitude 131 m (430 ft).

    Founded in 1861 in response to the increasing industrialization of the United States, Massachusetts Institute of Technology adopted a European polytechnic university model and stressed laboratory instruction in applied science and engineering. It has since played a key role in the development of many aspects of modern science, engineering, mathematics, and technology, and is widely known for its innovation and academic strength. It is frequently regarded as one of the most prestigious universities in the world.

    As of December 2020, 97 Nobel laureates, 26 Turing Award winners, and 8 Fields Medalists have been affiliated with MIT as alumni, faculty members, or researchers. In addition, 58 National Medal of Science recipients, 29 National Medals of Technology and Innovation recipients, 50 MacArthur Fellows, 80 Marshall Scholars, 3 Mitchell Scholars, 22 Schwarzman Scholars, 41 astronauts, and 16 Chief Scientists of the U.S. Air Force have been affiliated with The Massachusetts Institute of Technology. The university also has a strong entrepreneurial culture and MIT alumni have founded or co-founded many notable companies. Massachusetts Institute of Technology is a member of the Association of American Universities (AAU).

    Foundation and vision

    In 1859, a proposal was submitted to the Massachusetts General Court to use newly filled lands in Back Bay, Boston for a “Conservatory of Art and Science”, but the proposal failed. A charter for the incorporation of the Massachusetts Institute of Technology, proposed by William Barton Rogers, was signed by John Albion Andrew, the governor of Massachusetts, on April 10, 1861.

    Rogers, a professor from the University of Virginia, wanted to establish an institution to address rapid scientific and technological advances. He did not wish to found a professional school, but rather an institution combining elements of both professional and liberal education, proposing that:

    “The true and only practicable object of a polytechnic school is, as I conceive, the teaching, not of the minute details and manipulations of the arts, which can be done only in the workshop, but the inculcation of those scientific principles which form the basis and explanation of them, and along with this, a full and methodical review of all their leading processes and operations in connection with physical laws.”

    The Rogers Plan reflected the German research university model, emphasizing an independent faculty engaged in research, as well as instruction oriented around seminars and laboratories.

    Early developments

    Two days after The Massachusetts Institute of Technology was chartered, the first battle of the Civil War broke out. After a long delay through the war years, MIT’s first classes were held in the Mercantile Building in Boston in 1865. The new institute was founded as part of the Morrill Land-Grant Colleges Act to fund institutions “to promote the liberal and practical education of the industrial classes” and was a land-grant school. In 1863, under the same act, the Commonwealth of Massachusetts founded the Massachusetts Agricultural College, which developed into the University of Massachusetts Amherst. In 1866, the proceeds from land sales went toward new buildings in the Back Bay.

    The Massachusetts Institute of Technology was informally called “Boston Tech”. The institute adopted the European polytechnic university model and emphasized laboratory instruction from an early date. Despite chronic financial problems, the institute saw growth in the last two decades of the 19th century under President Francis Amasa Walker. Programs in electrical, chemical, marine, and sanitary engineering were introduced, new buildings were built, and the size of the student body increased to more than one thousand.

    The curriculum drifted to a vocational emphasis, with less focus on theoretical science. The fledgling school still suffered from chronic financial shortages which diverted the attention of the MIT leadership. During these “Boston Tech” years, Massachusetts Institute of Technology faculty and alumni rebuffed Harvard University president (and former MIT faculty) Charles W. Eliot’s repeated attempts to merge MIT with Harvard College’s Lawrence Scientific School. There would be at least six attempts to absorb MIT into Harvard. In its cramped Back Bay location, MIT could not afford to expand its overcrowded facilities, driving a desperate search for a new campus and funding. Eventually, the MIT Corporation approved a formal agreement to merge with Harvard, over the vehement objections of MIT faculty, students, and alumni. However, a 1917 decision by the Massachusetts Supreme Judicial Court effectively put an end to the merger scheme.

    In 1916, The Massachusetts Institute of Technology administration and the MIT charter crossed the Charles River on the ceremonial barge Bucentaur built for the occasion, to signify MIT’s move to a spacious new campus largely consisting of filled land on a one-mile-long (1.6 km) tract along the Cambridge side of the Charles River. The neoclassical “New Technology” campus was designed by William W. Bosworth and had been funded largely by anonymous donations from a mysterious “Mr. Smith”, starting in 1912. In January 1920, the donor was revealed to be the industrialist George Eastman of Rochester, New York, who had invented methods of film production and processing, and founded Eastman Kodak. Between 1912 and 1920, Eastman donated $20 million ($236.6 million in 2015 dollars) in cash and Kodak stock to MIT.

    Curricular reforms

    In the 1930s, President Karl Taylor Compton and Vice-President (effectively Provost) Vannevar Bush emphasized the importance of pure sciences like physics and chemistry and reduced the vocational practice required in shops and drafting studios. The Compton reforms “renewed confidence in the ability of the Institute to develop leadership in science as well as in engineering”. Unlike Ivy League schools, Massachusetts Institute of Technology catered more to middle-class families, and depended more on tuition than on endowments or grants for its funding. The school was elected to the Association of American Universities in 1934.

    Still, as late as 1949, the Lewis Committee lamented in its report on the state of education at The Massachusetts Institute of Technology that “the Institute is widely conceived as basically a vocational school”, a “partly unjustified” perception the committee sought to change. The report comprehensively reviewed the undergraduate curriculum, recommended offering a broader education, and warned against letting engineering and government-sponsored research detract from the sciences and humanities. The School of Humanities, Arts, and Social Sciences and the MIT Sloan School of Management were formed in 1950 to compete with the powerful Schools of Science and Engineering. Previously marginalized faculties in the areas of economics, management, political science, and linguistics emerged into cohesive and assertive departments by attracting respected professors and launching competitive graduate programs. The School of Humanities, Arts, and Social Sciences continued to develop under the successive terms of the more humanistically oriented presidents Howard W. Johnson and Jerome Wiesner between 1966 and 1980.

    The Massachusetts Institute of Technology’s involvement in military science surged during World War II. In 1941, Vannevar Bush was appointed head of the federal Office of Scientific Research and Development and directed funding to only a select group of universities, including MIT. Engineers and scientists from across the country gathered at the Massachusetts Institute of Technology’s Radiation Laboratory, established in 1940 to assist the British military in developing microwave radar. The work done there significantly affected both the war and subsequent research in the area. Other defense projects included gyroscope-based and other complex control systems for gunsight, bombsight, and inertial navigation under Charles Stark Draper’s Instrumentation Laboratory; the development of a digital computer for flight simulations under Project Whirlwind; and high-speed and high-altitude photography under Harold Edgerton. By the end of the war, The Massachusetts Institute of Technology became the nation’s largest wartime R&D contractor (attracting some criticism of Bush), employing nearly 4000 in the Radiation Laboratory alone and receiving in excess of $100 million ($1.2 billion in 2015 dollars) before 1946. Work on defense projects continued even after the war. Post-war government-sponsored research at MIT included SAGE and guidance systems for ballistic missiles and Project Apollo.

    These activities affected The Massachusetts Institute of Technology profoundly. A 1949 report noted the lack of “any great slackening in the pace of life at the Institute” to match the return to peacetime, remembering the “academic tranquility of the prewar years”, though acknowledging the significant contributions of military research to the increased emphasis on graduate education and rapid growth of personnel and facilities. The faculty doubled and the graduate student body quintupled during the terms of Karl Taylor Compton, president of The Massachusetts Institute of Technology between 1930 and 1948; James Rhyne Killian, president from 1948 to 1957; and Julius Adams Stratton, chancellor from 1952 to 1957, whose institution-building strategies shaped the expanding university. By the 1950s, The Massachusetts Institute of Technology no longer simply benefited the industries with which it had worked for three decades, and it had developed closer working relationships with new patrons: philanthropic foundations and the federal government.

    In the late 1960s and early 1970s, student and faculty activists protested against the Vietnam War and The Massachusetts Institute of Technology’s defense research. In this period, the Massachusetts Institute of Technology’s various departments were researching helicopters, smart bombs, and counterinsurgency techniques for the war in Vietnam, as well as guidance systems for nuclear missiles. The Union of Concerned Scientists was founded on March 4, 1969, during a meeting of faculty members and students seeking to shift the emphasis away from military research and toward environmental and social problems. The Massachusetts Institute of Technology ultimately divested itself from the Instrumentation Laboratory and moved all classified research off-campus to the MIT Lincoln Laboratory facility in 1973 in response to the protests. The student body, faculty, and administration remained comparatively unpolarized during what was a tumultuous time for many other universities. Johnson was seen as highly successful in leading his institution to “greater strength and unity” after these times of turmoil. However, six Massachusetts Institute of Technology students were sentenced to prison terms at this time, and some former student leaders, such as Michael Albert and George Katsiaficas, remain indignant about MIT’s role in military research and its suppression of these protests. (Richard Leacock’s film, November Actions, records some of these tumultuous events.)

    In the 1980s, there was further controversy at The Massachusetts Institute of Technology over its involvement in research on the Strategic Defense Initiative (SDI, space-based weaponry) and chemical and biological warfare (CBW). More recently, The Massachusetts Institute of Technology’s research for the military has included work on robots, drones, and ‘battle suits’.

    Recent history

    The Massachusetts Institute of Technology has kept pace with and helped to advance the digital age. In addition to developing the predecessors to modern computing and networking technologies, students, staff, and faculty members at Project MAC, the Artificial Intelligence Laboratory, and the Tech Model Railroad Club wrote some of the earliest interactive computer video games like Spacewar! and created much of modern hacker slang and culture. Several major computer-related organizations have originated at MIT since the 1980s: Richard Stallman’s GNU Project and the subsequent Free Software Foundation were founded in the mid-1980s at the AI Lab; the MIT Media Lab was founded in 1985 by Nicholas Negroponte and Jerome Wiesner to promote research into novel uses of computer technology; the World Wide Web Consortium standards organization was founded at the Laboratory for Computer Science in 1994 by Tim Berners-Lee; the MIT OpenCourseWare project has made course materials for over 2,000 Massachusetts Institute of Technology classes available online free of charge since 2002; and the One Laptop per Child initiative to expand computer education and connectivity to children worldwide was launched in 2005.

    The Massachusetts Institute of Technology was named a sea-grant college in 1976 to support its programs in oceanography and marine sciences and was named a space-grant college in 1989 to support its aeronautics and astronautics programs. Despite diminishing government financial support over the past quarter century, MIT launched several successful development campaigns to significantly expand the campus: new dormitories and athletics buildings on west campus; the Tang Center for Management Education; several buildings in the northeast corner of campus supporting research into biology, brain and cognitive sciences, genomics, biotechnology, and cancer research; and a number of new “backlot” buildings on Vassar Street including the Stata Center. Construction on campus in the 2000s included expansions of the Media Lab, the Sloan School’s eastern campus, and graduate residences in the northwest. In 2006, President Hockfield launched the MIT Energy Research Council to investigate the interdisciplinary challenges posed by increasing global energy consumption.

    In 2001, inspired by the open source and open access movements, The Massachusetts Institute of Technology launched OpenCourseWare to make the lecture notes, problem sets, syllabi, exams, and lectures from the great majority of its courses available online for no charge, though without any formal accreditation for coursework completed. While the cost of supporting and hosting the project is high, OCW expanded in 2005 to include other universities as a part of the OpenCourseWare Consortium, which currently includes more than 250 academic institutions with content available in at least six languages. In 2011, The Massachusetts Institute of Technology announced it would offer formal certification (but not credits or degrees) to online participants completing coursework in its “MITx” program, for a modest fee. The “edX” online platform supporting MITx was initially developed in partnership with Harvard and its analogous “Harvardx” initiative. The courseware platform is open source, and other universities have already joined and added their own course content. In March 2009, The Massachusetts Institute of Technology faculty adopted an open-access policy to make its scholarship publicly accessible online.

    The Massachusetts Institute of Technology has its own police force. Three days after the Boston Marathon bombing of April 2013, MIT Police patrol officer Sean Collier was fatally shot by the suspects Dzhokhar and Tamerlan Tsarnaev, setting off a violent manhunt that shut down the campus and much of the Boston metropolitan area for a day. One week later, Collier’s memorial service was attended by more than 10,000 people, in a ceremony hosted by the Massachusetts Institute of Technology community with thousands of police officers from the New England region and Canada. On November 25, 2013, The Massachusetts Institute of Technology announced the creation of the Collier Medal, to be awarded annually to “an individual or group that embodies the character and qualities that Officer Collier exhibited as a member of The Massachusetts Institute of Technology community and in all aspects of his life”. The announcement further stated that “Future recipients of the award will include those whose contributions exceed the boundaries of their profession, those who have contributed to building bridges across the community, and those who consistently and selflessly perform acts of kindness”.

    In September 2017, the school announced the creation of an artificial intelligence research lab called the MIT-IBM Watson AI Lab. IBM will spend $240 million over the next decade, and the lab will be staffed by MIT and IBM scientists. In October 2018 MIT announced that it would open a new Schwarzman College of Computing dedicated to the study of artificial intelligence, named after lead donor and The Blackstone Group CEO Stephen Schwarzman. The focus of the new college is to study not just AI, but interdisciplinary AI education, and how AI can be used in fields as diverse as history and biology. The cost of buildings and new faculty for the new college is expected to be $1 billion upon completion.

    The Caltech/MIT Advanced LIGO (aLIGO) was designed and constructed by a team of scientists from the California Institute of Technology, the Massachusetts Institute of Technology, and industrial contractors, and was funded by the National Science Foundation.

    Caltech/MIT Advanced LIGO (aLIGO)

    It was designed to open the field of gravitational-wave astronomy through the detection of gravitational waves predicted by general relativity. Gravitational waves were detected for the first time by the LIGO detector in 2015. For contributions to the LIGO detector and the observation of gravitational waves, two Caltech physicists, Kip Thorne and Barry Barish, and Massachusetts Institute of Technology physicist Rainer Weiss won the Nobel Prize in Physics in 2017. Weiss, who is also a Massachusetts Institute of Technology graduate, designed the laser interferometric technique that served as the essential blueprint for LIGO.

    The mission of The Massachusetts Institute of Technology is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of The Massachusetts Institute of Technology community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

     