Tagged: Supercomputing

  • richardmitnick 9:18 am on November 10, 2017
    Tags: Computational Infrastructure for Geodynamics is headquartered at UC Davis, Earth’s magnetic field is an essential part of life on our planet, Supercomputing, UC Davis egghead blog

    From UC Davis egghead blog: “Supercomputer Simulates Dynamic Magnetic Fields of Jupiter, Earth, Sun”

    UC Davis bloc

    UC Davis

    UC Davis egghead blog

    November 9th, 2017
    Becky Oskin

    As the Juno space probe approached Jupiter in June last year, researchers with the Computational Infrastructure for Geodynamics’ Dynamo Working Group were starting to run simulations of the giant planet’s magnetic field on one of the world’s fastest computers.

    NASA/Juno

    While the timing was coincidental, the supercomputer modeling should help scientists interpret the data from Juno, and vice versa.

    “Even with Juno, we’re not going to be able to get a great physical sampling of the turbulence occurring in Jupiter’s deep interior,” Jonathan Aurnou, a geophysics professor at UCLA who leads the geodynamo working group, said in an article for Argonne National Laboratory news. “Only a supercomputer can help get us under that lid.”

    Computational Infrastructure for Geodynamics is headquartered at UC Davis.

    The CIG describes itself as a community organization of scientists that disseminates software for geophysics and related fields. The CIG’s Geodynamo Working Group, led by Aurnou, includes researchers from UC Berkeley, the University of Colorado Boulder, UC Davis, UC Santa Cruz, the University of Alberta, UW-Madison and Johns Hopkins University.

    Earth’s magnetic field is an essential part of life on our planet — from guiding birds on vast migrations to shielding us from solar storms.

    Representation of Earth’s Invisible Magnetic Field. NASA

    Scientists think Earth’s magnetic field is generated by the swirling liquid iron in the planet’s outer core (called the geodynamo), but many mysteries remain. For example, observations of magnetic fields encircling other planets and stars suggest there could be many ways of making a planet-sized magnetic field. And why has the field flipped polarity (swapping magnetic north and south) more than 150 times in the past 70 million years?

    “The geodynamo is one of the most challenging geophysical problems in existence — and one of the most challenging computational problems as well,” said Louise Kellogg, director of the CIG and a professor in the UC Davis Department of Earth and Planetary Sciences.

    The working group was awarded 260 million core hours on the Mira supercomputer at the U.S. Department of Energy’s Argonne National Laboratory – rated the sixth-fastest in the world — to model magnetic fields inside the Earth, Sun and Jupiter.

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    The CIG project was funded by the Department of Energy’s Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program, which provides access to computing centers at Argonne and Oak Ridge national laboratories. Researchers from academia, government and industry will share a total of 5.8 billion core hours on two supercomputers, Titan at Oak Ridge National Laboratory and Mira at Argonne.

    ORNL Cray XK7 Titan Supercomputer

    Video: Simulation of magnetic fields inside the Earth

    More information

    The inner secrets of planets and stars (Argonne National Lab)

    Juno Mission Home (NASA)

    Computational Infrastructure for Geodynamics

    About INCITE grants

    Videos by CIG Geodynamo Working Group/U.S. Department of Energy Argonne National Lab.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About Egghead

    Egghead is a blog about research by, with or related to UC Davis. Comments on posts are welcome, as are tips and suggestions for posts. General feedback may be sent to Andy Fell. This blog is created and maintained by UC Davis Strategic Communications, and mostly edited by Andy Fell.

    UC Davis Campus

    The University of California, Davis, is a major public research university located in Davis, California, just west of Sacramento. It encompasses 5,300 acres of land, making it the second largest UC campus in terms of land ownership, after UC Merced.

     
  • richardmitnick 1:39 pm on October 31, 2017
    Tags: Supercomputing

    From ALCF: “The inner secrets of planets and stars” 

    Argonne Lab
    News from Argonne National Laboratory

    ALCF

    October 31, 2017
    Jim Collins

    Top image: As part of the team’s research on Jupiter’s dynamo, they performed planetary atmospheric dynamics simulations of rotating, deep convection in a 3D spherical shell, with shallow stable stratification. This image is a snapshot from a video that shows the evolution of radial vorticity near the outer boundary from a north polar perspective view. Intense anticyclones (blue) drift westward and undergo multiple mergers, while the equatorial jet flows rapidly to the east. Displayed are the simulation time and radial vorticity in units of planetary rotation (a radial vorticity of 2 is equal to the planetary rotation rate). The video, which shows over 5,300 planetary rotations, can be viewed here: https://www.youtube.com/watch?v=OUICRNiFhpU. (Credit: Moritz Heimpel, University of Alberta)

    Middle image: A 3D rendering of simulated solar convection realized at different rotation rates. Regions of upflow and downflow are rendered in red and blue, respectively. As rotational influence increases from left (non-rotating) to right (rapidly-rotating), convective patterns become increasingly more organized and elongated (Featherstone & Hindman, 2016, ApJ Letters, 830 L15). Understanding the Sun’s location along this spectrum represents a major step toward understanding how it sustains a magnetic field. (Credit: Nick Featherstone and Bradley Hindman, University of Colorado Boulder)

    Bottom image: Radial velocity field (red = positive; blue = negative) on the equatorial plane of a numerical simulation of Earth’s core dynamo. These small-scale convective flows generate a strong planetary-scale magnetic field. (Credit: Rakesh Yadav, Harvard University)

    Using Argonne’s Mira supercomputer, researchers are developing advanced models to study magnetic field generation on the Earth, Jupiter, and the Sun at an unprecedented level of detail. A better understanding of this process will provide new insights into the birth and evolution of the solar system.

    After a five-year, 1.74 billion-mile journey, NASA’s Juno spacecraft entered Jupiter’s orbit in July 2016, to begin its mission to collect data on the structure, atmosphere, and magnetic and gravitational fields of the mysterious planet.

    NASA/Juno

    For UCLA geophysicist Jonathan Aurnou, the timing could not have been much better.

    Just as Juno reached its destination, Aurnou and his colleagues from the Computational Infrastructure for Geodynamics (CIG) had begun carrying out massive 3D simulations at the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, to model and predict the turbulent interior processes that produce Jupiter’s intense magnetic field.

    While the timing of the two research efforts was coincidental, it presents an opportunity to compare the most detailed Jupiter observations ever captured with the highest-resolution Jupiter simulations ever performed.

    Aurnou, lead of the CIG’s Geodynamo Working Group, hopes that the advanced models they are creating with Mira, the ALCF’s 10-petaflops supercomputer, will complement the NASA probe’s findings to reveal a full understanding of Jupiter’s internal dynamics.

    “Even with Juno, we’re not going to be able to get a great physical sampling of the turbulence occurring in Jupiter’s deep interior,” he said. “Only a supercomputer can help get us under that lid. Mira is allowing us to develop some of the most accurate models of turbulence possible in extremely remote astrophysical settings.”

    But Aurnou and his collaborators are not just looking at Jupiter. Their three-year ALCF project also is using Mira to develop models to study magnetic field generation on the Earth and the Sun at an unprecedented level of detail.

    Dynamic dynamos

    Magnetic fields are generated deep in the cores of planets and stars by a process known as dynamo action. This phenomenon occurs when the rotating, convective motion of electrically conducting fluids (e.g., liquid metal in planets and plasma in stars) converts kinetic energy into magnetic energy. A better understanding of the dynamo process will provide new insights into the birth and evolution of the solar system, and shed light on planetary systems being discovered around other stars.
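    The article stays qualitative here, but the textbook statement of dynamo action is the magnetic induction equation of magnetohydrodynamics (given below as general background, not as the specific formulation used by the CIG codes):

    \frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{u} \times \mathbf{B}) + \eta \nabla^{2} \mathbf{B}

    Here \mathbf{B} is the magnetic field, \mathbf{u} the velocity of the conducting fluid, and \eta its magnetic diffusivity. A dynamo is sustained when the first term on the right (the stretching and twisting of field lines by the flow) regenerates the field faster than the diffusive second term destroys it.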

    Modeling the internal dynamics of Jupiter, the Earth, and the Sun each brings unique challenges, but the three vastly different astrophysical bodies do share one thing in common—simulating their extremely complex dynamo processes requires a massive amount of computing power.

    To date, dynamo models have been unable to accurately simulate turbulence in fluids similar to those found in planets and stars. Conventional models also are unable to resolve the broad range of spatial scales present in turbulent dynamo action. However, the continued advances in computing hardware and software are now allowing researchers to overcome such limitations.

    With their project at the ALCF, the CIG team set out to develop and demonstrate high-resolution 3D dynamo models at the largest scale possible. Using Rayleigh, an open-source code designed to study magnetohydrodynamic convection in spherical geometries, they have been able to resolve a range of spatial scales previously inaccessible to numerical simulation.

    While the code transitioned to Mira’s massively parallel architecture smoothly, Rayleigh’s developer, Nick Featherstone, worked with ALCF computational scientist Wei Jiang to achieve optimal performance on the system. Their work included redesigning Rayleigh’s initialization phase to make it run up to 10 times faster, and rewriting parts of the code to make use of a hybrid MPI/OpenMP programming model that performs about 20 percent better than the original MPI version.

    “We did the coding and porting, but running it properly on a supercomputer is a whole different thing,” said Featherstone, a researcher from the University of Colorado Boulder. “The ALCF has done a lot of performance analysis for us. They just really made sure we’re running as well as we can run.”

    Stellar research

    When the project began in 2015, the team’s primary focus was the Sun. An understanding of the solar dynamo is key to predicting solar flares, coronal mass ejections, and other drivers of space weather, which can impact the performance and reliability of space-borne and ground-based technological systems, such as satellite-based communications.

    “We’re really trying to get at the linchpin that is stopping progress on understanding how the Sun generates its magnetic field,” said Featherstone, who is leading the project’s solar dynamo research. “And that is determining the typical flow speed of plasmas in the region of convection.”

    The team began by performing 3D stellar convection simulations of a non-rotating star to fine-tune parameters so that their calculations were on a trajectory similar to observations of flow structures on the Sun’s surface. Next, they incorporated rotation into the simulations, which allowed them to begin making meaningful comparisons against observations. This led to a paper in The Astrophysical Journal Letters last year, in which the researchers were able to place upper bounds on the typical flow speed in the solar convection zone.

    The team’s research also shed light on a mysterious observation that has puzzled scientists for decades. The Sun’s visible surface is covered with patches of convective bubbles, known as granules, which cluster into groups that are about 30,000 kilometers across, known as supergranules. Many scientists have theorized that the clustering should exist on even larger scales, but Featherstone’s simulations suggest that rotation may be the reason the clusters are smaller than expected.

    “These patches of convection are the surface signature of dynamics taking place deep in the Sun’s interior,” he said. “With Mira, we’re starting to show that this pattern we see on the surface results naturally from flows that are slower than we expected, and their interaction with rotation.”

    According to Featherstone, these new insights were enabled by their model’s ability to simulate rotation and the Sun’s spherical shape, which were too computationally demanding to incorporate in previous modeling efforts.

    “To study the deep convection zone, you need the sphere,” he said. “And to get it right, it needs to be rotating.”

    Getting to the core of planets

    Magnetic field generation in terrestrial planets like Earth is driven by the physical properties of their liquid metal cores. However, due to limited computing power, previous Earth dynamo models have been forced to simulate fluids with electrical conductivities that far exceed that of actual liquid metals.

    To overcome this issue, the CIG team is building a high-resolution model that is capable of simulating the metallic properties of Earth’s molten iron core. Their ongoing geodynamo simulations are already showing that flows and coupled magnetic structures develop on both small and large scales, revealing new magnetohydrodynamic processes that do not appear in lower resolution computations.

    “If you can’t simulate a realistic metal, you’re going to have trouble simulating turbulence accurately,” Aurnou said. “Nobody could afford to do this computationally, until now. So, a big driver for us is to open the door to the community and provide a concrete example of what is possible with today’s fastest supercomputers.”

    In Jupiter’s case, the team’s ultimate goal is to create a coupled model that accounts for both its dynamo region and its powerful atmospheric winds, known as jets. This involves developing a “deep atmosphere” model in which Jupiter’s jet region extends all the way through the planet and connects to the dynamo region.

    Thus far, the researchers have made significant progress with the atmospheric model, enabling the highest-resolution giant-planet simulations yet achieved. The Jupiter simulations will be used to make detailed predictions of surface vortices, zonal jet flows, and thermal emissions that will be compared to observational data from the Juno mission.

    Ultimately, the team plans to make their results publicly available to the broader research community.

    “You can almost think of our computational efforts like a space mission,” Aurnou said. “Just like the Juno spacecraft, Mira is a unique and special device. When we get datasets from these amazing scientific tools, we want to make them openly available and put them out to the whole community to look at in different ways.”

    This project was awarded computing time and resources at the ALCF through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program supported by DOE’s Office of Science. The development of the Rayleigh code was funded by CIG, which is supported by the National Science Foundation.

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 9:49 am on October 30, 2017
    Tags: Supercomputing

    From University of Texas at Austin: “UT Is Now Home to the Fastest Supercomputer at Any U.S. University” 

    U Texas Austin bloc

    University of Texas at Austin

    October 27, 2017
    Anna Daugherty

    The term “medical research” might bring to mind a sterile room with white lab coats, goggles, and vials. But for cutting-edge researchers, that picture is much more high-tech: it’s a room filled with row after row of metal racks housing 300,000 computer processors, each blinking green, wires connecting each processor, and the deafening sound of a powerful machine at work. It’s a room like the one housing the 4,000-square-foot supercomputer Stampede2 at The University of Texas’ J.J. Pickle Research Campus.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer

    TACC DELL EMC Stampede2 supercomputer

    At peak performance, Stampede2, the flagship supercomputer at UT Austin’s Texas Advanced Computing Center (TACC), will be capable of performing 18 quadrillion operations per second (18 petaflops, in supercomputer lingo). That’s more powerful than 100,000 desktops. As the fastest supercomputer at any university in the U.S., it’s a level of computing that the average citizen can’t comprehend. Most people do their computing on phones the size of their hands—but then again, most aren’t mining cancer data, predicting earthquakes, or analyzing black holes.
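    The “100,000 desktops” comparison is not unpacked in the article; as a rough back-of-the-envelope reading (the per-desktop figure is an inference, not a quoted number):

    18 \times 10^{15}\ \text{FLOP/s} \div 100{,}000 \approx 1.8 \times 10^{11}\ \text{FLOP/s} \approx 180\ \text{gigaflops per desktop,}

    which is in the right ballpark for the peak floating-point rate of a recent multicore desktop processor, so the comparison is roughly consistent.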

    Funded by a $30 million grant from the National Science Foundation, Stampede2 replaces the original Stampede system, which went live in 2013. Designed to be twice as powerful while using half the energy of the older system, Stampede2 is already being used by researchers around the country. In June 2017, Stampede2 went public with 12 petaflops and was ranked as the 12th most powerful computer in the world. Phase two added six petaflops in September and phase three will complete the system in 2018 by adding a new type of memory capacity to the computer.

    For researchers like Rommie Amaro, professor of chemistry at the University of California, San Diego, a tool like Stampede2 is essential. As the director of the National Biomedical Computation Resource, Amaro says nearly all of their drug research is done on supercomputers.

    Most of her work with the original Stampede system focused on a protein called p53, which prevents tumor growth; the protein is mutated in approximately half of all cancer patients. Due to the nature of p53, it’s difficult to track with standard imaging tools, so Amaro’s team took available images of the protein to supercomputers and turned them into a simulation showing how the 1.6 million atoms in p53 move. Using Stampede, they were able to find weaknesses in p53 and simulate interactions with more than a million compounds; several hundred seemed capable of restoring p53. More than 30 proved successful in labs and are now being tested by a pharmaceutical company.

    “The first Stampede gave us really outstanding, breakthrough research for cancer,” Amaro says. “And we already have some really interesting preliminary data on what Stampede2 is going to give us.”

    And it’s not just the medical field that benefits. Stampede has created weather phenomena models that have shown new ways to measure tornado strength, and produced seismic hazard maps that predict the likelihood of earthquakes in California. It has also helped increase the accuracy of hurricane predictions by 20–25 percent. During Hurricane Harvey in August, researchers used TACC supercomputers to forecast how high water would rise near the coast and to predict flooding in rivers and creeks in its aftermath.

    Aaron Dubrow, strategic communications specialist at TACC, says supercomputer users either use publicly available programs or create an application from the mathematics of the problem they are researching. “You take an idea like how cells divide and turn that into a computer algorithm and it becomes a program of sorts,” he says. Researchers can log into the supercomputer remotely or send their program to TACC staff. Stampede2 also has web portals for smaller problems in topics like drug discovery or natural disasters.
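    Dubrow’s cell-division example is left abstract in the article. Purely as an illustrative toy (not code used at TACC), here is how a simple mathematical model of dividing cells, dN/dt = kN, becomes a small Python program by stepping the equation forward in time:

    # Toy example: turn the math of cell division, dN/dt = k*N,
    # into a program by stepping the equation forward in time (Euler method).
    def simulate_growth(n0=1000.0, k=0.03, dt=0.1, steps=200):
        """Return the cell count after each time step."""
        counts = [n0]
        n = n0
        for _ in range(steps):
            n += k * n * dt   # dN = k * N * dt
            counts.append(n)
        return counts

    if __name__ == "__main__":
        final = simulate_growth()[-1]
        print(f"cells after 20 time units: {final:.0f}")

    A production research code is this same idea scaled up: far more realistic equations, billions of unknowns, and the parallelism needed to spread the work across a machine like Stampede2.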

    For Dan Stanzione, executive director at the TACC, some of the most important research isn’t immediately applied. “Basic science has dramatic impacts on the world, but you might not see that until decades from now.” He points to Einstein’s 100-year-old theory of gravitational waves, which was recently confirmed with the help of supercomputers across the nation, including Stampede. “You might wonder why we care about gravitational waves. But now we have satellite, TV, and instant communications around the world because of Einstein’s theories about gravitational waves 100 years ago.”

    According to Stanzione, there were nearly 40,000 users of the first Stampede and an approximate 3,500 projects completed. Similar to Stampede, the new Stampede2 is expected to have a four-year lifespan. “Your smartphone starts to feel old and slow after four or five years, and supercomputers are the same,” he says. “They may still be fast, but it’s made out of four-year-old processors. The new ones are faster and more power efficient to run.” The old processors don’t go to waste though—most will be donated to state institutions across Texas.

    In order to use a supercomputer, researchers must submit proposals to an NSF board, which then delegates hours of usage. Stanzione says there are requests for nearly a billion processor hours every quarter, which is several times higher than what is available nationwide. While Stanzione says nearly every university has some sort of supercomputer now, the U.S. still lags behind China in computing power. The world’s top two computers are both Chinese, and the first is nearly five times more powerful than the largest in the United States.

    Regardless, Stampede2 will still manage to serve researchers from more than 400 universities. Other users include private businesses, such as Firefly Space Company in nearby Cedar Park, and some government users like the Department of Energy and the U.S. Department of Agriculture. Stanzione says all work done on Stampede2 must be public and published research.

    “Being the leader in large-scale computational sciences and engineering means we can attract the top researchers who need these resources,” he says. “It helps attract those top scholars to UT. And then hopefully once they’re here, it helps them reach these innovations a little faster.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    U Texas Austin Campus

    In 1839, the Congress of the Republic of Texas ordered that a site be set aside to meet the state’s higher education needs. After a series of delays over the next several decades, the state legislature reinvigorated the project in 1876, calling for the establishment of a “university of the first class.” Austin was selected as the site for the new university in 1881, and construction began on the original Main Building in November 1882. Less than one year later, on Sept. 15, 1883, The University of Texas at Austin opened with one building, eight professors, one proctor, and 221 students — and a mission to change the world. Today, UT Austin is a world-renowned higher education, research, and public service institution serving more than 51,000 students annually through 18 top-ranked colleges and schools.

     
  • richardmitnick 8:05 am on October 19, 2017
    Tags: For the first time researchers could calculate the quantitative contributions from constituent quarks gluons and sea quarks to nucleon spin, Nucleons — protons and neutrons — are the principal constituents of the atomic nuclei, Piz Daint supercomputer, Quarks contribute only 30 percent of the proton spin, Supercomputing, Theoretical models originally assumed that the spin of the nucleon came only from its constituent quarks, To calculate the spin of the different particles in their simulations the researchers consider the true physical mass of the quarks

    From Science Node: “The mysterious case of Piz Daint and the proton spin puzzle” 

    Science Node bloc
    Science Node

    10 Oct, 2017 [Better late than…]
    Simone Ulmer

    Nucleons — protons and neutrons — are the principal constituents of atomic nuclei. Those particles are in turn made up of yet smaller elementary particles: their constituent quarks and gluons.

    Each nucleon has its own intrinsic angular momentum, or spin. Knowing the spin of elementary particles is important for understanding physical and chemical processes. University of Cyprus researchers may have solved the proton spin puzzle – with a little help from the Piz Daint supercomputer.

    Cray Piz Daint supercomputer of the Swiss National Supercomputing Center (CSCS)

    Proton spin crisis

    Spin is responsible for a material’s fundamental properties, such as phase changes in non-conducting materials that suddenly turn them into superconductors at very low temperatures.

    Inside job. Artist’s impression of what the proton is made of. The quarks and gluons contribute to give exactly half the spin of the proton. The question of how it is done and how much each contributes has been a puzzle since 1987. Courtesy Brookhaven National Laboratory.

    Theoretical models originally assumed that the spin of the nucleon came only from its constituent quarks. But then in 1987, high-energy physics experiments conducted by the European Muon Collaboration precipitated what came to be known as the ‘proton spin crisis’: experiments performed at European Organization for Nuclear Research (CERN), Deutsches Elektronen-Synchrotron (DESY) and Stanford Linear Accelerator Center (SLAC) showed that quarks contribute only 30 percent of the proton spin.

    LHC

    CERN/LHC Map

    CERN LHC Tunnel

    CERN LHC particles

    DESY

    DESY Belle II detector

    DESY European XFEL

    DESY Helmholtz Centres & Networks

    DESY Nanolab II

    SLAC

    SLAC Campus

    SLAC/LCLS

    SLAC/LCLS II

    Since then, it has been unclear what other effects are contributing to the spin, and to what extent. Further high-energy physics studies suggested that quark-antiquark pairs, which form short-lived intermediate states, might be in play here – in other words, purely relativistic quantum effects.

    Thirty years later, these mysterious effects have finally been accounted for in the calculations performed on CSCS supercomputer Piz Daint by a research group led by Constantia Alexandrou of the Computation-based Science and Technology Research Center of the Cyprus Institute and the Physics Department of the University of Cyprus in Nicosia. That group also included researchers from DESY-Zeuthen, Germany, and from the University of Utah and Temple University in the US.

    For the first time, researchers could calculate the quantitative contributions from constituent quarks, gluons, and sea quarks to nucleon spin. (Sea quarks are a short-lived intermediate state of quark-antiquark pairs inside the nucleon.) With their calculations, the group made a crucial step towards solving the puzzle that brought on the proton spin crisis.
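    The article does not write out the bookkeeping, but in the standard decomposition used in lattice QCD spin studies (quoted here as general background rather than the paper’s exact notation) the proton’s spin of 1/2 is split as

    \frac{1}{2} = \sum_{q} \left( \frac{1}{2}\,\Delta\Sigma_{q} + L_{q} \right) + J_{g}

    where \Delta\Sigma_{q} is the intrinsic spin carried by quarks and antiquarks of flavor q (valence plus sea), L_{q} is their orbital angular momentum, and J_{g} is the total angular momentum of the gluons. The 1987 surprise was that the \Delta\Sigma term alone accounts for only about 30 percent of the total; pinning down the remaining pieces is what the Piz Daint calculation set out to do.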

    To calculate the spin of the different particles in their simulations, the researchers consider the true physical mass of the quarks.

    “This is a numerically challenging task, but of essential importance for making sure that the values of the used parameters in the simulations correspond to reality,” says Karl Jansen, lead scientist at DESY-Zeuthen and project co-author.

    The strong interaction acting here, which is transmitted by the gluons, is one of the four fundamental forces of physics. It is strong enough to prevent the removal of a quark from a proton. This property, known as confinement, results in huge binding energy that ultimately holds together the nucleon constituents.

    The researchers used the mass of the pion – a meson consisting of an up quark and a down antiquark, the ‘light quarks’ – to fix the up- and down-quark masses entering the simulations to their physical values. If the pion mass calculated from the simulation matches the experimentally determined value, the researchers consider the simulation to be running with the actual physical quark masses.
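    In practice this is a tuning condition: the bare light-quark mass in the simulation is treated as a free parameter and adjusted until the simulated pion mass reproduces the measured one, schematically

    m_{\pi}^{\text{simulation}}(m_{ud}) \approx m_{\pi}^{\text{experiment}} \approx 135\ \text{MeV},

    at which point the up and down quark masses entering the calculation are taken to be at their physical values. (The 135 MeV figure is the standard isospin-averaged target, not a number quoted in the article.)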

    And that is exactly what Alexandrou and her team have achieved in their research recently published in Physical Review Letters.

    Their simulations also took into account the valence quarks (constituent quarks), sea quarks, and gluons. The researchers used the lattice theory of quantum chromodynamics (lattice QCD) to calculate this sea of particles and their QCD interactions [ETH Zürich].

    Elaborate conversion to physical values

    The biggest challenge with the simulations was to reduce statistical errors in calculating the ‘spin contributions’ from sea quarks and gluons, says Alexandrou. “In addition, a significant part was to carry out the renormalisation of these quantities.”

    Spin cycle. Composition of the proton spin among the constituent quarks (blue and purple columns with the lines), sea quarks (blue, purple, and red solid columns) and gluons (green column). The errors are shown by the bars. Courtesy Constantia Alexandrou.

    In other words, they had to convert the dimensionless values determined by the simulations into a physical value that can be measured experimentally – such as the spin carried by the constituent and sea quarks and the gluons that the researchers were seeking.

    Alexandrou’s team is the first to have achieved this computation including gluons, whereby they had to calculate millions of the ‘propagators’ describing how quarks move between two points in space-time.

    “Making powerful supercomputers like Piz Daint open and available across Europe is extremely important for European science,” notes Jansen.

    “Simulations as elaborate as this were possible only thanks to the power of Piz Daint,” adds Alexandrou.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 12:18 pm on October 18, 2017
    Tags: Supercomputing

    From ECP: “Accelerating Delivery of a Capable Exascale Ecosystem”

    Exascale Computing Project

    Accelerating Delivery of a Capable Exascale Ecosystem

    October 18, 2017
    Doug Kothe

    The Second Wave

    You may know that the ECP has been cited numerous times by the US Department of Energy (DOE)—by Secretary Perry, in fact—as one of DOE’s highest priorities. This is not only incredibly exciting but also a tremendous responsibility for us. There are high expectations for the ECP, expectations that we should not just meet—I believe we can far exceed them. All of us involved in this project are undoubtedly believers in the value and imperative of computer and computational science and engineering, and more recently of data science—especially within an exascale ecosystem. Meeting and exceeding our goals represents a tremendous return on investment for US taxpayers and potentially for the nation’s science and technology base for decades to come. This is a career opportunity for everyone involved.

    I would be remiss if I were not to thank—on behalf of all of us—Paul Messina, our inaugural ECP director. His experience and expertise have been invaluable in moving ECP through an admittedly difficult startup. The ECP is, after all, an extremely complicated endeavor. His steady hand, mentoring, and leadership, from which I benefitted first hand as the Application Development lead, have been vital to the project’s early successes. We will miss Paul but will not let him “hide”—we’ll maintain a steady line of communication with him for advice, as a sounding board, etc. Thanks again, Paul!

    As we focus our research teams on years 2 and 3 of the ECP, we must collectively and quickly move into a “steady state” mode of execution, i.e., delivering impactful milestones on a regular cadence, settling into a pattern of right-sized project management processes, and moving past exploration of technology integration opportunities and into commitments for new integrated products and deliverables. We are not there yet but will be soon. Some of this challenge has involved working with our DOE sponsors to find the right balance of “projectizing” R&D while delivering tangible products and solutions on a resource-loaded schedule that can accommodate the exploratory high-risk/high-reward nature of R&D activities so important for innovation.

    Changes in the ECP Leadership

    We are currently implementing several changes in the ECP, something that is typical of most large projects transitioning from “startup” to “steady state.” First, some ECP positions need to be filled. ECP is fortunate to have access to some of the best scientists in the world for leadership roles, but these positions take time away from personal research interests and projects, so some ECP leaders periodically may rotate back into full-time research. Fortunately, the six DOE labs responsible for leading the ECP provide plenty of “bench strength” of potential new leaders. Next, our third focus area, Hardware Technology, is being expanded in scope and renamed Hardware and Integration. It now includes an additional focus on engagement with DOE and National Nuclear Security Administration computing facilities and integrated product delivery. More information on both topics will follow.

    Looking toward the horizon, we must refine our resource-loaded schedule to ensure delivery on short-term goals, prepare for our next project review by DOE (an Independent Project Review, or IPR) in January 2018, and work more closely with US PathForward vendors and DOE HPC facilities to better understand architecture requirements and greatly improve overall software and application readiness. ECP leadership is focused on preparing for the IPR, which we must pass with flying colors. Therefore, we must collectively execute on all research milestones with a sense of urgency—in about a year, we will all know the details of the first two exascale systems!

    We’ve recently spent some time refining our strategic goals to ensure a clear message for our advocates, stakeholders, and project team members. ECP’s principal goals are threefold, and they align directly with our focus areas, as follows:

    Applications are the foundational element of the ECP and the vehicle for delivery of results from the exascale systems enabled by the ECP. Each application addresses an exascale challenge problem—a problem of strategic importance and national interest that is intractable without at least 50 times the computational power of today’s systems.
    Software Technologies are the underlying technologies on which applications are built and are essential for application performance, portability, integrity, and resilience. Software technologies span low-level system software to high-level application development environments, including infrastructure for large-scale data science and an expanded and vertically integrated software stack with advanced mathematical libraries and frameworks, extreme-scale programming environments, tools, and visualization libraries.
    Hardware and Integration points to key ECP-enabled partnerships between US vendors and the ECP (and community-wide) application and software developers to develop a new generation of commodity computing components. This partnership must ensure at least two diverse and viable exascale computing technology pathways for the nation to meet identified mission needs.
    The expected ECP outcome is the accelerated delivery of a capable exascale computing ecosystem to provide breakthrough solutions addressing our most critical challenges in scientific discovery, energy assurance, economic competitiveness, and national security. Capable implies a wide range of applications able to effectively use the systems developed through the ECP, thereby ensuring that both science and security needs will be addressed because the system is affordable, usable, and useful. Exascale, of course, refers to the ability to perform >10^18 operations per second, and ecosystem implies not just more powerful systems, but rather all methods and tools needed for effective use of ECP-enabled exascale systems to be acquired by DOE labs.

    To close, I’m very excited and honored to be working with the most talented computer and computational scientists in the world as we collectively pursue an incredibly important and compelling national mission. I think taking the journey will be just as fun as arriving at our destination, and to get there we will need everyone’s support, talent, and hard work. Please contact me personally if you ever have any questions, comments, or concerns.

    In the meantime, as former University of Tennessee Lady Vols basketball coach Pat Summitt said, “Keep on keeping on.”

    Doug

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    EXASCALE COMPUTING PROJECT

    The Exascale Computing Project (ECP) was established with the goals of maximizing the benefits of high-performance computing (HPC) for the United States and accelerating the development of a capable exascale computing ecosystem.

    Exascale refers to computing systems at least 50 times faster than the nation’s most powerful supercomputers in use today.

    The ECP is a collaborative effort of two U.S. Department of Energy organizations – the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA).

    ECP is chartered with accelerating delivery of a capable exascale computing ecosystem to provide breakthrough modeling and simulation solutions to address the most critical challenges in scientific discovery, energy assurance, economic competitiveness, and national security.

    This role goes far beyond the limited scope of a physical computing system. ECP’s work encompasses the development of an entire exascale ecosystem: applications, system software, hardware technologies and architectures, along with critical workforce development.

     
  • richardmitnick 10:43 am on October 9, 2017
    Tags: A free flexible and secure way to provide multiple factors of authentication to your community, OpenMFA, Supercomputing

    From TACC: “A free, flexible, and secure way to provide multiple factors of authentication to your community” 

    TACC bloc

    Texas Advanced Computing Center

    TACC develops multi-factor authentication solution, makes it available open-source.

    Published on October 9, 2017 by Aaron Dubrow

    How does a supercomputing center enable tens of thousands of researchers to securely access its high-performance computing systems while still allowing ease of use? And how can it be done affordably?

    These are questions that the Texas Advanced Computing Center (TACC) asked themselves when they sought to upgrade their system security. They had previously relied on users’ names and passwords for access, but with a growing focus on hosting confidential health data and the increased compliance standards that entails, they realized they needed a more rigorous solution.

    In October 2016, use of the MFA became mandatory for TACC users. Since that time, OpenMFA has recorded more than half a million logins and counting.

    In 2015, TACC began looking for an appropriate multi-factor authentication (MFA) solution that would provide an extra layer of protection against brute-force attacks. What they quickly discovered was that the available commercial solutions would cost them tens to hundreds of thousands of dollars per year to provide to their large community of users.

    Moreover, most MFA systems lacked the flexibility needed to allow diverse researchers to access TACC systems in a variety of ways — from the command line, through science gateways (which perform computations without requiring researchers to directly access HPC systems), and using automated workflows.

    So, they did what any group of computing experts and software developers would do: they built their own MFA system, which they call OpenMFA.

    They didn’t start from scratch. Instead they scoured the pool of state-of-the-art open source tools available. Among them was LinOTP, a one-time password platform developed and maintained by KeyIdentity GmbH, a German software company. To this, they added the standard networking protocols RADIUS and HTTPS, and glued it all together using custom pluggable authentication modules (PAM) that they developed in-house.

    TACC Token App generating token code.

    This approach integrates cleanly with common data transfer protocols, adds flexibility to the system (in part, so they could create whitelists that include the IP addresses that should be exempted), and supports opt-in or mandatory deployments. Researchers can use the TACC-developed OpenMFA system in three ways: via a software token, an SMS, or a low-cost hardware token.
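    The article does not describe the token math itself, and OpenMFA’s internal implementation is not shown here. A software token of the kind pictured typically generates standard time-based one-time passwords (TOTP, RFC 6238); the following minimal Python sketch of that general mechanism uses only the standard library, with an illustrative shared secret and digit count:

    import hmac, hashlib, struct, time

    def totp(secret: bytes, digits: int = 6, period: int = 30) -> str:
        """Generate a time-based one-time password (RFC 6238) from a shared secret."""
        counter = int(time.time() // period)
        msg = struct.pack(">Q", counter)                       # 8-byte big-endian time counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA1 per RFC 4226
        offset = digest[-1] & 0x0F                             # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def verify(secret: bytes, submitted: str) -> bool:
        """Server-side check: accept the code for the current time window."""
        return hmac.compare_digest(totp(secret), submitted)

    if __name__ == "__main__":
        shared_secret = b"example-shared-secret"  # illustrative only; real secrets are provisioned per user
        print("current code:", totp(shared_secret))

    In a real deployment the per-user secret is provisioned once (for example via a QR code), and the server usually also accepts codes from adjacent time windows to tolerate clock drift; SMS and hardware tokens deliver a comparable one-time code through different channels.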

    Over three months, they transitioned 10,000 researchers to OpenMFA, while giving them the opportunity to test the new system at their leisure. In October 2016, use of the MFA became mandatory for TACC users.

    Since that time, OpenMFA has recorded more than half a million logins and counting. TACC has also open-sourced the tool for free, public use. The Extreme Science and Engineering Discovery Environment (XSEDE) is considering OpenMFA for its large user base, and many other universities and research centers have expressed interest in using the tool.

    TACC developed OpenMFA to suit the center’s needs and to save money. But in the end, the tool will also help many other taxpayer-funded institutions improve their security while maintaining research productivity. This allows funding to flow into other efforts, thus increasing the amount of science that can be accomplished, while making that research more secure.

    TACC staff will present the details of OpenMFA’s development at this year’s Internet2 Technology Exchange and at The International Conference for High Performance Computing, Networking, Storage and Analysis (SC17).

    To learn more about OpenMFA or explore the code, visit the Github repository.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Texas Advanced Computing Center (TACC) designs and operates some of the world’s most powerful computing resources. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer

    TACC DELL EMC Stampede2 supercomputer

     
  • richardmitnick 8:17 am on October 7, 2017
    Tags: Leaning into the supercomputing learning curve, Supercomputing

    From ALCF: “Leaning into the supercomputing learning curve” 

    Argonne Lab
    News from Argonne National Laboratory

    ALCF

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Recently, 70 scientists — graduate students, computational scientists, and postdoctoral and early-career researchers — attended the fifth annual Argonne Training Program on Extreme-Scale Computing (ATPESC) in St. Charles, Illinois. Over two weeks, they learned how to seize opportunities offered by the world’s fastest supercomputers. Credit: Image by Argonne National Laboratory

    October 6, 2017
    Andrea Manning

    What would you do with a supercomputer that is at least 50 times faster than today’s fastest machines? For scientists and engineers, the emerging age of exascale computing opens a universe of possibilities to simulate experiments and analyze reams of data — potentially enabling, for example, models of atomic structures that lead to cures for disease.

    But first, scientists need to learn how to seize this opportunity, which is the mission of the Argonne Training Program on Extreme-Scale Computing (ATPESC). The training is part of the Exascale Computing Project, a collaborative effort of the U.S. Department of Energy’s (DOE) Office of Science and its National Nuclear Security Administration.

    Starting in late July, 70 participants — graduate students, computational scientists, and postdoctoral and early-career researchers — gathered at the Q Center in St. Charles, Illinois, for the program’s fifth annual training session. This two-week course is designed to teach scientists key skills and tools and the most effective ways to use leading-edge supercomputers to further their research aims.

    This year’s ATPESC agenda once again was packed with technical lectures, hands-on exercises and dinner talks.

    “Supercomputers are extremely powerful research tools for a wide range of science domains,” said ATPESC program director Marta García, a computational scientist at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility at the department’s Argonne National Laboratory.

    “But using them efficiently requires a unique skill set. With ATPESC, we aim to touch on all of the key skills and approaches a researcher needs to take advantage of the world’s most powerful computing systems.”

    To address all angles of high-performance computing, the training focuses on programming methodologies that are effective across a variety of supercomputers — and that are expected to apply to exascale systems. Renowned scientists, high-performance computing experts and other leaders in the field served as lecturers and guided the hands-on sessions.

    This year, experts covered:

    Hardware architectures
    Programming models and languages
    Data-intensive computing, input/output (I/O) and machine learning
    Numerical algorithms and software for extreme-scale science
    Performance tools and debuggers
    Software productivity
    Visualization and data analysis

    In addition, attendees tapped hundreds of thousands of cores of computing power on some of today’s most powerful supercomputing resources, including the ALCF’s Mira, Cetus, Vesta, Cooley and Theta systems; the Oak Ridge Leadership Computing Facility’s Titan system; and the National Energy Research Scientific Computing Center’s Cori and Edison systems – all DOE Office of Science User Facilities.

    “I was looking at how best to optimize what I’m currently using on these new architectures and also figure out where things are going,” said Justin Walker, a Ph.D. student in the University of Wisconsin-Madison’s Physics Department. “ATPESC delivers on instructing us on a lot of things.”

    Shikhar Kumar, Ph.D. candidate in nuclear science and engineering at the Massachusetts Institute of Technology, elaborates: “On the issue of I/O, data processing, data visualization and performance tools, there isn’t a single option that is regarded as the ‘industry standard.’ Instead, we learned about many of the alternatives, which encourages learning high-performance computing from the ground up.”

    “You can’t get this material out of a textbook,” said Eric Nielsen, a research scientist at NASA’s Langley Research Center. Added Johann Dahm of IBM Research, “I haven’t had this material presented to me in this sort of way ever.”

    Jonathan Hoy, a Ph.D. student at the University of Southern California, pointed to the larger, “ripple effect” role of this type of gathering: “It is good to have all these people sit down together. In a way, we’re setting standards here.”

    Lisa Goodenough, a postdoctoral researcher in high energy physics at Argonne, said: “The theme has been about barriers coming down.” Goodenough referred to both barriers to entry and training barriers hindering scientists from realizing scientific objectives.

    “The program was of huge benefit for my postdoctoral researcher,” said Roseanna Zia, assistant professor of chemical engineering at Stanford University. “Without the financial assistance, it would have been out of my reach,” she said, highlighting the covered tuition fees, domestic airfare, meals and lodging.

    Now, anyone can learn from the program’s broad curriculum, including the slides and videos of the lectures from some of the world’s foremost experts in extreme-scale computing, online — underscoring program organizers’ efforts to extend its reach beyond the classroom. The slides and the videos of the lectures captured at ATPESC 2017 are now available online at: http://extremecomputingtraining.anl.gov/2017-slides and http://extremecomputingtraining.anl.gov/2017-videos, respectively.

    For more information on ATPESC, including on applying for selection to attend next year’s program, visit http://extremecomputingtraining.anl.gov.

    The Exascale Computing Project is a collaborative effort of two DOE organizations — the Office of Science and the National Nuclear Security Administration. As part of President Obama’s National Strategic Computing initiative, ECP was established to develop a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures and workforce development to meet the scientific and national security mission needs of DOE in the mid-2020s timeframe.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 1:24 pm on September 28, 2017
    Tags: “Computing the Sky at Extreme Scales” project or “ExaSky”, Cartography of the cosmos, Salman Habib, Supercomputing, The computer can generate many universes with different parameters, There are hundreds of billions of stars in our own Milky Way galaxy

    From ALCF: “Cartography of the cosmos” 

    Argonne Lab
    News from Argonne National Laboratory

    ALCF

    September 27, 2017
    John Spizzirri

    Argonne’s Salman Habib leads the ExaSky project, which takes on the biggest questions, mysteries, and challenges currently confounding cosmologists.

    There are hundreds of billions of stars in our own Milky Way galaxy.

    Milky Way NASA/JPL-Caltech /ESO R. Hurt

    Estimates indicate a similar number of galaxies in the observable universe, each with its own large assemblage of stars, many with their own planetary systems. Beyond and between these stars and galaxies is all manner of matter in various phases, such as gas and dust. Another form of matter, dark matter, exists in a very different and mysterious form, announcing its presence indirectly only through its gravitational effects.

    This is the universe Salman Habib is trying to reconstruct, structure by structure, using precise observations from telescope surveys combined with next-generation data analysis and simulation techniques currently being primed for exascale computing.

    “We’re simulating all the processes in the structure and formation of the universe. It’s like solving a very large physics puzzle,” said Habib, a senior physicist and computational scientist with the High Energy Physics and Mathematics and Computer Science divisions of the U.S. Department of Energy’s (DOE) Argonne National Laboratory.
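    The production simulations behind this work run on hundreds of thousands of cores. Purely as a conceptual toy (with made-up units, a tiny particle count, and no relation to the team’s actual code), the core of any gravitational structure-formation calculation is an N-body update in which every particle attracts every other particle:

    import numpy as np

    # Toy gravitational N-body step. Real cosmology codes add the expansion of
    # the universe, fast tree/grid force solvers, and massive parallelism; this
    # only illustrates the basic idea of evolving structure under gravity.
    def nbody_step(pos, vel, mass, dt=0.01, softening=0.05, G=1.0):
        """Advance particle positions and velocities by one time step."""
        diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]    # pairwise separations
        inv_r3 = ((diff ** 2).sum(axis=-1) + softening ** 2) ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                            # no self-force
        acc = G * (diff * inv_r3[:, :, np.newaxis] * mass[np.newaxis, :, np.newaxis]).sum(axis=1)
        vel = vel + acc * dt
        pos = pos + vel * dt
        return pos, vel

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 512                                  # tiny compared to real runs
        pos = rng.uniform(-1.0, 1.0, (n, 3))
        vel = np.zeros((n, 3))
        mass = np.full(n, 1.0 / n)
        for _ in range(100):
            pos, vel = nbody_step(pos, vel, mass)
        print("sample particle position after 100 steps:", pos[0])

    Scaling that loop from a few hundred particles to the trillions tracked in modern sky simulations is precisely what demands the exascale systems discussed below.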

    Habib leads the “Computing the Sky at Extreme Scales” project or “ExaSky,” one of the first projects funded by the recently established Exascale Computing Project (ECP), a collaborative effort between DOE’s Office of Science and its National Nuclear Security Administration.

    From determining the initial cause of primordial fluctuations to measuring the sum of all neutrino masses, this project’s science objectives represent a laundry list of the biggest questions, mysteries, and challenges currently confounding cosmologists.

    One of these is the question of dark energy, the potential cause of the accelerated expansion of the universe; another is the nature and distribution of dark matter in the universe.

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M. Blanco 4m Telescope, which houses DECam at Cerro Tololo, Chile, at an altitude of 7,200 feet

    Dark Matter Research

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    Scientists studying the cosmic microwave background hope to learn about more than just how the universe grew—it could also offer insight into dark matter, dark energy and the mass of the neutrino.

    Dark matter cosmic web and the large-scale structure it forms. The Millennium Simulation, V. Springel et al

    Dark Matter Particle Explorer China

    DEAP Dark Matter detector, the DEAP-3600, suspended at SNOLAB, deep in Sudbury’s Creighton Mine

    LUX Dark matter Experiment at SURF, Lead, SD, USA

    ADMX Axion Dark Matter Experiment, U Washington

    These are immense questions that demand equally expansive computational power to answer. The ECP is readying science codes for exascale systems, the new workhorses of computational and big data science.

    Initiated to drive the development of an “exascale ecosystem” of cutting-edge, high-performance architectures, codes and frameworks, the ECP will allow researchers to tackle data and computationally intensive challenges such as the ExaSky simulations of the known universe.

    In addition to the magnitude of their computational demands, ECP projects are selected based on whether they address specific strategic areas, ranging from energy and economic security to scientific discovery and healthcare.

    “Salman’s research certainly looks at important and fundamental scientific questions, but it has societal benefits, too,” said Paul Messina, Argonne Distinguished Fellow. “Human beings tend to wonder where they came from, and that curiosity is very deep.”

    HACC’ing the night sky

    For Habib, the ECP presents a two-fold challenge — how do you conduct cutting-edge science on cutting-edge machines?

    The cross-divisional Argonne team has been working on the science through a multi-year effort at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. The team is running cosmological simulations for large-scale sky surveys on the facility’s 10-petaflop high-performance computer, Mira. The simulations are designed to work with observational data collected from specialized survey telescopes, like the forthcoming Dark Energy Spectroscopic Instrument (DESI) and the Large Synoptic Survey Telescope (LSST).

    LBNL/DESI Dark Energy Spectroscopic Instrument for the Nicholas U. Mayall 4-meter telescope at Kitt Peak National Observatory near Tucson, Arizona, USA

    LSST


    LSST Camera, built at SLAC



    LSST telescope, currently under construction on Cerro Pachón, a 2,682-meter-high mountain in Chile’s Coquimbo Region, alongside the existing Gemini South and Southern Astrophysical Research telescopes.

    Survey telescopes look at much larger areas of the sky — up to half the sky, at any point — than does the Hubble Space Telescope, for instance, which focuses more on individual objects.

    NASA/ESA Hubble Telescope

    One night concentrating on one patch, the next night another, survey instruments systematically examine the sky to develop a cartographic record of the cosmos, as Habib describes it.

    Working in partnership with Los Alamos and Lawrence Berkeley National Laboratories, the Argonne team is readying itself to chart the rest of the course.

    Their primary code, which Habib helped develop, is already among the fastest science production codes in use. Called HACC (Hardware/Hybrid Accelerated Cosmology Code), this particle-based cosmology framework supports a variety of programming models and algorithms.

    Unique among codes used in other exascale computing projects, it can run on all current and prototype architectures, from the basic x86 chips used in most home PCs, to graphics processing units, to the newest Knights Landing chip found in Theta, the ALCF’s latest supercomputing system.

    As robust as the code is already, the HACC team continues to develop it further, adding significant new capabilities, such as hydrodynamics and associated subgrid models.

    “When you run very large simulations of the universe, you can’t possibly do everything, because it’s just too detailed,” Habib explained. “For example, if we’re running a simulation where we literally have tens to hundreds of billions of galaxies, we cannot follow each galaxy in full detail. So we come up with approximate approaches, referred to as subgrid models.”
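
    As a rough, hedged illustration of that idea (this is not HACC’s actual subgrid machinery, and every name and parameter below is hypothetical), a subgrid model can be thought of as a cheap empirical recipe that “paints” unresolved galaxy properties onto the dark-matter halos the simulation does resolve:

        # Toy subgrid model: instead of simulating each galaxy in full detail,
        # assign an approximate stellar mass to every resolved dark-matter halo
        # using a simple empirical relation. Purely illustrative; the parameter
        # values and functional form are assumptions, not HACC's physics.
        import numpy as np

        def stellar_mass_from_halo_mass(m_halo, m_pivot=1e12, normalization=0.03):
            # Double power law loosely mimicking abundance-matching fits.
            x = m_halo / m_pivot
            return normalization * m_halo / (x**-1.0 + x**0.6)

        halo_masses = np.logspace(10, 15, 6)   # halo masses in solar masses
        print(stellar_mass_from_halo_mass(halo_masses))

    Real subgrid models layer in recipes for star formation, feedback and other physics, but the structure is the same: a fast approximation standing in for detail the simulation cannot resolve.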

    Even with these improvements and its successes, the HACC code still will need to increase its performance and memory to be able to work in an exascale framework. In addition to HACC, the ExaSky project employs the adaptive mesh refinement code Nyx, developed at Lawrence Berkeley. HACC and Nyx complement each other with different areas of specialization. The synergy between the two is an important element of the ExaSky team’s approach.

    A cosmological simulation strategy that melds these complementary approaches allows the verification of difficult-to-resolve cosmological processes involving gravitational evolution, gas dynamics and astrophysical effects at very high dynamic ranges. New computational methods like machine learning will help scientists quickly and systematically recognize features in both the observational and simulation data that represent unique events.
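
    To make that last point concrete, here is a minimal, hypothetical sketch (not the ExaSky pipeline) of how a classifier trained on summary statistics measured in simulated sky patches might then flag similar features in observed patches:

        # Hypothetical sketch: learn to recognize a feature (say, a massive
        # cluster) from summary statistics of simulated patches, then apply
        # the trained model to statistics measured in observed patches.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n_patches = 1000
        sim_features = rng.normal(size=(n_patches, 3))              # stand-in statistics
        sim_labels = (sim_features[:, 0] + sim_features[:, 1] > 1).astype(int)

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(sim_features, sim_labels)                           # train on simulations

        observed_features = rng.normal(size=(10, 3))                # stand-in observations
        print(clf.predict(observed_features))                       # flag candidate features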

    A trillion particles of light

    The work produced under the ECP will serve several purposes, benefitting both the future of cosmological modeling and the development of successful exascale platforms.

    On the modeling end, the computer can generate many universes with different parameters, allowing researchers to compare their models with observations to determine which models fit the data most accurately. Alternatively, the models can make predictions for observations yet to be made.
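
    A minimal sketch of that comparison loop, under heavy simplifying assumptions (a made-up summary statistic stands in for a full simulation output, and the parameter grid is tiny):

        # Toy model comparison: evaluate a summary statistic for "universes"
        # with different cosmological parameters and keep the combination that
        # best matches the observed statistic. Illustrative only.
        import numpy as np

        def simulated_statistic(omega_m, sigma_8, k):
            # Stand-in for a power-spectrum-like quantity from a simulated universe.
            return sigma_8**2 * k**(omega_m - 2.0)

        k = np.logspace(-2, 0, 50)
        rng = np.random.default_rng(1)
        observed = simulated_statistic(0.31, 0.81, k) * (1 + 0.02 * rng.normal(size=k.size))

        best = None
        for omega_m in np.linspace(0.25, 0.35, 11):
            for sigma_8 in np.linspace(0.75, 0.90, 16):
                chi2 = np.sum((simulated_statistic(omega_m, sigma_8, k) - observed) ** 2)
                if best is None or chi2 < best[0]:
                    best = (chi2, omega_m, sigma_8)

        print("best fit (chi2, Omega_m, sigma_8):", best)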

    Models also can produce extremely realistic pictures of the sky, which is essential when planning large observational campaigns, such as those by DESI and LSST.

    “Before you spend the money to build a telescope, it’s important to also produce extremely good simulated data so that people can optimize observational campaigns to meet their data challenges,” said Habib.

    But realism is expensive. Simulations can reach the trillion-particle realm and produce several petabytes — quadrillions of bytes — of data in a single run. As exascale becomes prevalent, these simulations will produce 10 to 100 times as much data.
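
    A back-of-envelope calculation shows why the numbers land in petabyte territory; the per-particle record size and snapshot count below are assumptions for illustration, not ExaSky specifications:

        # Rough data-volume estimate for a trillion-particle run.
        particles = 1.0e12
        bytes_per_particle = 40      # assumed: 3 position + 3 velocity floats plus an 8-byte ID, padded
        snapshots = 100              # assumed number of outputs per run

        snapshot_tb = particles * bytes_per_particle / 1e12
        run_pb = snapshot_tb * snapshots / 1e3
        print(f"~{snapshot_tb:.0f} TB per snapshot, ~{run_pb:.0f} PB per run")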

    The work that the ExaSky team is doing, along with that of the other ECP research teams, will help address these challenges and those faced by computer manufacturers and software developers as they create coherent, functional exascale platforms to meet the needs of large-scale science. By working with their own codes on pre-exascale machines, the ECP research teams can help guide vendors on chip design, I/O bandwidth, memory requirements and other features.

    “All of these things can help the ECP community optimize their systems,” noted Habib. “That’s the fundamental reason why the ECP science teams were chosen. We will take the lessons we learn in dealing with this architecture back to the rest of the science community and say, ‘We have found a solution.’”

    The Exascale Computing Project is a collaborative effort of two DOE organizations — the Office of Science and the National Nuclear Security Administration. As part of President Obama’s National Strategic Computing initiative, ECP was established to develop a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures and workforce development to meet the scientific and national security mission needs of DOE in the mid-2020s timeframe.

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 11:27 am on September 27, 2017 Permalink | Reply
    Tags: Rutgers Amarel Computing Cluster, Supercomputing

    From Rutgers: Amarel Computing 

    Rutgers University

    Rutgers Amarel Computing Cluster

    Rutgers Amarel CentOS 7 Lenovo Linux compute cluster supercomputer

    Amarel is Rutgers’ new advanced computing cluster for research, theoretically able to perform over 500 trillion mathematical operations every second! Watch our video to learn who Amarel is named after (hint: he’s a Rutgers pioneer). For more info visit: http://oarc.rutgers.edu
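
    As a rough illustration of where a figure like “over 500 trillion operations per second” comes from, theoretical peak is roughly nodes × cores per node × operations per cycle × clock rate; every hardware number below is an assumption for illustration, not Amarel’s published configuration:

        # Back-of-envelope theoretical peak; all values are illustrative assumptions.
        nodes = 250
        cores_per_node = 28
        ops_per_core_per_cycle = 32   # assumed wide-vector fused multiply-add throughput
        clock_ghz = 2.4

        peak_ops_per_second = nodes * cores_per_node * ops_per_core_per_cycle * clock_ghz * 1e9
        print(f"theoretical peak: ~{peak_ops_per_second / 1e12:.0f} trillion operations per second")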

    The Office of Advanced Research Computing (OARC) is excited to announce the next phase in Rutgers advanced computing. In early 2017, OARC is unveiling Amarel, a “condominium” style computing environment developed to serve the university’s wide ranging research needs.

    Named in honor of Dr. Saul Amarel, a Rutgers pioneer in artificial intelligence and research computing, Amarel is a shared, community-owned advanced computing environment available to any investigator or student with projects requiring research computing resources on any Rutgers campus.

    See the full article here .

    Follow Rutgers Research here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    rutgers-campus

    Rutgers, The State University of New Jersey, is a leading national research university and the state’s preeminent, comprehensive public institution of higher education. Rutgers is dedicated to teaching that meets the highest standards of excellence; to conducting research that breaks new ground; and to providing services, solutions, and clinical care that help individuals and the local, national, and global communities where they live.

    Founded in 1766, Rutgers teaches across the full educational spectrum: preschool to precollege; undergraduate to graduate; postdoctoral fellowships to residencies; and continuing education for professional and personal advancement.

    Rutgers smaller
    Please give us back our original beautiful seal which the University stole away from us.
    As a ’67 graduate of University College, second in my class, I am proud to be a member of Alpha Sigma Lambda, the National Honor Society of non-traditional students.

     
  • richardmitnick 1:19 pm on September 13, 2017 Permalink | Reply
    Tags: PHENIX (Python-based Hierarchical ENvironment for Integrated Xtallography), Supercomputing, TFIIH (Transcription Factor IIH)

    From LBNL: “Berkeley Lab Scientists Map Key DNA Protein Complex at Near-Atomic Resolution” 

    Berkeley Logo

    Berkeley Lab

    September 13, 2017
    Sarah Yang
    scyang@lbl.gov
    (510) 486-4575

    The cryo-EM structure of Transcription Factor II Human (TFIIH). The atomic coordinate model, colored according to the different TFIIH subunits, is shown inside the semi-transparent cryo-EM map. (Credit: Basil Greber/Berkeley Lab and UC Berkeley)

    Chalking up another success for a new imaging technology that has energized the field of structural biology, researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) obtained the highest resolution map yet of a large assembly of human proteins that is critical to DNA function.

    The scientists are reporting their achievement today in an advanced online publication of the journal Nature. They used cryo-electron microscopy (cryo-EM) to resolve the 3-D structure of a protein complex called transcription factor IIH (TFIIH) at 4.4 angstroms, or near-atomic resolution. This protein complex is used to unzip the DNA double helix so that genes can be accessed and read during transcription or repair.

    “When TFIIH goes wrong, DNA repair can’t occur, and that malfunction is associated with severe cancer propensity, premature aging, and a variety of other defects,” said study principal investigator Eva Nogales, faculty scientist at Berkeley Lab’s Molecular Biophysics and Integrated Bioimaging Division. “Using this structure, we can now begin to place mutations in context to better understand why they give rise to misbehavior in cells.”

    TFIIH’s critical role in DNA function has made it a prime target for research, but it is considered a difficult protein complex to study, especially in humans.

    ___________________________________________________________________
    How to Capture a Protein
    It takes a large store of patience and persistence to prepare specimens of human transcription factor IIH (TFIIH) for cryo-EM. Because TFIIH exists in such minute amounts in a cell, the researchers had to grow 50 liters of human cells in culture to yield a few micrograms of the purified protein.

    Human TFIIH is particularly fragile and prone to falling apart in the flash-freezing process, so researchers need to use an optimized buffer solution to help protect the protein structure.

    “These compounds that protect the proteins also work as antifreeze agents, but there’s a trade-off between protein stability and the ability to produce a transparent film of ice needed for cryo-EM,” said study lead author Basil Greber.

    Once Greber obtains a usable sample, he settles down for several days at the cryo-electron microscope at UC Berkeley’s Stanley Hall for imaging.

    “Once you have that sample inside the microscope, you keep collecting data as long as you can,” he said. “The process can take four days straight.”
    ___________________________________________________________________

    Mapping complex proteins

    “As organisms get more complex, these proteins do, too, taking on extra bits and pieces needed for regulatory functions at many different levels,” said Eva Nogales, who is also a UC Berkeley professor of molecular and cell biology and a Howard Hughes Medical Institute investigator. “The fact that we resolved this protein structure from human cells makes this even more relevant to disease research. There’s no need to extrapolate the protein’s function based upon how it works in other organisms.”

    Biomolecules such as proteins are typically imaged using X-ray crystallography, but that method requires a large amount of stable sample for the crystallization process to work. The challenge with TFIIH is that it is hard to produce and purify in large quantities, and once obtained, it may not form crystals suitable for X-ray diffraction.

    Enter cryo-EM, which can work even when sample amounts are very small. Electrons are sent through purified samples that have been flash-frozen at ultracold temperatures to prevent crystalline ice from forming.

    Cryo-EM has been around for decades, but major advances over the past five years have led to a quantum leap in the quality of high-resolution images achievable with this technique.

    “When your goal is to get resolutions down to a few angstroms, the problem is that any motion gets magnified,” said study lead author Basil Greber, a UC Berkeley postdoctoral fellow at the California Institute for Quantitative Biosciences (QB3). “At high magnifications, the slight movement of the specimen as electrons move through leads to a blurred image.”

    Making movies

    The researchers credit the explosive growth in cryo-EM to advanced detector technology that Berkeley Lab engineer Peter Denes helped develop. Instead of a single picture taken for each sample, the direct detector camera shoots multiple frames in a process akin to recording a movie. The frames are then put together to create a high-resolution image. This approach resolves the blur from sample movement. The improved images contain higher quality data, and they allow researchers to study the sample in multiple states, as they exist in the cell.
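
    A toy version of that “movie to aligned average” idea, assuming simple whole-pixel drift and using generic image-registration tools (real cryo-EM motion correction is far more sophisticated than this sketch):

        # Toy motion correction: estimate each frame's drift relative to the
        # first frame with phase cross-correlation, undo it, then average the
        # aligned frames to recover a sharper image.
        import numpy as np
        from scipy.ndimage import shift as nd_shift
        from skimage.registration import phase_cross_correlation

        def motion_corrected_average(frames):
            reference = frames[0]
            aligned = [reference]
            for frame in frames[1:]:
                correction, _, _ = phase_cross_correlation(reference, frame)
                aligned.append(nd_shift(frame, correction))   # shift frame back onto the reference
            return np.mean(aligned, axis=0)

        # Synthetic example: drifting, noisy copies of the same underlying image.
        rng = np.random.default_rng(0)
        truth = rng.random((64, 64))
        frames = [nd_shift(truth, (i, -i)) + 0.05 * rng.normal(size=truth.shape) for i in range(5)]
        print(motion_corrected_average(frames).shape)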

    Since shooting a movie generates far more data than a single frame, and thousands of movies are being collected during a microscopy session, the researchers needed the processing punch of supercomputers at the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab.

    NERSC Cray Cori II supercomputer

    LBL NERSC Cray XC30 Edison supercomputer

    NERSC Hopper Cray XE6 supercomputer

    The output from these computations was a 3-D map that required further interpretation.

    “When we began the data processing, we had 1.5 million images of individual molecules to sort through,” said Greber. “We needed to select particles that are representative of an intact complex. After 300,000 CPU hours at NERSC, we ended up with 120,000 images of individual particles that were used to compute the 3-D map of the protein.”
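
    For a sense of scale, a quick back-of-envelope on those figures (the concurrent core count is an assumption for illustration; only the image counts and CPU hours come from the article):

        images_collected = 1_500_000
        particles_kept = 120_000
        cpu_hours = 300_000
        assumed_cores = 2_000        # hypothetical number of cores used at once

        print(f"particles kept: {particles_kept / images_collected:.1%} of the images")
        print(f"wall time at {assumed_cores} cores: ~{cpu_hours / assumed_cores:.0f} hours (roughly 6 days)")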

    To obtain an atomic model of the protein complex based on this 3-D map, the researchers used PHENIX (Python-based Hierarchical ENvironment for Integrated Xtallography), a software program whose development is led by Paul Adams, director of Berkeley Lab’s Molecular Biophysics and Integrated Bioimaging Division and a co-author of this study.

    Not only does this structure improve basic understanding of DNA repair, but the information could also be used to help visualize how specific molecules bind to target proteins in drug development.

    “In studying the physics and chemistry of these biological molecules, we’re often able to determine what they do, but how they do it is unclear,” said Nogales. “This work is a prime example of what structural biologists do. We establish the framework for understanding how the molecules function. And with that information, researchers can develop finely targeted therapies with more predictive power.”

    Other co-authors on this study are Pavel Afonine and Thi Hoang Duong Nguyen, both of whom have joint appointments at Berkeley Lab and UC Berkeley; and Jie Fang, a researcher at the Howard Hughes Medical Institute.

    NERSC is a DOE Office of Science User Facility located at Berkeley Lab. In addition to NERSC, the researchers used the Lawrencium computing cluster at Berkeley Lab. This work was funded by the National Institute of General Medical Sciences and the Swiss National Science Foundation.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     