Tagged: Supercomputing Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 4:19 pm on June 19, 2018 Permalink | Reply
    Tags: Sunway TaihuLight China, Supercomputing, Why the US and China's brutal supercomputer war matters

    From Wired: “Why the US and China’s brutal supercomputer war matters” 


    Wired

    19 June 2018
    Chris Stokel-Walker

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    Thought global arms races are all about ballistic missiles, space or nuclear development? Think again: the new diplomatic frontline is over processing power and computer chips.
    Multi-million dollar projects to eke out an advantage in processing power aren’t really about science; they’re an exercise in soft power.

    A major shift has taken place, with a new claimant to the crown of world’s fastest supercomputer. IBM’s Summit at Oak Ridge National Laboratory in Tennessee uses Power9 CPUs and NVIDIA Tesla V100 GPUs and has 4,608 servers backed by ten petabytes of memory working concurrently to process 200,000 trillion calculations per second – 200 petaflops. That’s a lot of numbers – and here’s one more. Summit’s processing power is 117 petaflops more than the previous record-holder, China’s TaihuLight.

    Sunway TaihuLight, China, US News

    While it may seem significant, it’s actually largely symbolic, says Andrew Jones of the Numerical Algorithms Group, a high-performance computing consultancy. “I put no value on being twice as fast or 20 per cent faster other than bragging rights.”

    That’s not to say that supercomputers don’t matter. They are “being driven by science”, says Jack Dongarra, a computer science professor at the University of Tennessee and the compiler of the world’s top 500 supercomputer list. And science is driven today by computer simulation, he adds – with high-powered computers crucial to carry out those tests.

    Supercomputers can crunch data far faster and more easily than regular computers, making them ideal for handling big data – from cybersecurity to medical informatics to astronomy. “We could quite easily go another four or five orders of magnitude and still find scientific and business reasons to benefit from it,” says Jones.

    Oak Ridge, where Summit is housed, is already soliciting bids for a project called Coral II, the successor to the Coral project that resulted in the Summit supercomputer. Coral II will involve three separate hardware systems, each of which has a price tag of $600 million, says Dongarra. The goal? To build a supercomputer capable of calculating at exaflop speeds – an exaflop is 1,000 petaflops, five times faster than Summit.

    While they are faster and more powerful, supercomputers are actually not much different from the hardware we interact with on a daily basis, says Jones. “The basic components are the same as a standard server,” he says. But because of their scale, and the complexity involved in programming them to process information as a single, co-ordinated unit, supercomputer projects require significant financial outlay to build, and political support to attract that funding.

    That political involvement transforms them from a simple computational tool into a way of exercising soft power and stoking intercontinental rivalries.

    With Summit, the US has wrested back the title of the world’s most powerful supercomputer for the first time since 2012 – though it still languishes behind China in terms of overall processing power. China is home to 202 of the 500 most powerful supercomputers, having overtaken the US in November 2017.

    “What’s quite striking is that in 2001 there were no Chinese machines that’d be considered a supercomputer, and today they dominate,” explains Dongarra. The sudden surge of supercomputers in China over the last two decades is an indication of significant investment, says Jones. “It’s more a reflection of who’s got their lobbying sorted than anything else,” he adds.

    Recently, the Chinese leadership has been drifting away “from an aspirational ‘catch-up with the west’ mentality to aspiring to be world class and to lead,” says Jonathan Sullivan, director of the China Policy Institute at the University of Nottingham. “These achievements like the longest bridge, biggest dam and most powerful supercomputer aren’t just practical solutions, they also have symbolic meaning,” he adds.

    Or putting it differently: bragging rights matter enormously to whoever’s on top.

    [TaihuLight is about 2 years old. The Chinese supercomputer people have not been sitting on their hands. They knew this was coming. We will see how long Summit is at the top.]

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 3:43 pm on June 19, 2018 Permalink | Reply
    Tags: ClusterStor, Cray Introduces All Flash Lustre Storage Solution Targeting HPC, L300F a scalable all-flash storage solution, Lustre 2.11, Supercomputing

    From HPC Wire: “Cray Introduces All Flash Lustre Storage Solution Targeting HPC” 

    From HPC Wire

    June 19, 2018
    John Russell


    Citing the rise of IOPS-intensive workflows and more affordable flash technology, Cray today introduced the L300F, a scalable all-flash storage solution whose primary use case is to support high IOPS rates to/from a scratch storage pool in the Lustre file system. Cray also announced that sometime in August it would begin supporting Lustre 2.11, which was released in April. This rapid productizing of Lustre’s latest release is likely to be appreciated by the user community, which sometimes criticizes vendors for being slow to commercialize the latest features of the open source parallel file system.

    “Lustre 2.11 has been one of the drivers for us because it has unique performance enhancements, usability enhancements, and we think some of those features will pair nicely with a flash-based solution that’s sitting underneath the file system,” said Mark Wiertalla, product marketing director.

    The broader driver is the rise in use cases with demanding IOPS characteristics often including files of small size. Hard disk drives, by their nature, handle these workloads poorly. Cray cites AI, for example, as a good use case with high IOPS requirements.


    Here’s a brief description from Cray of how L300F fits into the Cray ClusterStor systems:

    Unlike the existing building blocks in the ClusterStor family, which use a 5U84 form factor (5 rack units high/84 drive slots) mainly for Hard Disk Drives (HDD), the L300F is a 2U24 form factor filled exclusively with Solid State Drives (SSD).
    Like the existing building blocks (L300 and L300N), the L300F features two embedded server modules in a high availability configuration for the Object Storage Server (OSS) functionality of the open source, parallel file system Lustre.
    Like the existing building blocks, the L300F converges the Lustre Object Storage Servers (OSS) and the Object Storage Targets (OST) in the same building block for linear scalability.
    Like all ClusterStor building blocks, the L300F is purpose-engineered to deliver the most effective parallel file system storage infrastructure for the leadership class of supercomputing environments.

    The existing L300 model is an all-HDD Lustre solution, well suited for environments using applications with large, sequential I/O workloads. The L300N model, by contrast, is a hybrid SSD/HDD solution with flash-accelerated NXD software that redirects I/O to the appropriate storage medium, delivering cost-effective, consistent performance on mixed I/O workloads while shielding the application, file system and users from complexity through transparent flash acceleration.

    In positioning L300F, Cray said, “L300F enables users such as engineers, researchers and scientists to dramatically reduce the runtime of their applications allowing jobs to reliably complete within their required schedule, supporting more iterations and faster time to insight. Supplementing Cray’s ClusterStor portfolio with an all-flash storage option, the ClusterStor L300F integrates with and complements the existing L300/L300N models to provide a comprehensive storage architecture. It allows customers to address performance bottlenecks without needlessly overprovisioning HDD storage capacity, creating a cost-competitive solution for improved application run time.”


    Analysts are likewise bullish on flash. “Flash is poised to become an essential technology in every HPC storage solution,” said Jeff Janukowicz, IDC’s research vice president, Solid State Drives and Enabling Technologies. “It has the unique role of satisfying the high-performance appetite of artificial intelligence applications even while helping customers optimize their storage budget for big data. With the ClusterStor L300F, Cray has positioned itself to be at the leading edge of the next generation of HPC storage solutions.”

    According to Cray, the L300F simplifies storage management for storage administrators, allowing them to stand up a high-performance flash pool within their existing Lustre file system using existing tools and skills. “This eliminates the need for product-unique training or to administer a separate file system. Using ClusterStor Manager, administrators can reduce the learning curve and accelerate time-to-proficiency, thereby improving ROI. When coupled with Cray’s exclusive monitoring application Cray View for ClusterStor, administrators get an end-to-end view of Lustre jobs, network status and storage system performance. Cray View for ClusterStor provides visibility into job runtime variability, event correlation, trend analysis and offers custom alerts based on any selected metric,” according to the announcement.

    Price remains an issue for flash. It’s currently about 13X more expensive than disk on a per-terabyte basis. “But when flash is viewed on a dollar per IOPS basis, it is a small fraction of the cost compared to hard disk drives. What our customers are telling us is they have unlocked that secret. Now they can think about use cases and say here’s three of them that make sense immediately. That’s how they will deploy it. They’ll use it as a tactical tool,” said Wiertalla.

    “We see the L300F allowing many customers to start testing the waters with flash storage. We are seeing RFPs [and] we think we are going to see, as the delta in prices between flash and disk narrows over the next 3-5 years, that customers will find incrementally new use cases where flash becomes cost competitive and they will adopt it gradually. Maybe in the 2020s we’ll start to see customers think about putting file systems exclusively on flash.”

    Given that Cray is approaching the first anniversary of its acquisition of the ClusterStor portfolio, it is likely to showcase the line at ISC2018 (booth #E-921) next week (see HPCwire article, Cray Moves to Acquire the Seagate ClusterStor Line) and perhaps issue other news in its storage line.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    HPCwire is the #1 news and information resource covering the fastest computers in the world and the people who run them. With a legacy dating back to 1987, HPCwire has enjoyed a reputation for world-class editorial and top-notch journalism, making it the portal of choice selected by science, technology and business professionals interested in high performance and data-intensive computing. For topics ranging from late-breaking news and emerging technologies in HPC, to new trends, expert analysis, and exclusive features, HPCwire delivers it all and remains the HPC community’s most reliable and trusted resource. Don’t miss a thing – subscribe now to HPCwire’s weekly newsletter recapping the previous week’s HPC news, analysis and information at: http://www.hpcwire.com.

     
  • richardmitnick 5:23 pm on June 18, 2018 Permalink | Reply
    Tags: ARM at Sandia, Supercomputing   

    From Sandia Lab: “Arm-based supercomputer prototype to be deployed at Sandia National Laboratories by DOE” 


    From Sandia Lab

    June 18, 2018
    Neal Singer
    nsinger@sandia.gov
    505-845-7078

    A computer-automated design conception of Sandia National Laboratories’ Astra supercomputer, used to work out the floor layout of the supercomputer’s compute, cooling, network and data storage cabinets. (Illustration courtesy of Hewlett Packard Enterprise.)

    Microprocessors designed by Arm are ubiquitous in automobile electronics, cellphones and other embedded applications, but until recently they have not provided the performance necessary to make them practical for high-performance computing.

    Astra, one of the first supercomputers to use processors based on the Arm architecture in a large-scale high-performance computing platform, is expected to be deployed in late summer at Sandia National Laboratories.

    The U.S. Department of Energy’s National Nuclear Security Administration today announced that Astra, the first of a potential series of advanced architecture prototype platforms, will be deployed as part of the Vanguard program. This program will evaluate the feasibility of emerging high-performance computing architectures as production platforms to support NNSA’s mission to maintain and enhance the safety, security and effectiveness of the U.S. nuclear stockpile.

    Astra will be based on the recently announced Cavium Inc. ThunderX2 64-bit Arm-v8 microprocessor. The platform will consist of 2,592 dual-socket compute nodes, each with two 28-core processors, and will have a theoretical peak of more than 2.3 petaflops, equivalent to 2.3 quadrillion floating-point operations (FLOPS), or calculations, per second. While being the fastest is not one of the goals of Astra or the Vanguard program in general, a single Astra node is roughly one hundred times faster than a modern Arm-based cellphone.
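
    [For a rough sense of what those headline figures imply, here is a back-of-envelope calculation in Python using only the numbers quoted above; the per-core figure at the end is derived arithmetic, not a published specification.]

    ```python
    # Back-of-envelope arithmetic from the announced Astra figures (2,592 nodes,
    # dual-socket 28-core ThunderX2, >2.3 petaflops peak). The per-core number
    # is just an implied average, not an official specification.
    nodes = 2592
    cores_per_node = 2 * 28                  # two sockets, 28 cores each
    total_cores = nodes * cores_per_node
    peak_flops = 2.3e15                      # 2.3 petaflops

    print(f"total cores: {total_cores:,}")                               # 145,152
    print(f"implied peak per core: {peak_flops / total_cores / 1e9:.1f} GFLOPS")
    ```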

    “One of the important questions Astra will help us answer is how well does the peak performance of this architecture translate into real performance for our mission applications,” says Mark Anderson, program director for NNSA’s Advanced Simulation and Computing program, which funds Astra.

    A first step for Vanguard

    Scott Collis, director of Sandia’s Center for Computing Research, says: “Emerging architectures come with many challenges. Since the NNSA has not previously deployed high-performance computing platforms based on Arm processors, there are gaps in the software that must be addressed before considering this technology for future platforms much larger in scale than Astra.”

    As part of a multiple lab partnership, researchers anticipate continually improving Astra and future platforms.

    “Sandia researchers partnering with counterparts at Los Alamos and Lawrence Livermore national laboratories expect to develop an improved software-and-tools environment that will enable mission codes to make increasingly effective use of Astra as well as future leadership-class platforms,” says Ken Alvin, senior manager of Sandia’s extreme scale computing group. “The Vanguard program is designed to allow the NNSA to take prudent risks in exploring emerging technologies and broadening our future computing options.”

    Astra will be installed at Sandia in an expanded part of the building that originally housed the innovative Red Storm supercomputer.

    The Astra platform will be deployed in partnership with Westwind Computer Products Inc. and Hewlett Packard Enterprise.

    “Astra, like Red Storm, will require a very intimate collaboration between Sandia and commercial partners,” says James Laros, Vanguard project lead. “In this case, all three NNSA defense labs will work closely with Westwind, HP Enterprise, Arm, Cavium and the wider high-performance computing community to achieve a successful outcome of this project.”

    Astra takes its name from the Latin phrase “per aspera ad astra,” which translates as “through difficulties to the stars.”

    ____________________________________________________________

    “The development of a scalable Arm platform based on the HPE Apollo 70 will become a key resource to expand the Arm high-performance computing ecosystem. Westwind is honored to be entrusted by Sandia, in its continued commitment to developing small businesses here in New Mexico, to implement such an important project.”

    — Steve Hull, president of Westwind Computer Products

    “By delivering the world’s largest Arm-based supercomputer featuring the HPE Apollo 70 platform, a purpose-built architecture that includes advanced performance and storage capabilities, we are enabling the U.S. Department of Energy and National Nuclear Security Administration to power innovative solutions for energy and national security uses.”

    — Mike Vildibill, vice president of the Advanced Technology Group at HPE

    “Arm has been deeply engaged with Sandia National Laboratories working to comprehend and deliver on the needs of the high-performance computing community. We are eager to support the Vanguard program as a key milestone deployment for Arm and our partners, delivering on a shared vision to spur innovation in this critical domain.”

    — Drew Henry, senior vice president and general manager of Arm’s Infrastructure Business Line

    “Cavium is pleased to partner with Sandia National Laboratories to enable the Arm-v8-based high-performance computing cluster as part of the Vanguard program. Vanguard is an additional proof point regarding readiness and maturity of ThunderX2 processors for large-scale deployments and will further accelerate the entire computing ecosystem on the Arm server architecture.”

    — Gopal Hegde, vice president and general manager of the Data Center Processor Group at Cavium

    ____________________________________________________________

    [Wishful thinking?]

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Sandia Campus
    Sandia National Laboratory

    Sandia National Laboratories is a multiprogram laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.

     
  • richardmitnick 1:29 pm on June 3, 2018 Permalink | Reply
    Tags: Computational astronomy, NAOJ ATERUI II Cray XC50 supercomputer, Supercomputing

    From Astro Watch: “Supercomputer Astronomy: The Next Generation” 


    From Astro Watch

    NAOJ ATERUI II Cray XC50 supercomputer

    The supercomputer Cray XC50, nicknamed NS-05 “ATERUI II”, started operation on June 1, 2018. With a theoretical peak performance of 3.087 petaflops, ATERUI II is the world’s fastest supercomputer for astrophysical simulations. ATERUI II simulates a wide range of astronomical phenomena inaccessible to observational astronomy, allowing us to boldly go where no one has gone before, from the birth of the Universe itself to the interior of a dying star.

    Professor Eiichiro Kokubo, the CfCA Project Director says, “Computational astronomy is gaining popularity in many fields. A new ‘telescope’ for theoretical astronomy has opened its eyes. I expect that ATERUI II will explore the Universe through more realistic simulations.”

    ATERUI II is a massively parallel supercomputer and the 5th generation of systems operated by the National Astronomical Observatory of Japan (NAOJ). Linking more than forty thousand cores allows ATERUI II to calculate rapidly. ATERUI II has three times better performance than the previous “ATERUI” system. A high-speed network enables astronomers to access ATERUI II from their home institutes. This year, about 150 researchers will use ATERUI II.

    With its superior computational capability, ATERUI II will tackle problems too difficult for previous computers. For example, ATERUI II is able to calculate the mutual gravitational forces among the 200 billion stars in the Milky Way Galaxy, rather than bunching them into groups of stars the way other simulations do. In this way ATERUI II will generate a full-scale high-resolution model of the Milky Way Galaxy.
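
    [To make the scale of that calculation concrete, here is a toy Python sketch of the direct, all-pairs gravity sum described above; the body count, masses and positions are invented for illustration and have nothing to do with ATERUI II’s actual simulation codes.]

    ```python
    import numpy as np

    G = 6.674e-11  # gravitational constant, SI units

    def direct_accelerations(pos, mass):
        """Direct O(N^2) pairwise gravitational accelerations (toy scale only)."""
        n = len(mass)
        acc = np.zeros_like(pos)
        for i in range(n):
            dr = pos - pos[i]                      # vectors from body i to all others
            r2 = np.einsum("ij,ij->i", dr, dr)     # squared distances
            r2[i] = np.inf                         # skip the self-interaction
            acc[i] = np.sum((G * mass / r2**1.5)[:, None] * dr, axis=0)
        return acc

    # Tiny demo: 1,000 random bodies. Scaling this same all-pairs sum up to
    # 200 billion stars is exactly the kind of job that needs ATERUI II.
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1e17, 1e17, size=(1000, 3))   # positions in meters (made up)
    mass = rng.uniform(1e29, 1e31, size=1000)        # stellar-ish masses in kg
    print(direct_accelerations(pos, mass)[0])
    ```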

    Computational astronomy is still a young discipline compared to observational astronomy, in which researchers use telescopes to observe celestial objects and phenomena, and theoretical astronomy, where researchers describe the Universe in terms of mathematics and physical laws. Thanks to the rapid advancement of computational technology in recent decades, astronomical simulations to recreate celestial objects, phenomena, or even the whole Universe within the computer, have risen up as the third pillar in astronomy.

    “Aterui” is the name of a historical hero who lived in the Mizusawa area, where ATERUI II is located. With his comrades he fought bravely against conquerors 1200 years ago. ATERUI II is nicknamed after this brave hero in hopes that it will boldly confront the formidable enigmas of the Universe. The reimagined tensyo kanji (traditional block-style lettering) for “ATERUI II” (阿弖流為 弐), designed by artist Jun Kosaka, are written on the housing.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:18 am on June 3, 2018 Permalink | Reply
    Tags: Supercomputing

    From Science Node: “Full speed ahead” 

    From Science Node

    23 May, 2018
    Kevin Jackson

    US Department of Energy recommits to the exascale race.


    The US was once a leader in supercomputing, having created the first high-performance computer (HPC) in 1964. But as of November 2017, TOP500 ranked Titan, the fastest American-made supercomputer, only fifth on its list of the most powerful machines in the world. In contrast, China holds the first and second spots by a whopping margin.

    ORNL Cray Titan XK7 Supercomputer

    Sunway TaihuLight, China

    Tianhe-2 supercomputer China

    But it now looks like the US Department of Energy (DoE) is ready to commit to taking back those top spots. In a CNN opinion article, Secretary of Energy Rick Perry proclaims that “the future is in supercomputers,” and we at Science Node couldn’t agree more. To get a better understanding of the DoE’s plans, we sat down for a chat with Under Secretary for Science Paul Dabbar.

    Why is it important for the federal government to support HPC rather than leaving it to the private sector?

    A significant amount of the Office of Science and the rest of the DoE has had and will continue to have supercomputing needs. The Office of Science produces tremendous amounts of data like at Argonne, and all of our national labs produce data of increasing volume. Supercomputing is also needed in our National Nuclear Security Administration (NNSA) mission, which fulfills very important modeling needs for Department of Defense (DoD) applications.

    But to Secretary Perry’s point, we’re increasingly seeing a number of private sector organizations building their own supercomputers based on what we had developed and built a few generations ago that are now used for a broad range of commercial purposes.

    At the end of the day, we know that a secondary benefit of this push is that we’re providing the impetus for innovation within supercomputing.

    We assist the broader American economy by helping to support science and technology innovation within supercomputing.

    How are supercomputers used for national security?

    The NNSA arm, which is one of the three major arms of the three Under Secretaries here at the department, is our primary area of support for the nation’s defense. And as various testing treaties came into play over time, having the computing capacity to conduct proper testing and security of our stockpiled weapons was key. And that’s why if you look at our three exascale computers that we’re in the process of executing, two of them are on behalf of the Office of Science and one of them is on behalf of the NNSA.

    One of these three supercomputers is the Aurora exascale machine currently being built at Argonne National Laboratory, which Secretary Perry believes will be finished in 2021. Where did this timeline come from, and why Argonne?

    Argonne National Laboratory ALCF

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Depiction of ANL ALCF Cray Shasta Aurora supercomputer

    There was a group put together across different areas of DoE, primarily the Office of Science and NNSA. When we decided to execute on building the next wave of top global supercomputers, an internal consortium named the Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) was formed.

    That consortium developed exactly how to fund the technologies, how to issue requests, and what the target capabilities for the machines should be. The 2021 timeline was based on the CORAL group, the labs, and the consortium in conjunction with the Department of Energy headquarters here, the Office of Advanced Computing, and ultimately talking with the suppliers.

    The reason Argonne was selected for the first machine was that they already have a leadership computing facility there. They have a long history of other machines of previous generations, and they were already in the process of building out an exascale machine. So they were already looking at architecture issues, talking with Intel and others on what could be accomplished, and taking a look at how they can build on what they already had in terms of their capabilities and physical plant and user facilities.

    Why now? What’s motivating the push for HPC excellence at this precise moment?

    A lot of this is driven by where the technology is and where the capabilities are for suppliers and the broader HPC market. We’re part of a constant dialogue with the Nvidias, Intels, IBMs, and Crays of the world in what we think is possible in terms of the next step in supercomputing.

    Why now? The technology is available now, and the need is there for us considering the large user facilities coming online across the whole of the national lab complex and the need for stronger computing power.

    The history of science, going back to the late 1800s and early 1900s, was about competition along strings of types of research, whether it was chemistry or physics. If you take any of the areas of science, including high-performance computing, anything that’s being done by anyone out there along any of these strings moves us all along. However, we at the DoE believe America must and should be in the lead of scientific advances across all different areas, and certainly in the area of computing.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 1:48 pm on May 30, 2018 Permalink | Reply
    Tags: OLCF Titan supercomputer, Supercomputers Provide New Window Into the Life and Death of a Neutron, Supercomputing

    From Lawrence Berkeley National Lab: “Supercomputers Provide New Window Into the Life and Death of a Neutron” 


    From Lawrence Berkeley National Lab

    May 30, 2018
    Glenn Roberts Jr.
    geroberts@lbl.gov
    (510) 486-5582

    Berkeley Lab-led research team simulates sliver of the universe to tackle subatomic-scale physics problem.

    In this illustration, the grid in the background represents the computational lattice that theoretical physicists used to calculate a particle property known as nucleon axial coupling. This property determines how a W boson (white wavy line) interacts with one of the quarks in a neutron (large transparent sphere in foreground), emitting an electron (large arrow) and antineutrino (dotted arrow) in a process called beta decay. This process transforms the neutron into a proton (distant transparent sphere). (Credit: Evan Berkowitz/Jülich Research Center, Lawrence Livermore National Laboratory)

    Experiments that measure the lifetime of neutrons reveal a perplexing and unresolved discrepancy. While this lifetime has been measured to a precision within 1 percent using different techniques, apparent conflicts in the measurements offer the exciting possibility of learning about as-yet undiscovered physics.

    Now, a team led by scientists in the Nuclear Science Division at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) has enlisted powerful supercomputers to calculate a quantity known as the “nucleon axial coupling,” or gA – which is central to our understanding of a neutron’s lifetime – with an unprecedented precision. Their method offers a clear path to further improvements that may help to resolve the experimental discrepancy.

    To achieve their results, the researchers created a microscopic slice of a simulated universe to provide a window into the subatomic world. Their study was published online May 30 in the journal Nature.

    The nucleon axial coupling is more exactly defined as the strength at which one component (known as the axial component) of the “weak current” of the Standard Model of particle physics couples to the neutron. The weak current is given by one of the four known fundamental forces of the universe and is responsible for radioactive beta decay – the process by which a neutron decays to a proton, an electron, and a neutrino.

    In addition to measurements of the neutron lifetime, precise measurements of neutron beta decay are also used to probe new physics beyond the Standard Model. Nuclear physicists seek to resolve the lifetime discrepancy, and to complement these experimental results, by determining gA more precisely.

    The researchers turned to quantum chromodynamics (QCD), a cornerstone of the Standard Model that describes how quarks and gluons interact with each other. Quarks and gluons are the fundamental building blocks for larger particles, such as neutrons and protons. The dynamics of these interactions determine the mass of the neutron and proton, and also the value of gA.

    But sorting through QCD’s inherent complexity to produce these quantities requires the aid of massive supercomputers. In the latest study, researchers applied a numeric simulation known as lattice QCD, which represents QCD on a finite grid.
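
    [As a purely illustrative sketch (not the collaboration’s actual code), here is how such a finite grid might be represented in Python: the gluon fields live on the links between neighboring lattice sites as 3x3 complex matrices, and the sheer number of degrees of freedom is what drives the computational cost. The lattice dimensions below are made up.]

    ```python
    import numpy as np

    # Toy lattice-QCD data layout: a 4D grid of sites, with one 3x3 complex
    # SU(3) link matrix per site per spacetime direction. Sizes are illustrative.
    L, T = 8, 16                                 # spatial and temporal extents
    sites = (L, L, L, T)
    links = np.zeros(sites + (4, 3, 3), dtype=complex)
    links[...] = np.eye(3)                       # trivial "free field" starting point

    n_sites = int(np.prod(sites))
    real_dof = links.size * 2                    # each complex number = 2 real numbers
    print(f"{L}^3 x {T} lattice: {n_sites:,} sites, {real_dof:,} real numbers "
          f"just for the gluon links")
    ```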

    A type of mirror-flip symmetry in particle interactions called parity (like swapping your right and left hands) is respected by the interactions of QCD, while the axial component of the weak current flips parity – and nature as a whole does not respect parity (analogously, most of us are right-handed). Because nature breaks this symmetry, the value of gA can only be determined through experimental measurements or theoretical predictions with lattice QCD.

    The team’s new theoretical determination of gA is based on a simulation of a tiny piece of the universe – the size of a few neutrons in each direction. They simulated a neutron transitioning to a proton inside this tiny section of the universe, in order to predict what happens in nature.

    The model universe contains one neutron amid a sea of quark-antiquark pairs that are bustling under the surface of the apparent emptiness of free space.

    André Walker-Loud, a staff scientist at Berkeley Lab, led the study that calculated a property central to understanding the lifetime of neutrons. (Credit: Marilyn Chung/Berkeley Lab)

    “Calculating gA was supposed to be one of the simple benchmark calculations that could be used to demonstrate that lattice QCD can be utilized for basic nuclear physics research, and for precision tests that look for new physics in nuclear physics backgrounds,” said André Walker-Loud, a staff scientist in Berkeley Lab’s Nuclear Science Division who led the new study. “It turned out to be an exceptionally difficult quantity to determine.”

    This is because lattice QCD calculations are complicated by exceptionally noisy statistical results that had thwarted major progress in reducing uncertainties in previous gA calculations. Some researchers had previously estimated that it would require the next generation of the nation’s most advanced supercomputers to achieve a 2 percent precision for gA by around 2020.

    The team participating in the latest study developed a way to improve their calculations of gA using an unconventional approach and supercomputers at Oak Ridge National Laboratory (Oak Ridge Lab) and Lawrence Livermore National Laboratory (Livermore Lab), including Livermore Lab’s Vulcan IBM Blue Gene/Q system.

    LLNL Vulcan IBM Blue GeneQ system supercomputer

    The study involved scientists from more than a dozen institutions, including researchers from UC Berkeley and several other Department of Energy national labs.

    Chia Cheng “Jason” Chang, the lead author of the publication and a postdoctoral researcher in Berkeley Lab’s Nuclear Science Division for the duration of this work, said, “Past calculations were all performed amidst this more noisy environment,” which clouded the results they were seeking. Chang has also joined the Interdisciplinary Theoretical and Mathematical Sciences Program at RIKEN in Japan as a research scientist.

    Walker-Loud added, “We found a way to extract gA earlier in time, before the noise ‘explodes’ in your face.”

    Chang said, “We now have a purely theoretical prediction of the lifetime of the neutron, and it is the first time we can predict the lifetime of the neutron to be consistent with experiments.”

    “This was an intense 2 1/2-year project that only came together because of the great team of people working on it,” Walker-Loud said.

    This latest calculation also places tighter constraints on a branch of physics theories that stretch beyond the Standard Model – constraints that exceed those set by powerful particle collider experiments at CERN’s Large Hadron Collider. But the calculations aren’t yet precise enough to determine if new physics have been hiding in the gA and neutron lifetime measurements.

    Chang and Walker-Loud noted that the main limitation to improving upon the precision of their calculations is in supplying more computing power.

    “We don’t have to change the technique we’re using to get the precision necessary,” Walker-Loud said.

    The latest work builds upon decades of research and computational resources by the lattice QCD community. In particular, the research team relied upon QCD data generated by the MILC Collaboration; an open source software library for lattice QCD called Chroma, developed by the USQCD collaboration; and QUDA, a highly optimized open source software library for lattice QCD calculations.

    ORNL Cray Titan XK7 Supercomputer

    The team drew heavily upon the power of Titan, a supercomputer at Oak Ridge Lab equipped with graphics processing units, or GPUs, in addition to more conventional central processing units, or CPUs. GPUs have evolved from their early use in accelerating video game graphics to current applications in evaluating large arrays for tackling complicated algorithms pertinent to many fields of science.

    The axial coupling calculations used about 184 million “Titan hours” of computing power – it would take a single laptop computer with a large memory about 600,000 years to complete the same calculations.

    As the researchers worked through their analysis of this massive set of numerical data, they realized that more refinements were needed to reduce the uncertainty in their calculations.

    The team was assisted by the Oak Ridge Leadership Computing Facility staff to efficiently utilize their 64 million Titan-hour allocation, and they also turned to the Multiprogrammatic and Institutional Computing program at Livermore Lab, which gave them more computing time to resolve their calculations and reduce their uncertainty margin to just under 1 percent.

    “Establishing a new way to calculate gA has been a huge rollercoaster,” Walker-Loud said.

    With more statistics from more powerful supercomputers, the research team hopes to drive the uncertainty margin down to about 0.3 percent. “That’s where we can actually begin to discriminate between the results from the two different experimental methods of measuring the neutron lifetime,” Chang said. “That’s always the most exciting part: When the theory has something to say about the experiment.”

    He added, “With improvements, we hope that we can calculate things that are difficult or even impossible to measure in experiments.”

    Already, the team has applied for time on a next-generation supercomputer at Oak Ridge Lab called Summit, which would greatly speed up the calculations.

    ORNL IBM Summit supercomputer depiction

    In addition to researchers at Berkeley Lab and UC Berkeley, the science team also included researchers from University of North Carolina, RIKEN BNL Research Center at Brookhaven National Laboratory, Lawrence Livermore National Laboratory, the Jülich Research Center in Germany, the University of Liverpool in the U.K., the College of William & Mary, Rutgers University, the University of Washington, the University of Glasgow in the U.K., NVIDIA Corp., and Thomas Jefferson National Accelerator Facility.

    One of the study participants is a scientist at the National Energy Research Scientific Computing Center (NERSC).

    NERSC

    NERSC Cray XC40 Cori II supercomputer

    LBL NERSC Cray XC30 Edison supercomputer


    The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

    NERSC PDSF


    PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

    The Titan supercomputer is a part of the Oak Ridge Leadership Computing Facility (OLCF). NERSC and OLCF are DOE Office of Science User Facilities.

    The work was supported by Laboratory Directed Research and Development programs at Berkeley Lab, the U.S. Department of Energy’s Office of Science, the Nuclear Physics Double Beta Decay Topical Collaboration, the DOE Early Career Award Program, the NVIDIA Corporation, the Joint Sino-German Research Projects of the German Research Foundation and National Natural Science Foundation of China, RIKEN in Japan, the Leverhulme Trust, the National Science Foundation’s Kavli Institute for Theoretical Physics, DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, and the Lawrence Livermore National Laboratory Multiprogrammatic and Institutional Computing program through a Tier 1 Grand Challenge award.

    See the full article here.




    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California



     
  • richardmitnick 1:07 pm on May 18, 2018 Permalink | Reply
    Tags: China's Sunway TaihuLight - the world's fastest supercomputer, Supercomputing

    From Science Node: “What puts the super in supercomputer?” 

    From Science Node


    The secret behind supercomputing? More of everything.

    14 May, 2018
    Kevin Jackson

    We’ve come a long way since MITS developed the first personal computer in 1974, which was sold as a kit that required the customer to assemble the machine themselves. Jump ahead to 2018: around 77% of Americans own a smartphone, and nearly half of the global population uses the internet.


    Superpowering science. Faster processing speeds, extra memory, and super-sized storage capacity are what make supercomputers the tools of choice for many researchers.

    The devices we keep at home and in our pockets are pretty advanced compared to the technology of the past, but they can’t hold a candle to the raw power of a supercomputer.

    The capabilities of the HPC machines we talk about so often here at Science Node can be hard to conceptualize. That’s why we’re going to lay it all out for you and explain how supercomputers differ from the laptop on your desk, and just what it is these machines need all that extra performance for.

    The need for speed

    Computer performance is measured in FLOPS, which stands for floating-point operations per second. The more FLOPS a computer can process, the more powerful it is.

    You’ve come a long way, baby. The first personal computer, the Altair 8800, was sold in 1974 as a mail-order kit that users had to assemble themselves.

    For example, look to the Intel Core i9 Extreme Edition processor designed for desktop computers. It has 18 cores, or processing units that take in tasks and complete them based on received instructions.

    This single chip is capable of one trillion floating point operations per second (i.e., 1 teraFLOP)—as fast as a supercomputer from 1998. You don’t need that kind of performance to check email and surf the web, but it’s great for hardcore gamers, livestreaming, and virtual reality.
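
    [One rough way to gauge your own machine against these numbers is to time a dense matrix multiply, which performs about 2·n³ floating-point operations. This is an illustrative sketch only; the measured figure depends heavily on the hardware and the numpy build.]

    ```python
    import time
    import numpy as np

    # Time an n x n matrix multiply (~2*n^3 floating-point operations) to get a
    # rough FLOPS estimate for whatever machine runs this.
    n = 2000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start

    flops = 2 * n**3 / elapsed
    print(f"~{flops / 1e12:.2f} teraFLOPS achieved "
          f"({2 * n**3:,} operations in {elapsed:.3f} s)")
    ```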

    Modern supercomputers use similar chips, memory, and storage as personal computers, but instead of a few processors they have tens of thousands. What distinguishes supercomputers is scale.

    China’s Sunway TaihuLight, which is currently the fastest supercomputer in the world, boasts 10,649,600 cores with a maximum performance of more than 93,014.6 teraFLOPS.

    Sunway TaihuLight, China

    Theoretically, the Sunway TaihuLight is capable of reaching 125,436 teraFLOPS of performance—more than 125 thousand times faster than the Intel Core i9 Extreme Edition processor. And it ‘only’ cost around ¥1.8 billion ($270 million), compared to the Intel chip’s price tag of $1,999.

    Don’t forget memory

    A computer’s memory holds information while the processor is working on it. When you’re playing Fortnite, your computer’s random-access memory (RAM) stores and updates the speed and direction in which you’re running.

    Most people will get by fine with 8 to 16 GB of RAM. Hardcore gamers generally find that 32GB of RAM is enough, but computer aficionados that run virtual machines and perform other high-end computing tasks at home or at work will sometimes build machines with 64GB or more of RAM.

    ____________________________________________________
    What is a supercomputer used for?

    Climate modeling and weather forecasts
    Computational fluid dynamics
    Genome analysis
    Artificial intelligence (AI) and predictive analytics
    Astronomy and space exploration
    ____________________________________________________

    The Sunway TaihuLight once again squashes the competition with around 1,310,600 GB of memory to work with. This means the machine can hold and process an enormous amount of data at the same time, which allows for large-scale simulations of complex events, such as the devastating 1976 earthquake in Tangshan.

    Even a smaller supercomputer, such as the San Diego Supercomputer Center’s Comet, has 247 terabytes of memory—nearly 4000 times that of a well-equipped laptop.

    SDSC Dell Comet supercomputer at San Diego Supercomputer Center (SDSC)

    Major multitasking

    Another advantage of supercomputers is their ability to excel at parallel computing, in which two or more processors run simultaneously and divide the workload of a task, reducing the time the task takes to complete.

    Personal computers have limited parallel ability. But since the 1990s, most supercomputers have used massively parallel processing, in which thousands of processors attack a problem simultaneously. In theory this is great, but there can be problems.

    Someone (or something) has to decide how the task will be broken up and shared among the processors. But some complex problems don’t divide easily. One task may be processed quickly, but then must wait on a task that’s processed more slowly. The practical, rather than theoretical, speeds of supercomputers depend on this kind of task management.
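
    [A small sketch of that effect: with several workers, the job finishes only when the slowest worker does, so the same total work split unevenly yields a much smaller real speedup. The chunk sizes below are invented purely for illustration.]

    ```python
    # Why task management matters: with P workers, wall-clock time is set by the
    # slowest worker, so uneven chunks erode the ideal P-fold speedup.
    def parallel_time(chunks):
        """Wall-clock time when each worker gets one chunk (arbitrary time units)."""
        return max(chunks)

    serial_time = 100.0
    even = [12.5] * 8                      # perfectly balanced across 8 workers
    uneven = [5, 5, 5, 5, 10, 10, 20, 40]  # same total work, badly balanced

    for name, chunks in [("balanced", even), ("unbalanced", uneven)]:
        t = parallel_time(chunks)
        print(f"{name}: {t} units, speedup {serial_time / t:.1f}x (ideal would be 8x)")
    ```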

    Super powers for super projects

    You might now be looking at your computer in disappointment, but the reality is that unless you’re studying volcanoes or sequencing the human genome, you simply don’t need that kind of power.

    The truth is, many supercomputers are shared resources, processing data and solving equations for multiple teams of researchers at the same time. It’s rare for a scientist to use a supercomputer’s entire capacity just for one project.

    So while a top-of-the-line machine like the Sunway TaihuLight leaves your laptop in the dirt, take heart that personal computers are getting faster all the time. But then, so are supercomputers. With each step forward in speed and performance, HPC technology helps us unlock the mysteries of the universe around us.

    Read more:

    The 5 fastest supercomputers in the world
    The race to exascale
    3 reasons why quantum computing is closer than ever

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 12:45 pm on May 7, 2018 Permalink | Reply
    Tags: Faces of Summit: Preparing to Launch, Supercomputing

    From Oak Ridge National Laboratory: “Faces of Summit: Preparing to Launch” 


    From Oak Ridge National Laboratory

    OLCF

    5.1.18
    Katie Elyce Jones

    HPC Support Specialist Chris Fuson in the Summit computing center. No image credit.

    OLCF’s Chris Fuson works with Summit vendors and OLCF team members to ready Summit’s batch scheduler and job launcher.

    The Faces of Summit series shares stories of people working to stand up America’s next top supercomputer for open science, the Oak Ridge Leadership Computing Facility’s Summit. The next-generation machine is scheduled to come online in 2018.

    ORNL IBM Summit Supercomputer


    At the Oak Ridge Leadership Computing Facility (OLCF), supercomputing staff and users are already talking about what kinds of science problems they will be able to solve once they “get on Summit.”

    But before they run their science applications on the 200-petaflop IBM AC922 supercomputer later this year, they will have to go through the system’s batch scheduler and job launcher.

    “The batch scheduler and job launcher control access to the compute resources on the new machine,” said Chris Fuson, OLCF high-performance computing (HPC) support specialist. “As a user, you will need to understand these resources to utilize the system effectively.”

    A staff member in the User Assistance and Outreach (UAO) Group, Fuson has worked on five flagship supercomputers at OLCF—Cheetah, Phoenix, Jaguar, Titan, and now Summit.

    [Cheetah, Phoenix, no images available.]

    ORNL OCLF Jaguar Cray Linux supercomputer

    ORNL Cray XK7 Titan Supercomputer

    With a background in programming and computer science, Fuson said he likes to focus on solving the unexpected issues that come up during installation and testing, such as fixing bugs or adding new features to help users navigate the system.

    Fuson can often be found standing at his desk listening to background music while he sorts through new tasks, user requests, and technical issues related to job scheduling.

    “As the systems change and evolve, the detective work involved in helping users solve problems as they run on a new machine keeps it interesting,” he said.

    Of course, the goal is to make the transition to a new system as smooth as possible for users. While still responding to day-to-day tasks related to the OLCF’s current supercomputer, Titan, Fuson and the UAO group also work with IBM to learn, incorporate, and document the IBM Load Sharing Facility (LSF) batch scheduler and the parallel job launcher jsrun for Summit. LSF allocates Summit resources, and jsrun launches jobs on the compute nodes.

    “The new launcher provides similar functionality to other parallel job launchers, such as aprun and mpirun, but requires users to take a slightly different approach in determining how to request and lay out resources for a job,” Fuson said.
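
    [As a hedged illustration of what “laying out resources” means: the flag letters below follow jsrun’s published resource-set options (-n resource sets, -a tasks, -c cores and -g GPUs per set), but the helper function, the numbers and the executable name are placeholders, not OLCF guidance.]

    ```python
    # Hypothetical helper that assembles a jsrun launch line from a resource-set
    # layout. Flag meanings follow jsrun's documented options; the values and the
    # executable are placeholders, not an official recommendation.
    def jsrun_command(resource_sets, tasks_per_rs, cpus_per_rs, gpus_per_rs, exe):
        return (f"jsrun -n {resource_sets} -a {tasks_per_rs} "
                f"-c {cpus_per_rs} -g {gpus_per_rs} {exe}")

    # Example: six resource sets, each with one task, seven cores and one GPU --
    # one natural way to carve up a node that has six GPUs.
    print(jsrun_command(6, 1, 7, 1, "./my_app"))
    ```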

    IBM developed jsrun to meet the unique computing needs of two CORAL partners, the US Department of Energy’s (DOE’s) Oak Ridge and Lawrence Livermore National Laboratories.

    “We relayed our workload and scheduling requirements to IBM,” Fuson said. “For example, as a leadership computing facility, we provide priority for large jobs in the batch queue. We work with LSF developers to incorporate our center’s policy requirements and diverse workload needs into the existing scheduler.”

    OLCF Center for Accelerated Application Readiness team members, who are optimizing application codes for Summit, have tested LSF and jsrun on Summitdev, an early access system with IBM processors one generation away from Summit’s Power9 processors.

    “Early users are already providing feedback,” Fuson said. “There’s a lot of work that goes into getting these pieces polished. At first, it is always a struggle as we work toward production, but things will begin to fall into place.”

    To prepare all facility users for scheduling on Summit, Fuson is also developing user documentation and training. In February, he introduced users to jsrun on the monthly User Conference Call for the OLCF, a DOE Office of Science User Facility at ORNL.

    “Right now, Summit is a big focus,” he said. “We’ve invested time in learning these new tools and testing them in the Summit environment.”

    And what about during his free time when Summit is not the focus? Fuson spends his off-hours scheduling as well. “My hobby is taxiing my kids around town between practices,” he joked.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid architecture systems—a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to process an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.
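
    [As a loose analogy in Python (not Titan’s actual programming model), the difference between handling work one item at a time and applying one simple operation across millions of items at once looks like this:]

    ```python
    import time
    import numpy as np

    # Loose analogy for the CPU/GPU split: a plain Python loop handles one element
    # at a time (flexible but slow), while a vectorized numpy operation applies the
    # same simple arithmetic across millions of elements in bulk -- the style of
    # work that GPUs are built to accelerate.
    x = np.random.rand(2_000_000)

    t0 = time.perf_counter()
    slow = [v * 2.0 + 1.0 for v in x]          # one element at a time
    t1 = time.perf_counter()
    fast = x * 2.0 + 1.0                       # same arithmetic, applied in bulk
    t2 = time.perf_counter()

    print(f"loop: {t1 - t0:.2f}s, vectorized: {t2 - t1:.3f}s")
    ```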

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     
  • richardmitnick 8:55 am on May 3, 2018 Permalink | Reply
    Tags: Pawsey Supercomputing Centre - Perth Australia, Supercomputing

    From CSIROscope: “Supercharging science with the Pawsey Centre” 


    CSIROscope

    3 May 2018
    Aditi Subramanya

    The Pawsey Supercomputing Centre in Perth, one of Australia’s two peak high-performance computing facilities, is critical infrastructure for enabling research across a wide range of disciplines.

    Galaxy Cray XC30 Series Supercomputer at Pawsey Supercomputer Centre Perth Australia

    Magnus Cray XC40 supercomputer at Pawsey Supercomputer Centre Perth Australia

    When we created Australia’s first digital computer, CSIRAC, back in 1949, it was an incredibly powerful machine for its time, and only the fifth of its kind in the world. Things have changed a lot since then – your standard smartphone now has around 7 million times the processing power of CSIRAC – but we’re still lucky enough to benefit from some pretty incredible supercomputer support for science.

    Together with the four universities in Western Australia we run the Pawsey Supercomputing Centre in Perth, one of Australia’s two peak high-performance computing facilities. The team at Pawsey has its eyes set on enabling research to improve our nation’s health, expand what we know of the Universe, and even safeguarding our water supplies.

    Last week the Australian Government awarded $70 million to the Centre to replace its existing infrastructure. So, what exactly does this mean?

    Pawsey Centre supercomputers make science possible

    Each year Pawsey is used by 1500 researchers from all over Australia, working on hundreds of scientific projects. They use the supercomputers to speed up their scientific outcomes, and sometimes even to make them possible. A medical researcher, for example, carrying out genetic analysis can use Pawsey’s current systems to do a year’s worth of work in five hours. Pawsey is also used to model the movement of the ocean, and its impacts on structures designed to harness the energy of waves. This helps improve designs, reduce costs, and minimise risks.
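
    As a rough back-of-the-envelope check (assuming “a year’s worth of work” means a year of continuous computation on a single machine), that quoted figure implies a speedup of roughly 1,750 times:

```python
# Back-of-the-envelope check of the quoted genetic-analysis speedup.
serial_hours = 365 * 24      # ~8,760 hours in a year of continuous computing
pawsey_hours = 5
print(f"Implied speedup: ~{serial_hours / pawsey_hours:,.0f}x")  # ~1,752x
```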

    An artist’s impression of a wave farm: three large round floating structures anchored to the seabed by cables, with two scuba divers near the surface. Researchers from Carnegie Clean Energy and their partner, the University of Western Australia’s Centre for Offshore Foundation Systems, used the Pawsey Centre to simulate the environments their wave farms face in real-world climates. Image: Carnegie Clean Energy.

    And it’s being used by astronomers who need to process the huge volumes of data generated by our own radio telescopes, the Australian Square Kilometre Array Pathfinder and the Murchison Widefield Array.

    SKA/ASKAP radio telescope at the Murchison Radio-astronomy Observatory (MRO) in Mid West region of Western Australia

    SKA Murchison Widefield Array, Boolardy station in outback Western Australia, at the Murchison Radio-astronomy Observatory (MRO)

    This new funding injection will give Pawsey the ability to continue investing in new technologies. More importantly, it enables the Centre to drive innovation and accelerate discoveries in medical science, astronomy, geoscience, marine science, chemistry, food, agriculture and more. That translates into supercomputers and data storage systems that enable Australian researchers to solve our biggest challenges and embrace our biggest opportunities.

    The Pawsey Centre is a joint venture between CSIRO and four partner universities (Curtin University, Edith Cowan University, Murdoch University, and the University of Western Australia), supported by the Australian Government and Western Australian Government. It is one of several science-ready national facilities that we manage for use by Australian and international researchers.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition


    So what can we expect these new radio projects to discover? We have no idea, but history tells us that they are almost certain to deliver some major surprises.

    Making these new discoveries may not be so simple. Gone are the days when astronomers could just notice something odd as they browse their tables and graphs.

    Nowadays, astronomers are more likely to be distilling their answers from carefully-posed queries to databases containing petabytes of data. Human brains are just not up to the job of making unexpected discoveries in these circumstances, and instead we will need to develop “learning machines” to help us discover the unexpected.

    With the right tools and careful insight, who knows what we might find.

    CSIRO campus

    CSIRO, the Commonwealth Scientific and Industrial Research Organisation, is Australia’s national science agency and one of the largest and most diverse research agencies in the world.

     
  • richardmitnick 11:21 am on May 2, 2018 Permalink | Reply
    Tags: , , , , , , Supercomputing   

    From Argonne National Laboratory ALCF: “ALCF supercomputers advance earthquake modeling efforts” 

    Argonne Lab
    News from Argonne National Laboratory

    ALCF

    May 1, 2018
    John Spizzirri

    Southern California defines cool. The perfect climes of San Diego, the glitz of Hollywood, the magic of Disneyland. The geology is pretty spectacular, as well.

    “Southern California is a prime natural laboratory to study active earthquake processes,” says Tom Jordan, a professor in the Department of Earth Sciences at the University of Southern California (USC). “The desert allows you to observe the fault system very nicely.”

    The fault system to which he is referring is the San Andreas, among the more famous fault systems in the world. With roots deep in Mexico, it scars California from the Salton Sea in the south to Cape Mendocino in the north, where it then takes a westerly dive into the Pacific.

    Situated as it is at the heart of the San Andreas Fault System, Southern California does make an ideal location to study earthquakes. That it is home to nearly 24 million people makes for a more urgent reason to study them.

    San Andreas Fault System. Aerial photo of the San Andreas Fault looking northwest onto the Carrizo Plain with Soda Lake visible at the upper left. Credit: John Wiley (User:Jw4nvc), Santa Barbara, California.

    USGS diagram of the San Andreas Fault. http://nationalatlas.gov/articles/geology/features/sanandreas.html

    Jordan and a team from the Southern California Earthquake Center (SCEC) are using the supercomputing resources of the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy Office of Science User Facility, to advance modeling for the study of earthquake risk and how to reduce it.

    Headquartered at USC, the center is one of the largest collaborations in geoscience, engaging over 70 research institutions and 1,000 investigators from around the world.

    The team relies on a century’s worth of data from instrumental records as well as regional and national seismic hazard models to develop new tools for understanding earthquake hazards. Working with the ALCF, they have used this information to improve their earthquake rupture simulator, RSQSim.

    RSQ is a reference to rate- and state-dependent friction in earthquakes — a friction law that can be used to study the nucleation, or initiation, of earthquakes. RSQSim models both nucleation and rupture processes to understand how earthquakes transfer stress to other faults.
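
    The article doesn’t spell out RSQSim’s equations, but the standard Dieterich-Ruina rate- and state-dependent friction law that such simulators build on can be sketched as follows (the parameter values here are illustrative, not RSQSim’s actual inputs):

```python
# A minimal sketch of the Dieterich-Ruina rate- and state-dependent friction law.
import numpy as np

def rate_state_friction(v, theta, mu0=0.6, a=0.01, b=0.015, v0=1e-6, dc=1e-4):
    """Friction coefficient: mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc)."""
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc)

def theta_dot(v, theta, dc=1e-4):
    """State evolution ('aging law'): d(theta)/dt = 1 - v*theta/dc."""
    return 1.0 - v * theta / dc

# At steady state (theta = dc/v), friction decreases as slip rate increases
# whenever b > a -- the velocity-weakening condition that lets earthquakes nucleate.
for v in (1e-7, 1e-6, 1e-5):
    print(f"slip rate {v:.0e} m/s -> friction {rate_state_friction(v, theta=1e-4 / v):.4f}")
```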

    ALCF staff were instrumental in adapting the code to Mira, the facility’s 10-petaflops supercomputer, to allow for the larger simulations required to model earthquake behaviors in very complex fault systems like the San Andreas; that work led to the team’s biggest discovery.

    Shake, rattle, and code

    The SCEC, in partnership with the U.S. Geological Survey, had already developed the Uniform California Earthquake Rupture Forecast (UCERF), an empirically based model that integrates theory, geologic information, and geodetic data, like GPS displacements, to determine spatial relationships between faults and slippage rates of the tectonic plates that created those faults.

    Though more traditional, the newest version, UCERF3, is considered the best available representation of California earthquake ruptures. Even so, the picture it portrays is still not as accurate as researchers would hope.

    “We know a lot about how big earthquakes can be, how frequently they occur, and where they occur, but we cannot predict them precisely in time,” notes Jordan.

    The team turned to Mira to run RSQSim to determine whether they could achieve more accurate results more quickly. A physics-based code, RSQSim produces long-term synthetic earthquake catalogs that comprise dates, times, locations, and magnitudes for predicted events.
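
    To give a feel for what such a catalog looks like, and nothing more, here is a toy sketch that draws random event times and Gutenberg-Richter-distributed magnitudes; it mimics only the shape of the output, not RSQSim’s physics:

```python
# Toy synthetic earthquake catalog: Poisson event count, uniform event times,
# Gutenberg-Richter magnitudes. Not RSQSim -- purely an illustration of the output format.
import numpy as np

rng = np.random.default_rng(42)
years, rate_per_year, b_value, m_min = 100_000, 0.05, 1.0, 6.0

n_events = rng.poisson(rate_per_year * years)
times = np.sort(rng.uniform(0, years, n_events))                # event years
mags = m_min - np.log10(1.0 - rng.random(n_events)) / b_value   # G-R sampling

catalog = list(zip(times.round(2), mags.round(2)))              # (year, magnitude)
print(catalog[:5], "... total events:", n_events)
```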

    Using simulation, researchers impose stresses upon some representation of a fault system, which changes the stress throughout much of the system and thus changes the way future earthquakes occur. Trying to model these powerful stress-mediated interactions is particularly difficult with complex systems and faults like San Andreas.

    “We just let the system evolve and create earthquake catalogs for a hundred thousand or a million years. It’s like throwing a grain of sand in a set of cogs to see what happens,” explains Christine Goulet, a team member and executive science director for special projects with SCEC.

    The end result is a more detailed picture of the possible hazard, which forecasts a sequence of earthquakes of various magnitudes expected to occur on the San Andreas Fault over a given time range.

    The group tried to calibrate RSQSim’s numerous parameters to replicate UCERF3, but eventually decided to run the code with its default parameters. While the initial intent was to evaluate the magnitude of differences between the models, they discovered, instead, that both models agreed closely on their forecasts of future seismologic activity.

    “So it was an a-ha moment. Eureka,” recalls Goulet. “The results were a surprise because the group had thought carefully about optimizing the parameters. The decision not to change them from their default values made for very nice results.”

    The researchers noted that the mutual validation of the two approaches could prove extremely productive in further assessing seismic hazard estimates and their uncertainties.

    Information derived from the simulations will help the team compute the strong ground motions generated by faulting that occurs at the surface — the characteristic shaking that is synonymous with earthquakes. To do this, the team couples the earthquake rupture forecasts, UCERF and RSQSim, with different models that represent the way waves propagate through the system. Called ground motion prediction equations, these are standard equations used by engineers to calculate the shaking levels from earthquakes of different sizes and locations.
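
    The article doesn’t reproduce those equations, but most GMPEs share a simple functional form: the logarithm of the shaking intensity grows with magnitude and decays with distance. Here is a toy sketch with invented coefficients (real GMPEs, such as the NGA-West2 family, have many more terms):

```python
# Generic GMPE sketch: ln(PGA) = c0 + c1*M + c2*ln(sqrt(R^2 + h^2)).
# Coefficients are invented purely for illustration.
import math

def toy_gmpe(magnitude, distance_km, c0=-3.5, c1=1.0, c2=-1.4, h=6.0):
    """Return a toy peak ground acceleration (in g) for a given magnitude and distance."""
    r = math.sqrt(distance_km**2 + h**2)
    return math.exp(c0 + c1 * magnitude + c2 * math.log(r))

for dist in (5, 20, 50):
    print(f"M7.0 at {dist:>2} km -> ~{toy_gmpe(7.0, dist):.3f} g (toy numbers)")
```

    (The Waveqlab3D code discussed next takes the more detailed, physics-based route of simulating the wave propagation directly.)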

    One of those models is the dynamic rupture and wave propagation code Waveqlab3D (Finite Difference Quake and Wave Laboratory 3D), which is the focus of the SCEC team’s current ALCF allocation.

    “These experiments show that the physics-based model RSQSim can replicate the seismic hazard estimates derived from the empirical model UCERF3, but with far fewer statistical assumptions,” notes Jordan. “The agreement gives us more confidence that the seismic hazard models for California are consistent with what we know about earthquake physics. We can now begin to use these physics to improve the hazard models.”

    This project was awarded computing time and resources at the ALCF through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. The team’s research is also supported by the National Science Foundation, the U.S. Geological Survey, and the W.M. Keck Foundation.

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    See the full article here .

    Earthquake Alert


    Earthquake Network is a research project that aims to develop and maintain a crowdsourced, smartphone-based earthquake warning system at a global level. Smartphones made available by the population are used to detect earthquake waves using their on-board accelerometers. When an earthquake is detected, a warning is issued to alert people not yet reached by the damaging waves of the earthquake.

    The project started on January 1, 2013 with the release of the Android application of the same name, Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

    Smartphone network spatial distribution (green and red dots) on December 4, 2015

    Meet The Quake-Catcher Network

    QCN bloc

    Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide better understanding of earthquakes, give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at Caltech, the QCN project is moving to the University of Southern California Department of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) By mounting them to the floor, they measure more reliable shaking than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.
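
    The article doesn’t say exactly how the QCN software decides that a sensor is seeing “strong new motions.” A classic approach in seismology is the short-term-average/long-term-average (STA/LTA) ratio; the sketch below illustrates that idea and is not QCN’s actual detection code:

```python
# STA/LTA trigger sketch: flag samples where short-term energy jumps well above background.
import numpy as np

def sta_lta_trigger(accel, sta_len=50, lta_len=1000, threshold=4.0):
    """Return sample indices where the short-term average energy exceeds
    `threshold` times the long-term (background) average energy."""
    energy = np.asarray(accel, dtype=float) ** 2
    csum = np.cumsum(np.insert(energy, 0, 0.0))
    sta = (csum[sta_len:] - csum[:-sta_len]) / sta_len    # trailing short window
    lta = (csum[lta_len:] - csum[:-lta_len]) / lta_len    # trailing long window
    n = min(len(sta), len(lta))                           # align both at the end of the trace
    ratio = sta[-n:] / np.maximum(lta[-n:], 1e-12)
    return np.where(ratio > threshold)[0] + (len(energy) - n)

# Quiet background noise with a burst of shaking starting at sample 3000.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.01, 5000)
trace[3000:3300] += rng.normal(0.0, 0.2, 300)
hits = sta_lta_trigger(trace)
print("Shaking first flagged at sample:", hits[0] if hits.size else "no trigger")
```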

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC, more properly the Berkeley Open Infrastructure for Network Computing, was developed at UC Berkeley and is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. QCN links existing networked laptops and desktops in the hope of forming the world’s largest strong-motion seismic network.

    Below, the QCN Quake-Catcher Network map

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

    The U.S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated and a warning is provided to local populations. The method can provide warning before the S-wave arrives, bringing the strong shaking that usually causes most of the damage.
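
    A rough way to see where those warning times come from: the alert can only beat the strong shaking by the gap between the S-wave and P-wave travel times, minus the time needed for detection and telemetry. The sketch below assumes typical crustal wave speeds (about 6.0 km/s for P-waves and 3.5 km/s for S-waves) and a fixed four-second processing delay; these figures are illustrative, not ShakeAlert’s.

```python
# Rough warning-time estimate for a P-wave-based alert (illustrative figures only).
VP_KM_S, VS_KM_S, PROCESSING_DELAY_S = 6.0, 3.5, 4.0

def warning_time(distance_km):
    """Seconds between the alert (P-wave detection plus processing) and S-wave arrival.
    A negative value means the S-wave beats the alert -- the 'blind zone' near the epicenter."""
    t_p = distance_km / VP_KM_S
    t_s = distance_km / VS_KM_S
    return t_s - (t_p + PROCESSING_DELAY_S)

for d in (20, 60, 150):
    print(f"{d:>3} km from the epicenter: ~{warning_time(d):.0f} s of warning")
```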

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February of 2016 the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California, joined by Oregon and Washington in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers, and allows for automatic fail-over if a connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

    Authorities

    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
    rdegroot@usgs.gov
    626-583-7225

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     