Tagged: Supercomputing

  • richardmitnick 10:15 am on September 23, 2016
    Tags: Science | Business, Supercomputing

    From SKA via Science Business: “Square Kilometre Array prepares for the ultimate big data challenge” 

    SKA Square Kilometre Array

    Science | Business

    22 September 2016
    Éanna Kelly

    The world’s most powerful radio telescope will collect more information each day than the entire internet. Major advances in computing are required to handle this data, but it can be done, says Bernie Fanaroff, strategic advisor for the SKA

    The Square Kilometre Array (SKA), the world’s most powerful telescope, will be ready from day one to gather an unprecedented volume of data from the sky, even if the supporting technical infrastructure is yet to be built.

    “We’ll be ready – the technology is getting there,” Bernie Fanaroff, strategic advisor for the most expensive and sensitive radio astronomy project in the world, told Science|Business.

    Construction of the SKA is due to begin in 2018 and finish sometime in the middle of the next decade. Data acquisition will begin in 2020, requiring a level of processing power and data management know-how that outstrips current capabilities.

    Astronomers estimate that the project will generate 35,000 DVDs’ worth of data every second. This is equivalent to “the whole world wide web every day,” said Fanaroff.
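
    As a back-of-envelope check of that comparison, assuming a single-layer DVD capacity of about 4.7 GB (a figure not given in the article), the implied rate works out as follows:

        # Rough scale of the SKA data rate quoted above; the 4.7 GB per DVD
        # is an assumed single-layer capacity, not a number from the article.
        dvds_per_second = 35_000
        gb_per_dvd = 4.7
        gb_per_second = dvds_per_second * gb_per_dvd
        print(f"{gb_per_second / 1_000:.0f} TB per second")        # about 164 TB/s
        print(f"{gb_per_second * 86_400 / 1e9:.1f} EB per day")    # roughly 14 exabytes a day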

    The project is investing in machine learning and artificial intelligence software tools to enable the data analysis. In advance of construction of the vast telescope – which will consist of some 250,000 radio antennas split between sites in Australia and South Africa – SKA already employs more than 400 engineers and technicians in infrastructure, fibre optics and data collection.

    The project is also working with IBM, which recently opened a new R&D centre in Johannesburg, on a new supercomputer. SKA will have two supercomputers to process its data, one based in Cape Town and one in Perth, Australia.

    Recently, elements of the software under development were tested on the world’s second fastest supercomputer, the Tianhe-2, located in the National Supercomputer Centre in Guangzhou, China. It is estimated a supercomputer with three times the power of Tianhe-2 will need to be built in the next decade to cope with all the SKA data.

    In addition to the analysis, the project requires large off-site data warehouses. These will house storage devices custom-built in South Africa. “There were too many bells and whistles with the stuff commercial providers were offering us. It was far too expensive, so we’ve designed our own servers which are cheaper,” said Fanaroff.

    Fanaroff was formerly director of SKA, retiring at the end of 2015, but remaining as a strategic advisor to the project. He was in Brussels this week to explore how African institutions could gain access to the European Commission’s new Europe-wide science cloud, tentatively scheduled to go live in 2020.

    Ten countries are members of the SKA, which has its headquarters at Manchester University’s Jodrell Bank Observatory, home of the world’s third largest fully-steerable radio telescope. The bulk of SKA’s funding has come from South Africa, Australia and the UK.

    Currently the SKA’s legal status is that of a British registered company, but Fanaroff says the plan is to create an intergovernmental arrangement similar to CERN. “The project needs a treaty to lock in funding,” he said.

    Early success

    On SKA’s website is a list of five untold secrets of the cosmos, which the telescope will explore. These include how the very first stars and galaxies formed just after the Big Bang.

    However, Fanaroff believes the Eureka moment will be something nobody could have imagined. “It’ll make its name, like every telescope does, by discovering an unknown unknown,” he said.

    A first taste of the SKA’s potential arrived in July through the MeerKAT telescope, which will form part of the SKA. MeerKAT will eventually consist of 64 dishes, but the power of the 16 already installed has surpassed Fanaroff’s expectations.

    SKA MeerKAT telescope, 90 km outside the small Northern Cape town of Carnarvon, SA

    The telescope revealed over a thousand previously unknown galaxies. “Two things were remarkable: when we switched it on, people told us it was going to take a long time to work. But it collected very good images from day one. Also, our radio receivers worked four times better than specified,” he said. Some 500 scientists have already booked time on the array.

    Researchers with the Breakthrough Listen project, a search for intelligent life funded by Russian billionaire Yuri Milner, would also like a slot, Fanaroff said. Their hunt is exciting and a good example of the sort of bold mission for which SKA will be built. “It’s high-risk, high-reward territory. If you search for aliens and you find nothing, you end your career with no publications. But on the other hand you could be involved in one of the biggest discoveries ever,” said Fanaroff.

    Golden age

    SKA has helped put South Africa’s scientific establishment in the shop window, says Fanaroff, referring to the recent Nature Index, which indicates the country’s scientists are publishing record levels of high-quality research, mostly in astronomy. “It’s the start of a golden age,” Fanaroff predicted.

    Not that the SKA is without its critics. With so much public funding going to the telescope, “Some scientists were a little bit bitter at the beginning,” Fanaroff said. “But that has faded with the global interest from science and industry we’re attracting. The SKA can go on to be a platform for all science in Africa, not just astronomy.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    SKA Banner

    SKA ASKAP Pathfinder Telescope (CSIRO)

    SKA MeerKAT telescope

    SKA Murchison Widefield Array

    About SKA

    The Square Kilometre Array will be the world’s largest and most sensitive radio telescope. The total collecting area will be approximately one square kilometre giving 50 times the sensitivity, and 10 000 times the survey speed, of the best current-day telescopes. The SKA will be built in Southern Africa and in Australia. Thousands of receptors will extend to distances of 3 000 km from the central regions. The SKA will address fundamental unanswered questions about our Universe including how the first stars and galaxies formed after the Big Bang, how dark energy is accelerating the expansion of the Universe, the role of magnetism in the cosmos, the nature of gravity, and the search for life beyond Earth. Construction of phase one of the SKA is scheduled to start in 2016. The SKA Organisation, with its headquarters at Jodrell Bank Observatory, near Manchester, UK, was established in December 2011 as a not-for-profit company in order to formalise relationships between the international partners and centralise the leadership of the project.

    The Square Kilometre Array (SKA) project is an international effort to build the world’s largest radio telescope, led by SKA Organisation. The SKA will conduct transformational science to improve our understanding of the Universe and the laws of fundamental physics, monitoring the sky in unprecedented detail and mapping it hundreds of times faster than any current facility.

    Already supported by 10 member countries – Australia, Canada, China, India, Italy, New Zealand, South Africa, Sweden, The Netherlands and the United Kingdom – SKA Organisation has brought together some of the world’s finest scientists, engineers and policy makers and more than 100 companies and research institutions across 20 countries in the design and development of the telescope. Construction of the SKA is set to start in 2018, with early science observations in 2020.

     
  • richardmitnick 9:32 am on September 19, 2016
    Tags: Epilepsy, Supercomputing

    From Science Node: “Stalking epilepsy” 

    Science Node

    15 Sep, 2016
    Lance Farrell

    Courtesy Enzo Varriale. (CC BY-ND 2.0)

    Scientists in Italy may have found a brain activity marker that forecasts epilepsy development. All it took was some big computers and tiny mice.

    By the time you finish reading this story, an untold number of people will have had a stroke, or have suffered a traumatic brain injury, or perhaps have been exposed to a toxic chemical agent. These events occur every day, millions of times each year. Of these victims, nearly 1 million go on to develop epilepsy.

    These events — stroke, brain injury, toxic exposure, among others — are some of the known causes of epilepsy (also known as epileptogenic events), but not all who suffer from them develop epilepsy. Scientists today struggle to identify people who will develop epilepsy following the exposure to risk factors.

    Even if identification were possible, there are no treatments available to prevent the emergence of epilepsy. The development of such therapeutics is a holy grail of epilepsy research, since this would reduce the incidence of epilepsy by about 40 percent.

    The development of anti-epileptogenic treatments awaits identification of a so-called epileptogenic marker – that is, a measurable event which occurs specifically only during the development of epilepsy, when seizures have yet to become clinically evident.

    A European Grid Infrastructure (EGI) collaboration, led by Massimo Rizzi at the Mario Negri Institute for Pharmacological Research, appears to have pinpointed just such a marker. All it took was some heavy-duty grid computing and a handful of mice.

    Epilepsies

    Epilepsy comes in many varieties, and is characterized as a seizure-inducing condition of the brain. These seizures result from the simultaneous signaling of multiple neurons. Considered chronic, this neurological disorder afflicts some 65 million people internationally.

    Computing team (Squadra di calcolo). Scientists credit a recent breakthrough in epilepsy research to the computational power provided by the Italian National Institute of Nuclear Physics (INFN), a key component of the Italian Grid Infrastructure (IGI) and European Grid Infrastructure (EGI). Courtesy INFN.

    Epilepsy is incurable at present, but its seizures are controllable to a large extent, typically with pharmacological agents. Changes in diet can reduce seizures, as can electrical devices and surgeries. According to the US National Institutes of Health (NIH), annual US epilepsy-related costs are estimated at $15.5 billion.

    Rizzi and his colleagues thought an alteration in brain electrical activity following exposure to a risk factor might be a smart place to look for an epileptogenic marker. If this marker could be located, it could then be exploited to develop treatments that prevent the emergence of epilepsy.

    Of mice and men

    To search for the marker, Rizzi’s team focused their attention on an animal model of epilepsy. Mice developed epilepsy after exposure to a cerebral insult that mimics the effects of risk factors as they would occur in humans.

    Examining the brain electrical activity of these mice, Rizzi’s team combed through 32,000 epidural electrocorticogram (ECoG) segments, 12 seconds (4,800 data points) at a time, covering up to an hour preceding the first epileptic seizure.

    Each swath of ECoGs was run through recurrence quantification analysis (RQA), a powerful mathematical tool specifically designed for the investigation of non-linear complex dynamics embedded in time-series readings such as the ECoG.
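
    As a rough illustration of what RQA measures, here is a minimal Python sketch that computes two standard RQA quantities, the recurrence rate and the determinism, for a single window of signal. The embedding dimension, delay, recurrence threshold and the 1,200-point random test signal are arbitrary choices made for the example; they are not the settings used by Rizzi’s team, whose real windows were 4,800 points (12 seconds) long.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform

        def embed(x, dim=3, delay=8):
            """Time-delay embedding of a 1-D signal into dim-dimensional vectors."""
            n = len(x) - (dim - 1) * delay
            return np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])

        def rqa(x, dim=3, delay=8, radius=0.2, lmin=4):
            """Recurrence rate and determinism for one window of signal x."""
            v = embed(x, dim, delay)
            d = squareform(pdist(v))                    # pairwise distances
            r = (d < radius * d.std()).astype(int)      # thresholded recurrence matrix
            rr = r.mean()                               # recurrence rate
            # Determinism: fraction of recurrent points lying on diagonal
            # lines at least lmin points long.
            diag_pts = 0
            for k in range(1, len(r)):                  # upper-triangle diagonals
                run = 0
                for p in np.append(np.diagonal(r, offset=k), 0):
                    if p:
                        run += 1
                    else:
                        if run >= lmin:
                            diag_pts += 2 * run         # count both triangles
                        run = 0
            det = diag_pts / max(r.sum() - np.trace(r), 1)
            return rr, det

        window = np.random.randn(1200)                  # stand-in for one ECoG segment
        print(rqa(window))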

    Thinking of a mouse. Brain activity of a mouse that developed epilepsy following exposure to an infusion of albumin.

    When the dust had settled, nearly 400,000 seconds of ECoGs revealed a telling pattern. The scientists found that high rates of dynamic intermittency accompany the development of epilepsy. In other words, the ECoGs of mice developing epilepsy from the induced trauma would rapidly alternate between nearly periodic and irregular brain electrical activity.

    Noting this signal, researchers applied an experimental anti-epileptogenic treatment that successfully reduced the rate of occurrence of this complex oscillation pattern. Identification of the complex oscillation and its arrest under treatment led Rizzi and his team to confidently assert that high rates of dynamic intermittency can be considered as a marker of epileptogenesis. Their research was recently published in Scientific Reports.

    Tools of the trade

    Rizzi’s team made good use of the computational and storage resources at the Italian National Institute of Nuclear Physics (INFN). The INFN is a key component in the Italian Grid Infrastructure (IGI), which is integrated into the larger EGI, Europe’s leading grid computing infrastructure.

    “The time required to accomplish calculations of these datasets would have taken more than two months by an ordinary PC, instead of a little more than two days using grid computing,” says Rizzi. “Considering also the preliminary settings of analytical conditions and validation tests of results, almost two years of calculations were collapsed into a couple of months by high throughput computing technology.”

    From here, Rizzi hands off to pre-clinical researchers who can begin to develop interventions that will reduce and hopefully eliminate the emergence of epilepsy after exposure to risk factors. This knowledge holds out promise for use in the development of anti-epileptogenic therapies.

    “This insight will help us reduce the incidence of epilepsy by approximately 40 percent,” Rizzi estimates. “Our future aim is to exploit our finding in order to improve the development of therapeutics. High throughput computer technology will keep on playing a fundamental role by significantly speeding up this field of research.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 9:52 am on September 17, 2016
    Tags: Oakley Cluster, Ohio Supercomputer Center, Supercomputing

    From Ohio Supercomputer Center: “Ohio Supercomputer Center gets super-charged Owens Cluster up and running – PHOTOS” 

    Oakley supercomputer

    Ohio Supercomputer Center

    Columbus Business First

    Sep 15, 2016
    Carrie Ghose

    The Ohio Supercomputer Center has started bringing online its $7 million Owens Cluster, named for Olympian and Ohio State University alum Jesse Owens. It has 25 times the computing power and 450 times the memory of the 8-year-old Glenn Cluster it replaces in the same floor space.

    And it cost $1 million less.

    Owens Cluster

    “Supercomputers have gotten as big as they’re going to get,” said David Hudak, OSC’s interim executive director. “They’re getting denser now. … The thing about a supercomputer is it’s built from the same components as laptops or desktops or servers.”

    The “super” comes from the connections between those components and the software coordinating them, getting computing cores to work together like a single machine at exponentially faster speeds.
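
    The idea of many cores cooperating on one computation can be illustrated on any ordinary machine. The toy Python example below splits a single sum across several worker processes with the standard multiprocessing module; a real cluster such as Owens coordinates whole nodes over a high-speed interconnect (typically with MPI), which this sketch does not model.

        # Toy illustration of cores working together on one computation.
        from multiprocessing import Pool

        def partial_sum(bounds):
            lo, hi = bounds
            return sum(i * i for i in range(lo, hi))

        if __name__ == "__main__":
            n, workers = 4_000_000, 8
            step = n // workers
            chunks = [(i * step, (i + 1) * step) for i in range(workers)]
            with Pool(workers) as pool:
                total = sum(pool.map(partial_sum, chunks))
            # Same answer as the serial sum, computed by eight cooperating processes.
            print(total == sum(i * i for i in range(n)))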

    Dell Inc. won the bid process to build Owens, based on criteria including speed, memory, bandwidth, interconnection speed, disk speeds and processor architecture. Companies such as HP, IBM Corp. and Cray Inc. also build supercomputers. The project comes out of OSC’s $12 million capital appropriation in the two-year budget.

    The machines come blank, so the center must install the operating systems and software that make them work, drawn from its own or vendor sites.

    The installation this summer came during regular quarterly down times for maintenance, so users weren’t caught off guard. When a portion of Owens was fired up in late August, a few users were brought on. The center can only turn on a portion at a time until the cooling system is completely installed.

    When it’s fully running by early November, researchers will be able to do in hours what took days, allowing more adjustments and iterations of experiments. About 10 percent of use is by private industry for data analytics and product simulations.

    “When they’re given more capacity, they dial up the fidelity of their experiments,” Hudak said. “It’s about being able to do these simulations at much higher (detail and speed).”

    The retired cluster was named for astronaut and U.S. Sen. John Glenn; one cabinet still is plugged in until Owens is fully up. A few retired Glenn cabinets were labeled as museum pieces to explain the parts of a supercomputer and are on display at a few colleges and JobsOhio. A few hundred of its processing chips were turned into refrigerator magnets. The state will try to sell the bulk of the 2008 equipment for parts or recycling.

    OSC’s office and the actual computer are in different buildings. The data center inside the State of Ohio Computing Center on west campus of Ohio State University uses 1.1 megawatts of power, more than half of that for cooling.

    Behold, Moore’s Law in action, sort of, via the progression of OSC’s equipment (a quick ratio check follows the list):

    Glenn Cluster (decommissioned): 2008, $8 million, 5,000 processors, speed of 40 teraflops (trillion “floating-point operations per second”).
    Oakley: 2012, $4.5 million, 8,000 processors, 150 teraflops.
    Ruby: 2014, $1.5 million, 4,800 processors (at one-third the floor space of Glenn and half that of Oakley), 140 teraflops.
    Owens: 2016, $7 million, 23,500 processors, 800 teraflops.
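
    A quick calculation over the figures in that list (using only the article’s own numbers) makes the “sort of” trend visible as performance per dollar and per processor:

        # Derived ratios from the cluster figures listed above.
        clusters = [
            # name,    year, cost ($M), processors, teraflops
            ("Glenn",  2008, 8.0,   5_000,  40),
            ("Oakley", 2012, 4.5,   8_000, 150),
            ("Ruby",   2014, 1.5,   4_800, 140),
            ("Owens",  2016, 7.0,  23_500, 800),
        ]
        for name, year, cost, procs, tflops in clusters:
            print(f"{name:6s} {year}: {tflops / cost:6.1f} TF per $1M,"
                  f" {1000 * tflops / procs:5.1f} TF per 1,000 processors")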

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Ohio Supercomputer Center empowers a wide array of groundbreaking innovation and economic development activities in the fields of bioscience, advanced materials, data exploitation and other areas of state focus by providing a powerful high performance computing, research and educational cyberinfrastructure for a diverse statewide/regional constituency.

    The Ohio Supercomputer Center partners strategically with Ohio researchers — especially following the 2002 establishment of a focused research support program — in developing competitive, collaborative proposals to regional, national, and international funding organizations to solve some of the world’s most challenging scientific and engineering problems.

    The Ohio Supercomputer Center leads strategic research activities of vital interest to the State of Ohio, the nation and the world community, leveraging the exceptional skills and knowledge of an in-house research staff specializing in the fields of supercomputing, computational science, data management, biomedical applications and a host of emerging disciplines.

     
  • richardmitnick 8:25 am on September 9, 2016
    Tags: Supercomputing

    From PNNL: “Advanced Computing, Mathematics and Data Research Highlights” 


    September 2016

    Global Arrays Gets an Update from PNNL and Intel Corp.

    Scientists Jeff Daily, Abhinav Vishnu, and Bruce Palmer, all from the ACMD Division High Performance Computing group at PNNL, served as the core team for a new release of the Global Arrays (GA) toolkit, known as Version 5.5. GA 5.5 provides additional support and bug fixes for the parallel Partitioned Global Address Space (PGAS) programming model.

    GA 5.5 incorporates support for libfabric (https://ofiwg.github.io/libfabric/), which helps meet performance and scalability requirements of high-performance applications, such as PGAS programming models (like GA), Message Passing Interface (MPI) libraries, and enterprise applications running in tightly coupled network environments. The updates to GA 5.5 resulted from a coordinated effort between the GA team and Intel Corp. Along with incorporating support for libfabric, the update added native support for the Intel Omni-Path high-performance communication architecture and applied numerous bug fixes since the previous GA 5.4 release to both Version 5.5 and the ga-5-4 release branch of GA’s subversion repository.

    Originally developed in the late 1990s at PNNL, the GA toolkit offers diverse libraries employed within many applications, including quantum chemistry and molecular dynamics codes (notably, NWChem), as well as those used for computational fluid dynamics, atmospheric sciences, astrophysics, and bioinformatics.

    “This was a significant effort from Intel to work with us on the libfabric, and eventual Intel Omni-Path, support,” Daily explained. “Had we not refactored our Global Arrays one-sided communication library, ComEx, a few years ago to make it easier to port to new systems, this would not have been possible. Now that our code is much easier to integrate with, we envision more collaborations like this in the future.”

    Download information for GA 5.5 and the GA subversion repository is available here.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Pacific Northwest National Laboratory (PNNL) is one of the United States Department of Energy National Laboratories, managed by the Department of Energy’s Office of Science. The main campus of the laboratory is in Richland, Washington.

    PNNL scientists conduct basic and applied research and development to strengthen U.S. scientific foundations for fundamental research and innovation; prevent and counter acts of terrorism through applied research in information analysis, cyber security, and the nonproliferation of weapons of mass destruction; increase the U.S. energy capacity and reduce dependence on imported oil; and reduce the effects of human activity on the environment. PNNL has been operated by Battelle Memorial Institute since 1965.


     
  • richardmitnick 12:28 pm on September 4, 2016
    Tags: ASCRDiscovery, Supercomputing

    From DOE: “Packaging a wallop” 

    Department of Energy

    ASCRDiscovery

    August 2016
    No writer credit found

    Lawrence Livermore National Laboratory’s time-saving HPC tool eases the way for next era of scientific simulations.

    Technicians prepare the first row of cabinets for the pre-exascale Trinity supercomputer at Los Alamos National Laboratory, where a team from Lawrence Livermore National Laboratory deployed its new Spack software packaging tool. Photo courtesy of Los Alamos National Laboratory.

    From climate-change predictions to models of the expanding universe, simulations help scientists understand complex physical phenomena. But simulations aren’t easy to deploy. Computational models comprise millions of lines of code and rely on many separate software packages. For the largest codes, configuring and linking these packages can require weeks of full-time effort.

    Recently, a Lawrence Livermore National Laboratory (LLNL) team deployed a multiphysics code with 47 libraries – software packages that today’s HPC programs rely on – on Trinity, the Cray XC30 supercomputer being assembled at Los Alamos National Laboratory. A code that would have taken six weeks to deploy on a new machine required just a day and a half during an early-access period on part of Trinity, thanks to a new tool that automates the hardest parts of the process.

    LANL Cray XC30 Trinity supercomputer

    This leap in efficiency was achieved using the Spack package manager. Package management tools are used frequently to deploy web applications and desktop software, but they haven’t been widely used to deploy high-performance computing (HPC) applications. Few package managers handle the complexities of an HPC environment and application developers frequently resort to building by hand. But as HPC systems and software become ever more complex, automation will be critical to keep things running smoothly on future exascale machines, capable of one million trillion calculations per second. These systems are expected to have an even more complicated software ecosystem.

    “Spack is like an app store for HPC,” says Todd Gamblin, its creator and lead developer. “It’s a bit more complicated than that, but it simplifies life for users in a similar way. Spack allows users to easily find the packages they want, it automates the installation process, and it allows contributors to easily share their own build recipes with others.” Gamblin is a computer scientist in LLNL’s Center for Applied Scientific Computing and works with the Development Environment Group at Livermore Computing. Spack was developed with support from LLNL’s Advanced Simulation and Computing program.

    Spack’s success relies on contributions from its burgeoning open-source community. To date, 71 scientists at more than 20 organizations are helping expand Spack’s growing repository of software packages, which number more than 500 so far. Besides LLNL, participating organizations include seven national laboratories – Argonne, Brookhaven, Fermilab, Lawrence Berkeley (through the National Energy Research Scientific Computing Center), Los Alamos, Oak Ridge and Sandia – plus NASA, CERN and many other institutions worldwide.

    Spack is more than a repository for sharing applications. In the iPhone and Android app stores, users download pre-built programs that work out of the box. HPC applications often must be built directly on the supercomputer, letting programmers customize them for maximum speed. “You get better performance when you can optimize for both the host operating system and the specific machine you’re running on,” Gamblin says. Spack automates the process of fine-tuning an application and its libraries over many iterations, allowing users to quickly build many custom versions of codes and rapidly converge on a fast one.

    Applications can share libraries when the applications are compatible with the same versions of their libraries (top). But if one application is updated and another is not, the first application won’t work with the second. Spack (bottom) allows multiple versions to coexist on the same system; here, for example, it simply builds a new version of the physics library and installs it alongside the old one. Schematic courtesy of Lawrence Livermore National Laboratory.

    Each new version of a large code may require rebuilding 70 or more libraries, also called dependencies. Traditional package managers typically allow installation of only one version of a package, to be shared by all installed software. This can be overly restrictive for HPC, where codes are constantly changed but must continue to work together. Picture two applications that share two dependencies: one for math and another for physics. They can share because the applications are compatible with the same versions of their dependencies. Suppose that application 2 is updated, and now requires version 2.0 of the physics library, but application 1 still only works with version 1.0. In a typical package manager, this would cause a conflict, because the two versions of the physics package cannot be installed at once. Spack allows multiple versions to coexist on the same system and simply builds a new version of the physics library and installs it alongside the old one.
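
    The coexistence mechanism described here can be pictured with a short sketch. This is not Spack’s actual code or interface, just a toy Python illustration, with invented package names, versions and paths, of the underlying idea: every distinct configuration is hashed into its own install prefix, so two versions of the physics library live side by side while the shared math library is installed only once.

        import hashlib

        # Invented requirements mirroring the example above: app1 needs
        # physics 1.0, app2 needs physics 2.0, and both share math 2.3.
        requirements = {
            "app1": {"physics": "1.0", "math": "2.3"},
            "app2": {"physics": "2.0", "math": "2.3"},
        }

        def install_prefix(pkg, version, deps):
            """Hash each unique configuration into its own directory, so two
            versions of the same library never collide on disk."""
            spec = f"{pkg}@{version}|" + ",".join(f"{d}@{v}" for d, v in sorted(deps.items()))
            digest = hashlib.sha1(spec.encode()).hexdigest()[:8]
            return f"/opt/store/{pkg}-{version}-{digest}"

        installed = {}
        for app, deps in requirements.items():
            for dep, version in deps.items():           # install (or reuse) each dependency
                installed.setdefault((dep, version), install_prefix(dep, version, {}))
            installed[(app, "1.0")] = install_prefix(app, "1.0", deps)

        for (name, version), prefix in sorted(installed.items()):
            print(f"{name}@{version}  ->  {prefix}")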

    This four-package example is simple, Gamblin notes, but imagine a similar scenario with 70 packages, each with conflicting requirements. Most application users are concerned with generating scientific results, not with configuring software. With Spack, they needn’t have detailed knowledge of all packages and their versions, let alone where to find the optimal version of each, to begin the build. Instead, Spack handles the details behind the scenes and ensures that dependencies are built and linked with their proper relationships. It’s like selecting a CD player and finding it’s already connected to a compatible amplifier, speakers and headphones.

    Gamblin and his colleagues call Spack’s dependency configuration process concretization – filling in “the details to make an abstract specification concrete,” Gamblin explains. “Most people, when they say they want to build something, they have a very abstract idea of what they want to build. The main complexity of building software is all the details that arise when you try to hook different packages together.”

    During concretization, the package manager runs many checks, flagging inconsistencies among packages, such as conflicting versions. Spack also compares the user’s expectations against the properties of the actual codes and their versions and calls out and helps to resolve any mismatches. These automated checks save untold hours of frustration, avoiding cases in which a package wouldn’t have run properly.

    The complexity of building modern HPC software leads some scientists to avoid using libraries in their codes. They opt instead to write complex algorithms themselves, Gamblin says. This is time consuming and can lead to sub-optimal performance or incorrect implementations. Package management simplifies the process of sharing code, reducing redundant effort and increasing software reuse.

    Most important, Spack enables users to focus on the science they set out to do. “Users really want to be able to install an application and get it working quickly,” Gamblin says. “They’re trying to do science, and Spack frees them from the meta-problem of building and configuring the code.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of the Energy Department is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions.

     
  • richardmitnick 3:33 pm on August 16, 2016
    Tags: Energy Department to invest $16 million in computer design of materials, Supercomputing

    From ORNL: “Energy Department to invest $16 million in computer design of materials” 


    Oak Ridge National Laboratory

    August 16, 2016
    Dawn Levy, Communications
    levyd@ornl.gov
    865.576.6448

    Paul Kent of Oak Ridge National Laboratory directs the Center for Predictive Simulation of Functional Materials. No image credit.

    The U.S. Department of Energy announced today that it will invest $16 million over the next four years to accelerate the design of new materials through use of supercomputers.

    Two four-year projects—one team led by DOE’s Oak Ridge National Laboratory (ORNL), the other team led by DOE’s Lawrence Berkeley National Laboratory (LBNL)—will take advantage of superfast computers at DOE national laboratories by developing software to design fundamentally new functional materials destined to revolutionize applications in alternative and renewable energy, electronics, and a wide range of other fields. The research teams include experts from universities and other national labs.

    The new grants—part of DOE’s Computational Materials Sciences (CMS) program begun in 2015 as part of the U.S. Materials Genome Initiative—reflect the enormous recent growth in computing power and the increasing capability of high-performance computers to model and simulate the behavior of matter at the atomic and molecular scales.

    The teams are expected to develop sophisticated and user-friendly open-source software that captures the essential physics of relevant systems and can be used by the broader research community and by industry to accelerate the design of new functional materials.

    “Given the importance of materials to virtually all technologies, computational materials science is a critical area in which the United States needs to be competitive in the twenty-first century and beyond through global leadership in innovation,” said Cherry Murray, director of DOE’s Office of Science, which is funding the research. “These projects will both harness DOE existing high-performance computing capabilities and help pave the way toward ever-more sophisticated software for future generations of machines.”

    “ORNL researchers will partner with scientists from national labs and universities to develop software to accurately predict the properties of quantum materials with novel magnetism, optical properties and exotic quantum phases that make them well-suited to energy applications,” said Paul Kent of ORNL, director of the Center for Predictive Simulation of Functional Materials, which includes partners from Argonne, Lawrence Livermore, Oak Ridge and Sandia National Laboratories and North Carolina State University and the University of California–Berkeley. “Our simulations will rely on current petascale and future exascale capabilities at DOE supercomputing centers. To validate the predictions about material behavior, we’ll conduct experiments and use the facilities of the Advanced Photon Source [ANL/APS], Spallation Neutron Source and the Nanoscale Science Research Centers.”

    ANL/APS

    ORNL Spallation Neutron Source

    Said the center’s thrust leader for prediction and validation, Olle Heinonen, “At Argonne, our expertise in combining state-of-the-art, oxide molecular beam epitaxy growth of new materials with characterization at the Advanced Photon Source and the Center for Nanoscale Materials will enable us to offer new and precise insight into the complex properties important to materials design. We are excited to bring our particular capabilities in materials, as well as expertise in software, to the center so that the labs can comprehensively tackle this challenge.”

    Researchers are expected to make use of the 30-petaflop/s Cori supercomputer now being installed at the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab, the 27-petaflop/s Titan computer at the Oak Ridge Leadership Computing Facility (OLCF) and the 10-petaflop/s Mira computer at Argonne Leadership Computing Facility (ALCF).

    NERSC CRAY Cori supercomputer

    ORNL Cray Titan Supercomputer

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    OLCF, ALCF and NERSC are all DOE Office of Science User Facilities. One petaflop/s is 10^15, or a million times a billion, floating-point operations per second.

    In addition, a new generation of machines is scheduled for deployment between 2016 and 2019 that will take peak performance as high as 200 petaflops. Ultimately the software produced by these projects is expected to evolve to run on exascale machines, capable of 1,000 petaflops and projected for deployment in the mid-2020s.

    LLNL IBM Sierra supercomputer

    ORNL IBM Summit supercomputer (depiction)

    ANL Cray Aurora supercomputer

    Research will combine theory and software development with experimental validation, drawing on the resources of multiple DOE Office of Science User Facilities, including the Advanced Light Source [ALS] at LBNL, the Advanced Photon Source at Argonne National Laboratory (ANL), the Spallation Neutron Source at ORNL, and several of the five Nanoscience Research Centers across the DOE National Laboratory complex.

    LBL/ALS interior

    The new research projects will begin in Fiscal Year 2016. They expand the ongoing CMS research effort, which began in FY 2015 with three initial projects, led respectively by ANL, Brookhaven National Laboratory and the University of Southern California.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 1:45 pm on August 16, 2016
    Tags: Big PanDA, Supercomputing

    From BNL: “Big PanDA Tackles Big Data for Physics and Other Future Extreme Scale Scientific Applications” 

    Brookhaven Lab

    August 16, 2016
    Karen McNulty Walsh
    kmcnulty@bnl.gov
    (631) 344-8350
    Peter Genzer
    (631) 344-3174
    genzer@bnl.gov

    A workload management system developed by a team including physicists from Brookhaven National Laboratory taps into unused processing time on the Titan supercomputer at the Oak Ridge Leadership Computing Facility to tackle complex physics problems. New funding will help the group extend this approach, giving scientists in other data-intensive fields access to valuable supercomputing resources.

    A billion times per second, particles zooming through the Large Hadron Collider (LHC) at CERN, the European Organization for Nuclear Research, smash into one another at nearly the speed of light, emitting subatomic debris that could help unravel the secrets of the universe.

    CERN/LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    Collecting the data from those collisions and making it accessible to more than 6000 scientists in 45 countries, each potentially wanting to slice and analyze it in their own unique ways, is a monumental challenge that pushes the limits of the Worldwide LHC Computing Grid (WLCG), the current infrastructure for handling the LHC’s computing needs. With the move to higher collision energies at the LHC, the demand just keeps growing.

    To help meet this unprecedented demand and supplement the WLCG, a group of scientists working at U.S. Department of Energy (DOE) national laboratories and collaborating universities has developed a way to fit some of the LHC simulations that demand high computing power into untapped pockets of available computing time on one of the nation’s most powerful supercomputers—similar to the way tiny pebbles can fill the empty spaces between larger rocks in a jar. The group—from DOE’s Brookhaven National Laboratory, Oak Ridge National Laboratory (ORNL), University of Texas at Arlington, Rutgers University, and University of Tennessee, Knoxville—just received $2.1 million in funding for 2016-2017 from DOE’s Advanced Scientific Computing Research (ASCR) program to enhance this “workload management system,” known as Big PanDA, so it can help handle the LHC data demands and be used as a general workload management service at DOE’s Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility at ORNL.

    “The implementation of these ideas in an operational-scale demonstration project at OLCF could potentially increase the use of available resources at this Leadership Computing Facility by five to ten percent,” said Brookhaven physicist Alexei Klimentov, a leader on the project. “Mobilizing these previously unusable supercomputing capabilities, valued at millions of dollars per year, could quickly and effectively enable cutting-edge science in many data-intensive fields.”

    Proof-of-concept tests using the Titan supercomputer at Oak Ridge National Laboratory have been highly successful. This Leadership Computing Facility typically handles large jobs that are fit together to maximize its use. But even when fully subscribed, some 10 percent of Titan’s computing capacity might be sitting idle—too small to take on another substantial “leadership class” job, but just right for handling smaller chunks of number crunching. The Big PanDA (for Production and Distributed Analysis) system takes advantage of these unused pockets by breaking up complex data analysis jobs and simulations for the LHC’s ATLAS and ALICE experiments and “feeding” them into the “spaces” between the leadership computing jobs.

    CERN/ATLAS detector

    CERN/ALICE detector
    When enough capacity is available to run a new big job, the smaller chunks get kicked out and reinserted to fill in any remaining idle time.
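
    The backfill idea itself is easy to sketch. The toy scheduler below (written in Python, with invented node counts and job sizes; it is not the PanDA code) gives arriving leadership jobs priority, fills whatever nodes they leave idle with small chunks, and evicts those chunks when a big job needs the space.

        # Toy backfill scheduler: leadership jobs get priority, small
        # PanDA-style chunks soak up whatever nodes are left idle.
        TOTAL_NODES = 100

        def schedule(step, leadership_queue, running_chunks, chunk_size=2):
            used = sum(j["nodes"] for j in leadership_queue if j["running"])
            free = TOTAL_NODES - used - chunk_size * len(running_chunks)
            # 1. Start any leadership job that has arrived, evicting chunks if needed.
            for job in leadership_queue:
                if not job["running"] and job["arrives"] <= step:
                    while running_chunks and free < job["nodes"]:
                        running_chunks.pop()            # evict a backfill chunk
                        free += chunk_size
                    if free >= job["nodes"]:
                        job["running"] = True
                        free -= job["nodes"]
            # 2. Backfill whatever is still idle with small chunks.
            while free >= chunk_size:
                running_chunks.append(f"chunk@t{step}")
                free -= chunk_size
            return free

        leadership = [{"nodes": 60, "arrives": 0, "running": False},
                      {"nodes": 30, "arrives": 3, "running": False}]
        chunks = []
        for t in range(5):
            idle = schedule(t, leadership, chunks)
            busy = sum(j["nodes"] for j in leadership if j["running"])
            print(f"t={t}: {busy} nodes in leadership jobs, "
                  f"{len(chunks)} backfill chunks, {idle} idle")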

    “Our team has managed to access opportunistic cycles available on Titan with no measurable negative effect on the supercomputer’s ability to handle its usual workload,” Klimentov said. He and his collaborators estimate that up to 30 million core hours or more per month may be harvested using the Big PanDA approach. From January through July of 2016, ATLAS detector simulation jobs ran for 32.7 million core hours on Titan, using only opportunistic, backfill resources. The results of the supercomputing calculations are shipped to and stored at the RHIC & ATLAS Computing Facility, a Tier 1 center for the WLCG located at Brookhaven Lab, so they can be made available to ATLAS researchers across the U.S. and around the globe.
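
    For scale, that estimate is of the same order as what the roughly 10 percent idle fraction implies. The core count used below is Titan’s published CPU configuration (18,688 nodes with 16 Opteron cores each), a background figure that does not appear in the article:

        # Rough check of the harvestable backfill capacity quoted above.
        # Titan's CPU core count (18,688 nodes x 16 cores) is background
        # knowledge, not a number taken from the article.
        titan_cores = 18_688 * 16          # 299,008 CPU cores
        idle_fraction = 0.10               # "some 10 percent ... sitting idle"
        hours_per_month = 30 * 24
        harvestable = titan_cores * idle_fraction * hours_per_month
        print(f"~{harvestable / 1e6:.0f} million core-hours per month")       # ~22 million
        print(f"{32.7 / 7:.1f} million core-hours per month used Jan-Jul")    # ~4.7 million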

    The goal now is to translate the success of the Big PanDA project into operational advances that will enhance how the OLCF handles all of its data-intensive computing jobs. This approach will provide an important model for future exascale computing, increasing the coherence between the technology base used for high-performance, scalable modeling and simulation and that used for data-analytic computing.

    “This is a novel and unique approach to workload management that could run on all current and future leadership computing facilities,” Klimentov said.

    Specifically, the new funding will help the team develop a production scale operational demonstration of the PanDA workflow within the OLCF computational and data resources; integrate OLCF and other leadership facilities with the Grid and Clouds; and help high-energy and nuclear physicists at ATLAS and ALICE—experiments that expect to collect 10 to 100 times more data during the next 3 to 5 years—achieve scientific breakthroughs at times of peak LHC demand.

    As a unifying workload management system, Big PanDA will also help integrate Grid, leadership-class supercomputers, and Cloud computing into a heterogeneous computing architecture accessible to scientists all over the world as a step toward a global cyberinfrastructure.

    “The integration of heterogeneous computing centers into a single federated distributed cyberinfrastructure will allow more efficient utilization of computing and disk resources for a wide range of scientific applications,” said Klimentov, noting how the idea mirrors Aristotle’s assertion that “the whole is greater than the sum of its parts.”

    This project is supported by the DOE Office of Science.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 12:44 pm on August 13, 2016
    Tags: Extreme Science and Engineering Discovery Environment (XSEDE), Supercomputing

    From Science Node: “Opening the spigot at XSEDE” 

    Science Node

    09 Aug, 2016
    Ken Chiacchia

    A boost from sequencing technologies and computational tools is in store for scientists studying how cells change which of their genes are active.

    Researchers using the Extreme Science and Engineering Discovery Environment (XSEDE) collaboration of supercomputing centers have reported advances in reconstructing cells’ transcriptomes — the genes activated by ‘transcribing’ them from DNA into RNA.

    The work aims to clarify the best practices in assembling transcriptomes, which ultimately can aid researchers throughout the biomedical sciences.

    Digital detectives. Researchers from Texas A&M are using XSEDE resources to manage the data from transcriptome assembly. Studying transcriptomes will offer critical clues of how cells change their behavior in response to disease processes.

    “It’s crucial to determine the important factors that affect transcriptome reconstruction,” says Noushin Ghaffari of AgriLife Genomics and Bioinformatics, at Texas A&M University. “This work will particularly help generate more reliable resources for scientists studying non-model species” — species not previously well studied.

    Ghaffari is principal investigator in an ongoing project whose preliminary findings and computational aspects were presented at the XSEDE16 conference in Miami in July. She is leading a team of students and supercomputing experts from Texas A&M, Indiana University, and the Pittsburgh Supercomputing Center (PSC).

    The scientists sought to improve the quality and efficiency of assembling transcriptomes, and they tested their work on two real data sets from the Sequencing Quality Control Consortium (SEQC) RNA-Seq data: One of cancer cell lines and one of brain tissues from 23 human donors.

    What’s in a transcriptome?

    The transcriptome of a cell at a given moment changes as it reacts to its environment. Transcriptomes offer critical clues of how cells change their behavior in response to disease processes like cancer, or normal bodily signals like hormones.

    Assembling a transcriptome is a big undertaking with current technology, though. Scientists must start with samples containing tens or hundreds of thousands of RNA molecules that are each thousands of RNA ‘base units’ long. Trouble is, most of the current high-speed sequencing technologies can only read a couple hundred bases at one time.

    So researchers must first chemically cut the RNA into small pieces, sequence it, remove RNA not directing cell activity, and then match the overlapping fragments to reassemble the original RNA molecules.

    Harder still, they must identify and correct sequencing mistakes, and deal with repetitive sequences that make the origin and number of repetitions of a given RNA sequence unclear.
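
    The overlap-matching step can be shown with a toy example. Real assemblers, including the tools evaluated in this study, work from millions of error-prone reads and use far more sophisticated data structures; the greedy Python sketch below, with three invented RNA fragments, only demonstrates the basic suffix-prefix merging idea.

        def overlap(a, b, min_len=3):
            """Length of the longest suffix of a that matches a prefix of b."""
            for n in range(min(len(a), len(b)), min_len - 1, -1):
                if a.endswith(b[:n]):
                    return n
            return 0

        def greedy_assemble(reads):
            """Repeatedly merge the pair of reads with the largest overlap."""
            reads = list(reads)
            while len(reads) > 1:
                best = (0, None, None)
                for i, a in enumerate(reads):
                    for j, b in enumerate(reads):
                        if i != j:
                            n = overlap(a, b)
                            if n > best[0]:
                                best = (n, i, j)
                n, i, j = best
                if n == 0:                               # no overlaps left
                    break
                merged = reads[i] + reads[j][n:]
                reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
            return reads

        fragments = ["AUGGCUAC", "GCUACGGA", "CGGAUUAG"]   # hypothetical short reads
        print(greedy_assemble(fragments))                  # ['AUGGCUACGGAUUAG']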

    While software tools exist to undertake all of these tasks, Ghaffari’s report was the most comprehensive yet to examine a variety of factors that affect assembly speed and accuracy when these tools are combined in a start-to-finish workflow.

    Heavy lifting

    The most comprehensive study of its kind, the report used data from SEQC to assemble a transcriptome, incorporating many quality control steps to ensure results were accurate. The process required vast amounts of computer memory, made possible by PSC’s high-memory supercomputers Blacklight, Greenfield, and now the new Bridges system’s 3-terabyte ‘large memory nodes.’

    Blacklight supercomputer at the Pittsburgh Supercomputing Center.

    Bridges HPE/Intel supercomputer

    Bridges, a new PSC supercomputer, is designed for unprecedented flexibility and ease of use. It will include database and web servers to support gateways, collaboration, and powerful data management functions. Courtesy Pittsburgh Supercomputing Center.

    “As part of this work, we are running some of the largest transcriptome assemblies ever done,” says coauthor Philip Blood of PSC, an expert in XSEDE’s Extended Collaborative Support Service. “Our effort focused on running all these big data sets many different ways to see what factors are important in getting the best quality. Doing this required the large memory nodes on Bridges, and a lot of technical expertise to manage the complexities of the workflow.”

    During the study, the team concentrated on optimizing the speed of data movement from storage to memory to the processors and back.

    They also incorporated new verification steps to avoid perplexing errors that arise when wrangling big data through complex pipelines. Future work will include the incorporation of ‘checkpoints’ — storing the computations regularly so that work is not lost if a software error happens.

    Ultimately, Blood adds, the scientists would like to put all the steps of the process into an automated workflow that will make it easy for other biomedical researchers to replicate.

    The work promises a better understanding of how living organisms respond to disease, environment and evolutionary changes, the scientists reported.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 8:34 am on July 23, 2016
    Tags: PPPL and Princeton join high-performance software project, Supercomputing

    From PPPL: “PPPL and Princeton join high-performance software project” 


    PPPL

    July 22, 2016
    John Greenwald

    Co-principal investigators William Tang and Bei Wang. (Photo by Elle Starkman/Office of Communications)

    Princeton University and the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) are participating in the accelerated development of a modern high-performance computing code, or software package. Supporting this development is the Intel Parallel Computing Center (IPCC) Program, which provides funding to universities and laboratories to improve high-performance software capabilities for a wide range of disciplines.

    The project updates the GTC-Princeton (GTC-P) code, which was originally developed for fusion research applications at PPPL and has evolved into highly portable software that is deployed on supercomputers worldwide. The National Science Foundation (NSF) strongly supported advances in the code from 2011 through 2014 through the “G8” international extreme scale computing program, which represented the United States and seven other highly industrialized countries during that period.

    New activity

    Heading the new IPCC activity for the University’s Princeton Institute for Computational Science & Engineering (PICSciE) is William Tang, a PPPL physicist and PICSciE principal investigator (PI). Working with Tang is Co-PI Bei Wang, Associate Research Scholar at PICSciE, who leads this accelerated modernization effort. Joining them in the project are Co-PIs Carlos Rosales of the NSF’s Texas Advanced Computing Center at the University of Texas at Austin and Khaled Ibrahim of the Lawrence Berkeley National Laboratory.

    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    The current GTC-P code has advanced understanding of turbulence and confinement of the superhot plasma that fuels fusion reactions in doughnut-shaped facilities called tokamaks.

    PPPL NSTX tokamak

    Understanding and controlling fusion fuel turbulence is a grand challenge of fusion science, and great progress has been made in recent years. It can determine how effectively a fusion reactor will contain energy generated by fusion reactions, and thus can strongly influence the eventual economic attractiveness of a fusion energy system. Further progress on the code will enable researchers to study conditions that arise as tokamaks increase in size to the enlarged dimensions of ITER — the flagship international fusion experiment under construction in France.

    ITER tokamak

    Access to Intel computer clusters

    Through the IPCC, Intel will provide access to systems for exploring the modernization of the code. Included will be clusters equipped with the most recent Intel “Knights Landing” (KNL) central processing chips.

    The upgrade will become part of the parent GTC code, which is led by Prof. Zhihong Lin of the University of California, Irvine, with Tang as co-PI. That code is also being modernized and will be proposed, together with GTC-P, to be included in the early science portfolio for the Aurora supercomputer.

    Cray Aurora supercomputer to be built for ANL

    Aurora will begin operations at the Argonne Leadership Computing Facility, a DOE Office of Science User Facility at Argonne National Laboratory, in 2019. Powering Aurora will be Intel “Knights Hill” processing chips.

    Last year, the GTC and GTC-P codes were selected to be developed as an early science project designed for the Summit supercomputer that will be deployed at Oak Ridge Leadership Computing Facility, also a DOE Office of Science User Facility, at Oak Ridge National Laboratory in 2018.

    IBM Summit supercomputer

    That modernization project differs from the one to be proposed for Aurora because Summit is being built around architecture powered by NVIDIA Volta graphical processing units and IBM Power 9 central processing chips.

    Moreover, the code planned for Summit will be designed to run on the Aurora platform as well.

    Boost U.S. computing power

    The two new machines will boost U.S. computing power far beyond Titan, the current leading U.S. supercomputer at Oak Ridge that can perform 27 quadrillion — or million billion — calculations per second. Summit and Aurora plan to perform some 200 quadrillion and 180 quadrillion calculations per second, respectively. Said Tang: “These new machines hold tremendous promise for helping to accelerate scientific discovery in many application domains, including fusion, that are of vital importance to the country.”

    PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University. PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

     
  • richardmitnick 11:16 am on July 8, 2016 Permalink | Reply
    Tags: , , , Supercomputing   

    From Oak Ridge: “New 200-petaflop supercomputer to succeed Titan at ORNL” 


    Oak Ridge National Laboratory

    Depiction of ORNL IBM Summit supercomputer

    A new 200-petaflop supercomputer will succeed Titan at Oak Ridge National Laboratory, and it could be available to scientists and researchers in 2018, a spokesperson said this week.

    The new IBM supercomputer, named Summit, could have roughly double the computing power of what is now the world’s fastest machine, a Chinese system named Sunway TaihuLight, according to the semiannual TOP500 list of the world’s top supercomputers released in June.

    Sunway TaihuLight is capable of 93 petaflops, according to that list. A petaflop is one quadrillion, or 1,000 trillion, calculations per second.

    Summit, which is expected to start operating at ORNL early in 2018, is one of three supercomputers that the U.S. Department of Energy expects to exceed 100 petaflops at its national laboratories in 2018. The three planned systems are:

    the 200-petaflop Summit at ORNL, which is expected to be available to users in early 2018;

    a 150-petaflop machine known as Sierra at Lawrence Livermore National Laboratory near San Francisco in mid-2018;

    IBM Sierra supercomputer depiction

    and a 180-petaflop supercomputer called Aurora at Argonne National Laboratory in Chicago in late 2018.

    Cray Aurora supercomputer depiction

    “High performance computing remains an integral priority for the Department of Energy,” DOE Under Secretary Lynn Orr said. “Since 1993, our national supercomputing capabilities have grown exponentially by a factor of 300,000 to produce today’s machines like Titan at Oak Ridge National Lab. DOE has continually supported many of the world’s fastest, most powerful supercomputers, and shared its facilities with universities and businesses ranging from auto manufacturers to pharmaceutical companies, enabling unimaginable economic benefits and leaps in science and technology, including the development of new materials for batteries and near zero-friction lubricants.”

    The supercomputers have also allowed the United States to maintain a safe, secure, and effective nuclear weapon stockpile, said Orr, DOE under secretary for science and energy.

    “DOE continues to lead in software and real world applications important to both science and industry,” he said. “Investments such as these continue to play a crucial role in U.S. economic competitiveness, scientific discovery, and national security.”

    At 200 petaflops, Summit would have at least five times as much power as ORNL’s 27-petaflop Titan. Titan was the world’s fastest system in November 2012 and recently achieved 17.59 petaflops on the benchmark test used for the TOP500 list released in June.

    Titan is used for research in areas such as materials research, nuclear energy, combustion, and climate science.

    “For several years, Titan has been the most scientifically productive [supercomputer] in the world, allowing academic, government, and industry partners to do remarkable research in a variety of scientific fields,” ORNL spokesperson Morgan McCorkle said.

    Summit will be installed in a building close to Titan. Titan will continue operating while Summit is built and begins operating, McCorkle said.

    “That will ensure that scientific users have access to computing resources during the transition,” she said.

    Titan will then be decommissioned, McCorkle said.

    She said the total contract value for the new Summit supercomputer with all options and maintenance is $280 million. The U.S. Department of Energy is funding the project.

    McCorkle said the Oak Ridge Leadership Computing Facility at ORNL has been working with IBM, Nvidia, and Mellanox since 2014 to develop Summit.

    Like Titan, a Cray system, Summit will be part of the Oak Ridge Leadership Computing Facility, or OLCF. Researchers from around the world will be able to submit proposals to use the computer for a wide range of scientific applications, McCorkle said.

    She said the delivery of Summit will start at ORNL next year. Summit will be a hybrid computing system that uses traditional central processing units, or CPUs, and graphics processing units, or GPUs, which were first created for computer games.

    “We’re already scaling applications that will allow Summit to deliver an order of magnitude more science with at least 200 petaflops of compute power,” McCorkle said. “Early in 2018, users from around the world will have access to this resource.”

    Summit will have more than five times the computational power of Titan’s 18,688 nodes, using only about 3,400 nodes. Each Summit node will have IBM POWER9 CPUs and NVIDIA Volta GPUs connected with NVIDIA’s high-speed NVLinks and a huge amount of memory, according to the OLCF.
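    Taken at face value, those numbers imply a very large per-node jump: delivering more than five times Titan's aggregate performance from roughly a fifth as many nodes means each Summit node must supply on the order of 25 to 40 times the compute of a Titan node. A back-of-envelope check using only the figures quoted in this article:

    /* Back-of-envelope check of the per-node comparison; figures from this article. */
    #include <stdio.h>

    int main(void) {
        const double titan_pf     = 27.0;     /* Titan peak, petaflops */
        const double summit_pf    = 200.0;    /* Summit target, petaflops */
        const double titan_nodes  = 18688.0;
        const double summit_nodes = 3400.0;   /* "about 3,400 nodes" */

        double per_node_titan  = titan_pf / titan_nodes;      /* ~0.0014 PF per node */
        double per_node_summit = summit_pf / summit_nodes;    /* ~0.0588 PF per node */

        printf("Per-node ratio: %.0fx\n", per_node_summit / per_node_titan);  /* ~41x */
        return 0;
    }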

    Titan is also a hybrid system that combines CPUs with GPUs. That combination allowed the more powerful Titan to fit into the same space as Jaguar, an earlier supercomputer at ORNL, while using only slightly more electricity. That’s important because supercomputers can consume megawatts of power.

    China now has the top two supercomputers. Sunway TaihuLight was capable of 93 petaflops, and Tianhe-2, an Intel-based system ranked number two in the world, achieved 33.86 petaflops, according to the June version of the TOP500 list.

    But as planned, all three of the new DOE supercomputers would be more powerful than the top two Chinese systems.

    However, DOE officials said it’s not just about the hardware.

    “The strength of the U.S. program lies not just in hardware capability, but also in the ability to develop software that harnesses high-performance computing for real-world scientific and industrial applications,” DOE said. “American scientists have used DOE supercomputing capability to improve the performance of solar cells, to design new materials for batteries, to model the melting of ice sheets, to help optimize land use for biofuel crops, to model supernova explosions, to develop a near zero-friction lubricant, and to improve laser radiation treatments for cancer, among countless other applications.”

    Extensive work is already under way to prepare software and “real-world applications” to ensure that the new machines bring an immediate benefit to American science and industry, DOE said.

    “Investments such as these continue to play a crucial role in U.S. economic competitiveness, scientific discovery, and national security,” the department said.

    DOE said its supercomputers have more than 8,000 active users each year from universities, national laboratories, and industry.

    Among the supercomputer uses that DOE cited:

    Pratt & Whitney used the Argonne Leadership Computing Facility to improve the fuel efficiency of its PurePower turbine engines.
    Boeing used the Oak Ridge Leadership Computing Facility to study the flow of debris to improve the safety of a thrust reverser for its new 787 Dreamliner.
    General Motors used the Oak Ridge Leadership Computing Facility to accelerate research on thermoelectric materials to help increase vehicle fuel efficiency.
    Procter & Gamble used the Argonne Leadership Computing Facility to learn more about the molecular mechanisms of bubbles—important to the design of a wide range of consumer products.
    General Electric used the Oak Ridge Leadership Computing Facility to improve the efficiency of its world-leading turbines for electricity generation.
    Navistar, NASA, the U.S. Air Force, and other industry leaders collaborated with scientists from Lawrence Livermore National Lab to develop technologies that increase semi-truck fuel efficiency by 17 percent.

    Though it was once the top supercomputer, Titan was bumped to number two behind Tianhe-2 in June 2013. It dropped to number three this June.

    As big as a basketball court, Titan is 10 times faster than Jaguar, the computer system it replaced. Jaguar, which was capable of about 2.5 petaflops, had ranked as the world’s fastest computer in November 2009 and June 2010.

    The new top supercomputer, Sunway TaihuLight, was developed by the National Research Center of Parallel Computer Engineering and Technology, or NRCPC, and installed at the National Supercomputing Center in Wuxi, China.

    Tianhe-2 was developed by China’s National University of Defense Technology.

    In the United States, DOE said its Office of Science and National Nuclear Security Administration are collaborating with other U.S. agencies, industry, and academia to pursue the goals of what is known as the National Strategic Computing Initiative:

    accelerating the delivery of “exascale” computing;
    increasing the coherence between the technology base used for modeling and simulation and that for data analytic computing;
    charting a path forward to a post-Moore’s Law era; and
    building the overall capacity and capability of an enduring national high-performance computing ecosystem.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     