Tagged: Supercomputing

  • richardmitnick 11:06 am on November 29, 2016
    Tags: Supercomputing

    From INVERSE: “Japan Reveals Plan to Build the World’s Fastest Supercomputer” 

    INVERSE

    November 25, 2016
    Mike Brown

    Japan is about to try to build the fastest computer the world has ever known. Japan’s Ministry of Economy, Trade and Industry has decided to spend 19.5 billion yen ($173 million) on creating the fastest supercomputer known to the public. The machine will be used to propel Japan into a new era of technological advancement, aiding research into autonomous cars, renewable energy, robots and artificial intelligence (A.I.).

    “As far as we know, there is nothing out there that is as fast,” Satoshi Sekiguchi, director general at Japan’s ‎National Institute of Advanced Industrial Science and Technology, said in a report published Friday.

    The computer is currently called ABCI, which stands for A.I. Bridging Cloud Infrastructure. Companies have already begun bidding for the project, with bidding set to close December 8. The machine is targeted at achieving 130 petaflops, or 130 quadrillion calculations per second.

    Private companies will be able to tap into ABCI’s power for a fee. The machine is aimed at helping develop deep learning applications, which will be vital for future A.I. advancements. One area where deep learning will be crucial is in autonomous vehicles, as systems will be able to analyze the real-world data collected from a car’s sensors to improve its ability to avoid collisions.

    The move follows plans revealed in September for Japan to lead the way in self-driving map technology. The country is aiming to secure its position as a world leader in technological innovation, and the map project aims to set the global standard for autonomous vehicle road maps by getting a head start on data collection. Self-driving cars will need 3D maps to accurately interpret sensor input data and understand their current position.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 3:28 pm on November 23, 2016
    Tags: Computerworld, Supercomputing

    From ALCF via Computerworld: “U.S. sets plan to build two exascale supercomputers” 

    Argonne Lab
    News from Argonne National Laboratory

    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF


    COMPUTERWORLD

    Nov 21, 2016
    Patrick Thibodeau


    The U.S. believes it will be ready to seek vendor proposals to build two exascale supercomputers — costing roughly $200 million to $300 million each — by 2019.

    The two systems will be built at the same time and will be ready for use by 2023, although it’s possible one of the systems could be ready a year earlier, according to U.S. Department of Energy officials.

    But the scientists and vendors developing exascale systems do not yet know whether President-Elect Donald Trump’s administration will change directions. The incoming administration is a wild card. Supercomputing wasn’t a topic during the campaign, and Trump’s dismissal of climate change as a hoax, in particular, has researchers nervous that science funding may suffer.

    At the annual supercomputing conference SC16 last week in Salt Lake City, a panel of government scientists outlined the exascale strategy developed by President Barack Obama’s administration. When the session was opened to questions, the first two were about Trump. One attendee quipped that “pointed-head geeks are not going to be well appreciated.”

    Another person in the audience, John Sopka, a high-performance computing software consultant, asked how the science community will defend itself from claims that “you are taking the money from the people and spending it on dreams,” referring to exascale systems.

    Paul Messina, a computer scientist and distinguished fellow at Argonne National Laboratory who heads the Exascale Computing Project, appeared sanguine. “We believe that an important goal of the exascale computing project is to help economic competitiveness and economic security,” said Messina. “I could imagine that the administration would think that those are important things.”

    Politically, there ought to be a lot in HPC’s favor. A broad array of industries rely on government supercomputers to conduct scientific research, improve products, attack disease, create new energy systems and understand climate, among many other fields. Defense and intelligence agencies also rely on large systems.

    The ongoing exascale research funding (the U.S. budget is $150 million this year) will help with advances in software, memory, processors and other technologies that ultimately filter out to the broader commercial market.

    This is very much a global race, which is something the Trump administration will have to be mindful of. China, Europe and Japan are all developing exascale systems.

    China plans to have an exascale system ready by 2020. These competitors see exascale — and the computing advances required to achieve it — as a pathway to challenging America’s tech dominance.

    “I’m not losing sleep over it yet,” said Messina, of the possibility that the incoming Trump administration may have different supercomputing priorities. “Maybe I will in January.”

    The U.S. will award the exascale contracts to vendors with two different architectures. This is not a new approach and is intended to help keep competition at the highest end of the market. Recent supercomputer procurements include systems built on the IBM Power architecture, Nvidia’s Volta GPU and Cray-built systems using Intel chips.

    The timing of these exascale systems — ready for 2023 — is also designed to take advantage of the upgrade cycles at the national labs. The large systems that will be installed in the next several years will be ready for replacement by the time exascale systems arrive.

    The last big performance milestone in supercomputing occurred in 2008 with the development of a petaflop system. An exascale system delivers an exaflop — 1,000 petaflops — and building one is challenging because of the limits of Moore’s Law, a 1960s-era observation that the number of transistors on a chip doubles about every two years.

    “Now we’re at the point where Moore’s Law is just about to end,” said Messina in an interview. That means the key to building something faster “is by having much more parallelism, and many more pieces. That’s how you get the extra speed.”

    An exascale system will solve a problem 50 times faster than the 20-petaflop systems in use in government labs today.
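
    For readers who want to check that figure, the arithmetic is simple; the short C sketch below is only a back-of-the-envelope check and assumes “exascale” means one exaflop, i.e. 10^18 floating-point operations per second.

    /* Back-of-the-envelope check of the "50 times faster" figure quoted above.
       Assumes "exascale" means 1 exaflops = 1e18 floating-point operations per second. */
    #include <stdio.h>

    int main(void) {
        double exaflops = 1.0e18;    /* 1 exaflops, in flop/s */
        double current  = 20.0e15;   /* a 20-petaflop lab system, in flop/s */
        printf("speedup = %.0fx\n", exaflops / current);   /* prints 50x */
        return 0;
    }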

    Development work has begun on the systems and applications that can utilize hundreds of millions of simultaneous parallel events. “How do you manage it — how do you get it all to work smoothly?” said Messina.

    Another major problem is energy consumption. An exascale machine can be built today using current technology, but such a system would likely need its own power plant. The U.S. wants an exascale system that can operate on 20 megawatts and certainly no more than 30 megawatts.

    Scientists will have to come up with a way “to vastly reduce the amount of energy it takes to do a calculation,” said Messina. The applications and software development are critical because most of the energy is used to move data. And new algorithms will be needed.
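
    To make that energy target concrete: a 20-megawatt budget spread over an exaflop works out to roughly 20 picojoules per floating-point operation. The sketch below simply restates that division; the 1-exaflop figure is an assumption about what “exascale” means here, not a number from the article.

    /* Illustrative only: the energy budget implied by a 20 MW exascale machine. */
    #include <stdio.h>

    int main(void) {
        double power_w  = 20.0e6;    /* 20 megawatts */
        double flops    = 1.0e18;    /* 1 exaflops (assumed) */
        double j_per_op = power_w / flops;                  /* joules per operation */
        printf("energy per operation = %.1f pJ\n", j_per_op * 1e12);  /* 20.0 pJ */
        return 0;
    }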

    About 500 people are working at universities and national labs on the DOE’s coordinated effort to develop the software and other technologies exascale will need.

    Aside from the cost of building the systems, the U.S. will spend millions funding the preliminary work. Vendors want to maintain the intellectual property of what they develop. If it costs, for instance, $50 million to develop a certain aspect of a system, the U.S. may ask the vendor to pay 40% of that cost if it wants to keep the intellectual property.

    A key goal of the U.S. research funding is to avoid creation of one-off technologies that can only be used in these particular exascale systems.

    “We have to be careful,” Terri Quinn, a deputy associate director for HPC at Lawrence Livermore National Laboratory, said at the SC16 panel session. “We don’t want them (vendors) to give us capabilities that are not sustainable in a business market.”

    The work under way will help ensure that the technology research is far enough along to enable the vendors to respond to the 2019 request for proposals.

    Supercomputers can deliver advances in modeling and simulation. Instead of building physical prototypes of something, a supercomputer can allow modeling virtually. This can speed the time it takes something to get to market, whether a new drug or car engine. Increasingly, HPC is used in big data and is helping improve cybersecurity through rapid analysis; artificial intelligence and robotics are other fields with strong HPC demand.

    China will likely beat the U.S. in developing an exascale system, but the real test will be how useful those systems are.

    Messina said the U.S. approach is to develop an exascale ecosystem involving vendors, universities and the government. The hope is that the exascale systems will not only have a wide range of applications ready for them, but applications that are relatively easy to program. Messina wants to see these systems put to immediate and broad use.

    “Economic competitiveness does matter to a lot of people,” said Messina.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 7:17 am on November 19, 2016
    Tags: HPE Annie, Supercomputing

    From BNL: “Brookhaven Lab Advances its Computational Science and Data Analysis Capabilities” 

    Brookhaven Lab

    November 18, 2016
    Ariana Tantillo
    atantillo@bnl.gov

    Using leading-edge computer systems and participating in computing standardization groups, Brookhaven will enhance its ability to support data-driven scientific discoveries

    Members of the commissioning team—(from left to right) Imran Latif, David Free, Mark Lukasczyk, Shigeki Misawa, Tejas Rao, Frank Burstein, and Costin Caramarcu—in front of the newly installed institutional computing cluster at Brookhaven Lab’s Scientific Data and Computing Center.

    At the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory, scientists are producing vast amounts of scientific data. To rapidly process and interpret these data, scientists require advanced computing capabilities—programming tools, numerical models, data-mining algorithms—as well as a state-of-the-art data, computing, and networking infrastructure.

    History of scientific computing at Brookhaven

    Brookhaven Lab has a long-standing history of providing computing resources for large-scale scientific programs. For more than a decade, scientists have been using data analytics capabilities to interpret results from the STAR and PHENIX experiments at the Relativistic Heavy Ion Collider (RHIC), a DOE Office of Science User Facility at Brookhaven, and the ATLAS experiment at the Large Hadron Collider (LHC) in Europe.

    BNL RHIC Campus

    Brookhaven STAR

    Brookhaven PHENIX

    CERN/ATLAS detector

    Millions of particle collisions each second at RHIC and billions at the LHC produce hundreds of petabytes of data—one petabyte is equivalent to approximately 13 years of HDTV video, or nearly 60,000 movies—about the collision events and the emergent particles. More than 50,000 computing cores, 250 computer racks, and 85,000 magnetic storage tapes store, process, and distribute these data that help scientists understand the basic forces that shaped the early universe. Brookhaven’s tape archive for storing data files is the largest one in the United States and the fourth largest worldwide. As the U.S. ATLAS Tier 1 computing center—the largest one worldwide—Brookhaven provides about 25 percent of the total computing and storage capacity for LHC’s ATLAS experiment, receiving and delivering approximately two hundred terabytes of data (picture 62 million photos) to more than 100 data centers around the world each day.
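
    A rough check of those figures, for the curious: averaging 200 terabytes over a day gives a sustained rate of about 2.3 gigabytes per second, and the “13 years of HDTV” comparison is consistent with a roughly 20-megabit-per-second HDTV stream (the bitrate is our assumption, not Brookhaven’s).

    /* Rough checks of the data-volume figures quoted above (assumptions noted inline). */
    #include <stdio.h>

    int main(void) {
        /* ~200 TB delivered per day, averaged over 24 hours */
        double bytes_per_day   = 200.0e12;
        double seconds_per_day = 86400.0;
        printf("average rate = %.1f GB/s\n", bytes_per_day / seconds_per_day / 1e9);   /* ~2.3 GB/s */

        /* "one petabyte ~ 13 years of HDTV" assumes roughly a 20 Mb/s HDTV stream */
        double petabyte_bits = 1.0e15 * 8.0;
        double hdtv_bps      = 20.0e6;        /* assumed bitrate, not from the article */
        double years         = petabyte_bits / hdtv_bps / (365.25 * 86400.0);
        printf("1 PB of HDTV = %.1f years\n", years);                                  /* ~12.7 years */
        return 0;
    }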

    Physicist Srinivasan Rajagopalan with hardware located at Brookhaven Lab that is used to support the ATLAS particle physics experiment at the Large Hadron Collider of CERN, the European Organization for Nuclear Research.

    “This capability to deliver large amounts of computational power makes Brookhaven one of the largest high-throughput computing resources in the country,” said Kerstin Kleese van Dam, director of Brookhaven’s Computational Science Initiative (CSI), which was launched in 2014 to consolidate the lab’s data-centric activities under one umbrella.

    Brookhaven Lab has a more recent history in operating high-performance computing clusters specifically designed for applications involving heavy numerical calculations. In particular, Brookhaven has been home to some of the most powerful supercomputers, including three generations of supercomputers from the New York State–funded IBM Blue Gene series. One generation, New York Blue/L, debuted at number five on the June 2007 top 500 list of the world’s fastest computers. With these high-performance computers, scientists have made calculations critical to research in biology, medicine, materials science, nanoscience, and climate science.

    New York Blue/L is a massively parallel supercomputer that Brookhaven Lab acquired in 2007. At the time, it was the fifth most powerful supercomputer in the world. It was decommissioned in 2014.

    In addition to having supported high-throughput and high-performance computing, Brookhaven has hosted cloud-based computing services for smaller applications, such as analyzing data from cryo-electron microscopy studies of proteins.

    Advanced tools for solving large, complex scientific problems

    Brookhaven is now revitalizing its capabilities for computational science and data analysis so that scientists can more effectively and efficiently solve scientific problems.

    “All of the experiments going on at Brookhaven’s facilities have undergone a technological revolution to some extent,” explained Kleese van Dam. This is especially true with the state-of-the-art National Synchrotron Light Source II (NSLS-II) and the Center for Functional Nanomaterials (CFN)—both DOE Office of Science User Facilities at Brookhaven—each of which continues to attract more users and experiments.

    BNL NSLS-II Interior

    BNL Center for Functional Nanomaterials interior

    “The scientists are detecting more things at faster rates and in greater detail. As a result, we have data rates that are so big that no human could possibly make sense of all the generated data—unless they had something like 500 years to do so!” Kleese van Dam continued.

    In addition to analyzing data from experimental user facilities such as NSLS-II, CFN, RHIC, and ATLAS, scientists run numerical models and computer simulations of the behavior of complex systems. For example, they use models based on quantum chromodynamics theory to predict how elementary particles called quarks and gluons interact. They can then compare these predictions with the interactions experimentally studied at RHIC, when the particles are released after ions are smashed together at nearly the speed of light. Other models include those for materials science—to study materials with unique properties such as strong electron interactions that lead to superconductivity, magnetic ordering, and other phenomena—and for chemistry—to calculate the structures and properties of catalysts and other molecules involved in chemical processes.

    To support these computationally intensive tasks, Brookhaven recently installed at its Scientific Data and Computing Center a new institutional computing system from Hewlett Packard Enterprise (HPE). This institutional cluster—a set of computers that work together as a single integrated computing resource—initially consists of more than 100 compute nodes with processing, storage, and networking capabilities.

    BNL Annie HPE supercomputer

    Each node includes both central processing units (CPUs)—the general-purpose processors commonly referred to as the “brains” of the computer—and graphics processing units (GPUs)—processors that are optimized to perform specific calculations. The nodes have error-correcting code memory, a type of data storage that detects and corrects memory corruption caused, for example, by voltage fluctuations on the computer’s motherboard. This error-correcting capability is critical to ensuring the reliability of Brookhaven’s scientific data, which are stored in different places on multiple hard disks so that reading and writing of the data can be done more efficiently and securely. File system software separates the data into groups called “files” that are named so they can be easily found, similar to how paper documents are sorted and put into labeled file folders. Communication between nodes is enabled by a network that can send and receive data at 100 gigabytes per second—a data rate fast enough to copy a Blu-ray disc in mere seconds—with sub-microsecond latency between data transfers.
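
    As a quick sanity check on the Blu-ray comparison: assuming a 50 GB dual-layer disc (our assumption, not Brookhaven’s), the copy takes about half a second if the 100-gigabyte-per-second figure is taken literally, and about four seconds if the link is actually 100 gigabits per second, the units in which interconnects are often specified.

    /* Quick sanity check on the Blu-ray comparison above; the 50 GB disc size
       is an assumption, and both byte and bit interpretations of the link
       speed are shown for comparison. */
    #include <stdio.h>

    int main(void) {
        double disc_bytes = 50.0e9;                                      /* dual-layer Blu-ray, assumed */
        printf("at 100 GB/s: %.1f s\n", disc_bytes / 100.0e9);           /* 0.5 s */
        printf("at 100 Gb/s: %.1f s\n", disc_bytes / (100.0e9 / 8.0));   /* 4.0 s */
        return 0;
    }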

    The institutional computing cluster will support a range of high-profile projects, including near-real-time data analysis at the CFN and NSLS-II. This analysis will help scientists understand the structures of biological proteins, the real-time operation of batteries, and other complex problems.

    This cluster will also be used for exascale numerical model development efforts, such as for the new Center for Computational Design of Strongly Correlated Materials and Theoretical Spectroscopy. Led by Brookhaven Lab and Rutgers University with partners from the University of Tennessee and DOE’s Ames Laboratory, this center is developing next-generation methods and software to accurately describe electronic correlations in high-temperature superconductors and other complex materials, and a companion database to predict targeted properties, with energy-related application to thermoelectric materials. Brookhaven scientists collaborating on two exascale computing application projects that were recently awarded full funding by DOE—“NWChemEx: Tackling Chemical, Materials and Biomolecular Challenges in the Exascale Era” and “Exascale Lattice Gauge Theory Opportunities and Requirements for Nuclear and High Energy Physics”—will also access the institutional cluster.

    “As the complexity of a material increases, more computer resources are required to efficiently perform quantum mechanical calculations that help us understand the material’s properties. For example, we may want to sort through all the different ways that lithium ions can enter a battery electrode, determining the capacity of the resulting battery and the voltage it can support,” explained Mark Hybertsen, leader of the CFN’s Theory and Computation Group. “With the computing capacity of the new cluster, we’ll be able to conduct in-depth research using complicated models of structures involving more than 100 atoms to understand how catalyzed reactions and battery electrodes work.”

    This figure shows the computer-assisted catalyst design for the oxygen reduction reaction (ORR), which is one of the key challenges to advancing the application of fuel cells in clean transportation. Theoretical calculations based on nanoparticle models provide a way to not only speed up this reaction on conventional platinum (Pt) catalysts and enhance their durability, but also to lower the cost of fuel cell production by alloying (combining) Pt catalysts with the less expensive elements nickel (Ni) and gold (Au).

    In the coming year, Brookhaven plans to upgrade the institutional cluster to 200 nodes, which will subsequently be expanded to 300 nodes in the long term.

    “With these additional nodes, we’ll be able to serve a wider user community. Users don’t get just a share of the system; they get to use the whole system at a given time so that they can address very large scientific problems,” Kleese van Dam said.

    Data-driven scientific discovery

    Brookhaven is also building a novel computer architecture test bed for the data analytics community. Using this test bed, CSI scientists will explore different hardware and software, determining which are most important to enabling data-driven scientific discovery.

    Scientists are initially exploring a newly installed Koi Computers system that comprises more than 100 Intel parallel processors for high-performance computing. This system makes use of solid-state drives—storage devices that function like a flash drive or memory stick but reside inside the computer. Unlike traditional hard drives, solid-state drives do not consecutively read and write information by moving a tiny magnet with a motor; instead, the data are directly stored on electronic memory chips. As a result, solid-state drives consume far less power and thus would enable Brookhaven scientists to run computations much more efficiently and cost-effectively.

    “We are enthusiastic to operate these ultimate hardware technologies at the SDCC [Scientific Data and Computing Center] for the benefit of Brookhaven research programs,” said SDCC Director Eric Lançon.

    Next, the team plans to explore architectures and accelerators that could be of particular value to data-intensive applications.

    For data-intensive applications, such as analyzing experimental results using machine learning, the system’s memory input/output (I/O) rate and compute power are important. “A lot of systems have very slow I/O rates, so it’s very time consuming to get the data, only a small portion of which can be worked on at a time,” Kleese van Dam explained. “Right now, scientists collect data during their experiment and take the data home on a hard drive to analyze. In the future, we’d like to provide near-real-time data analysis, which would enable the scientists to optimize their experiments as they are doing them.”

    Participation in computing standards groups

    In conjunction with updating Brookhaven’s high-performance computing infrastructure, CSI scientists are becoming more involved in standardization groups for leading parallel programming models.

    “By participating in these groups, we’re making sure the high-performance computing standards are highly supportive of our scientists’ requirements for their experiments,” said Kleese van Dam.

    CSI scientists are currently applying to join the OpenMP (Open Multi-Processing) Architecture Review Board. This nonprofit technology consortium manages the OpenMP application programming interface (API) specification for parallel programming on shared-memory systems—those in which individual processes can communicate and share data by using a common memory.
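
    For readers unfamiliar with the standard, the sketch below shows the kind of directive the OpenMP specification defines: a single pragma asks the runtime to split a loop across threads that all share the same memory. It is a generic illustration, not Brookhaven code.

    /* A minimal sketch of shared-memory parallelism with OpenMP. Compile with an
       OpenMP-enabled compiler, e.g. gcc -fopenmp. */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        enum { N = 1000000 };
        static double x[N];
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum)   /* threads share x[]; each adds a partial sum */
        for (int i = 0; i < N; i++) {
            x[i] = 0.5 * i;
            sum += x[i];
        }
        printf("sum = %g (threads available: %d)\n", sum, omp_get_max_threads());
        return 0;
    }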

    In June 2016, Brookhaven became a member of the OpenACC (short for Open Accelerators) consortium. As part of this community of more than 20 research institutions, supercomputing centers, and technology developers, Brookhaven will help determine the future direction of the OpenACC programming standard for parallel computing.

    (From left to right) Robert Riccobono, Nicholas D’Imperio, and Rafael Perez with a NVIDIA Tesla graphics processing unit (GPU) and a Hewlett Packard compute node, where the GPU resides.

    The OpenACC API simplifies the programming of computer systems that combine traditional core processors with accelerator devices, such as GPUs and co-processors. The standard describes a set of instructions that identify computationally intensive parts of the code in common programming languages to be offloaded from a primary host processor to an accelerator. Distributing the processing load enables more efficient computations—like the many that Brookhaven scientists require to analyze the data they generate at RHIC, NSLS-II, and CFN.
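
    A minimal, generic example of that model (not taken from any Brookhaven application) looks like this: directives mark a compute-intensive loop for offload to an accelerator and describe which arrays move between the host and the device.

    /* A minimal OpenACC offload sketch. Compile with an OpenACC compiler,
       e.g. nvc -acc or gcc -fopenacc. */
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0f * i; }

        /* copyin: host -> device before the loop; copyout: device -> host after it */
        #pragma acc parallel loop copyin(a, b) copyout(c)
        for (int i = 0; i < N; i++) {
            c[i] = a[i] + b[i];
        }

        printf("c[42] = %f\n", c[42]);
        return 0;
    }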

    “From imaging the complex structures of biological proteins at NSLS-II, to capturing the real-time operation of batteries at CFN, to recreating exotic states of matter at RHIC, scientists are rapidly producing very large and varied datasets,” said Kleese van Dam. “The scientists need sufficient computational resources for interpreting these data and extracting key information to make scientific discoveries that could lead to the next pharmaceutical drug, longer-lasting battery, or discovery in physics.”

    As an OpenACC member, Brookhaven Lab will help implement the features of the latest C++ programming language standard into OpenACC software. This effort will directly support the new institutional cluster that Brookhaven purchased from HPE.

    “When programming advanced computer systems such as the institutional cluster, scientific software developers face several challenges, one of which is transferring data resident in main system memory to local memory resident on an accelerator such as a GPU,” said computer scientist Nicholas D’Imperio, chair of Brookhaven’s Computational Science Laboratory for advanced algorithm development and optimization. “By contributing to the capabilities of OpenACC, we hope to reduce the complexity inherent in such challenges and enable programming at a higher level of abstraction.”

    A centralized Computational Science Initiative

    These standards development efforts and technology upgrades come at a time when CSI is bringing all of its computer science and applied mathematics research under one roof. In October 2016, CSI staff moved into a building that will accommodate a rapidly growing team and will include collaborative spaces where Brookhaven Lab scientists and facility users can work with CSI experts. The building comprises approximately 60,000 square feet of open space that Brookhaven Lab, with the support of DOE, will develop into a new data center to house its growing data, computing, and networking infrastructure.

    “We look forward to the data-driven scientific discoveries that will come from these collaborations and the use of the new computing technology,” said Kleese van Dam.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 11:48 pm on November 18, 2016
    Tags: Supercomputing

    From CSIRO: “CSIRO seeks new petaflop computer to power research innovation” 

    CSIRO bloc

    Commonwealth Scientific and Industrial Research Organisation

    14 November 2016
    Andrew Warren

    CSIRO has commenced the search for the next generation of high-performance computer to replace its current Bragg accelerator cluster.

    The Bragg accelerator cluster, built by Xenon Systems of Melbourne and located at a data centre in Canberra, Australia.

    In addition to being a key partner and investor in Australia’s national peak computing facilities, the science agency is a global leader in scientific computing and an early pioneer of graphics processing unit-accelerated computing in Australia.

    Bragg debuted on the Top500 list of the world’s fastest supercomputers in 2012 at number 156, and in 2014 it reached number 7 on the Green500 – a ranking of the energy efficiency of the world’s supercomputers.

    Bragg’s replacement will be capable of ‘petaflop’ speeds, significantly exceeding the existing computer’s performance.

    It will boost CSIRO’s already impressive high-performance computing (HPC) capability and is expected to rank highly on the Green500.

    CSIRO’s acting Deputy Chief Information Officer for Scientific Computing, Angus Macoustra, said this replacement computer would be essential to maintaining CSIRO’s ability to solve many of the most important emerging science problems.

    “It’s an integral part of our strategy working alongside national peak computing facilities to build Australian HPC capacity to accelerate great science and innovation,” Mr Macoustra said.

    The cluster will power a new generation of ground-breaking scientific research, including data analysis, modelling, and simulation in a variety of science domains, such as biophysics, material science, molecular modelling, marine science, geochemical modelling, computational fluid dynamics, and more recently, artificial intelligence and data analytics using deep learning.

    The tender for the new machine is calling for a ‘heterogeneous’ system combining traditional central processing units with coprocessors to accelerate both the machine’s performance and energy efficiency.

    The successful bidder will be asked to deliver and support the system for three years within a $4m proposed budget.

    The tender process is currently open for submission at AusTender, and will close on Monday 19 December 2016.

    The winning system is expected to be up and running during the first half of 2017.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    CSIRO campus

    CSIRO, the Commonwealth Scientific and Industrial Research Organisation, is Australia’s national science agency and one of the largest and most diverse research agencies in the world.

     
  • richardmitnick 11:58 am on October 12, 2016
    Tags: "Oak Ridge Scientists Are Writing Code That Not Even The World's Fastest Computers Can Run (Yet)", Department of Energy’s Exascale Computing Project, Summit supercomputer, Supercomputing

    From ORNL via Nashville Public Radio: “Oak Ridge Scientists Are Writing Code That Not Even The World’s Fastest Computers Can Run (Yet)” 


    Oak Ridge National Laboratory


    Nashville Public Radio

    Oct 10, 2016
    Emily Siner

    The current supercomputer at Oak Ridge National Lab, Titan, will be replaced by what could be the fastest computer in the world, Summit — and even that won’t be fast enough for some of the programs being written at the lab. Oak Ridge National Laboratory, U.S. Dept. of Energy

    ORNL IBM Summit supercomputer depiction

    Scientists at Oak Ridge National Laboratory are starting to build applications for a supercomputer that might not go live for another seven years.

    The lab recently received more than $5 million from the Department of Energy to start developing several long-term projects.

    Thomas Evans’s research is among those funded, and it’s a daunting task: His team is trying to predict how small sections of particles inside a nuclear reactor will behave over a long period of time.

    The more precisely they can simulate nuclear reactors on a computer, the better engineers can build them in real life.

    “Analysts can use that [data] to design facilities, experiments and working engineering platforms,” Evans says.

    But these very elaborate simulations that Evans is creating take so much computing power that they cannot run on Oak Ridge’s current supercomputer, Titan — nor will they be able to run on the lab’s new supercomputer, Summit, which could be the fastest in the world when it goes live in two years.

    So Evans is thinking ahead, he says, “to ultimately harness the power of the next generation — technically two generations from now — of supercomputing.

    “And of course, the challenge is, that machine doesn’t exist yet.”

    The current estimate is that this exascale computer, as it’s called, will be several times faster than Summit and go live around 2023. And it could very well take that long for Evans’s team to write code for it.

    The machine won’t just be faster, Evans says. It’s also going to work in a totally new way, which changes how applications are written.

    “In other words, I can’t take a simulation code that we’ve been using now and just drop it in the new machine and expect it to work,” he says.

    The computer will not necessarily be housed at Oak Ridge, but Tennessee researchers are playing a major role in the Department of Energy’s Exascale Computing Project. In addition to Evans’ nuclear reactor project, scientists at Oak Ridge will be leading the development of two other applications, including one that will simulate complex 3D printing. They’ll also assist in developing nine other projects.

    Doug Kothe, who leads the lab’s exascale application development, says the goal is not just to think ahead to 2023. The code that the researchers write should be able to run on any supercomputer built in the next several decades, he says.

    Despite the difficulty, working on incredibly fast computers is also an exciting prospect, Kothe says.

    “For a lot of very inquisitive scientists who love challenges, it’s just a way cool toy that you can’t resist.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 9:10 am on September 30, 2016
    Tags: MAESTRO code for supercomputing, OLCF, OLCF Team Resolves Performance Bottleneck in OpenACC Code, Supercomputing

    From ORNL: “OLCF Team Resolves Performance Bottleneck in OpenACC Code” 


    Oak Ridge National Laboratory


    Oak Ridge Leadership Computing Facility

    September 28, 2016
    Elizabeth Rosenthal

    By improving its MAESTRO code, a team led by Michael Zingale of Stony Brook University is modeling astrophysical phenomena with improved fidelity. Pictured above, a three-dimensional simulation of Type I x-ray bursts, a recurring explosive event triggered by the buildup of hydrogen and helium on the surface of a neutron star.

    For any high-performance computing code, the best performance is both highly effective and highly efficient, using little power but producing high-quality results. However, performance bottlenecks can arise within these codes, which can hinder projects and require researchers to search for the underlying problem.

    A team at the Oak Ridge Leadership Computing Facility (OLCF), a US Department of Energy (DOE) Office of Science User Facility located at DOE’s Oak Ridge National Laboratory, recently addressed a performance bottleneck in one portion of an OLCF user’s application. Because of its efforts, the user’s team saw a sixfold performance improvement in the code. Team members for this project include Frank Winkler (OLCF), Oscar Hernandez (OLCF), Adam Jacobs (Stony Brook University), Jeff Larkin (NVIDIA), and Robert Dietrich (Dresden University of Technology).

    “If the code runs faster, then you need less power. Everything is better, more efficient,” said Winkler, performance tools specialist at the OLCF. “That’s why we have performance analysis tools.”

    Known as MAESTRO, the astrophysics code in question models the burning of exploding stars and other stellar phenomena. Such modeling is possible because of the code’s OpenACC configuration, an approach meant to simplify the programming of CPU and GPU systems. The OLCF team worked specifically with the piece of the algorithm that models the physics of nuclear burning.

    Initially that portion of MAESTRO did not perform as well as expected because the GPUs could not quickly access the data. To remedy the situation the team used diagnostic analysis tools to discover the reason for the delay. Winkler explained that Score-P, a performance measurement tool, traces the application, whereas VAMPIR, a performance visualization tool, conceptualizes the trace file, allowing users to see a timeline of activity within a code.

    “When you trace the code, you record each significant event in sequence,” Winkler said.

    By analyzing the results, the team found that although data moving from CPUs to GPUs performed adequately, the code was significantly slower when sending data from GPUs back to CPUs. Larkin, an NVIDIA software engineer, suggested using a compiler flag—a build-time option that changes how the compiler handles the code, in this case where data are placed—to store data in a location more convenient for the GPUs, which resulted in the code’s dramatic speedup.
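
    The article does not reproduce MAESTRO’s code or name the specific flag, so the sketch below is only a generic OpenACC illustration of the traffic involved: a data region keeps an array resident on the GPU across many kernel launches, and an explicit update marks the one point where results must travel back from the GPU to the CPU.

    /* Generic OpenACC data-movement sketch, not MAESTRO code. */
    #include <stdio.h>

    #define N 100000

    int main(void) {
        static double state[N];
        for (int i = 0; i < N; i++) state[i] = 1.0;

        #pragma acc data copy(state)             /* one transfer in, one transfer out */
        {
            for (int step = 0; step < 100; step++) {
                #pragma acc parallel loop        /* runs on the GPU; no per-step copies */
                for (int i = 0; i < N; i++) {
                    state[i] *= 1.001;
                }
            }
            #pragma acc update self(state)       /* explicit GPU -> CPU copy when needed */
            printf("state[0] after 100 steps = %f\n", state[0]);
        }
        return 0;
    }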

    Jacobs, an astrophysicist working on a PhD at Stony Brook, brought the OpenACC code to the OLCF in June to get expert assistance. Jacobs is a member of a research group led by Michael Zingale, also of Stony Brook.

    During the week Jacobs spent at the OLCF, the team ran MAESTRO on the Titan supercomputer, the OLCF’s flagship hybrid system.

    ORNL Cray Titan Supercomputer

    By leveraging tools like Score-P and VAMPIR on this system, the team employed problem-solving skills and computational analysis to resolve the bottleneck—and did so after just a week of working with the code. Both Winkler and Jacobs stressed that their rapid success depended on collaboration; the individuals involved, as well as the OLCF, provided the necessary knowledge and resources to reach a mutually beneficial outcome.

    “We are working with technology in a way that was not possible a year ago,” Jacobs said. “I am so grateful that the OLCF hosted me and gave me their time and experience.”

    Because of these improvements, the MAESTRO code can run the latest nuclear burning models faster and perform higher-level physics than before—capabilities that are vital to computational astrophysicists’ investigation of astronomical events like supernovas and x-ray bursts.

    “There are two main benefits to this performance improvement,” Jacobs said. “First, your code is now getting to a solution faster, and second, you can now spend a similar amount of time working on something much more complicated.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 10:15 am on September 23, 2016
    Tags: Science | Business, Supercomputing

    From SKA via Science Business: “Square Kilometre Array prepares for the ultimate big data challenge” 

    SKA Square Kilometer Array

    SKA


    Science | Business

    22 September 2016
    Éanna Kelly

    The world’s most powerful radio telescope will collect more information each day than the entire internet. Major advances in computing are required to handle this data, but it can be done, says Bernie Fanaroff, strategic advisor for the SKA

    The Square Kilometre Array (SKA), the world’s most powerful telescope, will be ready from day one to gather an unprecedented volume of data from the sky, even if the supporting technical infrastructure is yet to be built.

    “We’ll be ready – the technology is getting there,” Bernie Fanaroff, strategic advisor for the most expensive and sensitive radio astronomy project in the world, told Science|Business.

    Construction of the SKA is due to begin in 2018 and finish sometime in the middle of the next decade. Data acquisition will begin in 2020, requiring a level of processing power and data management know-how that outstretches current capabilities.

    Astronomers estimate that the project will generate 35,000 DVDs’ worth of data every second. This is equivalent to “the whole world wide web every day,” said Fanaroff.
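
    Taking a standard 4.7 GB single-layer DVD as the unit (the article gives only the disc count, not the capacity), that works out to roughly 165 terabytes per second, or on the order of 14 exabytes per day:

    /* Rough scale of the figure quoted above, assuming 4.7 GB per DVD. */
    #include <stdio.h>

    int main(void) {
        double dvd_bytes  = 4.7e9;                                        /* single-layer DVD, assumed */
        double per_second = 35000.0 * dvd_bytes;                          /* ~1.6e14 bytes/s */
        printf("per second: %.1f TB\n", per_second / 1e12);               /* ~164.5 TB/s */
        printf("per day:    %.1f EB\n", per_second * 86400.0 / 1e18);     /* ~14.2 EB/day */
        return 0;
    }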

    The project is investing in machine learning and artificial intelligence software tools to enable the data analysis. In advance of construction of the vast telescope – which will consist of some 250,000 radio antennas split between sites in Australia and South Africa – SKA already employs more than 400 engineers and technicians in infrastructure, fibre optics and data collection.

    The project is also working with IBM, which recently opened a new R&D centre in Johannesburg, on a new supercomputer. SKA will have two supercomputers to process its data, one based in Cape Town and one in Perth, Australia.

    Recently, elements of the software under development were tested on the world’s second fastest supercomputer, the Tianhe-2, located in the National Supercomputer Centre in Guangzhou, China. It is estimated a supercomputer with three times the power of Tianhe-2 will need to be built in the next decade to cope with all the SKA data.

    In addition to the analysis, the project requires large off-site data warehouses. These will house storage devices custom-built in South Africa. “There were too many bells and whistles with the stuff commercial providers were offering us. It was far too expensive, so we’ve designed our own servers which are cheaper,” said Fanaroff.

    Fanaroff was formerly director of SKA, retiring at the end of 2015, but remaining as a strategic advisor to the project. He was in Brussels this week to explore how African institutions could gain access to the European Commission’s new Europe-wide science cloud, tentatively scheduled to go live in 2020.

    Ten countries are members of the SKA, which has its headquarters at Manchester University’s Jodrell Bank Observatory, home of the world’s third largest fully-steerable radio telescope. The bulk of SKA’s funding has come from South Africa, Australia and the UK.

    Currently its legal status is as a British registered company, but Fanaroff says the plan is to create an intergovernmental arrangement similar to CERN. “The project needs a treaty to lock in funding,” he said.

    Early success

    On SKA’s website is a list of five untold secrets of the cosmos, which the telescope will explore. These include how the very first stars and galaxies formed just after the Big Bang.

    However, Fanaroff believes the Eureka moment will be something nobody could have imagined. “It’ll make its name, like every telescope does, by discovering an unknown, unknown,” he said.

    A first taste of the SKA’s potential arrived in July through the MeerKAT telescope, which will form part of the SKA. MeerKAT will eventually consist of 64 dishes, but the power of the 16 already installed has surpassed Fanaroff’s expectations.

    SKA Meerkat telescope, 90 km outside the small Northern Cape town of Carnarvon, SA

    The telescope revealed over a thousand previously unknown galaxies. “Two things were remarkable: when we switched it on, people told us it was going to take a long time to work. But it collected very good images from day one. Also, our radio receivers worked four times better than specified,” he said. Some 500 scientists have already booked time on the array.

    Researchers with the Breakthrough Listen project, a search for intelligent life funded by Russian billionaire Yuri Milner, would also like a slot, Fanaroff said. Their hunt is exciting and a good example of the sort of bold mission for which SKA will be built. “It’s high-risk, high-reward territory. If you search for aliens and you find nothing, you end your career with no publications. But on the other hand you could be involved in one of the biggest discoveries ever,” said Fanaroff.

    Golden age

    SKA has helped put South Africa’s scientific establishment in the shop window, says Fanaroff, referring to the recent Nature Index, which indicates the country’s scientists are publishing record levels of high-quality research, mostly in astronomy. “It’s the start of a golden age,” Fanaroff predicted.

    Not that the SKA does not have its critics. With so much public funding going to the telescope, “Some scientists were a little bit bitter at the beginning,” Fanaroff said. “But that has faded with the global interest from science and industry we’re attracting. The SKA can go on to be a platform for all science in Africa, not just astronomy.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    SKA Banner

    SKA ASKAP Pathfinder Telescope

    SKA Meerkat Telescope

    SKA Murchison Widefield Array

    About SKA

    The Square Kilometre Array will be the world’s largest and most sensitive radio telescope. The total collecting area will be approximately one square kilometre giving 50 times the sensitivity, and 10 000 times the survey speed, of the best current-day telescopes. The SKA will be built in Southern Africa and in Australia. Thousands of receptors will extend to distances of 3 000 km from the central regions. The SKA will address fundamental unanswered questions about our Universe including how the first stars and galaxies formed after the Big Bang, how dark energy is accelerating the expansion of the Universe, the role of magnetism in the cosmos, the nature of gravity, and the search for life beyond Earth. Construction of phase one of the SKA is scheduled to start in 2016. The SKA Organisation, with its headquarters at Jodrell Bank Observatory, near Manchester, UK, was established in December 2011 as a not-for-profit company in order to formalise relationships between the international partners and centralise the leadership of the project.

    The Square Kilometre Array (SKA) project is an international effort to build the world’s largest radio telescope, led by SKA Organisation. The SKA will conduct transformational science to improve our understanding of the Universe and the laws of fundamental physics, monitoring the sky in unprecedented detail and mapping it hundreds of times faster than any current facility.

    Already supported by 10 member countries – Australia, Canada, China, India, Italy, New Zealand, South Africa, Sweden, The Netherlands and the United Kingdom – SKA Organisation has brought together some of the world’s finest scientists, engineers and policy makers and more than 100 companies and research institutions across 20 countries in the design and development of the telescope. Construction of the SKA is set to start in 2018, with early science observations in 2020.

     
  • richardmitnick 9:32 am on September 19, 2016
    Tags: Epilepsy, Supercomputing

    From Science Node: “Stalking epilepsy” 

    Science Node

    15 Sep, 2016
    Lance Farrell

    Courtesy Enzo Varriale. (CC BY-ND 2.0)

    Scientists in Italy may have found a brain activity marker that forecasts epilepsy development. All it took was some big computers and tiny mice.

    By the time you finish reading this story, an untold number of people will have had a stroke, or have suffered a traumatic brain injury, or perhaps have been exposed to a toxic chemical agent. These events occur every day, millions of times each year. Of these victims, nearly 1 million go on to develop epilepsy.

    These events — stroke, brain injury, toxic exposure, among others — are some of the known causes of epilepsy (also known as epileptogenic events), but not all who suffer from them develop epilepsy. Scientists today struggle to identify people who will develop epilepsy following the exposure to risk factors.

    Even if identification were possible, there are no treatments available to prevent the emergence of epilepsy. The development of such therapeutics is a holy grail of epilepsy research, since this would reduce the incidence of epilepsy by about 40 percent.

    The development of anti-epileptogenic treatments awaits identification of a so-called epileptogenic marker – that is, a measurable event that occurs only during the development of epilepsy, when seizures have yet to become clinically evident.

    A European Grid Infrastructure (EGI) collaboration, led by Massimo Rizzi at the Mario Negri Institute for Pharmacological Research, appears to have pinpointed just such a marker. All it took was some heavy-duty grid computing and a handful of mice.

    Epilepsies

    Epilepsy comes in many varieties, and is characterized as a seizure-inducing condition of the brain. These seizures result from the simultaneous signaling of multiple neurons. Considered chronic, this neurological disorder afflicts some 65 million people internationally.

    Computing team. Scientists credit a recent breakthrough in epilepsy research to the computational power provided by the Italian National Institute of Nuclear Physics (INFN), a key component of the Italian Grid Infrastructure (IGI) and European Grid Infrastructure (EGI). Courtesy INFN.

    Incurable at present, epileptic seizures are controllable to a large extent, typically with pharmacological agents. Changes in diet can reduce seizures, as can electrical devices and surgeries. According to the US National Institutes of Health (NIH), annual US epilepsy-related costs are estimated at $15.5 billion.

    Rizzi and his colleagues thought an alteration in brain electrical activity following the exposure to a risk factor might be a smart place to look for an epileptogenic marker. If this marker could be located, it could then be exploited to develop treatments that prevent the emergence of epilepsy.

    Of mice and men

    To search for the marker, Rizzi’s team focused their attention on an animal model of epilepsy. Mice developed epilepsy after exposure to a cerebral insult that mimics the effects of risk factors as they would occur in humans.

    Examining the brain electrical activity of these mice, Rizzi’s team combed through 32,000 epidural electrocorticograms (ECoG), 12 seconds/4800 data points at a time, for up to an hour preceding the first epileptic seizure.

    Each swath of ECoGs was run through recurrence quantification analysis (RQA), a powerful mathematical tool specifically designed for the investigation of non-linear complex dynamics embedded in time-series readings such as the ECoG.
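
    The article does not spell out the team’s RQA settings, but the simplest RQA measure, the recurrence rate, gives a feel for the method: within a window, count the fraction of sample pairs that lie within a small threshold of each other. The sketch below applies that idea to a synthetic 4,800-point window; the embedding, thresholds, and measures used in the actual study are not given here, and the signal is made up.

    /* Minimal sketch of one RQA measure (recurrence rate) on a single window. */
    #include <stdio.h>
    #include <math.h>

    /* Fraction of (i, j) pairs with |x[i] - x[j]| < eps over a window of n samples. */
    double recurrence_rate(const double *x, int n, double eps) {
        long hits = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (fabs(x[i] - x[j]) < eps)
                    hits++;
        return (double)hits / ((double)n * n);
    }

    int main(void) {
        enum { N = 4800 };               /* one 12-second ECoG window, as in the article */
        static double window[N];
        for (int i = 0; i < N; i++)      /* synthetic stand-in for an ECoG trace */
            window[i] = sin(0.05 * i) + 0.1 * sin(1.7 * i);
        printf("recurrence rate = %.3f\n", recurrence_rate(window, N, 0.1));
        return 0;
    }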

    Thinking of a mouse. Brain activity of a mouse that developed epilepsy following exposure to an infusion of albumin.

    When the dust had settled, nearly 400,000 seconds of ECoGs revealed a telling pattern. The scientists found that high rates of dynamic intermittency accompany the development of epilepsy. In other words, the ECoGs of mice developing epilepsy from the induced trauma would rapidly alter between nearly periodic and then irregular behavior of brain electrical activity.

    Noting this signal, researchers applied an experimental anti-epileptogenic treatment that successfully reduced the rate of occurrence of this complex oscillation pattern. Identification of the complex oscillation and its arrest under treatment led Rizzi and his team to confidently assert that high rates of dynamic intermittency can be considered as a marker of epileptogenesis. Their research was recently published in Scientific Reports.

    Tools of the trade

    Rizzi’s team made good use of the computational and storage resources at the Italian National Institute of Nuclear Physics (INFN). The INFN is a key component in the Italian Grid Infrastructure (IGI), which is integrated into the larger EGI, Europe’s leading grid computing infrastructure.

    “The time required to accomplish calculations of these datasets would have taken more than two months by an ordinary PC, instead of a little more than two days using grid computing,” says Rizzi. “Considering also the preliminary settings of analytical conditions and validation tests of results, almost two years of calculations were collapsed into a couple of months by high throughput computing technology.”

    From here, Rizzi hands off to pre-clinical researchers who can begin to develop interventions that will reduce and hopefully eliminate the emergence of epilepsy after exposure to risk factors. This knowledge holds out promise for use in the development of anti-epileptogenic therapies.

    “This insight will help us reduce the incidence of epilepsy by approximately 40 percent,” Rizzi estimates. “Our future aim is to exploit our finding in order to improve the development of therapeutics. High throughput computer technology will keep on playing a fundamental role by significantly speeding up this field of research.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 9:52 am on September 17, 2016 Permalink | Reply
    Tags: , , Oakley Cluster, Ohio Supercomputer Center, Supercomputing   

    From Ohio Supercomputer Center: “Ohio Supercomputer Center gets super-charged Owens Cluster up and running – PHOTOS” 

    Oakley supercomputer

    Ohio Supercomputer Center


    Columbus Business First

    Sep 15, 2016
    Carrie Ghose

    The Ohio Supercomputer Center has started bringing online its $7 million Owens Cluster, named for Olympian and Ohio State University alum Jesse Owens. It has 25 times the computing power and 450 times the memory of the 8-year-old Glenn Cluster it replaces in the same floor space.

    And it cost $1 million less.

    Owens Cluster

    “Supercomputers have gotten as big as they’re going to get,” said David Hudak, OSC’s interim executive director. “They’re getting denser now. … The thing about a supercomputer is it’s built from the same components as laptops or desktops or servers.”

    The “super” comes from the connections between those components and the software coordinating them, getting thousands of computing cores to work together like a single, vastly faster machine.
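
    As a generic illustration (not code specific to Owens), the snippet below uses MPI, the message-passing standard typically found on such clusters, via the mpi4py package: each rank computes part of a simple integral for pi, and one collective call over the interconnect combines the partial results so the ranks act like a single larger machine.

```python
# Run with, for example: mpiexec -n 4 python pi_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank integrates its own slice of [0, 1] for pi = integral of 4/(1+x^2) ...
n = 10_000_000
x = (np.arange(rank, n, size) + 0.5) / n
partial = np.sum(4.0 / (1.0 + x * x)) / n

# ... and a single collective call combines the partial sums across ranks.
pi = comm.allreduce(partial, op=MPI.SUM)
if rank == 0:
    print(f"pi ~ {pi:.8f} computed by {size} ranks")
```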

    Dell Inc. won the bid process to build Owens, based on criteria including speed, memory, bandwidth, interconnection speed, disk speeds and processor architecture. Companies such as HP, IBM Corp. and Cray Inc. also build supercomputers. The project comes out of OSC’s $12 million capital appropriation in the two-year budget.

    The machines arrive blank, so the center must install the operating system and the software that makes everything work, drawn from its own repositories or from vendor sites.

    The installation this summer came during regular quarterly downtime for maintenance, so users weren’t caught off guard. When a portion of Owens was fired up in late August, a few users were brought on. The center can only turn on a portion at a time until the cooling system is completely installed.

    When it’s fully running by early November, researchers will be able to do in hours what used to take days, allowing more adjustments and iterations of experiments. About 10 percent of use is by private industry for data analytics and product simulations.

    “When they’re given more capacity, they dial up the fidelity of their experiments,” Hudak said. “It’s about being able to do these simulations at much higher (detail and speed).”

    The retired cluster was named for astronaut and U.S. Sen. John Glenn; one cabinet still is plugged in until Owens is fully up. A few retired Glenn cabinets were labeled as museum pieces to explain the parts of a supercomputer and are on display at a few colleges and JobsOhio. A few hundred of its processing chips were turned into refrigerator magnets. The state will try to sell the bulk of the 2008 equipment for parts or recycling.

    OSC’s office and the actual computer are in different buildings. The data center inside the State of Ohio Computing Center on west campus of Ohio State University uses 1.1 megawatts of power, more than half of that for cooling.

    Behold, Moore’s Law in action, sort of, via the progression of OSC’s equipment (a rough cost-efficiency comparison follows the list):

    Glenn Cluster (decommissioned): 2008, $8 million, 5,000 processors, speed of 40 teraflops (trillion “floating-point operations per second”).
    Oakley: 2012, $4.5 million, 8,000 processors, 150 teraflops.
    Ruby: 2014, $1.5 million, 4,800 processors (at one-third the floor space of Glenn and half that of Oakley), 140 teraflops.
    Owens: 2016, $7 million, 23,500 processors, 800 teraflops.
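
    A rough comparison, using only the figures quoted above and assuming the listed prices and peak teraflops are comparable across generations, shows how much the compute-per-dollar has improved:

```python
# Back-of-the-envelope comparison of the OSC systems listed above,
# using only the figures quoted in the article.
systems = {
    # name: (year, cost in $ millions, processors, peak teraflops)
    "Glenn":  (2008, 8.0,  5_000,  40),
    "Oakley": (2012, 4.5,  8_000, 150),
    "Ruby":   (2014, 1.5,  4_800, 140),
    "Owens":  (2016, 7.0, 23_500, 800),
}

for name, (year, cost_m, procs, tflops) in systems.items():
    print(
        f"{name:6s} ({year}): {tflops / cost_m:6.1f} TF per $1M, "
        f"{1000 * tflops / procs:5.1f} GF per processor"
    )
# Glenn works out to about 5 TF per $1M, Owens to about 114 TF per $1M:
# roughly a 20x improvement in compute per dollar in eight years.
```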

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Ohio Supercomputer Center empowers a wide array of groundbreaking innovation and economic development activities in the fields of bioscience, advanced materials, data exploitation and other areas of state focus by providing a powerful high performance computing, research and educational cyberinfrastructure for a diverse statewide/regional constituency.

    The Ohio Supercomputer Center partners strategically with Ohio researchers — especially following the 2002 establishment of a focused research support program — in developing competitive, collaborative proposals to regional, national, and international funding organizations to solve some of the world’s most challenging scientific and engineering problems.

    The Ohio Supercomputer Center leads strategic research activities of vital interest to the State of Ohio, the nation and the world community, leveraging the exceptional skills and knowledge of an in-house research staff specializing in the fields of supercomputing, computational science, data management, biomedical applications and a host of emerging disciplines.

     
  • richardmitnick 8:25 am on September 9, 2016 Permalink | Reply
    Tags: , , , Supercomputing   

    From PNNL: “Advanced Computing, Mathematics and Data Research Highlights” 

    PNNL Lab

    September 2016

    Global Arrays Gets an Update from PNNL and Intel Corp.

    Scientists Jeff Daily, Abhinav Vishnu, and Bruce Palmer, all from the ACMD Division High Performance Computing group at PNNL, served as the core team for a new release of the Global Arrays (GA) toolkit, known as Version 5.5. GA 5.5 provides additional support and bug fixes for the parallel Partitioned Global Address Space (PGAS) programming model.

    GA 5.5 incorporates support for libfabric (https://ofiwg.github.io/libfabric/), which helps meet performance and scalability requirements of high-performance applications, such as PGAS programming models (like GA), Message Passing Interface (MPI) libraries, and enterprise applications running in tightly coupled network environments. The updates to GA 5.5 resulted from a coordinated effort between the GA team and Intel Corp. Along with incorporating support for libfabric, the update added native support for the Intel Omni-Path high-performance communication architecture and applied numerous bug fixes since the previous GA 5.4 release to both Version 5.5 and the ga-5-4 release branch of GA’s subversion repository.
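
    To give a feel for the PGAS model that GA implements, here is a deliberately simplified, single-process Python toy: the array is split into per-“process” chunks, yet callers read and write it through global indices as if it were one shared array. The class and method names are invented for illustration; this is not the Global Arrays or ComEx API.

```python
import numpy as np

class ToyGlobalArray:
    """Single-process toy of the partitioned-global-address-space idea:
    data is partitioned into chunks owned by different "processes", but
    callers address the whole array with global indices and never need
    to know which chunk holds the data. Illustration only, not the GA API."""

    def __init__(self, length, nprocs):
        # Split [0, length) into nprocs contiguous chunks.
        self.bounds = np.linspace(0, length, nprocs + 1, dtype=int)
        self.chunks = [np.zeros(hi - lo)
                       for lo, hi in zip(self.bounds, self.bounds[1:])]

    def _owner(self, i):
        # Which "process" owns global index i, and at what local offset?
        p = int(np.searchsorted(self.bounds, i, side="right")) - 1
        return p, i - self.bounds[p]

    def put(self, i, value):
        p, offset = self._owner(i)
        self.chunks[p][offset] = value   # one-sided write into the owner's chunk

    def get(self, i):
        p, offset = self._owner(i)
        return self.chunks[p][offset]    # one-sided read from the owner's chunk

ga = ToyGlobalArray(length=100, nprocs=4)
ga.put(42, 3.14)        # caller uses the global index 42 ...
print(ga.get(42))       # ... without knowing which chunk owns it
```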

    Originally developed in the late 1990s at PNNL, the GA toolkit offers diverse libraries employed within many applications, including quantum chemistry and molecular dynamics codes (notably, NWChem), as well as those used for computational fluid dynamics, atmospheric sciences, astrophysics, and bioinformatics.

    “This was a significant effort from Intel to work with us on the libfabric, and eventual Intel Omni-Path, support,” Daily explained. “Had we not refactored our Global Arrays one-sided communication library, ComEx, a few years ago to make it easier to port to new systems, this would not have been possible. Now that our code is much easier to integrate with, we envision more collaborations like this in the future.”

    Download information for GA 5.5 and the GA subversion repository is available here.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Pacific Northwest National Laboratory (PNNL) is one of the United States Department of Energy National Laboratories, managed by the Department of Energy’s Office of Science. The main campus of the laboratory is in Richland, Washington.

    PNNL scientists conduct basic and applied research and development to strengthen U.S. scientific foundations for fundamental research and innovation; prevent and counter acts of terrorism through applied research in information analysis, cyber security, and the nonproliferation of weapons of mass destruction; increase the U.S. energy capacity and reduce dependence on imported oil; and reduce the effects of human activity on the environment. PNNL has been operated by Battelle Memorial Institute since 1965.


     