Tagged: Supercomputing

  • richardmitnick 10:15 am on August 9, 2019 Permalink | Reply
    Tags: Cray Shasta for US Air Force, Supercomputing

    From insideHPC: “Cray Shasta Supercomputer to power weather forecasting for U.S. Air Force” 

    From insideHPC

    August 9, 2019

    Today Cray announced that their first Shasta supercomputing system for operational weather forecasting and meteorology will be acquired by the Air Force Life Cycle Management Center in partnership with Oak Ridge National Laboratory. The powerful high-performance computing capabilities of the new system, named HPC11, will enable higher fidelity weather forecasts for U.S. Air Force and Army operations worldwide. The contract is valued at $25 million.

    “We’re excited with our Oak Ridge National Laboratory strategic partner’s selection of Cray to provide Air Force Weather’s next high performance computing system,” said Steven Wert, Program Executive Officer Digital, Air Force Life Cycle Management Center at Hanscom Air Force Base in Massachusetts, and a member of the Senior Executive Service. “The system’s performance will be a significant increase over the existing HPC capability and will provide Air Force Weather operators with the ability to run the next generation of high-resolution, global and regional models, and satisfy existing and emerging warfighter needs for environmental impacts to operations planning.”

    Oak Ridge National Laboratory (ORNL) has a history of deploying the world’s most powerful supercomputers and through this partnership, will provide supercomputing-as-a-service on the HPC11 Shasta system to the Air Force 557th Weather Wing. The 557th Weather Wing develops and provides comprehensive terrestrial and space weather information to the U.S. Air Force and Army. The new system will feature the revolutionary Cray Slingshot interconnect, with features to better support time-critical numerical weather prediction workloads, and will enhance the Air Force’s capabilities to create improved weather forecasts and weather threat assessments so that Air Force missions can be carried out more effectively.

    “The HPC11 system will be the first Shasta delivery to the production weather segment, and we’re proud to share this milestone with ORNL and the Air Force,” said Peter Ungaro, president and CEO at Cray. “The years of innovation behind Shasta and Slingshot and the success of prior generations of Cray systems continue to demonstrate Cray’s ability to support demanding 24/7 operations like weather forecasting. This is a great example of the upcoming Exascale Era bringing a new set of technologies to bear on challenging problems and empowering the Air Force to more effectively execute on its important mission.”


    In this video, Cray CTO Steve Scott announces Slingshot, the company’s new high-speed, purpose-built supercomputing interconnect, and introduces its many ground-breaking features.

    HPC11 will be ORNL’s first Cray Shasta system, as well as the first supercomputing system with 2nd Gen AMD EPYC™ processors for use in operational weather forecasting. HPC11 will join the 85% of weather centers that rely on Cray and will feature eight Shasta cabinets in a dual-hall configuration.

    “We are incredibly excited to continue our strategic collaboration with Cray to deliver the first Shasta supercomputer to the U.S. Air Force, helping to improve the fidelity of weather forecasts for U.S. military operations around the globe,” said Forrest Norrod, senior vice president and general manager, Datacenter and Embedded Systems Group, AMD. “The 2nd Gen AMD EPYC processors provide exceptional performance in highly complex workloads, a necessary component to power critical weather prediction workloads and deliver more accurate forecasts.”

    The system is expected to be delivered in Q4 2019 and accepted in early 2020.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 1:31 pm on August 8, 2019 Permalink | Reply
    Tags: ALLOT-See.Control.Secure, Lenovo’s ThinkSystem SR635 and SR655, Supercomputing

    From insideHPC: “Lenovo Launches Single Socket Servers with AMD EPYC 7002 Series” 

    From insideHPC

    Today Lenovo introduced the Lenovo ThinkSystem SR635 and SR655 server platforms, two of the industry’s most powerful single-socket servers.

    Lenovo ThinkSystem SR635 & SR655 Servers with AMD’s EPYC ‘Rome’ CPUs

    As businesses are tasked with doing more with less, the new Lenovo solutions provide the performance of a dual-socket server at the cost of a single-socket. These new additions to Lenovo’s expansive server portfolio are powered by next-generation AMD EPYC 7002 Series processors and were designed specifically to handle customers’ evolving, data-intensive workloads such as video security, software-defined storage and network intelligence, as well as support for virtualized and edge environments. The result is a solution that packs power along with efficiency for customers who place a premium on balancing throughput and security with easy scalability.

    Organizations are juggling business priorities with tight budgets. Lenovo’s new ThinkSystem SR635 and SR655 server platforms not only allow customers to run more workloads on fewer servers, but also offer up to 73 percent savings on potential software licensing, empowering users to accelerate emerging workloads more efficiently. Additionally, customers can realize a reduction in total cost of ownership (TCO) by up to 46 percent. Further supporting these advances in workload efficiency and TCO savings are the ThinkSystem SR635 and SR655 servers’ world records for energy efficiency. The net result of all these enhancements is better price for performance.

    The Lenovo ThinkSystem SR635 and SR655 provide more throughput, lower latency and higher core density, as well as the largest NVMe drive capacity of any single-socket server on the market. Beyond that, the new 2nd Gen AMD EPYC processor-based systems also provide a solid opportunity to enable additional hyperconverged infrastructure solutions. This gives Lenovo the ability to offer customers VX and other certified nodes and appliances for simple deployment, management, and scalability.

    Unleashing Smarter Networks with Data Intelligence

    Many customers have been eagerly anticipating these systems due to their ability to handle data-intensive workloads, including Allot, a leading global provider of innovative network intelligence and security solutions for communications service providers (CSPs) and enterprises. They require solutions that can turn network, application, usage and security data into actionable intelligence that make their customers’ networks smarter and their users more secure. The market dictates that those solutions be able to match their ever-evolving needs and help them to address new pain points that surface as IT demands continue to change.

    “Lenovo’s integration of next generation I/O and processing technology gives Allot the ability to manage more network bandwidth at higher speeds, allowing us to pull actionable insights from increasingly heavy traffic without any degradation in performance,” said Mark Shteiman, AVP of Product Management with Allot. “The communication service providers and enterprises we support need to see and react to their network needs in real time. We evaluated Lenovo’s ThinkSystem SR635 and SR655 server platform prototypes and were immediately impressed.”

    The new Lenovo ThinkSystem SR635 and SR655 solutions are now available through Lenovo sales representatives and channel partners across the globe.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 1:02 pm on August 8, 2019 Permalink | Reply
    Tags: Army Research Lab (ARL), ERDC-U.S. Army Engineering and Research Development Center, Supercomputing

    From insideHPC: “AMD to Power Two Cray CS500 Systems at Army Research Centers” 

    From insideHPC

    August 8, 2019

    Today Cray announced that the U.S. Department of Defense (DOD) has selected two Cray CS500 systems for its High Performance Computing Modernization Program (HPCMP) annual technology procurement known as TI-18.

    The Army Research Lab (ARL) and the U.S. Army Engineering and Research Development Center (ERDC) will each deploy a Cray CS500 to help serve the U.S. through accelerated research in science and technology.

    Cray CS500

    The two contracts are valued at more than $46M and the CS500 systems are expected to be delivered to ARL and ERDC in the fourth quarter of 2019.

    “We’re proud to continue to support the DOD and its advanced use of high-performance computing in providing ARL and ERDC new systems for their research programs,” said Peter Ungaro, CEO at Cray. “We’re looking forward to continued collaboration with the DOD in leveraging the capabilities of these new systems to achieve their important mission objectives.”

    Cray has a long history of delivering high-performance computing technologies to ARL and ERDC and continues to play a vital role in helping the organizations deliver on their missions to ensure the U.S. remains a leader in science. Both organizations’ CS500 systems will be equipped with 2nd Gen AMD EPYC processors and NVIDIA Tensor Core GPUs, and will provide access to high-performance capabilities and resources that make it possible for researchers, scientists and engineers across the Department of Defense to gain insights and enable new discoveries across diverse research disciplines to address the Department’s most challenging problems.

    “We are truly proud to partner with Cray to create the world’s most powerful supercomputing platforms. To be selected to help accelerate scientific research and discovery is a testament to our commitment to datacenter innovation,” said Forrest Norrod, senior vice president and general manager, Datacenter and Embedded Solutions Business Group, AMD. “By leveraging breakthrough CPU performance and robust feature set of the 2nd Gen AMD EPYC processors with Cray CS500 supercomputers, the DOD has a tremendous opportunity to grow its computing capabilities and deliver on its missions.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 1:18 pm on August 5, 2019 Permalink | Reply
    Tags: "Fermilab’s HEPCloud goes live", , , , , , , Supercomputing   

    From Fermi National Accelerator Lab: “Fermilab’s HEPCloud goes live” 

    FNAL Art Image by Angela Gonzales

    From Fermi National Accelerator Lab, an enduring source of strength for the US contribution to scientific research worldwide.

    August 5, 2019
    Marcia Teckenbrock

    To meet the evolving needs of high-energy physics experiments, the underlying computing infrastructure must also evolve. Say hi to HEPCloud, the new, flexible way of meeting the peak computing demands of high-energy physics experiments using supercomputers, commercial services and other resources.

    Five years ago, Fermilab scientific computing experts began addressing the computing resource requirements for research occurring today and in the next decade. Back then, in 2014, some of Fermilab’s neutrino programs were just starting up. Looking further into the future, plans were under way for two big projects. One was Fermilab’s participation in the future High-Luminosity Large Hadron Collider at the European laboratory CERN.

    The other was the expansion of the Fermilab-hosted neutrino program, including the international Deep Underground Neutrino Experiment. All of these programs would be accompanied by unprecedented data demands.

    To meet these demands, the experts had to change the way they did business.

    HEPCloud, the flagship project pioneered by Fermilab, changes the computing landscape because it employs an elastic computing model. Tested successfully over the last couple of years, it officially went into production as a service for Fermilab researchers this spring.

    Scientists on Fermilab’s NOvA experiment were able to execute around 2 million hardware threads at the Cori II supercomputer (named after Gerty Cori, the first American woman to win a Nobel Prize in science) at the Office of Science’s National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Scientists on the CMS experiment have been running workflows using HEPCloud at NERSC as a pilot project. Photo: Roy Kaltschmidt, Lawrence Berkeley National Laboratory

    Experiments currently have some fixed computing capacity that meets, but doesn’t overshoot, their everyday needs. For times of peak demand, HEPCloud enables elasticity, allowing experiments to rent computing resources from other sources, such as supercomputers and commercial clouds, and manages them to satisfy peak demand. The prior method was to purchase local resources that, on a day-to-day basis, overshoot the needs. In this new way, HEPCloud reduces the costs of providing computing capacity.
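
    To make the elastic model concrete, here is a minimal Python sketch of the burst-to-rented-capacity idea; the numbers and the cores_to_rent helper are hypothetical illustrations, not HEPCloud's actual provisioning logic or interface.

```python
# Minimal sketch of the elastic idea only; the numbers and the cores_to_rent helper are
# hypothetical illustrations, not HEPCloud's actual provisioning logic or interface.

def cores_to_rent(demand_cores: int, local_cores: int = 10_000) -> int:
    """Rent outside capacity only for the part of demand the local farm cannot cover."""
    return max(0, demand_cores - local_cores)

# A bursty week: mostly routine running, with a two-day reprocessing-campaign spike.
weekly_demand = [6_000, 7_500, 9_000, 150_000, 140_000, 8_000, 6_500]
for day, demand in enumerate(weekly_demand, start=1):
    print(f"day {day}: demand = {demand:>7} cores, rent = {cores_to_rent(demand):>7} cores")
```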

    “Traditionally, we would buy enough computers for peak capacity and put them in our local data center to cover our needs,” said Fermilab scientist Panagiotis Spentzouris, former HEPCloud project sponsor and a driving force behind HEPCloud. “However, the needs of experiments are not steady. They have peaks and valleys, so you want an elastic facility.”

    In addition, HEPCloud optimizes resource usage across all types, whether these resources are on site at Fermilab, on a grid such as Open Science Grid, in a cloud such as Amazon or Google, or at supercomputing centers like those run by the DOE Office of Science Advanced Scientific Computing Research program (ASCR). And it provides a uniform interface for scientists to easily access these resources without needing expert knowledge about where and how best to run their jobs.

    The idea to create a virtual facility to extend Fermilab’s computing resources began in 2014, when Spentzouris and Fermilab scientist Lothar Bauerdick began exploring ways to best provide resources for experiments at CERN’s Large Hadron Collider. The idea was to provide those resources based on the overall experiment needs rather than a certain amount of horsepower. After many planning sessions with computing experts from the CMS experiment at the LHC and beyond, and after a long period of hammering out the idea, a scientific facility called “One Facility” was born. DOE Associate Director of Science for High Energy Physics Jim Siegrist coined the name “HEPCloud” — a computing cloud for high-energy physics — during a general discussion about a solution for LHC computing demands. But interest beyond high-energy physics was also significant. DOE Associate Director of Science for Advanced Scientific Computing Research Barbara Helland was interested in HEPCloud for its relevancy to other Office of Science computing needs.

    The CMS detector at CERN collects data from particle collisions at the Large Hadron Collider. Now that HEPCloud is in production, CMS scientists will be able to run all of their physics workflows on the expanded resources made available through HEPCloud. Photo: CERN

    The project was a collaborative one. In addition to many individuals at Fermilab, Miron Livny at the University of Wisconsin-Madison contributed to the design, enabling HEPCloud to use the workload management system known as Condor (now HTCondor), which is used for all of the lab’s current grid activities.

    Since its inception, HEPCloud has achieved several milestones as it moved through several development phases leading up to production. The project team first demonstrated the use of cloud computing on a significant scale in February 2016, when the CMS experiment used HEPCloud to achieve about 60,000 cores on the Amazon cloud, AWS. In November 2016, CMS again used HEPCloud to run 160,000 cores using Google Cloud Services, doubling the total size of the LHC’s computing worldwide. Most recently, in May 2018, NOvA scientists were able to execute around 2 million hardware threads at a supercomputer at the Office of Science’s National Energy Research Scientific Computing Center (NERSC), increasing both the scale and the amount of resources provided. During these activities, the experiments were executing and benefiting from real physics workflows. NOvA was even able to report significant scientific results at the Neutrino 2018 conference in Germany, one of the most attended conferences in neutrino physics.

    CMS has been running workflows using HEPCloud at NERSC as a pilot project. Now that HEPCloud is in production, CMS scientists will be able to run all of their physics workflows on the expanded resources made available through HEPCloud.

    Next, HEPCloud project members will work to expand the reach of HEPCloud even further, enabling experiments to use the leadership-class supercomputing facilities run by ASCR at Argonne National Laboratory and Oak Ridge National Laboratory.

    Fermilab experts are working to ensure that, eventually, all Fermilab experiments are configured to use these extended computing resources.

    This work is supported by the DOE Office of Science.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.

     
  • richardmitnick 12:01 pm on August 5, 2019 Permalink | Reply
    Tags: "Large cosmological simulation to run on Mira", , , , , , , Supercomputing   

    From Argonne Leadership Computing Facility: “Large cosmological simulation to run on Mira” 

    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    An extremely large cosmological simulation—among the five most extensive ever conducted—is set to run on Mira this fall and exemplifies the scope of problems addressed on the leadership-class supercomputer at the U.S. Department of Energy’s (DOE’s) Argonne National Laboratory.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Argonne physicist and computational scientist Katrin Heitmann leads the project. Heitmann was among the first to leverage Mira’s capabilities when, in 2013, the IBM Blue Gene/Q system went online at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. Among the largest cosmological simulations ever performed at the time, the Outer Rim Simulation she and her colleagues carried out enabled further scientific research for many years.

    For the new effort, Heitmann has been allocated approximately 800 million core-hours to perform a simulation that reflects cutting-edge observational advances from satellites and telescopes and will form the basis for sky maps used by numerous surveys. Evolving a massive number of particles, the simulation is designed to help resolve mysteries of dark energy and dark matter.

    “By transforming this simulation into a synthetic sky that closely mimics observational data at different wavelengths, this work can enable a large number of science projects throughout the research community,” Heitmann said. “But it presents us with a big challenge.” That is, in order to generate synthetic skies across different wavelengths, the team must extract relevant information and perform analysis either on the fly or after the fact in post-processing. Post-processing requires the storage of massive amounts of data—so much, in fact, that merely reading the data becomes extremely computationally expensive.

    Since Mira was launched, Heitmann and her team have implemented in their Hardware/Hybrid Accelerated Cosmology Code (HACC) more sophisticated analysis tools for on-the-fly processing. “Moreover, compared to the Outer Rim Simulation, we’ve effected three major improvements,” she said. “First, our cosmological model has been updated so that we can run a simulation with the best possible observational inputs. Second, as we’re aiming for a full-machine run, volume will be increased, leading to better statistics. Most importantly, we set up several new analysis routines that will allow us to generate synthetic skies for a wide range of surveys, in turn allowing us to study a wide range of science problems.”
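
    For a sense of what a full-machine run at this allocation means, here is a quick back-of-the-envelope conversion; it assumes Mira's published configuration of 49,152 nodes with 16 cores each, a figure not quoted in the article.

```python
# Back-of-the-envelope scale of the allocation, assuming Mira's published configuration
# of 49,152 nodes x 16 cores = 786,432 cores (the core count is not quoted in the article).
core_hours = 800_000_000
mira_cores = 49_152 * 16
full_machine_hours = core_hours / mira_cores
print(f"~{full_machine_hours:.0f} full-machine hours, i.e. about {full_machine_hours / 24:.0f} days "
      f"of dedicated time on the entire system")
```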

    The team’s simulation will address numerous fundamental questions in cosmology and is essential for enabling the refinement of existing predictive tools and aiding the development of new models, impacting both ongoing and upcoming cosmological surveys, including the Dark Energy Spectroscopic Instrument (DESI), the Large Synoptic Survey Telescope (LSST), SPHEREx, and the “Stage-4” ground-based cosmic microwave background experiment (CMB-S4).

    LBNL/DESI spectroscopic instrument on the Mayall 4-meter telescope at Kitt Peak National Observatory, starting in 2018

    NOAO/Mayall 4-meter telescope at Kitt Peak, Arizona, USA, altitude 2,120 m (6,960 ft)

    LSST Camera, built at SLAC

    LSST telescope, currently under construction on the El Peñón peak at Cerro Pachón, Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes

    LSST Data Journey, illustration by Sandbox Studio, Chicago, with Ana Kova

    NASA’s SPHEREx (Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer) depiction

    The value of the simulation derives from its tremendous volume (which is necessary to cover substantial portions of survey areas) and from attaining levels of mass and force resolution sufficient to capture the small structures that host faint galaxies.

    The volume and resolution pose steep computational requirements, and because they are not easily met, few large-scale cosmological simulations are carried out. Contributing to the difficulty of their execution is the fact that the memory footprints of supercomputers have not advanced proportionally with processing speed in the years since Mira’s introduction. This makes that system, despite its relative age, rather optimal for a large-scale campaign when harnessed in full.

    “A calculation of this scale is just a glimpse at what the exascale resources in development now will be capable of in 2021/22,” said Katherine Riley, ALCF Director of Science. “The research community will be taking advantage of this work for a very long time.”

    Funding for the simulation is provided by DOE’s High Energy Physics program. Use of ALCF computing resources is supported by DOE’s Advanced Scientific Computing Research program.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

     
  • richardmitnick 1:31 pm on August 1, 2019 Permalink | Reply
    Tags: Supercomputing

    From Lawrence Berkeley National Lab: “Is your Supercomputer Stumped? There May Be a Quantum Solution” 

    From Lawrence Berkeley National Lab

    August 1, 2019
    Glenn Roberts Jr.
    geroberts@lbl.gov
    (510) 486-5582

    Berkeley Lab-led team solves a tough math problem with quantum computing.

    (Credit: iStock/metamorworks)

    Some math problems are so complicated that they can bog down even the world’s most powerful supercomputers. But a wild new frontier in computing that applies the rules of the quantum realm offers a different approach.

    A new study led by a physicist at Lawrence Berkeley National Laboratory (Berkeley Lab), published in the journal Scientific Reports, details how a quantum computing technique called “quantum annealing” can be used to solve problems relevant to fundamental questions in nuclear physics about the subatomic building blocks of all matter. It could also help answer other vexing questions in science and industry, too.

    Seeking a quantum solution to really big problems

    “No quantum annealing algorithm exists for the problems that we are trying to solve,” said Chia Cheng “Jason” Chang, a RIKEN iTHEMS fellow in Berkeley Lab’s Nuclear Science Division and a research scientist at RIKEN, a scientific institute in Japan.

    “The problems we are looking at are really, really big,” said Chang, who led the international team behind the study, published in the Scientific Reports journal. “The idea here is that the quantum annealer can evaluate a large number of variables at the same time and return the right solution in the end.”

    The same problem-solving algorithm that Chang devised for the latest study, and that is available to the public via open-source code, could potentially be adapted and scaled for use in systems engineering and operations research, for example, or in other industry applications.

    Classical algebra with a quantum computer

    “We are cooking up small ‘toy’ examples just to develop how an algorithm works. The simplicity of current quantum annealers is that the solution is classical – akin to doing algebra with a quantum computer. You can check and understand what you are doing with a quantum annealer in a straightforward manner, without the massive overhead of verifying the solution classically.”

    Chang’s team used a commercial quantum annealer located in Burnaby, Canada, called the D-Wave 2000Q that features superconducting electronic elements chilled to extreme temperatures to carry out its calculations.

    Access to the D-Wave annealer was provided via the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory (ORNL).

    “These methods will help us test the promise of quantum computers to solve problems in applied mathematics that are important to the U.S. Department of Energy’s scientific computing mission,” said Travis Humble, director of ORNL’s Quantum Computing Institute.

    Quantum data: A one, a zero, or both at the same time

    There are currently two of these machines in operation that are available to the public. They work by applying a common rule in physics: Systems in physics tend to seek out their lowest-energy state. For example, in a series of steep hills and deep valleys, a person traversing this terrain would tend to end up in the deepest valley, as it takes a lot of energy to climb out of it and the least amount of energy to settle in this valley.

    The annealer applies this rule to calculations. In a typical computer, memory is stored in a series of bits that are occupied by either one or a zero. But quantum computing introduces a new paradigm in calculations: quantum bits, or qubits. With qubits, information can exist as either a one, a zero, or both at the same time. This trait makes quantum computers better suited to solving some problems with a very large number of possible variables that must be considered for a solution.

    Each of the qubits used in the latest study ultimately produces a result of either a one or a zero by applying the lowest-energy-state rule, and researchers tested the algorithm using up to 30 logical qubits.

    The algorithm that Chang developed to run on the quantum annealer can solve polynomial equations, which are equations that can have both numbers and variables and are set to add up to zero. A variable can represent any number in a large range of numbers.
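
    As a concrete, classical illustration of the annealing idea (a toy sketch only, not the algorithm published by Chang's team), a tiny binary equation can be encoded as an energy function whose minima are exactly its solutions, and then minimized by simulated annealing:

```python
import numpy as np

# Toy illustration of the annealing idea, not the algorithm from the paper:
# encode the tiny binary equation x0 + x1 - 1 = 0 as an energy function
# E(x) = (x0 + x1 - 1)^2, whose minima are exactly its solutions, then look
# for a low-energy state with classical simulated annealing.

rng = np.random.default_rng(1)

def energy(x):
    return (x[0] + x[1] - 1) ** 2

def simulated_annealing(n_bits=2, n_steps=2000, t_start=2.0, t_end=0.01):
    x = rng.integers(0, 2, size=n_bits)
    for step in range(n_steps):
        t = t_start * (t_end / t_start) ** (step / n_steps)   # geometric cooling schedule
        candidate = x.copy()
        candidate[rng.integers(n_bits)] ^= 1                  # flip one randomly chosen bit
        delta = energy(candidate) - energy(x)
        if delta <= 0 or rng.random() < np.exp(-delta / t):   # Metropolis acceptance rule
            x = candidate
    return x, energy(x)

solution, e = simulated_annealing()
print(solution, e)   # expect one of the two ground states, [1 0] or [0 1], with energy 0
```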

    When there are ‘fewer but very dense calculations’

    Berkeley Lab and neighboring UC Berkeley have become a hotbed for R&D in the emerging field of quantum information science, and last year announced the formation of a partnership called Berkeley Quantum to advance this field.

    Berkeley Quantum

    Chang said that the quantum annealing approach used in the study, also known as adiabatic quantum computing, “works well for fewer but very dense calculations,” and that the technique appealed to him because the rules of quantum mechanics are familiar to him as a physicist.

    The data output from the annealer was a series of solutions for the equations sorted into columns and rows. This data was then mapped into a representation of the annealer’s qubits, Chang explained, and the bulk of the algorithm was designed to properly account for the strength of the interaction between the annealer’s qubits. “We repeated the process thousands of times” to help validate the results, he said.

    “Solving the system classically using this approach would take an exponentially long time to complete, but verifying the solution was very quick” with the annealer, he said, because it was solving a classical problem with a single solution. If the problem was quantum in nature, the solution would be expected to be different every time you measure it.

    Real-world applications for a quantum algorithm

    As quantum computers are equipped with more qubits that allow them to solve more complex problems more quickly, they can also potentially lead to energy savings by reducing the use of far larger supercomputers that could take far longer to solve the same problems.

    The quantum approach brings within reach direct and verifiable solutions to problems involving “nonlinear” systems – in which the outcome of an equation does not match up proportionately to the input values. Nonlinear equations are problematic because they may appear more unpredictable or chaotic than other “linear” problems that are far more straightforward and solvable.

    Chang sought the help of quantum-computing experts both in the U.S. and in Japan to develop the successfully tested algorithm. He said he is hopeful the algorithm will ultimately prove useful to calculations that can test how subatomic quarks behave and interact with other subatomic particles in the nuclei of atoms.

    While it will be an exciting next step to work to apply the algorithm to solve nuclear physics problems, “This algorithm is much more general than just for nuclear science,” Chang noted. “It would be exciting to find new ways to use these new computers.”

    The Oak Ridge Leadership Computing Facility is a DOE Office of Science User Facility.

    Researchers from Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, and the RIKEN Computational Materials Science Research Team also participated in the study.

    The study was supported by the U.S. Department of Energy Office of Science; and by Oak Ridge National Laboratory and its Laboratory Directed Research and Development funds. The Oak Ridge Leadership Computing Facility is supported by the DOE Office of Science’s Advanced Scientific Computing Research program.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Bringing Science Solutions to the World
    In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

    Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

    Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

    A U.S. Department of Energy National Laboratory Operated by the University of California.

     
  • richardmitnick 1:08 pm on July 28, 2019 Permalink | Reply
    Tags: A second milestone would be the creation of fault-tolerant quantum computers., But a number of other groups have the potential to achieve quantum supremacy soon including those at IBM; IonQ; Rigetti; and Harvard University., By many accounts Google is knocking on the door of quantum supremacy and could demonstrate it before the end of the year., Circuit size is determined by the number of qubits you start with. Manipulations in a quantum computer are performed using “gates”., Engineers need to be able to build quantum circuits of at least a certain minimum size—and so far they can’t., Extended Church-Turing thesis: Quantum supremacy would be the first experimental violation of that principle and so would usher computer science into a whole new world, If the error rate is too high quantum computers lose their advantage over classical ones., If you run your qubits through 10 gates you’d say your circuit has “depth” 10., Ion traps have a contrasting set of strengths and weaknesses., Let’s consider a circuit that acts on 50 qubits. As the qubits go through the circuit the states of the qubits become intertwined- entangled- in what’s called a quantum superposition., Supercomputing, Superconducting quantum circuits have the advantage of being made out of a solid-state material., The most crucial one is the error that accumulates in a computation each time the circuit performs a gate operation., The problem quantum engineers now face is that as the number of qubits and gates increases so does the error rate., There are many sources of error in a quantum circuit.

    From Nautilus: “Quantum Supremacy Is Coming: Here’s What You Should Know” 

    From Nautilus

    July 2019
    Kevin Hartnett

    IBM iconic image of Quantum computer. Photo: Graham Carlow

    Researchers are getting close to building a quantum computer that can perform tasks a classical computer can’t. Here’s what the milestone will mean.

    Quantum computers will never fully replace “classical” ones like the device you’re reading this article on. They won’t run web browsers, help with your taxes, or stream the latest video from Netflix.

    Lenovo ThinkPad X1 Yoga (OLED)

    What they will do—what’s long been hoped for, at least—will be to offer a fundamentally different way of performing certain calculations. They’ll be able to solve problems that would take a fast classical computer billions of years to perform. They’ll enable the simulation of complex quantum systems such as biological molecules, or offer a way to factor incredibly large numbers, thereby breaking long-standing forms of encryption.

    The threshold where quantum computers cross from being interesting research projects to doing things that no classical computer can do is called “quantum supremacy.” Many people believe that Google’s quantum computing project will achieve it later this year. In anticipation of that event, we’ve created this guide for the quantum-computing curious. It provides the information you’ll need to understand what quantum supremacy means, and whether it’s really been achieved.

    What is quantum supremacy and why is it important?

    To achieve quantum supremacy, a quantum computer would have to perform any calculation that, for all practical purposes, a classical computer can’t.

    In one sense, the milestone is artificial. The task that will be used to test quantum supremacy is contrived—more of a parlor trick than a useful advance (more on this shortly). For that reason, not all serious efforts to build a quantum computer specifically target quantum supremacy. “Quantum supremacy, we don’t use [the term] at all,” said Robert Sutor, the executive in charge of IBM’s quantum computing strategy. “We don’t care about it at all.”

    But in other ways, quantum supremacy would be a watershed moment in the history of computing. At the most basic level, it could lead to quantum computers that are, in fact, useful for certain practical problems.

    There is historical justification for this view. In the 1990s, the first quantum algorithms solved problems nobody really cared about. But the computer scientists who designed them learned things that they could apply to the development of subsequent algorithms (such as Shor’s algorithm for factoring large numbers) that have enormous practical consequences.

    “I don’t think those algorithms would have existed if the community hadn’t first worked on the question ‘What in principle are quantum computers good at?’ without worrying about use value right away,” said Bill Fefferman, a quantum information scientist at the University of Chicago.

    The quantum computing world hopes that the process will repeat itself now. By building a quantum computer that beats classical computers—even at solving a single useless problem—researchers could learn things that will allow them to build a more broadly useful quantum computer later on.

    “Before supremacy, there is simply zero chance that a quantum computer can do anything interesting,” said Fernando Brandão, a theoretical physicist at the California Institute of Technology and a research fellow at Google. “Supremacy is a necessary milestone.”

    In addition, quantum supremacy would be an earthquake in the field of theoretical computer science. For decades, the field has operated under an assumption called the “extended Church-Turing thesis,” which says that a classical computer can efficiently perform any calculation that any other kind of computer can perform efficiently. Quantum supremacy would be the first experimental violation of that principle and so would usher computer science into a whole new world. “Quantum supremacy would be a fundamental breakthrough in the way we view computation,” said Adam Bouland, a quantum information scientist at the University of California, Berkeley.

    How do you demonstrate quantum supremacy?

    By solving a problem on a quantum computer that a classical computer cannot solve efficiently. The problem could be whatever you want, though it’s generally expected that the first demonstration of quantum supremacy will involve a particular problem known as “random circuit sampling.”

    A simple example of a random sampling problem is a program that simulates the roll of a fair die. Such a program runs correctly when it properly samples from the possible outcomes, producing each of the six numbers on the die one-sixth of the time as you run the program repeatedly.

    In place of a die, this candidate problem for quantum supremacy asks a computer to correctly sample from the possible outputs of a random quantum circuit, which is like a series of actions that can be performed on a set of quantum bits, or qubits. Let’s consider a circuit that acts on 50 qubits. As the qubits go through the circuit, the states of the qubits become intertwined, or entangled, in what’s called a quantum superposition. As a result, at the end of the circuit, the 50 qubits are in a superposition of 2^50 possible states. If you measure the qubits, the sea of 2^50 possibilities collapses into a single string of 50 bits. This is like rolling a die, except instead of six possibilities you have 2^50, or about 1 quadrillion, and not all of the possibilities are equally likely to occur.

    Quantum computers, which can exploit purely quantum features such as superpositions and entanglement, should be able to efficiently produce a series of samples from this random circuit that follow the correct distribution. For classical computers, however, there’s no known fast algorithm for generating these samples—so as the range of possible samples increases, classical computers quickly get overwhelmed by the task.

    What’s the holdup?

    As long as quantum circuits remain small, classical computers can keep pace. So to demonstrate quantum supremacy via the random circuit sampling problem, engineers need to be able to build quantum circuits of at least a certain minimum size—and so far, they can’t.

    Circuit size is determined by the number of qubits you start with, combined with the number of times you manipulate those qubits. Manipulations in a quantum computer are performed using “gates,” just as they are in a classical computer. Different kinds of gates transform qubits in different ways—some flip the value of a single qubit, while others combine two qubits in different ways. If you run your qubits through 10 gates, you’d say your circuit has “depth” 10.
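
    To see what the sampling task looks like in practice, here is a toy brute-force simulation in NumPy (an illustrative sketch, not Google's hardware or software) that builds a shallow random circuit on six qubits and samples bit strings from its output distribution. It works only because the state vector is tiny; at 50 or more qubits this direct approach becomes intractable, which is the whole point.

```python
import numpy as np

# Toy brute-force simulation of random circuit sampling on a handful of qubits.
# Illustrative only: feasible here because 2**6 = 64 amplitudes fit trivially in memory.

rng = np.random.default_rng(0)

def random_single_qubit_gate():
    # A Haar-random 2x2 unitary from the QR decomposition of a complex Gaussian matrix.
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_single(state, gate, target, n):
    # Apply a 2x2 gate to one qubit of an n-qubit state vector.
    psi = np.moveaxis(state.reshape([2] * n), target, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, target).reshape(-1)

def apply_cz(state, a, b, n):
    # Controlled-Z: flip the sign of every amplitude where qubits a and b are both 1.
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[a], idx[b] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

n_qubits, depth = 6, 10
state = np.zeros(2 ** n_qubits, dtype=complex)
state[0] = 1.0                                    # start in |000000>

for layer in range(depth):
    for q in range(n_qubits):                     # a layer of random single-qubit gates
        state = apply_single(state, random_single_qubit_gate(), q, n_qubits)
    for a in range(layer % 2, n_qubits - 1, 2):   # entangle alternating neighbouring pairs
        state = apply_cz(state, a, a + 1, n_qubits)

probs = np.abs(state) ** 2
probs /= probs.sum()                              # guard against floating-point drift
samples = rng.choice(2 ** n_qubits, size=10, p=probs)
print([format(int(s), f"0{n_qubits}b") for s in samples])
```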

    To achieve quantum supremacy, computer scientists estimate a quantum computer would need to solve the random circuit sampling problem for a circuit in the ballpark of 70 to 100 qubits with a depth of around 10. If the circuit is much smaller than that, a classical computer could probably still manage to simulate it — and classical simulation techniques are improving all the time.

    Yet the problem quantum engineers now face is that as the number of qubits and gates increases, so does the error rate. And if the error rate is too high, quantum computers lose their advantage over classical ones.

    There are many sources of error in a quantum circuit. The most crucial one is the error that accumulates in a computation each time the circuit performs a gate operation.

    At the moment, the best two-qubit quantum gates have an error rate of around 0.5%, meaning that there’s about one error for every 200 operations. This is astronomically higher than the error rate in a standard classical circuit, where there’s about one error every 10^17 operations. To demonstrate quantum supremacy, engineers are going to have to bring the error rate for two-qubit gates down to around 0.1%.
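
    A quick back-of-the-envelope calculation shows why that factor of five matters at supremacy scale; the gate counts below are hypothetical round numbers, not figures for any particular device.

```python
# Rough success estimate: with per-gate error p and G gate operations, the chance that a
# run finishes with no error at all is roughly (1 - p) ** G, assuming independent errors.
# The gate counts are hypothetical round numbers, not figures for any specific device.
for p in (0.005, 0.001):                 # today's ~0.5% two-qubit error vs. the ~0.1% target
    for gates in (500, 1000):
        print(f"p = {p:.3f}  gates = {gates:4d}  error-free fraction ~ {(1 - p) ** gates:.2f}")
```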

    How will we know for sure that quantum supremacy has been demonstrated?

    Some milestones are unequivocal. Quantum supremacy is not one of them. “It’s not like a rocket launch or a nuclear explosion, where you just watch and immediately know whether it succeeded,” said Scott Aaronson, a computer scientist at the University of Texas, Austin.

    To verify quantum supremacy, you have to show two things: that a quantum computer performed a calculation fast, and that a classical computer could not efficiently perform the same calculation.

    It’s the second part that’s trickiest. Classical computers often turn out to be better at solving certain kinds of problems than computer scientists expected. Until you’ve proved a classical computer can’t possibly do something efficiently, there’s always the chance that a better, more efficient classical algorithm exists. Proving that such an algorithm doesn’t exist is probably more than most people will need in order to believe a claim of quantum supremacy, but such a claim could still take some time to be accepted.

    How close is anyone to achieving it?

    By many accounts Google is knocking on the door of quantum supremacy and could demonstrate it before the end of the year. (Of course, the same was said in 2017.) But a number of other groups have the potential to achieve quantum supremacy soon, including those at IBM, IonQ, Rigetti and Harvard University.

    These groups are using several distinct approaches to building a quantum computer. Google, IBM and Rigetti perform quantum calculations using superconducting circuits. IonQ uses trapped ions. The Harvard initiative, led by Mikhail Lukin, uses rubidium atoms. Microsoft’s approach, which involves “topological qubits,” seems like more of a long shot.

    Each approach has its pros and cons.

    Superconducting quantum circuits have the advantage of being made out of a solid-state material. They can be built with existing fabrication techniques, and they perform very fast gate operations. In addition, the qubits don’t move around, which can be a problem with other technologies. But they also have to be cooled to extremely low temperatures, and each qubit in a superconducting chip has to be individually calibrated, which makes it hard to scale the technology to the thousands of qubits (or more) that will be needed in a really useful quantum computer.

    Ion traps have a contrasting set of strengths and weaknesses. The individual ions are identical, which helps with fabrication, and ion traps give you more time to perform a calculation before the qubits become overwhelmed with noise from the environment. But the gates used to operate on the ions are very slow (thousands of times slower than superconducting gates) and the individual ions can move around when you don’t want them to.

    At the moment, superconducting quantum circuits seem to be advancing fastest. But there are serious engineering barriers facing all of the different approaches. A major new technological advance will be needed before it’s possible to build the kind of quantum computers people dream of. “I’ve heard it said that quantum computing might need an invention analogous to the transistor—a breakthrough technology that performs nearly flawlessly and which is easily scalable,” Bouland said. “While recent experimental progress has been impressive, my inclination is that this hasn’t been found yet.”

    Say quantum supremacy has been demonstrated. Now what?

    If a quantum computer achieves supremacy for a contrived task like random circuit sampling, the obvious next question is: OK, so when will it do something useful?

    The usefulness milestone is sometimes referred to as quantum advantage. “Quantum advantage is this idea of saying: For a real use case—like financial services, AI, chemistry—when will you be able to see, and how will you be able to see, that a quantum computer is doing something significantly better than any known classical benchmark?” said Sutor of IBM, which has a number of corporate clients like JPMorgan Chase and Mercedes-Benz who have started exploring applications of IBM’s quantum chips.

    A second milestone would be the creation of fault-tolerant quantum computers. These computers would be able to correct errors within a computation in real time, in principle allowing for error-free quantum calculations. But the leading proposal for creating fault-tolerant quantum computers, known as “surface code,” requires a massive overhead of thousands of error-correcting qubits for each “logical” qubit that the computer uses to actually perform a computation. This puts fault tolerance far beyond the current state of the art in quantum computing. It’s an open question whether quantum computers will need to be fault tolerant before they can really do anything useful. “There are many ideas,” Brandão said, “but nothing is for sure.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 6:52 am on July 22, 2019 Permalink | Reply
    Tags: QMCPACK, Supercomputing, The quantum Monte Carlo (QMC) family of these approaches is capable of delivering the most highly accurate calculations of complex materials without biasing the results of a property of interest.

    From insideHPC: “Supercomputing Complex Materials with QMCPACK” 

    From insideHPC

    July 21, 2019

    In this special guest feature, Scott Gibson from the Exascale Computing Project writes that computer simulations based on quantum mechanics are getting a boost through QMCPACK.

    The theory of quantum mechanics underlies explorations of the behavior of matter and energy in the atomic and subatomic realms. Computer simulations based on quantum mechanics are consequently essential in designing, optimizing, and understanding the properties of materials that have, for example, unusual magnetic or electrical properties. Such materials would have potential for use in highly energy-efficient electrical systems and faster, more capable electronic devices that could vastly improve our quality of life.

    Quantum mechanics-based simulation methods render robust data by describing materials in a truly first-principles manner. This means they calculate electronic structure in the most basic terms and thus can allow speculative study of systems of materials without reference to experiment, unless researchers choose to add parameters. The quantum Monte Carlo (QMC) family of these approaches is capable of delivering the most highly accurate calculations of complex materials without biasing the results of a property of interest.
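
    To give a flavor of what a quantum Monte Carlo calculation actually does (a textbook toy example, not QMCPACK itself), the sketch below runs variational Monte Carlo for a single particle in a one-dimensional harmonic well: it samples configurations from a trial wavefunction with the Metropolis algorithm and averages the local energy, which is lowest when the trial wavefunction matches the true ground state.

```python
import numpy as np

# Textbook toy example of the quantum Monte Carlo idea, not QMCPACK itself:
# variational Monte Carlo for one particle in a 1D harmonic well (hbar = m = omega = 1),
# with trial wavefunction psi(x) = exp(-alpha * x^2). The exact ground state has
# alpha = 0.5 and energy 0.5, so the sampled energy should be lowest there.

rng = np.random.default_rng(2)

def local_energy(x, alpha):
    # E_L(x) = -(1/2) * psi''(x)/psi(x) + (1/2) * x^2 = alpha + x^2 * (1/2 - 2 * alpha^2)
    return alpha + x * x * (0.5 - 2.0 * alpha ** 2)

def vmc_energy(alpha, n_steps=100_000, step_size=1.0):
    x, energies = 0.0, []
    for i in range(n_steps):
        x_new = x + rng.uniform(-step_size, step_size)
        # Metropolis acceptance using the ratio of |psi|^2 at the new and old positions
        if rng.random() < np.exp(-2.0 * alpha * (x_new ** 2 - x ** 2)):
            x = x_new
        if i > n_steps // 10:                      # discard the burn-in samples
            energies.append(local_energy(x, alpha))
    return np.mean(energies), np.std(energies) / np.sqrt(len(energies))

for alpha in (0.3, 0.5, 0.7):
    e, err = vmc_energy(alpha)
    print(f"alpha = {alpha:.1f}   E = {e:.4f} +/- {err:.4f}")
```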

    An effort within the US Department of Energy’s Exascale Computing Project (ECP) is developing a QMC methods software named QMCPACK to find, predict, and control materials and properties at the quantum level. The ultimate aim is to achieve an unprecedented and systematically improvable accuracy by leveraging the memory and power capabilities of the forthcoming exascale computing systems.

    Greater Accuracy, Versatility, and Performance

    One of the primary objectives of the QMCPACK project is to reduce errors in calculations so that predictions concerning complex materials can be made with greater assurance.

    “We would like to be able to tell our colleagues in experimentation that we have confidence that a certain short list of materials is going to have all the properties that we think they will,” said Paul Kent of Oak Ridge National Laboratory and principal investigator of QMCPACK. “Many ways of cross-checking calculations with experimental data exist today, but we’d like to go further and make predictions where there aren’t experiments yet, such as a new material or where taking a measurement is difficult—for example, in conditions of high pressure or under an intense magnetic field.”

    The methods the QMCPACK team is developing are fully atomistic and material specific. This refers to having the capability to address all of the atoms in the material—whether it be silver, carbon, cerium, or oxygen, for example—compared with more simplified lattice model calculations where the full details of the atoms are not included.

    The team’s current work is restricted to simpler, bulk-like materials, but exascale computing is expected to greatly widen the range of possibilities.

    “At exascale not only the increase in compute power but also important changes in the memory on the machines will enable us to explore material defects and interfaces, more-complex materials, and many different elements,” Kent said.

    With the software engineering, design, and computational aspects of delivering the science as its main focus, the project plans to improve QMCPACK’s performance by at least 50x. In experiments with a mini-app version of the software that incorporated new algorithms, the team has already achieved a 37x improvement on the pre-exascale Summit supercomputer relative to the Titan system.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    ORNL Cray XK7 Titan Supercomputer, once the fastest in the world, to be decommissioned

    One Robust Code

    “We’re taking the lessons we’ve learned from developing the mini app and this proof of concept, the 37x, to update the design of the main application to support this high efficiency, high performance for a range of problem sizes,” Kent said. “What’s crucial for us is that we can move to a single version of the code with no internal forks, to have one source supporting all architectures. We will use all the lessons we’ve learned with experimentation to create one version where everything will work everywhere—then it’s just a matter of how fast. Moreover, in the future we will be able to optimize. But at least we won’t have a gap in the feature matrix, and the student who is running QMCPACK will always have all features work.”
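
    One widely used way to realize “one source supporting all architectures” is to write each kernel once against a common array interface and select the backend at run time. The sketch below is a generic Python illustration of that design choice only; it is not how QMCPACK (a C++ code) is actually structured, and the helper name get_backend is hypothetical.

```python
# Generic single-source kernel sketch: one implementation, backend chosen at run time.
# Illustrates the "no internal forks" idea only; it is not QMCPACK's build strategy.
import numpy as np

def get_backend(prefer_gpu=True):
    """Return an array module: CuPy if a GPU stack is available, otherwise NumPy."""
    if prefer_gpu:
        try:
            import cupy as cp      # optional GPU backend; fall back if not installed
            return cp
        except ImportError:
            pass
    return np

def toy_kernel(xp, walkers):
    """Toy 'kernel' written once against the array API `xp` (NumPy or CuPy)."""
    r = xp.linalg.norm(walkers, axis=1)
    return xp.mean(-0.5 + 1.0 / r)   # placeholder math; same code path on CPU or GPU

xp = get_backend()
walkers = xp.asarray(np.random.default_rng(0).uniform(0.5, 2.0, size=(4096, 3)))
print("backend:", xp.__name__, " value:", float(toy_kernel(xp, walkers)))
```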

    As an open-source and openly developed product, QMCPACK is improving via the help of many contributors. The QMCPACK team recently published the master citation paper for the software’s code; the publication has 48 authors with a variety of affiliations.

    “Developing these large science codes is an enormous effort,” Kent said. “QMCPACK has contributors from ECP researchers, but it also has many past developers. For example, a great deal of development was done for the Knights Landing processor on the Theta supercomputer with Intel. This doubled the performance on all CPU-like architectures.”

    ANL ALCF Theta Cray XC40 supercomputer

    A Synergistic Team

    The QMCPACK project’s collaborative team draws talent from Argonne, Lawrence Livermore, Oak Ridge, and Sandia National Laboratories.

    It also benefits from collaborations with Intel and NVIDIA.


    The staff is split nearly equally between scientific domain specialists and people focused on the software engineering and computer science aspects of the project.

    “Bringing all of this expertise together through ECP is what has allowed us to perform the design study, reach the 37x, and improve the architecture,” Kent said. “All the materials we work with have to be doped, which means incorporating additional elements in them. We can’t run those simulations on Titan but are beginning to do so on Summit with improvements we have made as part of our ECP project. We are really looking forward to the opportunities that will open up when the exascale systems are available.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 8:08 am on July 18, 2019 Permalink | Reply
    Tags: , , , Proposed Expanse supercomputer, , Supercomputing   

    From insideHPC: “NSF Funds $10 Million for ‘Expanse’ Supercomputer at SDSC” 

    From insideHPC

    July 17, 2019

    SDSC Triton HP supercomputer

    SDSC Gordon-Simons supercomputer

    SDSC Dell Comet supercomputer

    The San Diego Supercomputer Center (SDSC) at the University of California San Diego has been awarded a five-year, $10 million grant from the National Science Foundation (NSF) to deploy Expanse, a new supercomputer designed to advance research that is increasingly dependent upon heterogeneous and distributed resources.

    Credit: Ben Tolo, SDSC

    “The name of our new system says it all,” said SDSC Director Michael Norman, the Principal Investigator (PI) for Expanse, and a computational astrophysicist. “As a standalone system, Expanse represents a substantial increase in the performance and throughput compared to our highly successful, NSF-funded Comet supercomputer. But with innovations in cloud integration and composable systems, as well as continued support for science gateways and distributed computing via the Open Science Grid, Expanse will allow researchers to push the boundaries of computing and answer questions previously not possible.”

    The NSF award, which runs from October 1, 2020 to September 30, 2025, is valued at $10 million for acquisition and deployment of Expanse. An additional award will be made in the coming months to support Expanse operations and user support.

    Like SDSC’s Comet supercomputer [above], which is slated to remain in operation through March 2021, Expanse will continue to serve what is referred to as the ‘long tail’ of science. Virtually every discipline, from multi-messenger astronomy, genomics, and the social sciences to more traditional fields such as earth sciences and biology, depends upon these medium-scale, innovative systems for much of its productive computing.

    “Comet’s focus on reliability, throughput, and usability has made it one of the most successful resources for the national community, supporting tens-of-thousands of users across all domains,” said SDSC Deputy Director Shawn Strande, a co-PI and project manager for the new program. “So we took an evolutionary approach with Expanse, assessing community needs, then working with our vendor partners including Dell, Intel, NVIDIA, Mellanox, and Aeon, to design an even better system.”

    Projected to have a peak speed of 5 Petaflop/s, Expanse will roughly double the performance of Comet using Intel’s next-generation processors and NVIDIA GPUs. Expanse will increase the throughput of real-world workloads by a factor of at least 1.3 for both CPU and GPU applications relative to Comet, while supporting an even larger and more diverse research community. Expanse’s accelerated compute nodes will provide a much-needed GPU capability to the user community, serving both well-established applications in areas such as molecular dynamics and the rapidly growing demand for resources to support machine learning and artificial intelligence. A low-latency interconnect based on Mellanox High Data Rate (HDR) InfiniBand will support a fabric topology optimized for jobs of one to a few thousand cores that require medium-scale parallelism.

    Expanse will support the growing diversity in computational and data-intensive workloads with a rich storage environment that includes 12PB of high-performance Lustre, 7PB of object storage, and more than 800TB of NVMe solid state storage.

    “While Expanse will easily support traditional batch-scheduled HPC applications, breakthrough research is increasingly dependent upon carrying out complex workflows that may include near real-time remote sensor data ingestion and big data analysis, interactive data exploration and visualization as well as large-scale computation,” said SDSC Chief Data Science Officer Ilkay Altintas, an Expanse co-PI and the director of SDSC’s Workflows for Data Science (WorDS) Center of Excellence. “One of the key innovations in Expanse is its ability to support so-called composable systems at the continuum of computing with dynamic capabilities. Using tools such as Kubernetes, and workflow software we have developed over the years for projects including the NSF-funded WIFIRE and CHASE-CI programs, Expanse will extend the boundaries of what is possible by integration with the broader computational and data ecosystem.”
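
    As a rough illustration of the composability idea described above, the sketch below uses the standard Kubernetes Python client to launch a containerized analysis step as a batch Job; in a composable environment, steps like this could sit in one workflow alongside traditional batch-scheduled HPC jobs. The image, namespace, and job name are placeholders, and this is not Expanse’s actual integration layer.

```python
# Illustrative only: submit a containerized analysis step as a Kubernetes Job.
# Image, namespace, and job name are placeholders; Expanse's real integration may differ.
from kubernetes import client, config

config.load_kube_config()                      # use local kubeconfig credentials
batch_api = client.BatchV1Api()

container = client.V1Container(
    name="edge-data-analysis",
    image="python:3.11-slim",                  # placeholder container image
    command=["python", "-c", "print('analyzing sensor batch...')"],
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"workflow": "composable-demo"}),
    spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
)
job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="composable-demo-step"),
    spec=client.V1JobSpec(template=template, backoff_limit=1),
)
batch_api.create_namespaced_job(namespace="default", body=job)
print("Submitted Kubernetes Job: composable-demo-step")
```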

    Increasingly, this ecosystem includes public cloud resources. Expanse will feature direct scheduler-integration with the major cloud providers, leveraging high-speed networks to ease data movement to/from the cloud, and opening up new modes of computing made possible by the combination of Expanse’s powerful HPC capabilities and ubiquity of cloud resources and software.

    Like Comet, Expanse will be a key resource within the NSF’s Extreme Science and Engineering Discovery Environment (XSEDE), which comprises the most advanced collection of integrated digital resources and services in the world.

    More details about the program will be available at the SDSC display at SC19 in Denver.

    “The capabilities and services these awards provide will enable the research community to explore new computing models and paradigms,” said Manish Parashar, Office Director of NSF’s Office of Advanced Cyberinfrastructure, which funded this award. “These awards complement NSF’s long-standing investment in advanced computational infrastructure, providing much-needed support for the full range of innovative computational- and data-intensive research being conducted across all of science and engineering.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 9:12 am on July 10, 2019 Permalink | Reply
    Tags: , , Globus Data Transfer, , , , Supercomputing   

    From insideHPC: “Argonne Team Breaks Record with 2.9 Petabytes Globus Data Transfer” 

    From insideHPC

    Today the Globus research data management service announced the largest single file transfer in its history: a team led by Argonne National Laboratory scientists moved 2.9 petabytes of data as part of a research project involving three of the largest cosmological simulations to date.


    “Storage is in general a very large problem in our community — the Universe is just very big, so our work can often generate a lot of data,” explained Katrin Heitmann, an Argonne physicist and computational scientist and an Oak Ridge Leadership Computing Facility (OLCF) Early Science user.

    “Using Globus to easily move the data around between different storage solutions and institutions for analysis is essential.”

    The data in question was stored on the Summit supercomputer at OLCF, currently the world’s fastest supercomputer according to the Top500 list published June 18, 2019. Globus was used to move the files from disk to tape, a key use case for researchers.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    “Due to its uniqueness, the data is very precious and the analysis will take time,” said Dr. Heitmann. “The first step after the simulations were finished was to make a backup copy of the data to HPSS, so we can move the data back and forth between disk and tape and thus carry out the analysis in steps. We use Globus for this work due to its speed, reliability, and ease of use.”
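
    For readers unfamiliar with Globus, the sketch below shows what a scripted transfer of this disk-to-HPSS kind looks like using the public globus-sdk Python package. The endpoint UUIDs, paths, and token handling are placeholders and do not reflect the actual configuration behind the 2.9-petabyte transfer.

```python
# Hypothetical Globus transfer sketch using the public globus-sdk package.
# Endpoint UUIDs, paths, and the token are placeholders, not the real setup.
import globus_sdk

TRANSFER_TOKEN = "REPLACE-WITH-TOKEN"          # obtained via a Globus OAuth2 flow
SOURCE_ENDPOINT = "uuid-of-source-collection"        # e.g. a data transfer node
DEST_ENDPOINT = "uuid-of-destination-collection"     # e.g. an HPSS-backed archive

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
)

tdata = globus_sdk.TransferData(
    tc,
    SOURCE_ENDPOINT,
    DEST_ENDPOINT,
    label="cosmology-backup-sketch",
    sync_level="checksum",     # only (re)copy files whose checksums differ
    verify_checksum=True,      # checksum-verify each file after transfer
)
tdata.add_item("/project/sim/output/", "/archive/backup/output/", recursive=True)

task = tc.submit_transfer(tdata)
print("Submitted transfer task:", task["task_id"])
```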

    “With exascale imminent, AI on the rise, HPC systems proliferating, and research teams more distributed than ever, fast, secure, reliable data movement and management are now more important than ever,” said Ian Foster, Globus co-founder and director of Argonne’s Data Science and Learning Division. “We tend to take these functions for granted, and yet modern collaborative research would not be possible without them.”

    “Globus has underpinned groundbreaking research for decades. We could not be prouder of our role in helping scientists do their world-changing work, and we’re happy to see projects like this one continue to push the boundaries of what Globus can achieve. Congratulations to Dr. Heitmann and team!”

    When it comes to data transfer performance, “the most important part is reliability,” says Dr. Heitmann. “It is basically impossible for me as a user to check the very large amounts of data upon arrival after a transfer has finished. The analysis of the data often uses a subset of the data, so it would take quite a while until bad data would be discovered and at that point we might not have the data anymore at the source. So the reliability aspects of Globus are key.”

    “Of course, speed is also important. If the transfers were very slow, given the amount of data we transfer, we would have had a problem. So it’s good to be able to rely on Globus for fast data movement as well. We are also grateful to Oak Ridge for access to Summit and for their excellent setup of data transfer nodes enabling the use of Globus for HPSS transfers. This work would not have been possible otherwise.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


     