Tagged: Supercomputing

  • richardmitnick 2:33 pm on July 13, 2017 Permalink | Reply
    Tags: Intel “Skylake” Xeon Scalable processors, Supercomputing

    From HPC Wire: “Intel Skylake: Xeon Goes from Chip to Platform” 

    HPC Wire

    July 13, 2017
    Doug Black

    With yesterday’s New York unveiling of the new “Skylake” Xeon Scalable processors, Intel took aim at multiple competitive threats and strategic markets. Skylake will carry Intel’s flag in the fight for leadership in the emerging advanced data center, encompassing highly demanding network workloads, cloud computing, real-time analytics, virtualized infrastructures, high-performance computing and artificial intelligence.

    Most interesting, Skylake takes a big step toward accommodating what one industry analyst has called “the wild west of technology disaggregation,” life in the post-CPU-centric era.

    “What surprised me most is how much platform goodness Intel brought to the table,” said industry watcher Patrick Moorhead, Moor Insights & Strategy, soon after the launch announcement. “I wasn’t expecting so many enhancements outside of the CPU chip itself.”

    In fact, Moorhead said, Skylake turns Xeon into a platform, one that “consists of CPUs, chipset, internal and external accelerators, SSD flash and software stacks.”

    The successor to the Intel Xeon processor E5 and E7 product lines, Skylake has up to 28 high-performance cores and, according to Intel, delivers significant platform-level performance increases, including:

    Artificial Intelligence: Delivers 2.2x higher deep learning training and inference compared to the previous generation, according to Intel, and 113x deep learning performance gains compared to a three-year-old non-optimized server system when combined with software optimizations accelerating delivery of AI-based services.
    Networking: Delivers up to 2.5x increased IPSec forwarding rate for networking applications compared to the previous generation when using Intel QuickAssist and the Data Plane Development Kit.
    Virtualization: Operates up to approximately 4.2x more virtual machines versus a four-year-old system, enabling faster service deployment, higher server utilization, lower energy costs and greater space efficiency.
    High Performance Computing: Provides up to a 2x FLOPs/clock improvement with Intel AVX-512 (the 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for the x86 instruction set architecture) as well as integrated Intel Omni-Path Architecture ports, delivering improved compute capability, I/O flexibility and memory bandwidth, Intel said. (A rough peak-FLOPS sketch follows this list.)
    Storage: Processes up to 5x more IOPS while reducing latency by up to 70 percent versus out-of-the-box NVMe SSDs when combined with Intel Optane SSDs and Storage Performance Development Kit, making data more accessible for advanced analytics.
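
    To put the 2x FLOPs/clock claim in context, here is a minimal back-of-the-envelope sketch of theoretical peak FLOPS for a wider vector unit. The core count, clock and FMA-unit figures below are illustrative assumptions, not published numbers for any specific Xeon SKU.

    # Rough peak-FLOPS estimate: cores x clock x FLOPs-per-cycle.
    # Doubling the vector width from 256-bit (AVX2) to 512-bit (AVX-512)
    # doubles the double-precision FLOPs retired per cycle.
    def peak_gflops(cores, ghz, fma_units, doubles_per_vector):
        flops_per_cycle = fma_units * doubles_per_vector * 2  # each FMA counts as 2 FLOPs
        return cores * ghz * flops_per_cycle

    avx2 = peak_gflops(cores=28, ghz=2.5, fma_units=2, doubles_per_vector=4)    # 256-bit vectors
    avx512 = peak_gflops(cores=28, ghz=2.5, fma_units=2, doubles_per_vector=8)  # 512-bit vectors
    print(f"AVX2:    {avx2:,.0f} GFLOPS")
    print(f"AVX-512: {avx512:,.0f} GFLOPS ({avx512 / avx2:.1f}x per clock)")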

    Overall, Intel said, Skylake delivers a performance increase of up to 1.65x versus the previous generation of Intel processors, and up to 5x on OLTP warehouse workloads versus the current installed base.

    The company also introduced Intel Select Solutions, aimed at simplifying deployment of data center and network infrastructure, with initial solutions delivered on Canonical Ubuntu, Microsoft SQL Server 2016 and VMware vSAN 6.6. Intel said this is an expansion of the Intel Builders ecosystem collaborations and will offer Intel-verified configurations for specific workloads, such as machine learning inference, which are then sold and marketed as packages by OEMs and ODMs under the “Select Solution” sub-brand.

    Intel said the Xeon Scalable platform is supported by a broad ecosystem of partners, including more than 480 Intel Builders and 7,000-plus software vendors, with support from Amazon, AT&T, BBVA, Google, Microsoft, Montefiore, Technicolor and Telefonica.

    But it’s Intel’s support for multiple processing architectures that drew the most attention.

    Moorhead said Skylake enables heterogeneous compute in several ways. “First off, Intel provides the host processor, a Xeon, as you can’t boot to an accelerator. Inside of Xeon, they provide accelerators like AVX-512. Inside Xeon SoCs, Intel has added FPGAs. The PCH contains a QAT accelerator. Intel also has PCIe accelerator cards for QAT and FPGAs.”

    In the end, Moorhead said, the Skylake announcement is directed at datacenter managers “who want to run their apps and do inference on the same machines using the new Xeons.” He cited Amazon’s support for this approach, “so it has merit.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    HPCwire is the #1 news and information resource covering the fastest computers in the world and the people who run them. With a legacy dating back to 1987, HPCwire has enjoyed a tradition of world-class editorial and top-notch journalism, making it the portal of choice for science, technology and business professionals interested in high performance and data-intensive computing. For topics ranging from late-breaking news and emerging technologies in HPC, to new trends, expert analysis, and exclusive features, HPCwire delivers it all and remains the HPC community’s most reliable and trusted resource. Don’t miss a thing – subscribe now to HPCwire’s weekly newsletter recapping the previous week’s HPC news, analysis and information at: http://www.hpcwire.com.

     
  • richardmitnick 2:16 pm on July 13, 2017 Permalink | Reply
    Tags: Supercomputing

    From HPC Wire: “Satellite Advances, NSF Computation Power Rapid Mapping of Earth’s Surface” 

    HPC Wire

    July 13, 2017
    Ken Chiacchia
    Tiffany Jolley

    New satellite technologies have completely changed the game in mapping and geographical data gathering, reducing costs and placing a new emphasis on time series and timeliness in general, according to Paul Morin, director of the Polar Geospatial Center at the University of Minnesota.

    In the second plenary session of the PEARC conference in New Orleans on July 12, Morin described how access to the DigitalGlobe satellite constellation, the NSF XSEDE network of supercomputing centers and the Blue Waters supercomputer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign have enabled his group to map Antarctica—an area of 5.4 million square miles, compared with the 3.7 million square miles of the “lower 48” United States—at 1-meter resolution in two years.

    U Illinois Blue Waters Cray supercomputer

    Nine months later, then-president Barack Obama announced a joint White House initiative involving the NSF and the National Geospatial-Intelligence Agency (NGA) in which Morin’s group mapped a similar area in the Arctic, including the entire state of Alaska, in two years.

    “If I wrote this story in a single proposal I wouldn’t have been able to write [any proposals] afterward,” Morin said. “It’s that absurd.” But the leaps in technology have made what used to be multi-decadal mapping projects—when they could be done at all—into annual events, with even more frequent updates soon to come.

    The inaugural Practice and Experience in Advanced Research Computing (PEARC) conference—with the theme Sustainability, Success and Impact—stresses key objectives for those who manage, develop and use advanced research computing throughout the U.S. and the world. Organizations supporting this new HPC conference include the Advancing Research Computing on Campuses: Best Practices Workshop (ARCC), the Extreme Science and Engineering Development Environment (XSEDE), the Science Gateways Community Institute, the Campus Research Computing (CaRC) Consortium, the Advanced CyberInfrastructure Research and Education Facilitators (ACI-REF) consortium, the National Center for Supercomputing Applications’ Blue Waters project, ESnet, Open Science Grid, Compute Canada, the EGI Foundation, the Coalition for Academic Scientific Computation (CASC) and Internet2.

    Follow the Poop

    One project made possible with the DigitalGlobe constellation—a set of Hubble-like multispectral orbiting telescopes “pointed the other way”—was a University of Minnesota census of emperor penguin populations in Antarctica.

    “What’s the first thing you do if you get access to a bunch of sub-meter-resolution [orbital telescopes covering] Antarctica?” Morin asked. “You point them at penguins.”

    Thanks in part to a lack of predators, the birds over-winter on the ice, huddling in colonies for warmth. Historically these colonies were discovered by accident; Morin’s project enabled the first continent-wide survey to find and estimate the population size of all the colonies.

    The researchers realized that they had a relatively easy way to spot the colonies in the DigitalGlobe imagery: Because the penguins eat beta-carotene-rich krill, their excrement stains the ice red.

    “You can identify their location by looking for poo,” Morin said. The project enabled the first complete population count of emperor penguins: 595,000 birds, plus or minus 14 percent.
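
    The detection idea lends itself to a simple image-processing sketch. The snippet below is illustrative only: it flags red-dominant snow pixels in a hypothetical multispectral tile; the band indices, thresholds and file name are invented for the example rather than taken from the actual WorldView workflow.

    import numpy as np

    def stain_mask(tile, red_band=4, green_band=2, brightness_min=0.4, ratio_min=1.3):
        """Flag pixels that look like krill-stained snow in a (rows, cols, bands) array."""
        red = tile[..., red_band].astype(float)
        green = tile[..., green_band].astype(float)
        bright = tile.mean(axis=-1) > brightness_min          # keep snow/ice, drop dark rock and shadow
        reddish = red / np.maximum(green, 1e-6) > ratio_min   # guano shifts the red/green ratio upward
        return bright & reddish

    # tile = np.load("worldview_tile.npy")   # hypothetical calibrated-reflectance tile
    # print(stain_mask(tile).sum(), "candidate guano pixels")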

    “We started to realize we were onto something,” he added. His group began to wonder if they could leverage the sub-meter-resolution, multispectral, stereo view of the constellation’s WorldView I, II and III satellites to derive the topography of the Antarctic, and later the Arctic. One challenge, he knew, would be finding the computational power to extract topographic data from the stereo images in a reasonable amount of time. He found his answer at the NSF and the NGA.

    “We proposed to a science agency and a combat support agency that we were going to map the topography of 30 degrees of the globe in 24 months.”

    Blue Waters on the Ice

    Morin and his collaborators found themselves in the middle of a seismic shift in topographic technology.

    “Eight years ago, people were doing [this] from the ground,” with a combination of land-based surveys and accurate but expensive LIDAR mapping from aircraft, he said. These methods made sense in places where population and industrial density made the cost worthwhile, but they had left the Antarctic and Arctic largely unmapped.

    Deriving topographic information from the photographs posed a computational problem well beyond the capabilities of a campus cluster. The group did initial computations at the Ohio Supercomputer Center, but needed to expand for the final data analysis.

    Ohio Supercomputer Center

    Ohio Oakley HP supercomputer

    Ohio Ruby HP supercomputer

    Ohio Dell Owens supercomputer

    From 2014 to 2015, Morin used XSEDE resources, most notably Gordon at the San Diego Supercomputer Center and XSEDE’s Extended Collaborative Support Service, to carry out his initial computations.

    SDSC home built Gordon-Simons supercomputer

    XSEDE then helped his group acquire an allocation on Blue Waters, an NSF-funded Cray Inc. system at the University of Illinois’ NCSA with 49,000 CPUs and a peak performance of 13.3 petaFLOPS.

    Collecting the equivalent area of California daily, a now-expanded group of subject experts made use of the polar-orbiting satellites and Blue Waters to derive elevation data. They completed a higher-resolution map of Alaska—the earlier version of which had taken the U.S. Geological Survey 50 years—in a year. While the initial images are licensed for U.S. government use only, the group was able to release the resulting topographic data for public use.

    Mapping Change

    Thanks to the one-meter resolution of their initial analysis, the group quickly found they could identify many man-made structures on the surface. They could also spot vegetation changes such as clearcutting. They could even quantify vegetation regrowth after replanting.

    “We’re watching individual trees growing here.”

    Another set of images he showed in his PEARC17 presentation were before-and-after topographic maps of Nuugaatsiaq, Greenland, which was devastated by a tsunami last month. The Greenland government is using the images, which show both human structures and the landslide that caused the 10-meter tsunami, to plan recovery efforts.

    The activity of the regions’ ice sheets was a striking example of the technology’s capabilities.

    “Ice is a mineral that flows,” Morin said, and so the new topographic data offer much more frequent information about ice-sheet changes driven by climate change than previously available. “We not only have an image of the ice but we know exactly how high it is.”

    Morin also showed an image of the Larsen Ice Shelf revealing a crack that had appeared in the glacier. The real news, though, was that the crack—which created an iceberg the size of the big island of Hawaii—was less than 24 hours old. It had appeared sometime after midnight on July 12.

    “We [now] have better topography for Siberia than we have for Montana,” he noted.

    New Directions

    While the large, high-resolution satellites have already transformed the field, further innovations are coming that could create another shift, Morin said.

    “This is not your father’s topography,” he noted. “Everything has changed; everything is time sensitive; everything is on demand.” In an interview later that morning, he added, “XSEDE, Blue Waters and NSF have changed how earth science happens now.”

    One advance won’t require new technology, just a little more time. While the current topographic dataset is at 1-meter resolution, the data can go tighter with more computation. The satellite images actually have a 30-centimeter resolution, which would allow the project to shift from imaging objects the size of automobiles to those the size of a coffee table.

    At that point, he said, “instead of [just the] presence or absence of trees we’ll be able to tell what species of tree. It doesn’t take re-collection of imagery; it just takes reprocessing.”
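
    A quick, purely illustrative pixel count suggests why that reprocessing is a supercomputing problem; the only inputs here are the stated area and the two ground resolutions.

    # Pixels needed to cover Antarctica at two ground resolutions (illustrative arithmetic only).
    area_km2 = 5.4e6 * 2.59                  # 5.4 million square miles -> roughly 14 million km^2
    px_1m = area_km2 * 1e6 / (1.0 ** 2)      # one pixel per square meter
    px_30cm = area_km2 * 1e6 / (0.3 ** 2)    # one pixel per 0.09 square meters
    print(f"{px_1m:.2e} pixels at 1 m, {px_30cm:.2e} at 30 cm ({px_30cm / px_1m:.1f}x more)")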

    New, massive constellations of CubeSats, such as the Planet company’s toaster-sized Dove satellites now being launched, promise an even more disruptive advance. A swarm of these satellites will provide much more frequent coverage of the entire Earth’s surface than is possible with the large telescopes.

    “The quality isn’t as good, but right now we’re talking about coverage,” Morin said. His group’s work has taken advantage of a system that allows mapping of a major portion of the Earth in a year. “What happens when we have monthly coverage?”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    HPCwire is the #1 news and information resource covering the fastest computers in the world and the people who run them. With a legacy dating back to 1987, HPCwire has enjoyed a tradition of world-class editorial and top-notch journalism, making it the portal of choice for science, technology and business professionals interested in high performance and data-intensive computing. For topics ranging from late-breaking news and emerging technologies in HPC, to new trends, expert analysis, and exclusive features, HPCwire delivers it all and remains the HPC community’s most reliable and trusted resource. Don’t miss a thing – subscribe now to HPCwire’s weekly newsletter recapping the previous week’s HPC news, analysis and information at: http://www.hpcwire.com.

     
  • richardmitnick 12:20 pm on July 13, 2017 Permalink | Reply
    Tags: Chinese Sunway TaihuLight supercomputer currently #1 on the TOP500 list of supercomputers, How supercomputers are uniting the US and China, Supercomputing

    From Science Node: “How supercomputers are uniting the US and China” 

    Science Node bloc
    Science Node

    12 July 2017
    Tristan Fitzpatrick

    Thirty-eight years ago, US President Jimmy Carter and Chinese Vice Premier Deng Xiaoping signed the US-China Agreement on Cooperation in Science and Technology, outlining broad opportunities to promote science and technology research.

    Since then the two nations have worked together on a variety of projects, including energy and climate research. Now, however, there is another goal that each country is working towards: the pursuit of exascale computing.

    At the PEARC17 conference in New Orleans, Louisiana, representatives from the high-performance computing communities in the US and China participated in the first international workshop on American and Chinese collaborations in experience and best practice in supercomputing.

    Both countries face the same challenges in implementing and managing HPC resources across a large nation-state. The hardware and software technologies are rapidly evolving, the user base is ever-expanding, and the technical requirements for maintaining these large and fast machines are escalating.

    It would be a major coup for either country’s scientific prowess if exascale computing could be reached, as it is believed to be roughly the order of processing power of the human brain at the neural level. Initiatives like the Human Brain Project consider it a milestone for advancing computational power.

    “It’s less like an arms race between the two countries to see who gets there first and more like the Olympics,” says Dan Stanzione, executive director at the Texas Advanced Computing Center (TACC).

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell PowerEdge U Texas Austin Stampede supercomputer. Texas Advanced Computing Center, 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer

    “We’d like to win and get the gold medal but hearing what China is doing with exascale research is going to help us get closer to this goal.”

    ___________________________________________________________________

    Exascale refers to computing systems that can perform a billion billion calculations per second — at least 50 times faster than the fastest supercomputers in the US.

    ___________________________________________________________________
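
    For scale, a back-of-the-envelope check (assuming Titan’s roughly 17.6-petaflop Linpack result, then the fastest U.S. figure) is consistent with that “at least 50 times faster” statement:

    \[ 1\ \text{exaflop} = 10^{18}\ \text{FLOPS}, \qquad \frac{10^{18}\ \text{FLOPS}}{17.6 \times 10^{15}\ \text{FLOPS}} \approx 57. \]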

    Despite the bona fides that would be awarded to whoever achieves the milestone first, TACC data mining and statistics group manager Weijia Xu stresses that collaboration is a greater motivator for both the US and China than a race to see who gets there first.

    “I don’t think it’s really a competition,” Xu says. “It’s more of a common goal we all want to reach eventually. How you reach the goal is not exactly clear to everyone yet. Furthermore, there are many challenges ahead, such as how systems can be optimized for various applications.”

    The computational resources at China’s disposal could make it a great ally in the pursuit of exascale power. As of June 2017, China has the two fastest supercomputers on the TOP500 list, followed by five entries from the United States in the top ten.

    Chinese Sunway TaihuLight supercomputer, currently #1 on the TOP500 list of supercomputers.

    “While China has the top supercomputer in the world, China and the US probably have about fifty percent each of those top 500 machines besides the European countries,” says Si Liu, HPC software tools researcher at TACC. “We really believe if we have some collaboration between the US and China, we could do some great projects together and benefit the whole HPC community.”

    Besides pursuing the elusive exascale goal, Stanzione says the workshop opened up other ideas for how to improve the overall performance of HPC efforts in both nations. Workshop participants spoke on topics including in situ simulations, artificial intelligence, and deep learning, among others.

    “We also ask questions like how do we run HPC systems, what do we run on them, and how is that going to change in the next few years,” Stanzione says. “It’s a great time to get together and talk about details of processors, speeds, and feeds.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 1:21 pm on July 8, 2017 Permalink | Reply
    Tags: Supercomputing, UCSD Comet supercomputer

    From Science Node: “Cracking the CRISPR clock” 

    Science Node bloc
    Science Node

    05 Jul, 2017
    Jan Zverina

    SDSC Dell Comet supercomputer

    Capturing the motion of gyrating proteins at time intervals up to one thousand times greater than previous efforts, a team led by University of California, San Diego (UCSD) researchers has identified the myriad structural changes that activate and drive CRISPR-Cas9, the innovative gene-splicing technology that’s transforming the field of genetic engineering.

    By shedding light on the biophysical details governing the mechanics of CRISPR-Cas9 (clustered regularly interspaced short palindromic repeats) activity, the study provides a fundamental framework for designing a more efficient and accurate genome-splicing technology that doesn’t yield the ‘off-target’ DNA breaks currently frustrating the potential of the CRISPR-Cas9 system, particularly for clinical uses.


    Shake and bake. Gaussian accelerated molecular dynamics simulations and state-of-the-art supercomputing resources reveal the conformational change of the HNH domain (green) from its inactive to active state. Courtesy Giulia Palermo, McCammon Lab, UC San Diego.

    “Although the CRISPR-Cas9 system is rapidly revolutionizing life sciences toward a facile genome editing technology, structural and mechanistic details underlying its function have remained unknown,” says Giulia Palermo, a postdoctoral scholar with the UC San Diego Department of Pharmacology and lead author of the study [PNAS].

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 2:39 pm on July 3, 2017 Permalink | Reply
    Tags: Argonne’s Theta supercomputer goes online, Supercomputing

    From ALCF: “Argonne’s Theta supercomputer goes online” 

    Argonne Lab
    News from Argonne National Laboratory

    ALCF

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    July 3, 2017
    Laura Wolf

    Theta, a new production supercomputer located at the U.S. Department of Energy’s Argonne National Laboratory, is officially open to the research community. The new machine’s massively parallel, many-core architecture continues Argonne’s leadership computing program towards its future Aurora system.

    Theta was built onsite at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility, where it will operate alongside Mira, an IBM Blue Gene/Q supercomputer. Both machines are fully dedicated to supporting a wide range of scientific and engineering research campaigns. Theta, an Intel-Cray system, entered production on July 1.

    The new supercomputer will immediately begin supporting several 2017-2018 DOE Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge (ALCC) projects. The ALCC is a major allocation program that supports scientists from industry, academia, and national laboratories working on advancements in targeted DOE mission areas. Theta will also support projects from the ALCF Data Science Program, ALCF’s discretionary award program, and, eventually, the DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program—the major means by which the scientific community gains access to the DOE’s fastest supercomputers dedicated to open science.

    Designed in collaboration with Intel and Cray, Theta is a 9.65-petaflops system based on the second-generation Intel Xeon Phi processor and Cray’s high-performance computing software stack. Capable of nearly 10 quadrillion calculations per second, Theta will enable researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.

    “Theta’s unique architectural features represent a new and exciting era in simulation science capabilities,” said ALCF Director of Science Katherine Riley. “These same capabilities will also support data-driven and machine-learning problems, which are increasingly becoming significant drivers of large-scale scientific computing.”

    Now that Theta is available as a production resource, researchers can apply for computing time through the facility’s various allocation programs. Although the INCITE and ALCC calls for proposals recently closed, researchers can apply for Director’s Discretionary awards at any time.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 5:54 am on July 1, 2017 Permalink | Reply
    Tags: Supercomputing

    From LLNL: “National labs, industry partners prepare for new era of computing through Centers of Excellence” 


    Lawrence Livermore National Laboratory

    June 30, 2017
    Jeremy Thomas
    thomas244@llnl.gov
    925-422-5539

    IBM employees and Lab code and application developers held a “Hackathon” event in June to work on coding challenges for a predecessor system to the Sierra supercomputer. Through the ongoing Centers of Excellence (CoE) program, employees from IBM and NVIDIA have been on-site to help LLNL developers transition applications to the Sierra system, which will have a completely different architecture than the Lab has had before. Photo by Jeremy Thomas/LLNL

    The Department of Energy’s drive toward the next generation of supercomputers, “exascale” machines capable of more than a quintillion (10^18) calculations per second, isn’t simply about boasting the fastest processing machines on the planet. At Lawrence Livermore National Laboratory (LLNL) and other DOE national labs, these systems will play a vital role in the National Nuclear Security Administration’s (NNSA) core mission of ensuring the safety and performance of the nation’s nuclear stockpile in the absence of underground testing.

    The driving force behind faster, more robust computing power is the need for simulations and codes that are higher resolution, increasingly predictive and incorporate more complex physics. It’s an evolution that is changing the way the national labs’ application and code developers approach design. To aid in the transition and prepare researchers for pre-exascale and exascale systems, LLNL has brought experts from IBM and NVIDIA together with Lab computer scientists in a Center of Excellence (CoE), a co-design strategy born out of the need for vendors and government to work together to optimize emerging supercomputing systems.

    “There are disruptive machines coming down the pike that are changing things out from under us,” said Rob Neely, an LLNL computer scientist and Weapon Simulation & Computing Program coordinator for Computing Environments. “We need a lot of time to prepare; these applications need insight, and who better to help us with that than the companies who will build the machines? The idea is that when a machine gets here, we’re not caught flat-footed. We want to hit the ground running right away.”

    While LLNL’s exascale system isn’t scheduled for delivery until 2023, Sierra, the Laboratory’s pre-exascale system, is on track to begin installation this fall and will begin running science applications at full machine scale by early next spring.

    LLNL IBM Sierra supercomputer

    Built by IBM and NVIDIA, Sierra will have about six times more computing power than LLNL’s current behemoth, Sequoia.

    Sequoia at LLNL

    The Sierra system is unique for the Lab in that it’s made up of two kinds of hardware — IBM CPUs and NVIDIA GPUs — with different memory spaces associated with each type of computing device, and a programming model more complex than any LLNL scientists have programmed to in the past. In the meantime, Lab scientists are receiving guidance from experts from the two companies, using a small predecessor system that is already running some components and has some of the technological features that Sierra will have.

    LLNL’s Center of Excellence, which began in 2014, involves about a half dozen IBM and NVIDIA personnel on-site, plus a number of remote collaborators who work with Lab developers. The team is on hand to answer any questions Lab computer scientists have, educate LLNL personnel in best practices for coding hybrid systems, develop optimization strategies, debug, and advise on the global code restructuring that is often needed to obtain performance. The CoE is a symbiotic relationship — LLNL scientists get a feel for how Sierra will operate, and IBM and NVIDIA gain better insight into what the Lab’s needs are and what the machines they build are capable of.

    “We see how the systems we design and develop are being used and how effective they can be,” said IBM research staff member Leopold Grinberg, who works on the LLNL site. “You really need to get into the mind of the developer to understand how they use the tools. To sit next to the developers’ seats and let them drive, to observe them, gives us a good idea of what we are doing right and what needs to be improved. Our experts have an intimate knowledge of how the system works, and having them side-by-side with Lab employees is very useful.”

    Sierra, Grinberg explained, will use a completely different system architecture than what has been used before at LLNL. It’s not only faster than any machine the Lab has had, it also has different tools built into the compilers and programming models. In some cases, the changes developers need to make are substantial, requiring restructuring hundreds or thousands of lines of code. Through the CoE, Grinberg said he’s learning more about how the system will be used for production scientific applications.

    “It’s a constant process of learning for everybody,” Grinberg said. “It’s fun, it’s challenging. We gather the knowledge and it’s also our job to distribute it. There’s always some knowledge to be shared. We need to bring the experience we have with heterogeneous systems and emerging programming models to the Lab, and help people generate updated codes or find out what can be kept as is to optimize the system we build. It’s been very fruitful for both parties.”

    The CoE strategy is also being implemented at Oak Ridge National Laboratory, which is bringing in a heterogeneous system of its own called Summit. Other CoE programs are in place at Los Alamos and Lawrence Berkeley national laboratories. Each CoE has a similar goal of preparing computational scientists with the tools they will need to utilize pre-exascale and exascale systems. Since Livermore is new to using GPUs for the bulk of its computing power, preparing for the Sierra architecture places a heavy emphasis on figuring out which sections of a multi-physics application are the most performance-critical, and on the code restructuring that must take place to use the system most effectively.

    “Livermore and Oak Ridge scientists are really pushing the boundaries of the scale of these GPU-based systems,” said Max Katz, a solutions architect at NVIDIA who spends four days a week at LLNL as a technical adviser. “Part of our motivation is to understand machine learning and how to make it possible to merge high-performance computing with the applications demanded by industry. The CoE is essential because it’s difficult for any one party to predict how these CPU/GPU systems will behave together. Each one of us brings in expertise and by sharing information, it makes us all more well-rounded. It’s a great opportunity.”

    In fact, the opportunity was so compelling that in 2016 the CoE was augmented with a three-year institutional component (dubbed the Institutional Center of Excellence, or iCE) to ensure that other mission critical efforts at the Laboratory also could participate. This has added nine applications development efforts, including one in data science, and expanded the number of IBM and NVIDIA personnel. By working together cooperatively, many more types of applications can be explored, performance solutions developed and shared among all the greater CoE code teams.

    “At the end of the iCE project, the real value will be not only that some important institutional applications run well, but that every directorate at LLNL will have trained staff with expertise in using Sierra, and we’ll have documented lessons learned to help train others,” said Bert Still, leader for Application Strategy (Livermore Computing).

    Steve Rennich, a senior HPC developer-technology engineer with NVIDIA, visits the Lab once a week to help LLNL scientists port mission-critical applications optimized for CPUs over to NVIDIA GPUs, which have an order of magnitude greater computing power than CPUs. Besides writing bug-free code, Rennich said, the goal is to improve performance enough to meet the Lab’s considerable computing requirements.

    “The challenge is they’re fairly complex codes so to do it correctly takes a fair amount of attention to detail,” Rennich said. “It’s about making sure the new system can handle as large a model as the Lab needs. These are colossal machines, so when you create applications at this scale, it’s like building a race car. To take advantage of this increase in performance, you need all the pieces to fit and work together.”
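
    The flavor of that porting work can be suggested with a toy example. The sketch below is not one of the Lab’s codes (those are large C++ and Fortran applications tuned for Sierra’s compilers); it only illustrates the general pattern of turning a per-element CPU loop into a GPU kernel, here using Python with Numba’s CUDA support, and it requires a CUDA-capable GPU to run.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def axpy_kernel(a, x, y, out):
        """Each GPU thread handles one element of the former CPU loop."""
        i = cuda.grid(1)               # global thread index
        if i < x.size:                 # guard against the final partial block
            out[i] = a * x[i] + y[i]

    n = 1 << 20
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)
    out = np.empty_like(x)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    axpy_kernel[blocks, threads_per_block](np.float32(2.0), x, y, out)  # host arrays are copied automatically
    print(out[:4])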

    Current plans are to continue the existing Center of Excellence at LLNL at least into 2019, when Sierra is fully operational. Until then, having experts working shoulder-to-shoulder with Lab developers to write code will be a huge benefit to all parties, said LLNL’s Neely, who wants the collaboration to publish its discoveries and share them with the broader computing community.

    “We’re focused on the issue at hand, and moving things toward getting ready for these machines is hugely beneficial,” Neely said. “These are very large applications developed over decades, so ultimately it’s the code teams that need to be ready to take this over. We’ve got to make this work because we need to ensure the safety and performance of the U.S. stockpile in the absence of nuclear testing. We’ve got the right teams and people to pull this off.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 7:53 pm on June 25, 2017 Permalink | Reply
    Tags: Burmese pythons (as well as other snakes) massively downregulate their metabolic and physiological functions during extended periods of fasting During this time their organs atrophy saving energy Howe, Evolution takes eons but it leaves marks on the genomes of organisms that can be detected with DNA sequencing and analysis, Researchers use Supercomputer to Uncover how Pythons Regenerate Their Organs, Supercomputing, The Role of Supercomputing in Genomics Research, understanding the mechanisms by which Burmese pythons regenerate their organs including their heart liver kidney and small intestines after feeding, Within 48 hours of feeding Burmese pythons can undergo up to a 44-fold increase in metabolic rate and the mass of their major organs can increase by 40 to 100 percent

    From UT Austin: “Researchers use Supercomputer to Uncover how Pythons Regenerate Their Organs” 

    U Texas Austin bloc

    University of Texas at Austin

    06/22/2017
    No writer credit found

    A Burmese python superimposed on an analysis of gene expression that uncovers how the species changes in its organs upon feeding. Todd Castoe

    Evolution takes eons, but it leaves marks on the genomes of organisms that can be detected with DNA sequencing and analysis.

    As methods for studying and comparing genetic data improve, scientists are beginning to decode these marks to reconstruct the evolutionary history of species, as well as how variants of genes give rise to unique traits.

    A research team at the University of Texas at Arlington led by assistant professor of biology Todd Castoe has been exploring the genomes of snakes and lizards to answer critical questions about these creatures’ evolutionary history. For instance, how did they develop venom? How do they regenerate their organs? And how do evolutionarily-derived variations in genes lead to variations in how organisms look and function?

    “Some of the most basic questions drive our research. Yet trying to understand the genetic explanations of such questions is surprisingly difficult considering most vertebrate genomes, including our own, are made up of literally billions of DNA bases that can determine how an organism looks and functions,” says Castoe. “Understanding these links between differences in DNA and differences in form and function is central to understanding biology and disease, and investigating these critical links requires massive computing power.”

    To uncover new insights that link variation in DNA with variation in vertebrate form and function, Castoe’s group uses supercomputing and data analysis resources at the Texas Advanced Computing Center (TACC), one of the world’s leading centers for computational discovery.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell PowerEdge U Texas Austin Stampede supercomputer. Texas Advanced Computing Center, 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer

    Recently, they used TACC’s supercomputers to understand the mechanisms by which Burmese pythons regenerate their organs — including their heart, liver, kidney, and small intestines — after feeding.

    Burmese pythons (as well as other snakes) massively downregulate their metabolic and physiological functions during extended periods of fasting. During this time their organs atrophy, saving energy. However, upon feeding, the size and function of these organs, along with their ability to generate energy, dramatically increase to accommodate digestion.

    Within 48 hours of feeding, Burmese pythons can undergo up to a 44-fold increase in metabolic rate and the mass of their major organs can increase by 40 to 100 percent.

    Writing in BMC Genomics in May 2017, the researchers described their efforts to compare gene expression in pythons that were fasting, one day post-feeding and four days post-feeding. They sequenced pythons in these three states and identified 1,700 genes that were significantly different pre- and post-feeding. They then performed statistical analyses to identify the key drivers of organ regeneration across different types of tissues.
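
    As a rough illustration of this kind of pre- versus post-feeding comparison (not the study’s actual pipeline, which used standard RNA-seq differential-expression tooling), a minimal sketch with simulated expression values might look like the following; the gene counts, sample sizes and thresholds are all invented.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_genes, n_fasted, n_fed = 2000, 6, 6
    fasted = rng.normal(5.0, 1.0, size=(n_genes, n_fasted))   # log-scale expression, fasting snakes
    fed = rng.normal(5.0, 1.0, size=(n_genes, n_fed))          # one day post-feeding
    fed[:100] += 3.0                                            # pretend 100 genes are upregulated

    t, p = stats.ttest_ind(fed, fasted, axis=1)                 # per-gene two-sample t-test
    log2_fc = fed.mean(axis=1) - fasted.mean(axis=1)
    # Toy thresholds; a real pipeline would apply multiple-testing correction.
    significant = (p < 1e-3) & (np.abs(log2_fc) > 1)
    print(significant.sum(), "genes flagged as differentially expressed")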

    What they found was that a few sets of genes were influencing the wholesale change of pythons’ internal organ structure. Key proteins, produced and regulated by these important genes, activated a cascade of diverse, tissue-specific signals that led to regenerative organ growth.

    Intriguingly, even mammalian cells have been shown to respond to serum produced by post-feeding pythons, suggesting that the signaling function is conserved across species and could one day be used to improve human health.

    “We’re interested in understanding the molecular basis of this phenomenon to see what genes are regulated related to the feeding response,” says Daren Card, a doctoral student in Castoe’s lab and one of the authors of the study. “Our hope is that we can leverage our understanding of how snakes accomplish organ regeneration to one day help treat human diseases.”

    Making Evolutionary Sense of Secondary Contact

    Castoe and his team used a similar genomic approach to understand gene flow in two closely related species of western rattlesnakes with an intertwined genetic history.

    The two species live on opposite sides of the Continental Divide in Mexico and the U.S. They were separated for thousands of years and evolved in response to different climates and habitat. However, over time their geographic ranges came back together to the point that the rattlesnakes began to crossbreed, leading to hybrids, some of which live in a region between the two distinct climates.

    The work was motivated by a desire to understand what forces generate and maintain distinct species, and how shifts in the ranges of species (for example, due to global change) may impact species and speciation.

    The researchers compared thousands of genes in the rattlesnakes’ nuclear DNA to study genomic differentiation between the two lineages. Their comparisons revealed a relationship between genetic traits that are most important in evolution during isolation and those that are most important during secondary contact, with greater-than-expected overlap between genes in these two scenarios.

    However, they also found regions of the rattlesnake genome that are important in only one of these two scenarios. For example, genes functioning in venom composition and in reproductive differences — distinct traits that are important for adaptation to the local habitat — likely diverged under selection when these species were isolated. They also found other sets of genes that were not originally important for diversification of form and function, that later became important in reducing the viability of hybrids. Overall, their results provide a genome-scale perspective on how speciation might work that can be tested and refined in studies of other species.

    The team published their results in the April 2017 issue of Ecology and Evolution.

    The Role of Supercomputing in Genomics Research

    The studies performed by members of the Castoe lab rely on advanced computing for several aspects of the research. First, they use advanced computing to create genome assemblies — putting millions of small chunks of DNA in the correct order.

    “Vertebrate genomes are typically on the larger side, so it takes a lot of computational power to assemble them,” says Card. “We use TACC a lot for that.”

    Next, the researchers use advanced computing to compare the results among many different samples, from multiple lineages, to identify subtle differences and patterns that would not be distinguishable otherwise.

    Castoe’s lab has their own in-house computers, but they fall short of what is needed to perform all of the studies the group is interested in working on.

    “In terms of genome assemblies and the very intensive analyses we do, accessing larger resources from TACC is advantageous,” Card says. “Certain things benefit substantially from the general output from TACC machines, but they also allow us to run 500 jobs at the same time, which speeds up the research process considerably.”
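
    For a sense of what “500 jobs at the same time” looks like in practice, that throughput pattern is commonly expressed as a scheduler job array. The sketch below builds a hypothetical Slurm submission from Python; the script name, time limit and array size are made up, and the exact scheduler setup on TACC systems may differ.

    import subprocess

    # Fan out 500 replicate analyses as one Slurm job array (hypothetical example).
    cmd = ["sbatch", "--array=1-500", "--time=04:00:00", "run_replicate.sh"]
    print("would submit:", " ".join(cmd))
    # subprocess.run(cmd, check=True)   # uncomment on a cluster where Slurm is available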

    A third computer-driven approach lets the team simulate the process of genetic evolution over millions of generations using synthetic biological data to deduce the rules of evolution, and to identify genes that may be important for adaptation.

    For one such project, the team developed a new software tool called GppFst that allows researchers to differentiate genetic drift – a neutral process whereby allele frequencies naturally change due to random sampling between generations within a population – from genetic variation that is indicative of evolutionary change caused by natural selection.

    The tool uses simulations to statistically determine which changes are meaningful and can help biologists better understand the processes that underlie genetic variation. They described the tool in the May 2017 issue of Bioinformatics.
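
    To make the drift-versus-selection idea concrete, here is a minimal Wright-Fisher sketch that simulates neutral drift at many unlinked loci and summarizes divergence with Fst. It only illustrates the kind of neutral null expectation such comparisons rest on; it is not the GppFst model, and the population sizes, generation counts and locus counts are invented.

    import numpy as np

    rng = np.random.default_rng(1)

    def drift(p0, n_individuals, generations, n_loci):
        """Neutral Wright-Fisher drift: binomial resampling of 2N allele copies each generation."""
        p = np.full(n_loci, p0)
        for _ in range(generations):
            p = rng.binomial(2 * n_individuals, p) / (2 * n_individuals)
        return p

    # Two populations drifting independently from the same starting frequency.
    p_a = drift(0.5, n_individuals=500, generations=500, n_loci=5000)
    p_b = drift(0.5, n_individuals=500, generations=500, n_loci=5000)

    p_bar = (p_a + p_b) / 2
    h_t = 2 * p_bar * (1 - p_bar)                      # expected total heterozygosity
    h_s = p_a * (1 - p_a) + p_b * (1 - p_b)            # mean within-population heterozygosity
    with np.errstate(divide="ignore", invalid="ignore"):
        fst = np.where(h_t > 0, (h_t - h_s) / h_t, 0.0)  # loci fixed in both populations contribute 0
    print("mean neutral Fst:", round(float(fst.mean()), 3))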

    Lab members are able to access TACC resources through a unique initiative, called the University of Texas Research Cyberinfrastructure, which gives researchers from the state’s 14 public universities and health centers access to TACC’s systems and staff expertise.

    “It’s been integral to our research,” said Richard Adams, another doctoral student in Castoe’s group and the developer of GppFst. “We simulate large numbers of different evolutionary scenarios. For each, we want to have hundreds of replicates, which are required to fully vet our conclusions. There’s no way to do that on our in-house systems. It would take 10 to 15 years to finish what we would need to do with our own machines — frankly, it would be impossible without the use of TACC systems.”

    Though the roots of evolutionary biology can be found in field work and close observation, today, the field is deeply tied to computing, since the scale of genetic material — tiny but voluminous — cannot be viewed with the naked eye or put in order by an individual.

    “The massive scale of genomes, together with rapid advances in gathering genome sequence information, has shifted the paradigm for many aspects of life science research,” says Castoe.

    “The bottleneck for discovery is no longer the generation of data, but instead is the analysis of such massive datasets. Data that takes less than a few weeks to generate can easily take years to analyze, and flexible shared supercomputing resources like TACC have become more critical than ever for advancing discovery in our field, and broadly for the life sciences.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    U Texas Arlington Campus

    In 1839, the Congress of the Republic of Texas ordered that a site be set aside to meet the state’s higher education needs. After a series of delays over the next several decades, the state legislature reinvigorated the project in 1876, calling for the establishment of a “university of the first class.” Austin was selected as the site for the new university in 1881, and construction began on the original Main Building in November 1882. Less than one year later, on Sept. 15, 1883, The University of Texas at Austin opened with one building, eight professors, one proctor, and 221 students — and a mission to change the world. Today, UT Austin is a world-renowned higher education, research, and public service institution serving more than 51,000 students annually through 18 top-ranked colleges and schools.

     
  • richardmitnick 2:06 pm on June 19, 2017 Permalink | Reply
    Tags: Piz Daint, PRACE - Partnership for Advanced Computing in Europe, Supercomputing

    From ETH: “Piz Daint is a world leader” 

    ETH Zurich bloc

    ETH Zürich

    19.06.2017
    Anna Maltsev
    Felix Würsten

    Cray Piz Daint supercomputer of the Swiss National Supercomputing Center (CSCS)

    After an extensive hardware upgrade at the end of last year, the CSCS supercomputer Piz Daint is now the most powerful supercomputer outside Asia. With a peak performance in excess of 20 petaflops, it will enable pioneering research in Switzerland and Europe.

    The Piz Daint supercomputer at the Swiss National Supercomputing Centre (CSCS) has been the most powerful supercomputer in Europe since November 2013. An extensive hardware upgrade at the end of 2016 has now more than tripled its performance. Piz Daint is now the fastest computer outside Asia, with a theoretical peak performance of 25.3 petaflops, as confirmed today at the international ISC High Performance event in Frankfurt. Thanks to its innovative architecture, Piz Daint is also one of the most energy-efficient supercomputers in the world.

    Strategic approach

    The upgraded supercomputer is an energy-efficient hybrid system consisting of conventional processors (CPUs) and graphics processors (GPUs). The sophisticated system, based on a Cray XC40/XC50, is the result of a long-standing collaboration between CSCS, various hardware manufacturers, computer scientists, mathematicians and other researchers from different disciplines. This successful collaboration was initiated by the national Strategic Plan for High Performance Computing and Networking (HPCN Strategy) that was launched by the ETH Board on behalf of the federal government in 2009.

    Support for research

    Supercomputers are now an integral part of research: in addition to theory and experiments, simulations, data analyses and visualisations now also make key contributions to most research areas. Powerful systems such as Piz Daint are crucial for high-resolution computer-intensive simulations, such as those used in climate or material research, or in the life sciences.

    In data science, which has an increasingly important role and is one of ETH Zürich’s main focus areas, supercomputers enable the analysis of enormous amounts of data. This is an area in which Piz Daint is particularly strong: it is able to analyse the resulting data while the calculations are still ongoing.

    Important for international cooperation

    Piz Daint is also an important element in international research collaborations: as of this spring, CSCS with Piz Daint is one of the main providers of computing power in the Partnership for Advanced Computing in Europe (PRACE). This commitment then benefits Swiss researchers in turn, as CSCS’s participation in PRACE gives them access to various other European supercomputers.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ETH Zurich campus
    ETH Zurich is one of the leading international universities for technology and the natural sciences. It is well known for its excellent education, ground-breaking fundamental research and for implementing its results directly into practice.

    Founded in 1855, ETH Zurich today has more than 18,500 students from over 110 countries, including 4,000 doctoral students. To researchers, it offers an inspiring working environment; to students, a comprehensive education.

    Twenty-one Nobel Laureates have studied, taught or conducted research at ETH Zurich, underlining the excellent reputation of the university.

     
  • richardmitnick 9:27 am on June 18, 2017 Permalink | Reply
    Tags: ECP, Supercomputing

    From ECP via LLNL: “On the Path to the Nation’s First Exascale Supercomputers: PathForward” 


    Lawrence Livermore National Laboratory

    ECP

    06/15/17

    Department of Energy Awards Six Research Contracts Totaling $258 Million to Accelerate U.S. Supercomputing Technology.

    Today U.S. Secretary of Energy Rick Perry announced that six leading U.S. technology companies will receive funding from the Department of Energy’s Exascale Computing Project (ECP) as part of its new PathForward program, accelerating the research necessary to deploy the nation’s first exascale supercomputers.

    The awardees will receive funding for research and development to maximize the energy efficiency and overall performance of future large-scale supercomputers, which are critical for U.S. leadership in areas such as national security, manufacturing, industrial competitiveness, and energy and earth sciences. The $258 million in funding will be allocated over a three-year contract period, with companies providing additional funding amounting to at least 40 percent of their total project cost, bringing the total investment to at least $430 million.

    “Continued U.S. leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation,” said Secretary Perry.

    “These awards will enable leading U.S. technology firms to marshal their formidable skills, expertise, and resources in the global race for the next stage in supercomputing—exascale-capable systems.”

    “The PathForward program is critical to the ECP’s co-design process, which brings together expertise from diverse sources to address the four key challenges: parallelism, memory and storage, reliability and energy consumption,” ECP Director Paul Messina said. “The work funded by PathForward will include development of innovative memory architectures, higher-speed interconnects, improved reliability systems, and approaches for increasing computing power without prohibitive increases in energy demand. It is essential that private industry play a role in this work going forward: advances in computer hardware and architecture will contribute to meeting all four challenges.”

    The following U.S. technology companies are the award recipients:

    Advanced Micro Devices (AMD)
    Cray Inc. (CRAY)
    Hewlett Packard Enterprise (HPE)
    International Business Machines (IBM)
    Intel Corp. (Intel)
    NVIDIA Corp. (NVIDIA)

    The Department’s funding for this program is supporting R&D in three areas—hardware technology, software technology, and application development—with the intention of delivering at least one exascale-capable system by 2021.

    Exascale systems will be at least 50 times faster than the nation’s most powerful computers today, and global competition for this technological dominance is fierce. While the U.S. has five of the 10 fastest computers in the world, its most powerful — the Titan system at Oak Ridge National Laboratory — ranks third behind two systems in China. However, the U.S. retains global leadership in the actual application of high performance computing to national security, industry, and science.
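    For scale, using the usual Linpack yardstick: an exascale system performs 10^18 floating-point operations per second, i.e. 1,000 petaflops, while Titan's measured performance is roughly 17.6 petaflops. Since 1,000 / 17.6 ≈ 57, "at least 50 times faster" is consistent with that baseline.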

    Additional information and attributed quotes from the vendors receiving the PathForward funding can be found here [See below].

    [It is this writer’s opinion that none of this funding should be necessary. These are all for-profit companies that stand only to gain from their own investments in this work.]

    Advanced Micro Devices (AMD)

    Sunnyvale, California

    “AMD is excited to extend its long-term computing partnership with the U.S. Government in its PathForward program for exascale computing. We are thrilled to see AMD’s unique blend of high-performance computing and graphics technologies drive the industry forward and enable breakthroughs like exascale computing. This technology collaboration will drive outstanding performance and power-efficiency on applications ranging from scientific computing to machine learning and data analytics. As part of PathForward, AMD will explore processors, memory architectures, and high-speed interconnects to improve the performance, power-efficiency, and programmability of exascale systems. This effort emphasizes an open, standards-based approach to heterogeneous computing as well as co-design with the Exascale Computing Project (ECP) teams to foster innovation and achieve the Department of Energy’s goals for capable exascale systems.”

    — Dr. Lisa Su, president and CEO

    Cray Inc. (CRAY)

    Seattle, Washington

    “At Cray, our focus is on innovation and advancing supercomputing technologies that allow customers to solve their most demanding scientific, engineering, and data-intensive problems. We are honored to play an important role in the Department of Energy’s Exascale Computing Project, as we collaboratively explore new advances in system and node technology and architectures. By pursuing improvements in sustained performance, power efficiency, scalability, and reliability, the ECP’s PathForward program will help make significant advancements towards exascale computing.”

    — Peter Ungaro, president and CEO

    Hewlett Packard Enterprise (HPE)

    Palo Alto, California

    “The U.S. Department of Energy has selected HPE to rethink the fundamental architecture of supercomputers to make exascale computing a reality. This is strong validation of our vision, strategy and execution capabilities as a systems company with deep expertise in Memory-Driven Computing, VLSI, photonics, non-volatile memory, software and systems design. Once operational, these systems will help our customers to accelerate research and development in science and technology.”

    — Mike Vildibill, vice president, Advanced Technology Programs

    Intel Corp. (Intel)

    Santa Clara, California

    “Intel is investing to offer a balanced portfolio of products for high performance computing that are essential to not only achieving Exascale class computing, but also to drive breakthrough capability across the entire ecosystem. This research with the US Department of Energy focused on advanced computing and I/O technologies will accelerate the deployment of leading HPC solutions that contribute to scientific discovery for economic and societal benefits for the United States and people around the world. These gains will impact many application domains and be realized in traditional high performance simulations as well as data analytics and the rapidly growing field of artificial intelligence.”

    — Al Gara, Intel Fellow, Data Center Group Chief Architect, Exascale Systems

    International Business Machines (IBM)

    Armonk, New York

    “IBM has a roadmap for future Data Centric Systems to deliver enterprise-strength cloud services and on-premise mission-critical application performance for our customers. We are excited to once again work with the DOE and we believe the PathForward program will help accelerate our capabilities to deliver cognitive, flexible, cost-effective and energy efficient exascale-class systems for a wide variety of important workloads.”

    — Michael Rosenfield, vice president of Data Centric Solutions, IBM Research

    NVIDIA Corp. (NVIDIA)

    Santa Clara, California

    “NVIDIA has been researching and developing faster, more efficient GPUs for high performance computing (HPC) for more than a decade. This is our sixth DOE research and development contract, which will help accelerate our efforts to develop highly efficient throughput computing technologies to ensure U.S. leadership in HPC. Our R&D will focus on critical areas including energy-efficient GPU architectures and resilience. We’re particularly proud of the work we’ve been doing to help the DOE achieve exascale performance at a fraction of the power of traditional compute architectures.”

    — Dr. Bill Dally, chief scientist and senior vice president of research

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security
    Administration
    DOE Seal
    NNSA

     
  • richardmitnick 2:17 pm on June 13, 2017 Permalink | Reply
    Tags: , PARAM ISHAN CDAC Indian supercomputer, Supercomputing   

    From GEOSPATIAL WORLD: “How US sanctions spurred India to develop high-performance computing” 


    GEOSPATIAL WORLD

    PARAM is a series of supercomputers designed and assembled by the Centre for Development of Advanced Computing (C-DAC), India. The latest machine in the series is the PARAM ISHAN.

    The year was 1998, and India woke up to astounding news. While Buddha had smiled in 1974, in 1998 it was Shakti that shook the desert soil of Pokhran. It shook a lot more. The US was livid: its snooping satellites had been fooled, and India had once again knocked on the doors of the Nuclear Club. MTCR sanctions followed, and high-technology items, so-called dual-use technologies, were denied to many Indian R&D laboratories, among them ISRO centres. The MTCR was applied so rigorously that a 500 MB hard disc for a desktop computer was snatched off the airline loading bay and the supplier got a nasty note from the US embassy. More was to follow.

    Orders for GIS software were accompanied by an undertaking that the software was not to be given to or shared with several countries. A long justification of how the software was to be used, and by whom, was demanded. In spite of this, certain modules relating to Web enablement were denied. The software had to be delivered directly to the end user (non-ISRO) even though ISRO was paying for it. A similar situation existed for workstations. One of the most painful episodes came when an order for a workstation for use with GIS matched a similar order from another ISRO centre for a workstation for structural analysis. A US team swooped down, accompanied by a very apologetic team from the Indian agent, and demanded to see the physical units and the work being done on them.

    The message was clear. UNIX workstations were immediately junked in favour of off-the-shelf desktops and Intel multi-CPU servers running UNIX, as these lay outside the MTCR. Even then there were problems. ISRO had taken up the development of an airborne Synthetic Aperture Radar (SAR). ISRO also had an agreement with ESA for the reception and dissemination of ERS-1 and ERS-2 SAR data. The existing computer facility, a mainframe with a vector processor acquired before the sanctions, could just about manage SAR processing, but it took 12 to 18 hours to process an airborne SAR scene and 8 hours for an ERS-1 SAR scene. The National Centre for Medium Range Weather Forecasting had a Cray, but it also had a US-appointed gatekeeper who kept a strict watch on who came in and what was being processed, effectively ruling it out. Incidentally, Cray is now a footnote in the history of computing in India.

    At this point, two other indigenous computers were considered. One was the Flowsolver at National Aeronautical Laboratory and the other a cluster computing facility at the Bhabha Atomic Research Centre. Test runs on both these systems were promising, except that SAR processing is both data- and compute-intensive. These clusters operated on the principle of scatter-gather: chunks of the data are assigned to each computer and the analysed results are gathered to produce the final result. With SAR data, the inter-computer communication became the bottleneck in achieving the desired processing speed.
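    To make the scatter-gather idea concrete, here is a minimal sketch in Python using the standard multiprocessing module. It illustrates the pattern only: the chunking scheme, the worker count and the stand-in "processing" step are assumptions for the example and do not reflect how the Flowsolver, the BARC cluster or PARAM were actually programmed.

    from multiprocessing import Pool

    def process_chunk(chunk):
        # Stand-in for the per-node processing kernel (real SAR focusing is far heavier).
        return sum(sample * sample for sample in chunk)

    def scatter_gather(scene, n_workers=8):
        # Scatter: split the scene into one chunk per worker.
        step = max(1, len(scene) // n_workers)
        chunks = [scene[i:i + step] for i in range(0, len(scene), step)]
        # Shipping these chunks between machines is where the inter-computer
        # communication bottleneck described above shows up for large SAR scenes.
        with Pool(n_workers) as pool:
            partial_results = pool.map(process_chunk, chunks)
        # Gather: combine the per-chunk results into the final answer.
        return sum(partial_results)

    if __name__ == "__main__":
        scene = list(range(1_000_000))  # stand-in for raw SAR samples
        print(scatter_gather(scene))

    Within a single machine the pool workers are cheap to feed; spread the same scatter step across separate computers over a network, as in the clusters described above, and the data movement dominates for large SAR scenes.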

    Meanwhile, the Centre for Development of Advanced Computing (C-DAC), a unit under the Department of Electronics, had started working on PARAM, a parallel processing computer built around transputers. The SAR requirements were of interest to C-DAC, and an MoU between ISRO and C-DAC resulted in a satisfactory solution. Luckily, the transputer was an English device and hence not under the US embargo; in fact, India did get support from European suppliers during the sanctions regime. The processing time for an ERS SAR scene was reduced from 8 hours on a VAX 11/780 with a vector processor to 40 minutes on an eight-node PARAM. The entire processing system at NRSA (now NRSC) for ERS and airborne SAR was based on PARAM, and PARAM was also used at VSSC for high-performance computing jobs. In time, better solutions emerged which further reduced the processing time, and none of them needed any hardware or software under sanctions.
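    For scale: 8 hours is 480 minutes, so the quoted figures amount to roughly a twelvefold reduction in wall-clock time (480 / 40 = 12). Note that this compares two quite different machines, the vector-equipped VAX and the eight-node PARAM, rather than measuring parallel speedup on PARAM alone.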

    Were it not for the US sanctions, engineers in ISRO and C-DAC would not have put in these efforts and would have missed out on exploring these alternative hardware and software options. In the process, they became wiser, as did the US about the futility of sanctions in the face of a determined community.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     