Tagged: Supercomputing Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 3:42 pm on March 14, 2019 Permalink | Reply
    Tags: , , , Supercomputing   

    From insideHPC: “In a boon for HPC, Founding Members Sign SKA Observatory Treaty” 

    From insideHPC

    March 14, 2019

    The initial signatories of the SKA Observatory Convention. From left to right: UK Ambassador to Italy Jill Morris, China’s Vice Minister of Science and Technology Jianguo Zhang, Portugal’s Minister for Science, Technology and Higher Education Manuel Heitor, Italian Minister of Education, Universities and Research Marco Bussetti, South Africa’s Minister of Science and Technology Mmamoloko Kubayi-Ngubane, the Netherlands’ Deputy Director of the Department for Science and Research Policy at the Ministry of Education, Culture and Science Oscar Delnooz, and Australia’s Ambassador to Italy Greg French (Credit: SKA Organization)

    Earlier this week, countries involved in the Square Kilometre Array (SKA) Project came together in Rome to sign an international treaty establishing the intergovernmental organization that will oversee the delivery of the world’s largest radio telescope.

    SKA Square Kilometer Array

    Ministers, Ambassadors and other high-level representatives from over 15 countries gathered in the Italian capital for the signature of the treaty that establishes the Square Kilometre Array Observatory (SKAO), the intergovernmental organization (IGO) tasked with delivering and operating the SKA.

    “Today we are particularly honored to sign, right here at the Ministry of Education, University and Research, the Treaty for the establishment of the SKA Observatory” Italian Minister of Education Marco Bussetti who presided over the event, said. “A signature that comes after a long phase of negotiations, in which our country has played a leading role. The Rome Convention testifies the spirit of collaboration that scientific research triggers between countries and people around the world, because science speaks all the languages of the planet and its language connects the whole world. This Treaty – he added – is the moment that marks our present and our future history, the history of science and knowledge of the Universe. The SKA project is the icon of the increasingly strategic role that scientific research has taken on in contemporary society. Research is the engine of innovation and growth: knowledge translates into individual and collective well-being, both social and economic. Participating in the forefront of such an extensive and important international project is a great opportunity for the Italian scientific community, both for the contribution that our many excellences can give and for sharing the big amount of data that SKA will collect and redistribute.”

    Seven countries signed the treaty today: Australia, China, Italy, the Netherlands, Portugal, South Africa and the United Kingdom. India and Sweden, who also took part in the multilateral negotiations to set up the SKA Observatory IGO, are following further internal processes before signing the treaty. Together, these countries will form the founding members of the new organisation.

    Dr. Catherine Cesarsky, Chair of the SKA Board of Directors, added “Rome wasn’t built in a day. Likewise, designing, building and operating the world’s biggest telescope takes decades of efforts, expertise, innovation, perseverance, and global collaboration. Today we’ve laid the foundations that will enable us to make the SKA a reality.”

    “…SKA will be the largest science facility on the planet, with infrastructure spread across three continents on both hemispheres. Its two networks of hundreds of dishes and thousands of antennas will be distributed over hundreds of kilometres in Australia and South Africa, with the Headquarters in the United Kingdom.”

    SKA South Africa

    Together with facilities like the James Webb Space Telescope, CERN’s Large Hadron Collider, the LIGO gravitational wave detector, the new generation of extremely large optical telescopes and the ITER fusion reactor, the SKA will be one of humanity’s cornerstone physics machines in the 21st century.

    NASA/ESA/CSA Webb Telescope annotated

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    MIT/Caltech Advanced LIGO


    ITER Tokamak in Saint-Paul-lès-Durance, which is in southern France

    Prof. Philip Diamond, Director-General of the SKA Organization which has led the design of the telescope added: “Like Galileo’s telescope in its time, the SKA will revolutionize how we understand the world around us and our place in it. Today’s historic signature shows a global commitment behind this vision, and opens up the door to generations of ground-breaking discoveries.”

    It will help address fundamental gaps in our understanding of the Universe, enabling astronomers from its participating countries to study gravitational waves and test Einstein’s theory of relativity in extreme environments, investigate the nature of the mysterious fast radio bursts, improve our understanding of the evolution of the Universe over billions of years, map hundreds of millions of galaxies and look for signs of life in the Universe.

    Two of the world’s fastest supercomputers* will be needed to process the unprecedented amounts of data emanating from the telescopes, with some 600 petabytes expected to be stored and distributed worldwide to the science community every year, or the equivalent of over half a million laptops worth of data.
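
    As a rough check of the storage figure quoted above (a back-of-the-envelope sketch; the one-terabyte-per-laptop assumption is ours, not the article’s):

    ```python
    # Back-of-the-envelope check of the SKA archive figure quoted above.
    # Assumption (ours): a typical laptop holds about 1 TB.
    PB = 1e15   # bytes in a petabyte (decimal)
    TB = 1e12   # bytes in a terabyte

    yearly_archive_bytes = 600 * PB   # ~600 petabytes stored and distributed per year
    laptop_bytes = 1 * TB

    laptops_per_year = yearly_archive_bytes / laptop_bytes
    print(f"{laptops_per_year:,.0f} laptop-equivalents per year")   # ~600,000 -- over half a million
    ```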

    Close to 700 million euros worth of contracts for the construction of the SKA will start to be awarded from late 2020 to companies and providers in the SKA’s member countries, providing a substantial return on investment for those countries. Spinoffs are also expected to emerge from work to design and build the SKA, with start-ups already being created out of some of the design work and impact reaching far beyond astronomy.


    In this video from the Disruptive Technologies Panel at the HPC User Forum, Peter Braam from Cambridge University presents: Processing 1 EB per Day for the SKA Radio Telescope.

    Over 1,000 engineers and scientists in 20 countries have been involved in designing the SKA over the past five years, with new research programs, educational initiatives and collaborations being created in various countries to train the next generation of scientists and engineers.

    Guests from Canada, France, Malta, New Zealand, the Republic of Korea, Spain and Switzerland were also in attendance to witness the signature and reaffirmed their strong interest in the project. They all confirmed they are making their best efforts to prepare the conditions for a future decision of participation of their respective country in the SKA Observatory.

    The signature concludes three and a half years of negotiations by government representatives and international lawyers, and kicks off the legislative process in the signing countries; the SKAO convention will enter into force once five countries, including all three host countries, have ratified the treaty through their respective legislatures.

    SKAO becomes only the second intergovernmental organization dedicated to astronomy in the world, after the European Southern Observatory (ESO) [What about ESA and ALMA?].

    *Not identified in the article. I have asked for the names and locations of the supercomputers.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 12:43 pm on March 11, 2019 Permalink | Reply
    Tags: , , Frontera at TACC, Supercomputing,   

    From insideHPC: “New Texascale Magazine from TACC looks at HPC for the Endless Frontier” 

    From insideHPC

    https://www.tacc.utexas.edu/documents/1084364/1705087/Texascale-2018.pdf/

    March 11, 2019

    This feature story describes how the computational power of Frontera will be a game changer for research. Late last year, the Texas Advanced Computing Center announced plans to deploy Frontera, the world’s fastest supercomputer in academia.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell PowerEdge Stampede supercomputer at the Texas Advanced Computing Center, The University of Texas at Austin (9.6 PF)

    TACC HPE Apollo 8000 Hikari supercomputer


    TACC DELL EMC Stampede2 supercomputer



    TACC Frontera Dell EMC supercomputer fastest at any university

    To prepare for launch, TACC just published the inaugural edition of Texascale, an annual magazine with stories that highlight the people, science, systems, and programs that make TACC one of the leading academic computing centers in the world.

    In an inconspicuous-looking data center on The University of Texas at Austin’s J. J. Pickle Research Campus, construction is underway on one of the world’s most powerful supercomputers.

    The Frontera system (Spanish for “frontier”) will allow the nation’s academic scientists and engineers to probe questions both cosmic and commonplace — What is the universe composed of? How can we produce enough food to feed the Earth’s growing population? — that cannot be addressed in a lab or in the field; that require the number-crunching power equivalent to a small city’s worth of computers to solve; and that may be critical to the survival of our species.

    The name Frontera pays homage to the “endless frontier” of science envisioned by Vannevar Bush and presented in a report to President Harry Truman calling for a national strategy for scientific progress. The report led to the founding of the National Science Foundation (NSF) — the federal agency that funds fundamental research and education in science and engineering. It paved the way for investments in basic and applied research that laid the groundwork for our modern world, and inspired the vision for Frontera.

    “Whenever a new technological instrument emerges that can solve previously intractable problems, it has the potential to transform science and society,” said Dan Stanzione, executive director of TACC and one of the designers behind the new machine. “We believe that Frontera will have that kind of impact.”

    The Quest for Computing Greatness

    The pursuit of Frontera formally began in May 2017 when NSF issued an invitation for proposals for a new leadership-class computing facility, the top tier of high performance computing systems funded by the agency. The program would award $60 million to construct a supercomputer that could satisfy the needs of a scientific and engineering community that increasingly relies on computation.

    “For over three decades, NSF has been a leader in providing the computing resources our nation’s researchers need to accelerate innovation,” explained NSF Director France Córdova. “Keeping the U.S. at the forefront of advanced computing capabilities and providing researchers across the country access to those resources are key elements in maintaining our status as a global leader in research and education.”

    “The Frontera project is not just about the system; our proposal is anchored by an experienced team of partners and vendors with a community-leading track record of performance.” — Dan Stanzione, TACC

    Meet the Architects

    When TACC proposed Frontera, it didn’t simply offer to build a fastest-in-its-class supercomputer. It put together an exceptional team of supercomputer experts and power users who together have internationally recognized expertise in designing, deploying, configuring, and operating HPC systems at the largest scale. Learn more about principal investigators who led the charge.

    NSF’s invitation for proposals indicated that the initial system would only be the beginning. In addition to enabling cutting-edge computations, the supercomputer would serve as a platform for designing a future leadership-class facility to be deployed five years later that would be 10 times faster still — more powerful than anything that exists in the world today.

    TACC has deployed major supercomputers several times in the past with support from NSF. Since 2006, TACC has operated three supercomputers that debuted among the Top 15 most powerful in the world — Ranger (2008-2013, #4), Stampede1 (2012-2017, #7) and Stampede2 (2017-present, #12) — and three more systems that rose to the Top 25. These systems established TACC, which was founded in 2001, as one of the world leaders in advanced computing.

    TACC solidified its reputation when, on August 28, 2018, NSF announced that the center had won the competition to design, build, deploy, and run the most capable system they had ever commissioned.

    “This award is an investment in the entire U.S. research ecosystem that will enable leap-ahead discoveries,” NSF Director Córdova said at the time.

    Frontera represents a further step for TACC into the upper echelons of supercomputing — the Formula One race cars of the scientific computing world. When Frontera launches in 2019, it will be the fastest supercomputer at any U.S. university and one of the fastest in the world — a powerful, all-purpose tool for science and engineering.

    “Many of the frontiers of research today can be advanced only by computing,” Stanzione said. “Frontera will be an important tool to solve Grand Challenges that will improve our nation’s health, well-being, competitiveness, and security.”

    Supercomputers Expand the Mission

    Supercomputers have historically had very specific uses in the world of research, performing virtual experiments and analyses of problems that can’t be easily physically experimented upon or solved with smaller computers.

    Since 1945, when the ENIAC (Electronic Numerical Integrator and Computer) at the University of Pennsylvania first calculated artillery firing tables for the United States Army’s Ballistic Research Laboratory, the uses of large-scale computing have grown dramatically.

    Today, every discipline has problems that require advanced computing. Whether it’s cellular modeling in biology, the design of new catalysts in chemistry, black hole simulations in astrophysics, or Internet-scale text analyses in the social sciences, the details change, but the need remains the same.

    “Computation is arguably the most critical tool we possess to reach more deeply into the endless frontier of science,” Stanzione says. “While specific subfields of science need equipment like radio telescopes, MRI machines, and electron microscopes, large computers span multiple fields. Computing is the universal instrument.”

    In the past decade, the uses of high performance computing have expanded further. Massive amounts of data from sensors, wireless devices, and the Internet opened up an era of big data, for which supercomputers are well suited. More recently, machine and deep learning have provided a new way of not just analyzing massive datasets, but of using them to derive new hypotheses and make predictions about the future.

    As the problems that can be solved by supercomputers expanded, NSF’s vision for cyberinfrastructure — the catch-all term for the set of information technologies and people needed to provide advanced computing to the nation — evolved as well. Frontera represents the latest iteration of that vision.

    Data-Driven Design

    TACC’s leadership knew they had to design something innovative from the ground up to win the competition for Frontera. Taking a data-driven approach to the planning process, they investigated the usage patterns of researchers on Stampede1, as well as on Blue Waters — the previous NSF-funded leadership-class system — and in the Department of Energy (DOE)’s large-scale scientific computing program, INCITE, and analyzed the types of problems that scientists need supercomputers to solve.

    They found that Stampede1 usage was dominated by 15 commonly used applications. Together these accounted for 63 percent of Stampede1’s computing hours in 2016. Some 2,285 additional applications utilized the remaining 37 percent of the compute cycles. (These trends were consistent on Blue Waters and DOE systems as well.) Digging deeper, they determined that, of the top 15 applications, 97 percent of the usage solved equations that describe motions of bodies in the universe, the interactions of atoms and molecules, or electrons and fluids in motion.
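
    A minimal sketch of how such a usage breakdown can be computed from job-accounting records; the table layout, application names, and numbers below are hypothetical, not TACC’s actual accounting schema:

    ```python
    import pandas as pd

    # Hypothetical job-accounting records: one row per job, with the application
    # name and the compute hours it consumed (invented numbers for illustration).
    jobs = pd.DataFrame({
        "application": ["NAMD", "WRF", "NAMD", "GROMACS", "rare_code_1234"],
        "compute_hours": [120000.0, 80000.0, 50000.0, 30000.0, 400.0],
    })

    usage = jobs.groupby("application")["compute_hours"].sum().sort_values(ascending=False)
    top_share = usage.head(15).sum() / usage.sum()
    print(f"Top applications account for {top_share:.0%} of all compute hours")
    ```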

    Frontera will be the fastest supercomputer at a U.S. university and likely Top 5 in the world when it launches in 2019. It will support simulation, data analysis and AI on the largest scales.

    “We did a careful analysis to understand the questions our community was using our supercomputers to solve and the codes and equations they used to solve them,” said TACC’s director of High Performance Computing, Bill Barth. “This narrowed the pool of problems that Frontera would need to excel in solving.”

    But past use wasn’t the only factor they considered. “It was also important to consider emerging uses of advanced computing resources for which Frontera will be critical,” Stanzione said. “Prominent among these are data-driven and data-intensive applications, as well as machine and deep learning.”

    Though still small in terms of their overall use of Stampede2, and other current systems, these areas are growing quickly and offer new ways to solve enduring problems.

    Whereas researchers traditionally wrote HPC codes in programming languages like C++ and Fortran, data-intensive problems often require non-traditional software or frameworks, such as R, Python, or TensorFlow.

    “The coming decade will see significant efforts to integrate physics-driven and data-driven approaches to learning,” said Tommy Minyard, TACC director of Advanced Computing Systems. “We designed Frontera with the capability to address very large problems in these emerging communities of computation and serve a wide range of both simulation-based and data-driven science.”

    The Right Chips for the Right Jobs

    Anyone following computer hardware trends in recent years has noticed the blossoming of options in terms of computer processors. Today’s landscape includes a range of chip architectures, from low-energy ARM processors common in cell phones, to adaptable FPGAs (field-programmable gate arrays), to many varieties of CPUs, GPUs and AI-accelerating chips.

    The team considered a wide range of system options for Frontera before concluding that a CPU-based primary system with powerful Intel Xeon x86 nodes and a fast network would be the most useful platform for most applications.


    Once built, TACC expects that the main compute system will achieve 35 to 40 petaflops of peak performance. For comparison, Frontera will be twice as powerful as Stampede2 (currently the fastest university supercomputer) and 70 times as fast as Ranger, which operated at TACC until 2013.

    To match what Frontera will compute in just one second, a person would have to perform one calculation every second for one billion years.
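
    That comparison is easy to verify from the quoted peak rate (a quick back-of-the-envelope calculation using the lower 35-petaflop bound):

    ```python
    # Check of the comparison above: how long would one calculation per second
    # take to match one second of Frontera's peak rate?
    peak_ops_per_second = 35e15              # 35 petaflops (lower end of the quoted range)
    seconds_per_year = 365.25 * 24 * 3600    # ~3.16e7 seconds

    years_by_hand = peak_ops_per_second / seconds_per_year
    print(f"about {years_by_hand:.1e} years")   # ~1.1e9, i.e. roughly one billion years
    ```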

    In addition to its main system, Frontera will also include a subsystem made up of graphics processing units (GPUs) that have proven particularly effective for deep learning and molecular dynamics problems.

    “For certain application classes that can make effective use of GPUs, the subsystem will provide a cost-efficient path to high performance for those in the community that can fully exploit it,” Stanzione said.

    Designing a Complete Ecosystem

    The effectiveness of a supercomputer depends on more than just its processors. Storage, networking, power, and cooling are all critical as well.

    Frontera will include a storage subsystem from DataDirect Networks with almost 53 petabytes of capacity and nearly 2 terabytes per second of aggregate bandwidth. Of this, 50 petabytes will use disk-based, distributed storage, while 3 petabytes will employ a new type of very fast storage known as Non-volatile Memory Express storage, broadening the system’s usefulness for the data science community.
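
    For a sense of scale, the quoted capacity and aggregate bandwidth imply the entire file system could be read end to end in well under a day (a rough estimate, assuming decimal units):

    ```python
    # Time to sweep the full Frontera storage subsystem once at its aggregate
    # bandwidth, using the figures quoted above (decimal units assumed).
    capacity_bytes = 53e15          # ~53 PB total
    bandwidth_bytes_per_s = 2e12    # ~2 TB/s aggregate

    hours = capacity_bytes / bandwidth_bytes_per_s / 3600
    print(f"~{hours:.1f} hours to read the whole file system once")   # ~7.4 hours
    ```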

    Supercomputing applications often employ many compute nodes, or devices, at once, which requires passing data and instructions from one part of the system to another. Mellanox InfiniBand interconnects will provide 100 Gigabits per second (Gbps) connectivity to each node, and 200 Gbps between the central switches.

    These components will be integrated via servers from Dell EMC, which has partnered with TACC since 2003 on massive systems, including Stampede1 and Stampede2.

    “The new Frontera system represents the next phase in the long-term relationship between TACC and Dell EMC, focused on applying the latest technical innovation to truly enable human potential,” said Thierry Pellegrino, vice president of Dell EMC High Performance Computing.

    Though a top system in its own right, Frontera won’t operate as an island. Users will have access to TACC’s other supercomputers — Stampede2, Lonestar, Wrangler, and many more, each with a unique architecture — and storage resources, including Stockyard, TACC’s global file system; Corral, TACC’s data collection repository; and Ranch, a tape-based long-term archival system.

    Together, they compose an ecosystem for scientific computing that is arguably unmatched in the world.


    New Models of Access & Use

    Researchers traditionally interact with supercomputers through the command line — a text-only program that takes instructions and passes them on to the computer’s operating system to run.

    The bulk of a supercomputer’s time (roughly 90 percent of the cycles on Stampede2) is consumed by researchers using the system in this way. But as computing becomes more complex, having a lower barrier to entry and offering an end-to-end solution to access data, software, and computing services has grown in importance.

    Science gateways offer streamlined, user-friendly interfaces to cyberinfrastructure services. In recent years, TACC has become a leader in building these accessible interfaces for science.

    “Visual interfaces can remove much of the complexity of traditional HPC, and lower this entry barrier,” Stanzione said. “We’ve deployed more than 20 web-based gateways, including several of the most widely used in the world. On Frontera, we’ll allow any community to build their own portals, applications, and workflows, using the system as the engine for computations.”

    Though they use a minority of computing cycles, a majority of researchers actually access supercomputers through portals and gateways. To serve this group, Frontera will support high-level languages like Python, R, and Julia, and offer a set of RESTful APIs (application programming interfaces) that will make the process of building community-wide tools easier.
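
    As an illustration of what gateway-style access on top of such APIs can look like, here is a hedged sketch of submitting a job to a REST endpoint from Python; the URL, token, and request fields are invented placeholders, not TACC’s actual interface:

    ```python
    import requests

    # Hypothetical gateway endpoint and job description -- placeholders for
    # illustration only, not the real Frontera/TACC API.
    API_URL = "https://gateway.example.org/api/v1/jobs"
    TOKEN = "YOUR_ACCESS_TOKEN"

    job_request = {
        "application": "namd",        # an application registered with the gateway
        "nodes": 4,
        "wallclock_minutes": 120,
        "inputs": {"structure": "system.psf"},
    }

    response = requests.post(API_URL, json=job_request,
                             headers={"Authorization": f"Bearer {TOKEN}"})
    response.raise_for_status()
    print("Submitted job:", response.json().get("id"))
    ```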

    “We’re committed to delivering the transformative power of computing to a wide variety of domains from science and engineering to the humanities,” said Maytal Dahan, TACC’s director of Advanced Computing Interfaces. “Expanding into disciplines unaccustomed to computing from the command line means providing access in a way that abstracts the complexity and technology and lets researchers focus on their scientific impact and discoveries.”


    The Cloud

    For some years, there has been a debate in the advanced computing community about whether supercomputers or “the cloud” are more useful for science. The TACC team believes it’s not about which is better, but how they might work together. By design, Frontera takes a bold step towards bridging this divide by partnering with the nation’s largest cloud providers — Microsoft, Amazon, and Google — to provide cloud services that complement TACC’s existing offerings and have unique advantages.

    These include long-term storage for sharing datasets with collaborators; access to additional types of computing processors and architectures that will appear after Frontera launches; cloud-based services like image classification; and Virtual Desktop Interfaces that allow a cloud-based filesystem to look like one’s home computer.

    “The modern scientific computing landscape is changing rapidly,” Stanzione said. “Frontera’s computing ecosystem will be enhanced by playing to the unique strengths of the cloud, rather than competing with them.”

    It’s no secret that supercomputers use a lot of power. Frontera will require more than 5.5 megawatts to operate — the equivalent of powering more than 3,500 homes. To limit the expense and environmental impact of running Frontera, TACC will employ a number of energy-saving measures with the new system. Some were put in place years ago; others will be deployed at TACC for the first time. All told, TACC expects one-third of the power for Frontera to come from renewable sources.

    Software & Containers

    When the applications that researchers rely on are not available on HPC systems, it creates a barrier to large-scale science. For that reason, Frontera will support the widest catalog of applications of any large-scale scientific computing system in history.

    TACC will work with application teams to support highly-tuned versions of several dozen of the most widely used applications and libraries. Moreover, Frontera will provide support for container-based virtualization, which sidesteps the challenges of adapting tools to a new system while enabling entirely new types of computation.

    With containers, user communities develop and test their programs on laptops or in the cloud, and then transfer those same workflows to HPC systems using programs like Singularity. This facilitates the development of event-driven workflows, which automate computations in response to external events like natural disasters, or for the collection of data from large-scale instruments and experiments.
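
    A minimal sketch of that laptop-to-HPC hand-off, launching a containerized step from Python; the image name and script are hypothetical, and this assumes Singularity is available on the target system:

    ```python
    import subprocess

    # Hypothetical container image and analysis step: the same image built and
    # tested on a laptop or in the cloud runs unchanged on the HPC system.
    image = "my_workflow.sif"
    command = ["singularity", "exec", image, "python", "analyze.py", "--input", "data.h5"]

    # In practice this call would usually live inside a batch script submitted
    # to the system's scheduler rather than being run interactively.
    subprocess.run(command, check=True)
    ```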

    “Frontera will be a more modern supercomputer, not just in the technologies it uses, but in the way people will access it,” Stanzione said.

    A Frontier System to Solve Frontier Challenges

    Talking about a supercomputer in terms of its chips and access modes is a bit like talking about a telescope in terms of its lenses and mounts. The technology is important, but the ultimate question is: what can it do that other systems can’t?

    Entirely new problems and classes of research will be enabled by Frontera. Examples of projects Frontera will tackle in its first year include efforts to explore models of the Universe beyond the Standard Model in collaboration with researchers from the Large Hadron Collider; research that uses deep learning and simulation to predict in advance when a major disruption may occur within a fusion reactor to prevent damaging these incredibly expensive systems; and data-driven genomics studies to identify the right species of crops to plant in the right place at the right time to maximize production and feed the planet. [See more about each project in the box below.]

    The LHC modeling effort, fusion disruption predictions, and genomic analyses represent the types of ‘frontier,’ Grand Challenge research problems Frontera will help address.

    “Many phenomena that were previously too complex to model with the hardware of just a few years ago are within reach for systems with tens of petaflops,” said Stanzione.

    A review committee made up of computational and domain experts will ultimately select the projects that will run on Frontera, with a small percentage of time reserved for emergencies (as in the case of hurricane forecasting), industry collaborations, or discretionary use.

    It’s impossible to say what the exact impact of Frontera will be, but for comparison, Stampede1, which was one quarter as powerful as Frontera, enabled research that led to nearly 4,000 journal articles. These include confirmations of gravitational wave detections by LIGO that contributed to the 2017 Nobel Prize in Physics; discoveries of FDA-approved drugs that have been successful in treating cancer; and a greater understanding of DNA interactions enabling the design of faster and cheaper gene sequencers.

    From new machine learning techniques to diagnose and treat diseases to fundamental mathematical and computer science research that will be the basis for the next generation of scientists’ discoveries, Frontera will have an outsized impact on science nationwide.

    Frontera will be the most powerful supercomputer at any U.S. university and likely top 10 in the world when it launches in 2019. It will support simulation, data analysis, and AI on the largest scales.

    Physics Beyond the Standard Model

    The NSF program that funds Frontera is titled, Towards a Leadership-Class Computing Facility. This phrasing is important because, as powerful as Frontera is, NSF sees it as a step toward even greater support for the nation’s scientists and engineers. In fact, the program not only funds the construction and operation of Frontera — the fastest system NSF has ever deployed — it also supports the planning, experimentation, and design required to build a system in five years that will be 10 times more capable than Frontera.

    “We’ll be planning for the next generation of computational science and what that means in terms of hardware, architecture, and applications,” Stanzione said. “We’ll start with science drivers — the applications, workflows, and codes that will be used — and use those factors to determine the architecture and the balance between storage, networks, and compute needed in the future.”

    Much like the data-driven design process that influenced the blueprint for Frontera, the TACC team will employ a “design — operate — evaluate” cycle on Frontera to plan Phase 2.

    TACC has assembled a Frontera Science Engagement Team, consisting of more than a dozen leading computational scientists from a range of disciplines and universities, to help determine “the workload of the future” — the science drivers and requirements for the next generation of systems. The team will also act as liaisons to the broader community in their respective fields, presenting at major conferences, convening discussions, and recruiting colleagues to participate in the planning.

    Fusion physicist William Tang joined the Frontera Science Engagement Team in part because he believed in TACC’s vision for cyberinfrastructure. “AI and deep learning are huge areas of growth. TACC definitely saw that and encouraged that a lot more. That played a significant part in the winning proposal, and I’m excited to join the activities going forward,” Tang said.

    A separate technology assessment team will use a similar strategy to identify critical emerging technologies, evaluate them, and ultimately develop some as testbed systems.

    TACC will upgrade and make available their FPGA testbed, which investigates new ways of using interconnected FPGAs as computational accelerators. They also hope to add an ARM testbed and other emerging technologies.

    Other testbeds will be built offsite in collaboration with partners. TACC will work with Stanford University and Atos to deploy a quantum simulator that will allow them to study quantum systems. Partnerships with the cloud providers Microsoft, Google, and Amazon will allow TACC to track AMD (Advanced Micro Devices) solutions, neuromorphic prototypes and tensor processing units.

    Finally, TACC will work closely with Argonne National Laboratory to assess the technologies that will be deployed in the Aurora21 system, which will enter production in 2021. TACC will have early access to the same compute and storage technologies that will be deployed in Aurora21, as well as Argonne’s simulators, prototypes, software tools, and application porting efforts, which TACC will evaluate for the academic research community.

    “The primary compute elements of Frontera represent a relatively conservative approach to scientific computing,” Minyard said. “While this may remain the best path forward through the mid-2020’s and beyond, a serious evaluation of a Phase 2 system will require not only projections and comparisons, but hands-on access to future technologies. TACC will provide the testbed systems not only for our team and Phase 2 partners, but to our full user community as well.”

    Using the “design — operate — evaluate” process, TACC will develop a quantitative understanding of present and future application performance. It will build performance models for the processors, interconnects, storage, software, and modes of computing that will be relevant in the Phase 2 timeframe.

    “It’s a push/pull process,” Stanzione said. “Users must have an environment in which they can be productive today, but that also incentivizes them to continuously modernize their applications to take advantage of emerging computational technologies.”

    The deployment of two to three small scale systems at TACC will allow the assessment team to evaluate the performance of the system against their model and gather specific feedback from the NSF science user community on usability. From this process, the design of the Phase 2 leadership class system will emerge.

    With Great Power Comes Great Responsibility

    The design process will culminate some years in the future. Meanwhile, in the coming months, Frontera’s server racks will begin to roll into TACC’s data center. From January to March 2019, TACC will integrate the system with hundreds of miles of networking cables and install the software stack. In the spring, TACC will host an early user period where experienced researchers will test the system and work out any bugs. Full production will begin in the summer of 2019.

    “We want it to be one of the most usable and accessible systems in the world,” Stanzione said. “Our design is not uniquely brilliant by us. It’s the logical next step — smart engineering choices by experienced operators.”

    It won’t be TACC’s first rodeo. Over 17 years, the team has developed and deployed more than two dozen HPC systems totaling more than $150 million in federal investment. The center has grown to nearly 150 professionals, including more than 75 PhD computational scientists and engineers, and earned a stellar reputation for providing reliable resources and superb user service. Frontera will provide a unique resource for science and engineering, capable of scaling to the very largest capability jobs, running the widest array of jobs, and supporting science in all forms.

    The project represents the achievement of TACC’s mission of “Powering Discoveries to Change the World.”

    “Computation is a key element to scientific progress, to engineering new products, to improving human health, and to our economic competitiveness. This system will be the NSF’s largest investment in computing in the next several years. For that reason, we have an enormous responsibility to our colleagues all around the U.S. to deliver a system that will enable them to be successful,” Stanzione said. “And if we succeed, we can change the world.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 12:50 pm on March 4, 2019 Permalink | Reply
    Tags: , Aristotle Cloud Federation, Combining the consulting strengths of the XSEDE Cyberinfrastructure Resource Integration (CRI) group with Aristotle Cloud Federation project expertise in OpenStack cloud implementation is a win-win fo, , Supercomputing,   

    From insideHPC: “XSEDE teams with Aristotle Cloud Federation to implement clouds on U.S. campuses” 

    From insideHPC

    Two NSF-funded projects have joined forces to help university systems administrators implement cloud computing systems on their campuses.

    “Combining the consulting strengths of the XSEDE Cyberinfrastructure Resource Integration (CRI) group with Aristotle Cloud Federation project expertise in OpenStack cloud implementation is a win-win for U.S. campus cyberinfrastructure administrators interested in adding clouds to their campus research resources,” said John Towns, principal investigator and project director for XSEDE.

    After an initial phone consultation, XSEDE staff will assist campus systems administrators or researchers onsite with the configuration of their OpenStack cloud and ensure that they have the knowledge they need to maintain it. There is no fee for XSEDE staff to travel to U.S. campuses and provide this service; the service is performed as part of the XSEDE CRI group’s mission to help campus cyberinfrastructure staff manage their local computing and storage resources.

    The cloud implementation process and documentation were developed by Aristotle Cloud Federation staff consultants and used by federation member Dartmouth College to deploy its first OpenStack cloud. “Aristotle project experience and lessons learned from deploying multiple campus clouds is saving us time and effort,” said George Morris, director of research computing at Dartmouth.

    “Clouds have a complementary role to play in campus cyberinfrastructure,” said David Lifka, Aristotle project PI and vice president for information technologies and CIO at Cornell University. “The key is rightsizing the on-premise cloud so that utilization is 85% or higher; that makes it cost-effective, and campus users can then leverage public or NSF clouds when they need more resources or a Platform as a Service such as machine learning,” he added.
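
    A toy illustration of the rightsizing arithmetic Lifka describes, with invented demand figures; it sizes the on-premise cloud so that average utilization stays near the 85% target, with demand above that bursting to public or NSF clouds:

    ```python
    import math

    # Invented example numbers for illustration only.
    average_demand_cores = 3400     # typical concurrent campus demand, in virtual cores
    target_utilization = 0.85       # threshold quoted above
    cores_per_node = 40             # hypothetical node size

    local_cores_needed = average_demand_cores / target_utilization
    nodes = math.ceil(local_cores_needed / cores_per_node)
    print(f"Provision ~{nodes} nodes on premise; burst beyond that to public/NSF clouds")
    ```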

    Campus systems administrators or scientific research centers interested in participating in this opportunity should contact help@xsede.org. The OpenStack cloud implementation service will be available starting September 2019.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 12:58 pm on February 28, 2019 Permalink | Reply
    Tags: "Featured Image: How Gas Affects the Structure of Galaxies", , , , , , Supercomputing   

    From AAS NOVA: “Featured Image: How Gas Affects the Structure of Galaxies” 

    AASNOVA

    From AAS NOVA

    25 February 2019
    Susanna Kohler

    These beautiful images from a simulation of a Milky-Way-sized, isolated disk galaxy capture how the presence of gas affects a galaxy’s formation and evolution over time. The simulations, run by a team of scientists led by Woo-Young Seo (Seoul National University and Chungbuk National University, Republic of Korea), demonstrate that the amount of gas present in a forming galaxy influences the formation of features like spiral structure, a central bar, and even a nuclear ring — a site of intense star formation that encircles the very center of the galaxy. The images above (sized at 20 x 20 kpc) and below (same simulation, but zoomed to the central 2 x 2 kpc region) follow a warm galaxy model with a 5% gas fraction over the span of 5 billion years. For more information about the authors’ discoveries — and more gorgeous images of simulated, evolving galaxies — check out the article linked below.


    Citation

    “Effects of Gas on Formation and Evolution of Stellar Bars and Nuclear Rings in Disk Galaxies,” Woo-Young Seo et al 2019 ApJ 872 5.
    https://iopscience.iop.org/article/10.3847/1538-4357/aafc5f/meta

    Simulations were run on the Tachyon 2 (Sun, AMD Opteron 2 GHz) supercomputer at the Korea Institute of Science and Technology.

    Tachyon 2 Korea Institute of Science and Technology Sun AMD Opteron 2GHz supercomputer

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


    AAS Mission and Vision Statement

    The mission of the American Astronomical Society is to enhance and share humanity’s scientific understanding of the Universe.

    The Society, through its publications, disseminates and archives the results of astronomical research. The Society also communicates and explains our understanding of the universe to the public.
    The Society facilitates and strengthens the interactions among members through professional meetings and other means. The Society supports member divisions representing specialized research and astronomical interests.
    The Society represents the goals of its community of members to the nation and the world. The Society also works with other scientific and educational societies to promote the advancement of science.
    The Society, through its members, trains, mentors and supports the next generation of astronomers. The Society supports and promotes increased participation of historically underrepresented groups in astronomy.
    The Society assists its members to develop their skills in the fields of education and public outreach at all levels. The Society promotes broad interest in astronomy, which enhances science literacy and leads many to careers in science and engineering.

    Adopted June 7, 2009

     
  • richardmitnick 7:04 pm on February 22, 2019 Permalink | Reply
    Tags: "Supercomputing Neutron Star Structures and Mergers", , Bridges at Pittsburgh Supercomputer Center, , , , Stampede2 at the Texas Advanced Computing Center (TACC), Supercomputing, ,   

    From insideHPC: “Supercomputing Neutron Star Structures and Mergers” 

    From insideHPC

    This image of an eccentric binary neutron star system’s close encounter is an example of the large surface gravity wave excitations, which are similar to ocean waves found in very deep water. Credit: William East, Perimeter Institute for Theoretical Physics


    Perimeter Institute in Waterloo, Canada

    Over at XSEDE, Kimberly Mann Bruch & Jan Zverina from the San Diego Supercomputer Center write that researchers are using supercomputers to create detailed simulations of neutron star structures and mergers to better understand gravitational waves, which were detected for the first time in 2015.

    SDSC Dell Comet* supercomputer

    During a supernova, a single massive star explodes; depending on the star’s mass, some collapse into black holes while others survive. Some of these supernova survivors are stars whose centers collapse and whose protons and electrons merge to form a neutron star, which has an average gravitational pull roughly two billion times the gravity on Earth.

    Researchers from the U.S., Canada, and Brazil have been focusing on the construction of a gravitational wave model for the detection of eccentric binary neutron stars. Using Comet* at the San Diego Supercomputer Center (SDSC) and Stampede2 at the Texas Advanced Computing Center (TACC), the scientists performed simulations of oscillating binary neutron stars to develop a novel model to predict the timing of various pericenter passages, which are the points of closest approach for revolving space objects.

    Texas Advanced Computing Center

    TACC DELL EMC Stampede2 supercomputer

    Their study, Evolution of Highly Eccentric Binary Neutron Stars Including Tidal Effects, was published in Physical Review D. Frans Pretorius, a physics professor at Princeton University, is the Principal Investigator on the allocated project.

    “Our study’s findings provide insight into binary neutron stars and their role in detecting gravitational waves,” according to co-author Huan Yang, with the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada. “We can see that the oscillation of the stars significantly alters the trajectory and it is important to mention the evolution of the modes. For this case, during some of the later close encounters, the frequency of the orbit is larger when this evolution is tracked – compared to when it is not – as energy and angular momentum are taken out of the neutron star oscillations and put back into orbit.”

    In other words, probing gravitational waves from eccentric binary neutron stars provides a unique opportunity to observe neutron star oscillations. Through these measurements, researchers can infer the internal structure of neutron stars.

    “This is analogous to the example of ‘hearing the shape on a drum,’ where the shape of a drumhead can be determined by measuring frequencies of its modes,” said Yang. “By ‘hearing’ the modes of neutron stars with gravitational waves, the star’s size and internal structure will be similarly determined, or at least constrained.”

    “In particular, our dynamical space-time simulations solve the equations of Einstein’s theory of general relativity coupled to perfect fluids,” said co-author Vasileios Paschalidis, with the University of Arizona’s Theoretical Astrophysics Program. “Neutron star matter can be described as a perfect fluid, therefore the simulations contain the necessary physics to understand how neutron stars oscillate due to tidal interactions after every pericenter passage, and how the orbit changes due to the excited neutron star oscillations. Such simulations are computationally very expensive and can be performed only in high-performance computing centers.”

    “XSEDE resources significantly accelerated our scientific output,” noted Paschalidis, whose group has been using XSEDE for well over a decade, when they were students or post-doctoral researchers. “If I were to put a number on it, I would say that using XSEDE accelerated our research by a factor of three or more, compared to using local resources alone.”

    Neutron Star Mergers Form the Cauldron that Brews Gravitational Waves

    Merging neutron stars. Image Credit: Mark Garlick, University of Warwick.

    The merger of two neutron stars produces a hot (up to one trillion degrees Kelvin), rapidly rotating massive neutron star. This remnant is expected to collapse to form a black hole within a timescale that could be as short as one millisecond, or as long as many hours, depending on the sum of the masses of the two neutron stars.

    Featured in a recent issue of the Monthly Notices of the Royal Astronomical Society, Princeton University Computational and Theoretical Astrophysicist David Radice and his colleagues presented results from their simulations of the formation of neutron star merger remnants surviving for at least one tenth of a second. Radice turned to XSEDE for access to Comet, Stampede2, and Bridges, which is based at the Pittsburgh Supercomputing Center (PSC).

    Pittsburgh Supercomputing Center

    Bridges supercomputer at PSC

    It has been long thought that this type of merger product would be driven toward solid-body rotation by turbulent angular momentum transport, which acts as an effective viscosity. However, Radice and his collaborators discovered that the evolution of these objects is actually more complex.

    The massive neutron star shown in this three-dimensional rendition of a Comet-enabled simulation shows the emergence of a wind driven by neutrino radiation. The star is surrounded by debris expelled during and shortly after the merger. Credit: David Radice, Princeton University

    “We found that long-lived neutron star merger remnants are born with so much angular momentum that they are unable to reach solid body rotation,” said Radice. “Instead, they are viscously unstable. We expect that this instability will result in the launching of massive neutron rich winds. These winds, in turn, will be extremely bright in the UV/optical/infrared bands. The observation of such transients, in combination with gravitational-wave events or short gamma-ray bursts, would be ‘smoking gun’ evidence for the formation of long-lived neutron star merger remnants.”

    If detected, the bright transients predicted in this study could allow astronomers to measure the threshold mass below which neutron star mergers do not result in rapid black hole formation. This insight would be key in the quest to understand the properties of matter at extreme densities found in the hearts of neutron stars.

    Radice’s research used 35 high-resolution, general-relativistic neutron star merger simulations, which calculated the geometry of space-time as predicted by Einstein’s equations and simulated the neutron star matter using sophisticated microphysical models. On average, one of these simulations required about 300,000 CPU-hours.
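
    To put that compute budget in wall-clock terms (a rough estimate; the per-run core count is our assumption, not a figure from the article):

    ```python
    # Rough wall-clock estimate for one merger simulation, assuming it runs on
    # 1,000 cores (an assumption; the article quotes only CPU-hours).
    cpu_hours_per_run = 300_000
    cores = 1_000
    runs = 35

    days_per_run = cpu_hours_per_run / cores / 24
    total_cpu_hours = runs * cpu_hours_per_run
    print(f"~{days_per_run:.1f} days per run on {cores} cores; "
          f"~{total_cpu_hours:.1e} CPU-hours across all {runs} simulations")
    ```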

    “My research would not be possible without XSEDE,” said Radice, who has used XSEDE resources since 2013, and for this study collaborated with Lars Koesterke at TACC to run his code efficiently on Stampede2. Specifically, this work was conducted in the context of an XSEDE Extended Collaborative Support Services (ECSS) project, which will be of benefit to future research.

    “The cost can be up to a factor of three times higher for the selected models that were run at even higher resolution and depending on the detail level in the microphysics,” added Radice. “Because of the unique requirements of this study, which included a large number of intermediate-size simulations and few larger calculations, a key enabler was the availability of a combination of capability and capacity supercomputers including Comet and Bridges.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 3:58 pm on February 19, 2019 Permalink | Reply
    Tags: A simplified version of that interface will make some of that data accessible to the public, , , , , Every 40 seconds LSST’s camera will snap a new image of the sky, Hundreds of computer cores at NCSA will be dedicated to this task, International data highways, LSST Data Journey, , National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign, NCSA will be the central node of LSST’s data network, Supercomputing, , The two data centers NCSA and IN2P3 will provide petascale computing power corresponding to several million billion computing operations per second, They are also developing machine learning algorithms to help classify the different objects LSST finds in the sky   

    From Symmetry: “An astronomical data challenge” 

    Symmetry Mag
    From Symmetry

    Illustration by Sandbox Studio, Chicago with Ana Kova

    02/19/19
    Manuel Gnida

    The Large Synoptic Survey Telescope will manage unprecedented volumes of data produced each night.

    LSST


    LSST Camera, built at SLAC



    LSST telescope, currently under construction on the El Peñón peak of Cerro Pachón, a 2,682-meter-high mountain in the Coquimbo Region of northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    The Large Synoptic Survey Telescope—scheduled to come online in the early 2020s—will use a 3.2-gigapixel camera to photograph a giant swath of the heavens. It’ll keep it up for 10 years, every night with a clear sky, creating the world’s largest astronomical stop-motion movie.

    The results will give scientists both an unprecedented big-picture look at the motions of billions of celestial objects over time, and an ongoing stream of millions of real-time updates each night about changes in the sky.

    Illustration by Sandbox Studio, Chicago with Ana Kova

    Accomplishing both of these tasks will require dealing with a lot of data, more than 20 terabytes each day for a decade. Collecting and storing the enormous volume of raw data, turning it into processed data that scientists can use, distributing it among institutions all over the globe, and doing all of this reliably and fast requires elaborate data management and technology.

    International data highways

    This type of data stream can be handled only with high-performance computing, the kind available at the National Center for Supercomputing Applications at the University of Illinois, Urbana-Champaign.

    Blue Waters, the Cray XE/XK hybrid supercomputer at NCSA, University of Illinois, Urbana-Champaign

    Unfortunately, the U of I is a long way from Cerro Pachón, the remote Chilean mountaintop where the telescope will actually sit.

    But a network of dedicated data highways will make it feel like the two are right next door.

    LSST Data Journey. Illustration by Sandbox Studio, Chicago with Ana Kova

    Every 40 seconds, LSST’s camera will snap a new image of the sky. The camera’s data acquisition system will read out the data, and, after some initial corrections, send them hurtling down the mountain through newly installed high-speed optical fibers. These fibers have a bandwidth of up to 400 gigabits per second, thousands of times larger than the bandwidth of your typical home internet.
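
    A back-of-the-envelope look at what those figures imply for a single exposure (the two-bytes-per-pixel raw depth is our assumption, not a number from the article):

    ```python
    # Rough per-exposure numbers for LSST, from the figures quoted above.
    # Assumption (ours): about 2 bytes of raw data per pixel.
    pixels = 3.2e9                     # 3.2-gigapixel camera
    bytes_per_pixel = 2
    image_bytes = pixels * bytes_per_pixel      # ~6.4 GB per exposure

    link_bytes_per_s = 400e9 / 8                # 400 Gb/s fiber, in bytes/s
    transfer_seconds = image_bytes / link_bytes_per_s
    print(f"~{image_bytes/1e9:.1f} GB per image, ~{transfer_seconds:.2f} s on the fiber at full rate")
    ```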

    Within a second, the data will arrive at the LSST base site in La Serena, Chile, which will store a copy before sending them to Chile’s capital, Santiago.

    From there, the data will take one of two routes across the ocean.

    The main route will lead them to São Paulo, Brazil, then fire them through cables across the ocean floor to Florida, which will pass them on to Chicago, where they will finally be rerouted to the NCSA facility at the University of Illinois.

    If the primary path is interrupted, the data will take an alternative route through the Republic of Panama instead of Brazil. Either way, the entire trip—covering a distance of about 5000 miles—will take no more than 5 seconds.

    Curating LSST data for the world

    NCSA will be the central node of LSST’s data network. It will archive a second copy of the raw data and maintain key connections to two US-based facilities, the LSST headquarters in Tucson, which will manage science operations, and SLAC National Accelerator Laboratory in Menlo Park, California, which will provide support for the camera. But NCSA will also serve as the main data processing center, getting raw data ready for astrophysics research.

    NCSA will prepare the data at two speeds: quickly, for use in nightly alerts about changes to the sky, and at a more leisurely pace, for release as part of the annual catalogs of LSST data.

    Illustration by Sandbox Studio, Chicago with Ana Kova

    Alert production has to be quick, to give scientists at LSST and other instruments time to respond to transient events, such as a sudden flare from an active galaxy or dying star, or the discovery of a new asteroid streaking across the firmament. LSST will send out about 10 million of these alerts per night, each within a minute after the event.

    Hundreds of computer cores at NCSA will be dedicated to this task. With the help of event brokers—software that facilitates the interaction with the alert stream—everyone in the world will be able to subscribe to all or a subset of these alerts.
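    In practice, subscribing through an event broker amounts to filtering a firehose of alert records down to the ones you care about. A minimal sketch of the idea follows; the field names and thresholds are hypothetical, not the real LSST alert schema or any actual broker's API:

        # Toy "event broker" filter over a stream of alerts (illustrative only).
        def subscribe(alert_stream, max_magnitude=20.0, ra_range=(150.0, 160.0)):
            """Yield only the alerts a subscriber cares about."""
            for alert in alert_stream:
                bright_enough = alert["magnitude"] <= max_magnitude   # smaller = brighter
                in_region = ra_range[0] <= alert["ra"] <= ra_range[1]
                if bright_enough and in_region:
                    yield alert

        # Example usage with made-up alerts:
        alerts = [
            {"id": 1, "ra": 152.3, "dec": -5.1, "magnitude": 18.7},
            {"id": 2, "ra": 201.0, "dec": 12.4, "magnitude": 17.2},
        ]
        for alert in subscribe(alerts):
            print(alert["id"])   # prints 1; alert 2 falls outside the chosen sky region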

    NCSA will share the task of processing data for the annual data releases with IN2P3, the French National Institute of Nuclear and Particle Physics, which will also archive a copy of the raw data.


    The two data centers will provide petascale computing power, corresponding to several million billion computing operations per second.

    Illustration by Sandbox Studio, Chicago with Ana Kova

    The releases will be curated catalogs of billions of objects containing calibrated images and measurements of object properties, such as positions, shapes and the power of their light emissions. To pull these details from the data, LSST’s data experts are creating advanced software for image processing and analysis. They are also developing machine learning algorithms to help classify the different objects LSST finds in the sky.
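    As a rough illustration of the classification step (not LSST's actual pipeline), a standard machine-learning library can be trained on measured object properties and then asked to label new detections; the features and labels below are made up:

        # Illustrative sketch only: classifying catalog objects from measured properties.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.random((1000, 3))            # fake features per object: [flux, ellipticity, concentration]
        y = (X[:, 1] > 0.5).astype(int)      # fake labels: 0 = "star", 1 = "galaxy"

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        print(clf.predict([[0.8, 0.7, 0.3]]))   # -> [1], i.e. "galaxy" under this toy rule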

    Annual data releases will be made available to scientists in the US and Chile, as well as to institutions supporting LSST operations.

    Last but not least, LSST’s data management team is working on an interface that will make it easy for scientists to use the data LSST collects. What’s even better: A simplified version of that interface will make some of that data accessible to the public.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 4:35 pm on February 15, 2019 Permalink | Reply
    Tags: EuroHPC JU will be the owner of the precursors to exascale supercomputers it will acquire, EuroHPC JU-European High Performance Computing Joint Undertaking, EuroHPC Takes First Steps Towards Exascale, , Supercomputing   

    From insideHPC: “EuroHPC Takes First Steps Towards Exascale” 

    From insideHPC


    The European High Performance Computing Joint Undertaking (EuroHPC JU) has launched its first calls for expressions of interest, to select the sites that will host the Joint Undertaking’s first supercomputers (petascale and precursor to exascale machines) in 2020.

    “Deciding where Europe will host its most powerful petascale and precursor to exascale machines is only the first step in this great European initiative on high performance computing,” said Mariya Gabriel, Commissioner for Digital Economy and Society. “Regardless of where users are located in Europe, these supercomputers will be used in more than 800 scientific and industrial application fields for the benefit of European citizens.”

    Supercomputing, also known as high performance computing (HPC), involves thousands of processors working in parallel to analyse billions of pieces of data in real time, performing calculations much faster than a normal computer, and enabling scientific and industrial challenges of great scale and complexity to be met. The EuroHPC JU has the target of equipping the EU by the end of 2020 with a world-class supercomputing infrastructure that will be available to users from academia, industry and small and medium-sized enterprises, and the public sector. These new European supercomputers will also support the development of leading scientific, public sector and industrial applications in many domains, including personalised medicine, bio-engineering, weather forecasting and tackling climate change, discovering new materials and medicines, oil and gas exploration, designing new planes and cars, and smart cities.

    The EuroHPC JU was established in 2018, with the participation of 25 European countries and the European Commission, and has its headquarters in Luxembourg. By 2020, its objective is to acquire and deploy in the EU at least two supercomputers that will rank among the top five in the world, and at least two others that today would be in the top 25 machines globally. These supercomputers will be hosted and operated by hosting entities (existing national supercomputing centres) located in different Member States participating in the EuroHPC JU.

    To this purpose, the EuroHPC JU has now opened two calls for expressions of interest:

    Call for hosting entities for petascale supercomputers (with a performance level capable of executing at least 10^15 operations per second, or 1 Petaflop)
    Call for hosting entities for precursor to exascale supercomputers (with a performance level capable of executing more than 150 Petaflops).

    In addition to these plans, the EuroHPC JU aims to acquire by 2022/23 exascale supercomputers, capable of 10^18 operations per second, with at least one being based on European HPC technology.

    In the acquisition of the petascale supercomputers, the EuroHPC JU’s financial contribution, from the EU’s budget, will be up to EUR 30 million, covering up to 35% of the acquisition costs. All the remaining costs of the supercomputers will be covered by the country where the hosting entity is established.

    For the precursor to exascale supercomputers, the EuroHPC JU’s financial contribution, from the EU’s budget, will be up to EUR 250 million and will enable the JU to fund up to 50% of the acquisition costs, and up to 50% of the operating costs of the supercomputers. The hosting entities and their supporting countries will cover the remaining acquisition and operating costs. The EuroHPC JU will be the owner of the precursors to exascale supercomputers it will acquire.
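    A small sketch of how those caps play out. Only the caps and percentages come from the announcement; the total machine costs used below are illustrative assumptions:

        def host_share(total_cost, eu_cap, eu_fraction):
            """The host country pays whatever the EuroHPC JU contribution does not cover."""
            eu_contribution = min(eu_cap, eu_fraction * total_cost)
            return total_cost - eu_contribution

        # Petascale: JU pays up to EUR 30M, capped at 35% of the acquisition cost.
        print(host_share(total_cost=90e6, eu_cap=30e6, eu_fraction=0.35))    # 60,000,000 -- JU hits its EUR 30M cap
        # Precursor to exascale: JU pays up to EUR 250M, capped at 50% of costs.
        print(host_share(total_cost=300e6, eu_cap=250e6, eu_fraction=0.50))  # 150,000,000 -- a 50/50 split, below the cap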

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 4:15 pm on February 12, 2019 Permalink | Reply
    Tags: , , Chemist Francesco Evangelista-a winner of the Dirac Medal, , Emory University, , Supercomputing   

    From Emory University: “A new spin on computing: Chemist leads $3.9 million DOE quest for quantum software” 

    From Emory University

    February 5, 2019
    Carol Clark

    “Quantum computers are not just exponentially faster, they work in a radically different way from classical computers,” says chemist Francesco Evangelista, who is leading a project to develop quantum software. He is a winner of the Dirac Medal.

    When most people think of a chemistry lab, they picture scientists in white coats mixing chemicals in beakers. But the lab of theoretical chemist Francesco Evangelista looks more like the office of a tech start-up. Graduate students in jeans and t-shirts sit around a large, round table chatting as they work on laptops.

    “A ‘classical’ chemist is focused on getting a chemical reaction and creating new molecules,” explains Evangelista, assistant professor at Emory University. “As theoretical chemists, we want to understand how chemistry really works — how all the atoms involved interact with one another during a reaction.”

    Working at the intersection of math, physics, chemistry and computer science, the theorists develop algorithms to serve as simulation models for the molecular behaviors of atomic nuclei and electrons. They also develop software that enables them to feed these algorithms into “super” computers — nearly a million times faster than a laptop — to study chemical processes.

    The problem is, even super computers are taxed by the mind-boggling combinatorial complexity underlying reactions. That limits the pace of the research.

    “Computers have hit a barrier in terms of speed,” Evangelista says. “One way to make them more powerful is to make transistors smaller, but you can’t make them smaller than the width of a couple of atoms — the limit imposed by quantum mechanics. That’s why there is a race right now to make breakthroughs in quantum computing.”

    Evangelista and his graduate students have now joined that race.

    The Department of Energy (DOE) awarded Evangelista $3.9 million to lead research into the development of software to run the first generation of quantum computers. He is the principal investigator for the project, which encompasses scientists at seven universities and aims to develop new methods and algorithms for calculating problems in quantum chemistry. The tools the team develops will be open access, made available to other researchers for free.

    Watch a video about Francesco Evangelista’s work,
    produced by the Camille & Henry Dreyfus Foundation:

    While big-data leaders — such as IBM, Google, Intel and Rigetti — have developed prototypes of quantum computers, the field remains in its infancy. Many technological challenges remain before quantum computers can fulfill their promise of speeding up calculations to crack major mysteries of the natural world.

    The federal government will play a strong supporting role in achieving this goal. President Trump recently signed a $1.2 billion law, the National Quantum Initiative Act, to fund advances in quantum technologies over the next five years.

    “Right now, it’s a bit of a wild west, but eventually people working on this giant endeavor are going to work out some of the current technological problems,” Evangelista says. “When that happens, we need to have quantum software ready and a community trained to use it for theoretical chemistry. Our project is working on programming codes that will someday get quantum computers to do the calculations we want them to do.”

    The project will pave the way for quantum computers to simulate chemical systems critical to the mission of the DOE, such as transition metal catalysts, high-temperature superconductors and novel materials that are beyond the realm of simulation on “classical” computers. The insights gained could speed up research into how to improve everything from solar power to nuclear energy.

    Unlike objects in the “classical” world that we can touch, see and experience around us, nature behaves much differently in the ultra-small quantum world of atoms and subatomic particles.

    “One of the weird things about quantum mechanics is that you can’t say whether an electron is actually only here or there,” Evangelista says.

    He takes a coin from his pocket. “In the classical world, we know that an object like this quarter is either in my pocket or in your pocket,” Evangelista says. “But if this was an electron, it could be in both our pockets. I cannot tell you exactly where it is, but I can use a wave function to describe the likelihood of whether it is here or there.”
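    In the language of quantum mechanics, that coin-in-two-pockets picture is a superposition: the wave function assigns an amplitude to each possibility, and the squared magnitude of an amplitude gives the probability of finding the object there. A tiny numerical sketch (the 70/30 split is arbitrary):

        import numpy as np

        amplitudes = np.array([np.sqrt(0.7), np.sqrt(0.3)], dtype=complex)  # the wave function
        probabilities = np.abs(amplitudes) ** 2    # Born rule: probability = |amplitude|^2
        print(probabilities)         # [0.7 0.3] -- the odds of finding the "coin" here or there
        print(probabilities.sum())   # 1.0 -- a valid wave function's probabilities sum to one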

    To make things even more complicated, the behavior of electrons can be correlated, or entangled. When objects in our day-to-day lives, like strands of hair, become entangled, they can be teased apart and separated again. That rule doesn’t apply at the quantum scale, where entangled objects remain intimately connected even when they are far apart in space.

    “Three electrons moving in three separate orbitals can actually be interacting with one another,” Evangelista says. “Somehow they are talking together and their motion is correlated like ballerinas dancing and moving in a concerted way.”
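    The textbook example of that correlation is a maximally entangled pair, where only the joint outcomes have definite probabilities. A small numerical sketch of such a two-qubit (Bell) state:

        import numpy as np

        # Amplitudes for the four outcomes |00>, |01>, |10>, |11> of two qubits.
        bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
        probabilities = np.abs(bell) ** 2
        print(probabilities)   # [0.5 0.  0.  0.5] -- the pair is always found agreeing,
                               # though neither qubit has a definite value on its own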

    Graduate students in Evangelista’s lab are developing algorithms to simulate quantum software so they can run tests and adapt the design based on the results.

    Much of Evangelista’s work involves trying to predict the collective behavior of strongly correlated electrons. In order to understand how a drug interacts with a protein, for example, he needs to consider how it affects the hundreds of thousands of atoms in that protein, along with the millions of electrons within those atoms.

    “The problem quickly explodes in complexity,” Evangelista says. “Computationally, it’s difficult to account for all the possible combinations of ways the electrons could be interacting. The computer soon runs out of memory.”

    A classical computer stores memory in a line of “bits,” which are represented by either a “0” or a “1.” It operates on chunks of 64 bits of memory at a time, and each bit is either distinctly a 0 or a 1. If you add another bit to the line, you get just one more bit of memory.

    A quantum computer stores memory in quantum bits, or qubits. A single qubit can be either a 0 or a 1 — or mostly a 0 and part of a 1 — or any other combination of the two. When you add a qubit to a quantum computer, it increases the memory by a factor of two. The fastest quantum computers now available contain around 70 qubits.
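    That doubling is exactly why classical machines run out of memory when simulating quantum systems: representing n qubits exactly requires 2^n complex amplitudes. A rough sketch of the memory cost (assuming 16 bytes per amplitude):

        def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
            """Memory needed to store a full quantum state vector classically."""
            return (2 ** n_qubits) * bytes_per_amplitude

        for n in (30, 50, 70):
            print(n, "qubits:", state_vector_bytes(n) / 1e9, "GB")
        # 30 qubits -> ~17 GB (a laptop already struggles)
        # 50 qubits -> ~18 million GB (beyond any single machine)
        # 70 qubits -> ~19 trillion GB (hopeless for classical memory)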

    “Quantum computers are not just exponentially faster, they work in a radically different way from classical computers,” Evangelista says.

    For instance, a classical computer can determine all the consequences of a chess move by working one at a time through the chain of possible next moves. A quantum computer, however, could potentially determine all these possible moves in one go, without having to work through each step.

    While quantum computers are powerful, they are also somewhat delicate.

    “They’re extremely sensitive,” Evangelista says. “They have to be kept at low temperatures to maintain their coherence. In a typical setup, you also need a second computer kept at very low temperatures to drive the quantum computer, otherwise the heat from the wires coming out will destroy entanglement.”

    The potential error rate is one of the challenges of the DOE project to develop quantum software. The researchers need to determine the range of errors that can still yield a practical solution to a calculation. They will also develop standard benchmarks for testing the accuracy and computing power of new quantum hardware and they will validate prototypes of quantum computers in collaborations with industry partners Google and Rigetti.

    Just as they develop algorithms to simulate chemical processes, Evangelista and his graduate students are now developing algorithms to simulate quantum software so they can run tests and adapt the design based on the results.

    Evangelista pulled together researchers from other universities with a range of expertise for the project, including some who are new to quantum computing and others who are already experts in the field. The team includes scientists from Rice University, Northwestern, the University of Michigan, Caltech, the University of Toronto and Dartmouth.

    The long-range goal is to spur the development of more efficient energy sources, including solar power, by providing detailed data on phenomena such as the ways electrons in a molecule are affected when that molecule absorbs light.

    “Ultimately, such theoretical insights could provide a rational path to efforts like making solar cells more efficient, saving the time and money needed to conduct trial-and-error experiments in a lab,” Evangelista says.

    Evangelista also has ongoing collaborations with Emory chemistry professor Tim Lian, studying ways to harvest and convert solar energy into chemical fuels. In 2017, Evangelista won the Dirac Medal, one of the world’s most prestigious awards for theoretical and computational chemists under 40.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Emory University is a private research university in metropolitan Atlanta, located in the Druid Hills section of DeKalb County, Georgia, United States. The university was founded as Emory College in 1836 in Oxford, Georgia by the Methodist Episcopal Church and was named in honor of Methodist bishop John Emory. In 1915, the college relocated to metropolitan Atlanta and was rechartered as Emory University. The university is the second-oldest private institution of higher education in Georgia and among the fifty oldest private universities in the United States.

    Emory University has nine academic divisions: Emory College of Arts and Sciences, Oxford College, Goizueta Business School, Laney Graduate School, School of Law, School of Medicine, Nell Hodgson Woodruff School of Nursing, Rollins School of Public Health, and the Candler School of Theology. Emory University, the Georgia Institute of Technology, and Peking University in Beijing, China jointly administer the Wallace H. Coulter Department of Biomedical Engineering. The university operates the Confucius Institute in Atlanta in partnership with Nanjing University. Emory has a growing faculty research partnership with the Korea Advanced Institute of Science and Technology (KAIST). Emory University students come from all 50 states, 6 territories of the United States, and over 100 foreign countries.

     
  • richardmitnick 1:48 pm on February 12, 2019 Permalink | Reply
    Tags: , , Supercomputing   

    From insideHPC: “Moving Mountains of Data at NERSC” 

    From insideHPC

    NERSC’s Wayne Hurlbert (left) and Damian Hazen (right) are overseeing the transfer of 43 years’ worth of NERSC data to new tape libraries at Shyh Wang Hall. Image: Peter DaSilva

    NERSC

    The NERSC Cray Cori II supercomputer at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science


    LBL NERSC Cray XC30 Edison supercomputer


    The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

    NERSC PDSF


    PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

    Researchers at NERSC face the daunting task of moving 43 years’ worth of archival data across the network to new tape libraries, a whopping 120 petabytes!

    When NERSC relocated from its former home in Oakland to LBNL in 2015, not everything came with it. The last remaining task in the heroic effort of moving the supercomputing facility and all of its resources is 43 years of archival data stored on thousands of high-performance tapes in Oakland. Those 120 petabytes of experimental and simulation data have to be electronically transferred to new tape libraries at Berkeley Lab, a process that will take up to two years—even with an ESnet 400 Gigabit “superchannel” now in place between the two sites.

    Tape for archival storage makes sense for NERSC and is a key component of NERSC’s storage hierarchy. Tape provides long-term, high capacity stable storage and only consumes power when reading or writing, making it a cost-effective and environmentally friendly storage solution. And the storage capacity of a tape cartridge exceeds that of more commonly known hard disk drives.

    “Increasing hard disk drive capacity has become more and more difficult as manufacturers need to pack more data into a smaller and smaller area, whereas today’s tape drives are leveraging technology developed for disks a decade ago,” said Damian Hazen, NERSC’s storage systems group lead. Hazen points out that tape storage technology has receded from the public eye in part because of the large-capacity online data storage services provided by the likes of Google, Microsoft, and Amazon, and that disk does work well for those with moderate storage needs. But, unbeknownst to most users, these large storage providers also include tape in their storage strategy.

    “Moderate” does not describe NERSC’s storage requirements. With data from simulations run on NERSC’s Cori supercomputer, and experimental and observational data coming from facilities all over the U.S. and abroad, NERSC users send approximately 1.5 petabytes of data each month to the archive.

    “Demands on the NERSC archive grow every year,” Hazen said. “With the delivery of NERSC’s next supercomputer Perlmutter in 2020, and tighter integration of computational facilities like NERSC with local and remote experimental facilities like ALS and LCLS-II, this trend will continue.”

    [Perlmutter supercomputer honors Nobel laureate Saul Perlmutter for providing evidence that the expansion of the universe is accelerating.]


    LBNL/ALS

    SLAC LCLS-II

    Environmental Challenges

    To keep up with the data challenges, the NERSC storage group continuously refreshes the technology used in the archive. But in relocating the archive to the environmentally efficient Shyh Wang Hall, there was an additional challenge. The environmental characteristics of the new energy-efficient building meant that NERSC needed to deal with more substantial changes in temperature and humidity in the computer room. This wasn’t good news for tape, which requires a tightly controlled operating environment, and meant that the libraries in Oakland could not just be picked up and moved to Berkeley Lab. New technology emerged at just the right time in the form of an environmentally isolated tape library, which uses built-in environmental controls to maintain an ideal internal environment for the tapes. NERSC deployed two full-sized, environmentally self-contained libraries last fall. Manufactured by IBM, the new NERSC libraries are currently the largest of this technology in the world.

    NERSC’s new environmentally self-contained tape libraries use a specialized robot to retrieve archival data tapes. Image: Peter DaSilva

    “The new libraries solved two problems: the environmental problem, which allowed us to put the tape library right on the computer room floor, and increasing capacity to keep up with growth as we move forward,” said Wayne Hurlbert, a staff engineer in NERSC’s storage systems group. Tape cartridge capacity doubles roughly every two to three years, with 20 terabyte cartridges available as of December 2018.
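    Two quick back-of-the-envelope figures implied by those numbers (a sketch only, assuming the whole archive were written to current 20-terabyte cartridges at roughly one petabyte per square foot of library space):

        archive_pb = 120        # petabytes in the NERSC archive
        cartridge_tb = 20       # capacity of a current-generation cartridge
        print(archive_pb * 1000 / cartridge_tb, "cartridges")       # 6000.0
        print(archive_pb, "square feet at ~1 PB per square foot")   # 120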

    The newer tape cartridges have three times the capacity of the old, and the archive libraries can store a petabyte per square foot. With the new system in place, data from the tape drives in Oakland is now streaming over to the tape archive libraries in the Shyh Wang Hall computer room via the 400 Gigabit link that ESnet built three years ago to connect the two data centers together. It was successfully used to transfer file systems between the two sites without any disruption to users. As with the file system move, the archive data transfer will be largely transparent to users.

    Even with all of this in place, it will still take about two years to move 43 years’ worth of NERSC data. Several factors contribute to this lengthy copy operation, including the extreme amount of data to be moved and the need to balance user access to the archive.
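    A rough calculation shows why the network link itself is not what sets the two-year pace (a sketch using only the figures quoted above):

        archive_bits = 120e15 * 8      # 120 petabytes expressed in bits
        link_bps = 400e9               # the 400 gigabit-per-second ESnet link

        seconds_at_line_rate = archive_bits / link_bps
        print(seconds_at_line_rate / 86400, "days")   # ~28 days if the link ran flat out
        # The ~2-year schedule instead reflects tape-drive throughput and the need to keep
        # the archive available to users throughout the move.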

    “We’re very cautious about this,” Hurlbert said. “We need to preserve this data; it’s not infrequent for researchers to need to go back to their data sets, often to use modern techniques to reanalyze. The archive allows us to safeguard irreplaceable data generated over decades; the data represents millions of dollars of investment in computational hardware and immense time, effort, and scientific results from researchers around the world.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 12:38 pm on February 1, 2019 Permalink | Reply
    Tags: A project in eastern Tennessee quietly exceeded the scale of any corporate AI lab. It was run by the US government., , , Nvidia powerful graphics processors, ORNL SUMMIT supercomputer unveiled-world's most powerful in 2018, Summit has a hybrid architecture and each node contains multiple IBM POWER9 CPUs and NVIDIA Volta GPUs all connected together with NVIDIA’s high-speed NVLink, Supercomputing, TensorFlow machine-learning software, The World’s Fastest Supercomputer Breaks an AI Record,   

    From Oak Ridge National Laboratory via WIRED: “The World’s Fastest Supercomputer Breaks an AI Record” 


    From Oak Ridge National Laboratory

    via


    WIRED

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    Oak Ridge National Lab’s Summit supercomputer became the world’s most powerful in 2018, reclaiming that title from China for the first time in five years.
    Carlos Jones/Oak Ridge National Lab

    Along America’s west coast, the world’s most valuable companies are racing to make artificial intelligence smarter. Google and Facebook have boasted of experiments using billions of photos and thousands of high-powered processors. But late last year, a project in eastern Tennessee quietly exceeded the scale of any corporate AI lab. It was run by the US government.

    The record-setting project involved the world’s most powerful supercomputer, Summit, at Oak Ridge National Lab. The machine captured that crown in June last year, reclaiming the title for the US after five years of China topping the list. As part of a climate research project, the giant computer booted up a machine-learning experiment that ran faster than any before.

    Summit, which occupies an area equivalent to two tennis courts, used more than 27,000 powerful graphics processors in the project. It tapped their power to train deep-learning algorithms, the technology driving AI’s frontier, chewing through the exercise at a rate of a billion billion operations per second, a pace known in supercomputing circles as an exaflop.
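    Dividing those two figures gives a feel for the per-processor workload (rough arithmetic only, not an official benchmark breakdown):

        ops_per_second = 1e18      # an exaflop: a billion billion operations per second
        gpus = 27_000              # "more than 27,000" graphics processors

        print(ops_per_second / gpus / 1e12, "trillion operations per second per GPU")   # ~37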

    “Deep learning has never been scaled to such levels of performance before,” says Prabhat, who leads a research group at the National Energy Research Scientific Computing Center at Lawrence Berkeley National Lab. (He goes by one name.) His group collaborated with researchers at Summit’s home base, Oak Ridge National Lab.

    Fittingly, the world’s most powerful computer’s AI workout was focused on one of the world’s largest problems: climate change. Tech companies train algorithms to recognize faces or road signs; the government scientists trained theirs to detect weather patterns like cyclones in the copious output from climate simulations that spool out a century’s worth of three-hour forecasts for Earth’s atmosphere. (It’s unclear how much power the project used or how much carbon that spewed into the air.)

    The Summit experiment has implications for the future of both AI and climate science. The project demonstrates the scientific potential of adapting deep learning to supercomputers, which traditionally simulate physical and chemical processes such as nuclear explosions, black holes, or new materials. It also shows that machine learning can benefit from more computing power—if you can find it—boding well for future breakthroughs.

    “We didn’t know until we did it that it could be done at this scale,” says Rajat Monga, an engineering director at Google. He and other Googlers helped the project by adapting the company’s open-source TensorFlow machine-learning software to Summit’s giant scale.

    Most work on scaling up deep learning has taken place inside the data centers of internet companies, where servers work together on problems by splitting them up, because they are connected relatively loosely, not bound into one giant computer. Supercomputers like Summit have a different architecture, with specialized high-speed connections linking their thousands of processors into a single system that can work as a whole. Until recently, there has been relatively little work on adapting machine learning to work on that kind of hardware.
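    The basic recipe, whether in a data center or on a supercomputer, is synchronous data parallelism: every processor holds a copy of the model, works on its own slice of each batch, and the gradients are averaged before anyone takes the next step. A minimal single-node sketch in TensorFlow follows; it is an illustration of the idea, not the Summit team's actual code, and scaling it to thousands of tightly linked nodes is exactly the hard part the article describes:

        import tensorflow as tf

        # Synchronous data parallelism in miniature: each local GPU gets a slice of every batch
        # and gradients are all-reduced so the replicas stay in lockstep.
        strategy = tf.distribute.MirroredStrategy()

        with strategy.scope():
            model = tf.keras.Sequential([
                tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 4)),
                tf.keras.layers.GlobalAveragePooling2D(),
                tf.keras.layers.Dense(1, activation="sigmoid"),   # e.g. "storm / no storm" per patch
            ])
            model.compile(optimizer="adam", loss="binary_crossentropy")

        # x, y = load_climate_patches()   # hypothetical placeholder, not a real loader
        # model.fit(x, y, batch_size=1024, epochs=5)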

    Monga says working to adapt TensorFlow to Summit’s scale will also inform Google’s efforts to expand its internal AI systems. Engineers from Nvidia also helped out on the project, by making sure the machine’s tens of thousands of Nvidia graphics processors worked together smoothly.

    Finding ways to put more computing power behind deep-learning algorithms has played a major part in the technology’s recent ascent. The technology that Siri uses to recognize your voice and Waymo vehicles use to read road signs burst into usefulness in 2012 after researchers adapted it to run on Nvidia graphics processors.

    In an analysis published last May, researchers from OpenAI, a San Francisco research institute cofounded by Elon Musk, calculated that the amount of computing power in the largest publicly disclosed machine-learning experiments has doubled roughly every 3.43 months since 2012; that would mean an 11-fold increase each year. That progression has helped bots from Google parent Alphabet defeat champions at tough board games and videogames, and fueled a big jump in the accuracy of Google’s translation service.
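    The arithmetic behind that conversion, as a one-line check:

        print(2 ** (12 / 3.43))   # ~11.3 -- a doubling every 3.43 months compounds to roughly 11x per year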

    Google and other companies are now creating new kinds of chips customized for AI to continue that trend. Google has said that “pods” tightly integrating 1,000 of its AI chips—dubbed tensor processing units, or TPUs—can provide 100 petaflops of computing power, one-tenth the rate Summit achieved on its AI experiment.

    The Summit project’s contribution to climate science is to show how giant-scale AI could improve our understanding of future weather patterns. When researchers generate century-long climate predictions, reading the resulting forecast is a challenge. “Imagine you have a YouTube movie that runs for 100 years. There’s no way to find all the cats and dogs in it by hand,” says Prabhat of Lawrence Berkeley. The software typically used to automate the process is imperfect, he says. Summit’s results showed that machine learning can do it better, which should help predict storm impacts such as flooding or physical damage. The Summit results won Oak Ridge, Lawrence Berkeley, and Nvidia researchers the Gordon Bell Prize for boundary-pushing work in supercomputing.

    Running deep learning on supercomputers is a new idea that’s come along at a good moment for climate researchers, says Michael Pritchard, a professor at the University of California, Irvine. The slowing pace of improvements to conventional processors had led engineers to stuff supercomputers with growing numbers of graphics chips, where performance has grown more reliably. “There came a point where you couldn’t keep growing computing power in the normal way,” Pritchard says.

    That shift posed some challenges to conventional simulations, which had to be adapted. It also opened the door to embracing the power of deep learning, which is a natural fit for graphics chips. That could give us a clearer view of our climate’s future. Pritchard’s group showed last year that deep learning can generate more realistic simulations of clouds inside climate forecasts, which could improve forecasts of changing rainfall patterns.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     