Tagged: HPC Wire

  • richardmitnick 11:47 am on August 17, 2018 Permalink | Reply
    Tags: HPC Wire, NSF STAQ project

    From HPC Wire: “STAQ(ing) the Quantum Computing Deck” 

    From HPC Wire

    August 16, 2018
    John Russell


    Quantum computers – at least for now – remain noisy. That’s another way of saying they are unreliable, in diverse ways that often depend on the specific quantum technology used. One idea is to mitigate the noisiness, and perhaps capture some of the underlying quantum physics, by mapping quantum algorithms more directly to the underlying hardware; this might make near-term quantum computers practical for some problems. This approach, at least in part, is central to the Software Tailored Architecture for Quantum Design (STAQ) project, announced by NSF last week and led by co-PIs Kenneth Brown and Jungsang Kim of Duke University.

    Every project needs a goal, and the big callout here is building a 64-qubit (or more) ion trap-based quantum computer capable of tackling problems that classical computers currently stumble on. But that doesn’t capture the scope of the project, which makes a point of leveraging multidisciplinary expertise to put co-design to work in the quantum domain, exploring specific algorithms for condensed matter physics and quantum chemistry as well as more general quantum algorithm optimization. There’s also a requirement to run a summer school to share the lessons learned.


    “I always joke that if I knew what the silicon transistor of quantum computing was, I would just do it. But I don’t,” Brown told HPCwire. “Right now I think both superconductors and ion traps have shown a lot of progress and demonstrated a large number of algorithms. The advantage of trapped ions is that every ion is the same. For these small chains [of ions in the trap] you do get this advantage of basically being able to achieve communication between any pair. In superconducting devices, typically, you are only able to talk to sort of neighbor qubits. So if you have an algorithm which requires a longer distance communication between qubits, there is some cost you have to pay to get the information from one to the other.”

    There’s a lot going on here – as there is throughout the quantum computing research community. Zeroing in on ion traps for quantum computing isn’t new, but it hasn’t received the same notice that semiconductor-based superconducting approaches have, à la IBM, Google, D-Wave, Rigetti et al. NIST (National Institute of Standards and Technology) has put ion trap technology to use in super-accurate atomic clocks, and a few academic groups have also explored ion trap quantum computing, but without the fanfare attending other efforts. It turns out ion trap technology – somewhat similar to the mass spec we all know – has several strengths for use in quantum computing.

    Brown, Kim, and colleague Christopher Monroe (University of Maryland) have written a nice paper on the topic, Co-Designing a Scalable Quantum Computer with Trapped Atomic Ions. Brown is quick to point out that the 1,000-qubit scale-up ideas presented in the 2016 paper far exceed STAQ’s goal, but such scaling ambitions do seem reachable over time with ion trap technology.

    Here’s a brief excerpt from their paper touching on ion technology’s attraction:

    “Superconducting circuitry exploits the significant advantages of modern lithography and fabrication technologies: it can be integrated on a solid-state platform and many qubits can simply be printed on a chip. However, they suffer from inhomogeneities and decoherence, as no two superconducting qubits are the same, and their connectivity cannot be reconfigured without replacing the chip or modifying the wires connecting them within a very low temperature environment.

    “Trapped atomic ions, on the other hand, feature virtually identical qubits, and their wiring can be reconfigured by modifying externally applied electromagnetic fields. However, atomic qubit switching speeds are generally much slower than solid state devices, and the development of engineering infrastructure for trapped ion quantum computers and the mitigation of noise and decoherence from the applied control fields is just beginning.”

    Perhaps a quick (and imperfect) description of ion trap technology is warranted. It’s similar to mass spec. Ions are loaded into traps by generating neutral atoms of the desired element and ionizing the atoms once they are in the trapping volume. Electrodes (rods) generate the forces that contain the ions. RF fields and lasers are used to control the ions, which can be lined up in ‘stationary’ chains. Individual ions have their electron states manipulated using lasers, which turns them into qubit registers. Brown’s group is using ytterbium (Yb+) ions, whose outer electron shell structure is well suited for manipulation.

    “The trap we use looks like a computer chip, sort of like metal on a silicon chip. It’s similar to the four-rod trap (quadrupole) you probably know from mass spec. You cut one of the rods and then you’ve unfolded the trap onto a plate, and the advantage of that is it allows you to then move the ions around, break the chains apart, and that sort of thing. It also gives you more control over the fields that are containing the ions and the direction of the chain itself. That is housed in a vacuum, which is achieved with either a vacuum system or a cryogenic chamber. This is one of the design questions we are working on right now, deciding which way to go,” said Brown.

    One important ion trap technology advantage, according to Brown, is the qubit type, something called ‘hyperfine’ qubits. “They basically have no memory error. So unlike many other qubits where you have a constant decay – and it’s all relative to the gate speeds – our relative decay-to-gate-speed is a long, long time. For example, the best result I know of is if you have a microsecond gate time, which is kind of typical for ions, you can have a memory time of ten minutes,” he said.
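
    To put that figure in perspective, here is a quick back-of-envelope check in Python using the numbers Brown quotes (the figures are his; the arithmetic alone is ours):

    ```python
    # How many sequential gate operations fit inside the qubit's memory
    # (coherence) window, using Brown's cited figures?
    gate_time_s = 1e-6        # ~1 microsecond gate time, typical for ions
    memory_time_s = 10 * 60   # ~10 minutes of memory time

    gates_per_memory_window = memory_time_s / gate_time_s
    print(f"{gates_per_memory_window:.0e} gate times per memory time")  # 6e+08
    ```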

    As explained in their paper, “Qubits stored in trapped atomic ions are represented by two stable electronic levels within each ion, often represented as an effective spin with the two states |↓⟩ and |↑⟩ corresponding to bit values 0 and 1. The qubits can be initialized and detected with nearly perfect accuracy using conventional optical pumping and state-dependent fluorescence techniques. This restricts the atomic species of trapped ion qubits to those with simple electronic structure (e.g., those with a single valence electron: Be+, Mg+, Ca+, Sr+, Ba+, Zn+, Hg+, Cd+, and Yb+).” Shown below is a schematic from their paper roughly describing a chip-based ion trap.

    Schematic of a chip-based ion trap, from the paper.

    Co-design is the central tenet for STAQ. “This idea of the software tailored architecture, co-design, is basically we want to make the tools which optimize the mapping of the ideal mathematical algorithm to the actual device of interest. So there are a few things we plan to leverage. One is at the bottom layer. We often abstract the physics of the quantum device. This has actually been really useful for quantum information as a whole. It allows people to talk about superconducting machines, or ion trap machines, or photon computers, all these things, using the same language. But the underlying physics beneath that gate layer [are] different and there might be some opportunities to simplify some algorithms such that we actually don’t completely remove that abstraction and allow some of the physics [specific to ion trap technology] to seep up to the programmer,” noted Brown.

    Building a stack able to take advantage of this flexibility is one of STAQ’s goals. “The idea of the stack is to try to actually do what one of my colleagues says is like a crossword puzzle. We don’t just optimize the algorithm, and then optimize the gate set, and then optimize each gate on the hardware, but we try to modify the gates so that it’s the most appropriate for optimizing the algorithm given the problem,” said Brown.

    The breadth of expertise on the STAQ team, said Brown, is a distinct advantage: “We have computer architects. We have quantum information theorists. We have people more on the applications side, and hardware people. You need all those people. You need those different layers working. I think what’s nice is we are reaching a point where these machines are reaching sufficient sophistication that it is easier to find people to think about architecture.”

    In some sense flexibility in manipulating ion chains (breaking apart at different lengths, remote entanglement among qubits) allows an almost FPGA-programming-like quality to ion trap quantum computing. “You can do these two-qubit gates between any pair [of ions] and the reason is it’s not like a direct interaction with its neighbor but an interaction which is mediated by the collective motion of the ion chain. In terms of actually mapping algorithms to computers it’s quite nice because if I think about the connection between qubits it’s like a fully connected graph,” said Brown.

    “Now that’s not going to scale to 1,000 qubits, but it’s not clear what the limit is. We know 10 qubits, 20 qubits is no problem. [And] we have some ideas on how to get to 50 qubits but at some point we are going to have to shift the way we put these things together.”
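
    As a rough illustration of the routing cost Brown describes – not STAQ’s actual compiler logic – consider how many SWAP operations a nearest-neighbor (line) layout needs to bring two qubits together, versus a fully connected ion chain where any pair can interact directly:

    ```python
    # Hypothetical routing-cost comparison: linear nearest-neighbor layout
    # versus the fully connected graph of a trapped-ion chain.

    def swaps_needed_line(i: int, j: int) -> int:
        """SWAPs required to make qubits i and j adjacent on a line."""
        return max(abs(i - j) - 1, 0)

    def swaps_needed_fully_connected(i: int, j: int) -> int:
        """On a fully connected graph, every pair interacts directly."""
        return 0

    for i, j in [(0, 1), (0, 5), (0, 31)]:
        print(f"qubits {i} and {j}: line = {swaps_needed_line(i, j)} SWAPs, "
              f"ion chain = {swaps_needed_fully_connected(i, j)} SWAPs")
    ```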

    Quantum chemistry is one area of application being examined. “The challenge in doing quantum chemistry on a normal conventional computer is there’s a mismatch between a quantum state and how much classical data we need to store it,” said Brown. “With a quantum computer you already have this win where there’s a better match. The quantum state on the computer representing the molecule uses a comparable amount of space because they are both in some sense quantum memory. The next thing is each system has kind of its own natural interaction. With an ion trap system, the way the particular gate is performed, the underlying interaction looks a lot like a magnetic interaction between two systems. So if the problem you are trying to solve maps nicely to this kind of magnetic interaction, there are actually a lot of shortcuts you can take.”
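
    The storage mismatch Brown alludes to is easy to make concrete: a general n-qubit state requires 2^n complex amplitudes on a classical machine, while a quantum register needs only the n qubits themselves. A short sketch, assuming double-precision complex amplitudes at 16 bytes each:

    ```python
    # Classical memory needed to hold a general n-qubit state vector.
    for n in (10, 30, 50):
        amplitudes = 2 ** n
        gib = amplitudes * 16 / 2**30   # complex128 = 16 bytes
        print(f"{n} qubits: {amplitudes:.2e} amplitudes, {gib:.2e} GiB classically")
    ```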

    Given ion trap technology’s flexibility, STAQ hopes to learn whether it may be possible or worthwhile to create application-specific architectures.


    “That is one of our big research questions,” according to Brown. “[The issue] is what is the gain there. If you think about a tablet computer or an iPad, it has a facial recognition chip. Its job is just to see faces, right. So we expect that quantum computers will be kind of like that, at least in near term, sort of an extra processor that is interacting with some classical computer. It may turn out to be possible to make quantum processors that are say specifically designed for quantum chemistry problems, that could be a great accelerator for all kinds of applications in chemistry.”

    While STAQ plans to leverage the underlying characteristics of ion trap technology, which might include ASIC-like capabilities, “all of the devices we plan to make will be universal in that they will allow you to do universal quantum computing,” emphasized Brown.

    STAQ will also run an annual summer school at Duke aimed at two different audiences, said Brown, one drawn from upper level undergraduate and early graduate school students looking to learn more about quantum information and another group drawn from industry.

    Looking at near-term (~18-month) goals, Brown said, “On the algorithm side I hope to identify target algorithms for a computer on the scale of say 60 to 70 qubits. On the experimental side, that first year and a half will be building a new engineering design and building a new system based on our previous experiments with ion traps but moving more towards a functional computer and [something] less like a physics experiment.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    HPCwire is the #1 news and information resource covering the fastest computers in the world and the people who run them. With a legacy dating back to 1987, HPCwire has delivered world-class editorial and topnotch journalism, making it the portal of choice for science, technology and business professionals interested in high performance and data-intensive computing. For topics ranging from late-breaking news and emerging technologies in HPC, to new trends, expert analysis, and exclusive features, HPCwire delivers it all and remains the HPC community’s most reliable and trusted resource. Don’t miss a thing – subscribe now to HPCwire’s weekly newsletter recapping the previous week’s HPC news, analysis and information at: http://www.hpcwire.com.

     
  • richardmitnick 9:12 am on June 18, 2018 Permalink | Reply
    Tags: Deep Neural Network Training with Analog Memory Devices, HPC Wire

    From HPC Wire: “IBM Demonstrates Deep Neural Network Training with Analog Memory Devices” 

    From HPC Wire

    June 18, 2018
    Oliver Peckham

    Crossbar arrays of non-volatile memories can accelerate the training of fully connected neural networks by performing computation at the location of the data. (Source: IBM)

    From smarter, more personalized apps to seemingly ubiquitous Google Assistant and Alexa devices, AI adoption is showing no signs of slowing down – and yet, the hardware used for AI is far from perfect. Currently, GPUs and other digital accelerators are used to speed the processing of deep neural network (DNN) tasks – but all of those systems are effectively wasting time and energy shuttling data back and forth between memory and processing. As the scale of AI applications continues to increase, those cumulative losses are becoming massive.

    In a paper published this month in Nature, IBM researchers Stefano Ambrogio, Pritish Narayanan, Hsinyu Tsai, Robert M. Shelby, Irem Boybat, Carmelo di Nolfo, Severin Sidler, Massimo Giordano, Martina Bodini, Nathan C. P. Farinha, Benjamin Killeen, Christina Cheng, Yassine Jaoudi, and Geoffrey W. Burr demonstrate DNN training on analog memory devices that they report achieves accuracy equivalent to a GPU-accelerated system. IBM’s solution performs DNN calculations right where the data are located, storing and adjusting weights in memory, with the effect of conserving energy and improving speed.

    Analog computing, which uses variable signals rather than binary signals, is rarely employed in modern computing due to inherent limits on precision. IBM’s researchers, building on a growing understanding that DNN models operate effectively at lower precision, decided to attempt an accurate approach to analog DNNs.
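
    To see why reduced precision is tolerable, consider a toy NumPy experiment – our illustration, not IBM’s method – that uniformly quantizes a random weight matrix to a few bits and compares a layer’s output against full precision:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(0, 0.1, size=(256, 256))   # toy weight matrix
    x = rng.normal(0, 1.0, size=256)          # toy input vector

    def quantize(a, bits):
        """Uniformly quantize array a to 2**bits levels over its range."""
        step = (a.max() - a.min()) / (2**bits - 1)
        return np.round((a - a.min()) / step) * step + a.min()

    exact = w @ x
    for bits in (8, 4, 2):
        approx = quantize(w, bits) @ x
        rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
        print(f"{bits}-bit weights: relative output error {rel_err:.3f}")
    ```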

    The research team says it was able to accelerate key training algorithms, notably the backpropagation algorithm, using analog non-volatile memories (NVM). Writing for the IBM blog, lead author Stefano Ambrogio explains:

    “These memories allow the “multiply-accumulate” operations used throughout these algorithms to be parallelized in the analog domain, at the location of weight data, using underlying physics. Instead of large circuits to multiply and add digital numbers together, we simply pass a small current through a resistor into a wire, and then connect many such wires together to let the currents build up. This lets us perform many calculations at the same time, rather than one after the other. And instead of shipping digital data on long journeys between digital memory chips and processing chips, we can perform all the computation inside the analog memory chip.”
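
    In digital terms, the crossbar Ambrogio describes performs a matrix-vector product in a single parallel step: the conductances are the weights, the input voltages are the activations, and the summed column currents are the result. A toy NumPy stand-in (purely illustrative – real devices contend with noise and device-to-device variability):

    ```python
    import numpy as np

    G = np.array([[1.0, 0.5, 0.2],    # conductances standing in for weights
                  [0.3, 0.8, 0.1]])
    v = np.array([0.2, 1.0, 0.5])     # input voltages standing in for activations

    currents = G @ v   # Ohm's law + Kirchhoff's current law: one parallel MAC step
    print(currents)    # -> [0.8  0.91]
    ```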

    The authors note that their mixed hardware-software approach is able to achieve classification accuracies equivalent to pure software-based training using TensorFlow, despite the imperfections of existing analog memory devices. Writes Ambrogio:

    “By combining long-term storage in phase-change memory (PCM) devices, near-linear update of conventional Complementary Metal-Oxide Semiconductor (CMOS) capacitors and novel techniques for cancelling out device-to-device variability, we finessed these imperfections and achieved software-equivalent DNN accuracies on a variety of different networks. These experiments used a mixed hardware-software approach, combining software simulations of system elements that are easy to model accurately (such as CMOS devices) together with full hardware implementation of the PCM devices. It was essential to use real analog memory devices for every weight in our neural networks, because modeling approaches for such novel devices frequently fail to capture the full range of device-to-device variability they can exhibit.”

    Ambrogio and his team believe that their early design efforts indicate that a full implementation of the analog approach “should indeed offer equivalent accuracy, and thus do the same job as a digital accelerator – but faster and at lower power.” The team is exploring the design of prototype NVM-based accelerator chips as part of an IBM Research Frontiers Institute project.

    The team estimates that it will be able to deliver chips with a computational energy efficiency of 28,065 GOP/sec/W and throughput-per-area of 3.6 TOP/sec/mm2. This would be a two-orders-of-magnitude improvement over today’s GPUs, according to the researchers.

    The researchers will now turn their attention to demonstrating their approach on larger networks that call for large, fully-connected layers, such as recurrently-connected Long Short Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks with emerging utility for machine translation, captioning and text analytics. As new and better forms of analog memory are developed, they expect continued improvements in areal density and energy efficiency.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 2:05 pm on April 8, 2018 Permalink | Reply
    Tags: Eight startups selected by IBM to be part of the Q Network, HPC Wire, Q Network partner ecosystem, QISKit an open-source software developer kit

    From HPC Wire: “IBM Expands Quantum Computing Network”

    April 5, 2018
    Tiffany Trader


    IBM is positioning itself as a first mover in establishing the era of commercial quantum computing. The company believes that in order for quantum to work, taming qubits isn’t enough; there needs to be an engaged ecosystem of partners. As part of its strategy to transition from quantum science to what IBM calls quantum-readiness, Big Blue held the first IBM Q Summit in Palo Alto, California, today (April 5), welcoming a group of startups into its quantum network.

    “Membership in the network will enable these startups to run experiments and algorithms on IBM quantum computers via cloud-based access,” explained Jeff Welser, director, IBM Research – Almaden, in a blog post. “Additionally, these startup members will have the opportunity to collaborate with IBM researchers and technical SMEs on potential applications, as well as other IBM Q Network organizations.”

    The Q Network was launched in December in partnership with industry, academic and government clients, including JP Morgan Chase, Daimler, Samsung, JSR, Barclays, Keio University, Honda, Oak Ridge National Lab, University of Oxford, University of Melbourne, Hitachi Metals and Nagase. Now IBM has brought in these eight industry-leading startups: Cambridge Quantum Computing (CQC), 1QBit, QC Ware, Q-CTRL, Zapata Computing, Strangeworks, QxBranch, and Quantum Benchmark. (Additional info at end of article.)

    Quantum was a major topic of the inaugural IBM Think conference held in Las Vegas last month, where a number of featured speakers shared an optimistic timeline for establishing production usable applications.

    Arvind Krishna, senior vice president, Hybrid Cloud, and director of IBM Research, said he believes IBM will show a practical quantum advantage within five years and will have built capable machines for that purpose in three to five years.

    Krishna hailed a coming era of practical quantum computing. “Quantum computers will help us solve problems that classical computers never could, in areas such as materials, medicines, transportation logistics, and financial risk,” he said during a keynote address.

    IBM has been focused on making the engineering more stable and robust to enable a broader set of users, outside the physics laboratory. “To exploit and win at quantum, you actually have to have a real quantum computer,” said Krishna.

    The community ecosystem is where IBM is distinguishing itself in the tight landscape of quantum competitors, which includes Google, Intel, Microsoft, early quantum annealing pioneer D-Wave, and Berkeley-based startup Rigetti.

    IBM has a set of three prototype quantum computers – real quantum devices, not simulators – made available through its cloud network, which in just two years has seen 80,000 users run more than 3 million remote executions. There are 5-qubit and 16-qubit quantum systems available to anyone with an internet connection via IBM’s Q Experience platform, and a larger 20-qubit machine for select Q Network partners. IBM has also successfully built an operational prototype 50-qubit processor that will be made available in the next generation of IBM Q systems.

    As IBM grows its Q Network partner ecosystem, participating organizations will have various levels of cloud-based access to quantum expertise and resources. This means that not all members will get time on the biggest Q System, but startups in the quantum computing space will get “deeper access to APIs and advanced quantum software tools, libraries and applications, as well as consultation on emerging quantum technologies and applications from IBM scientists, engineers and consultants,” according to Welser.

    The goal of the Q Network is to advance practical applications for business and science and ultimately usher in the commercial quantum era. “We will emerge from this transitional era and enter the era of quantum advantage when we run the first commercial application. It’s not about arbitrary tests or quantum supremacy, it’s very practical,” said Anthony Annunziata, associate director, IBM Q, at last month’s event. “When we can do practical things, we will have achieved the practical era.”

    By making the machines available to a broader community, IBM is seeding the development of a software and user ecosystem. Annunziata stressed the importance of educating and preparing users across organizations for the coming of quantum computing. “It doesn’t matter how much we can abstract away,” he said, “quantum computing is just different. It takes a different mindset and skill set to program a quantum computer, especially to take advantage of it.”

    There are two different ways of programming the IBM Q network machines: a graphical interface with drag-and-drop operations and an open-source software developer kit called QISKit. QISKit, as IBM’s Talia Gershon enthusiastically explained in her keynote talk, makes it possible to entangle two qubits with two lines of code.
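
    For the curious, the “two lines” amount to a Hadamard gate followed by a CNOT, which together produce a Bell state. A minimal sketch against the present-day open-source Qiskit API (the exact calls have evolved since the 2018-era QISKit releases):

    ```python
    from qiskit import QuantumCircuit

    qc = QuantumCircuit(2, 2)
    qc.h(0)        # line 1: put qubit 0 into superposition
    qc.cx(0, 1)    # line 2: entangle qubit 1 with qubit 0 (Bell state)
    qc.measure([0, 1], [0, 1])
    print(qc.draw())
    ```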

    Talia Gershon presenting at IBM Think 2018

    Gershon, senior manager, AI Challenges and Quantum Experiences at IBM, holds that having fundamentally new ways of doing computation will open up a new paradigm in how we approach problems, but first we have to stop “thinking too classically.”

    “Thinking too classically, as my colleague Jay Gambetta says, means you’re trying to apply linear classical logical thinking to understand something quantum, and it doesn’t work,” said Gershon. “Thinking too classically is a real problem that hinders progress, so how do we get people to change the way they think? Well, we start in the classroom. When Einstein first discovered relativity, I’m sure nobody intuitively got it and understood why it was important, and today it’s in every modern physics classroom in the world.

    “Within five years the same thing will happen with quantum computing. Not only will physics departments offer quantum information classes but computer science departments will offer a quantum track. Electrical engineering departments will teach students about quantum circuits and microwave signal processing and chemistry classes will teach students not only how to simulate molecules on a classical machine but also on a quantum computer.”

    ____________________________________________________________________

    Descriptions of the eight startups selected by IBM to be part of the Q Network:

    • Zapata Computing – Based in Cambridge, Mass., Zapata Computing is a quantum software, applications and services company developing algorithms for chemistry, machine learning, security, and error correction.

    • Strangeworks – Based in Austin, Texas, and founded by William Hurley, Strangeworks is a quantum computing software company designing and delivering tools for software developers and systems management for IT Administrators and CIOs.

    • QxBranch – Headquartered in Washington, D.C., QxBranch delivers advanced data analytics for finance, insurance, energy, and security customers worldwide. QxBranch is developing tools and applications enabled by quantum computing with a focus on machine learning and risk analytics.

    • Quantum Benchmark – Quantum Benchmark is a venture-backed software company led by a team of the top research scientists and engineers in quantum computing, with headquarters in Kitchener-Waterloo, Canada. Quantum Benchmark provides solutions that enable error characterization, error mitigation, error correction and performance validation for quantum computing hardware.

    • QC Ware – Based in Palo Alto, Calif., QC Ware develops hardware-agnostic enterprise software solutions running on quantum computers. QC Ware’s investors include Airbus Ventures, DE Shaw Ventures and Alchemist, and it has relationships with NASA and other government agencies. QC Ware won an NSF grant, and its customers include Fortune 500 industrial and technology companies.

    • Q-CTRL – This Sydney, Australia-based startup’s hardware agnostic platform – Black Opal – gives users the ability to design and deploy the most effective controls to suppress errors in their quantum hardware before they accumulate. Q-CTRL is backed by Main Sequence Ventures and Horizons Ventures.

    • Cambridge Quantum Computing (CQC) – Established in 2014 in the UK, CQC combines expertise in quantum information processing, quantum technologies, artificial intelligence, quantum chemistry, optimization and pattern recognition. CQC designs solutions such as a proprietary platform agnostic compiler that will allow developers and users to benefit from quantum computing even in its earliest forms. CQC also has a growing focus in quantum technologies that relate to encryption and security.

    • 1QBit – Headquartered in Vancouver, Canada, and founded in 2012, 1QBit develops general-purpose algorithms for quantum computing hardware. The company’s hardware-agnostic platforms and services are designed to enable the development of applications which scale alongside the advances in both classical and quantum computers. 1QBit is backed by Fujitsu Limited, CME Ventures, Accenture, Allianz and The Royal Bank of Scotland.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 4:08 pm on January 26, 2018 Permalink | Reply
    Tags: 2017 Workshop on Open Source Supercomputing, HPC Wire, ORNL Researchers Explore Supercomputing Workflow Best Practices

    From HPC Wire: “ORNL Researchers Explore Supercomputing Workflow Best Practices”

    HPC Wire

    January 25, 2018
    Scientists at the Department of Energy’s Oak Ridge National Laboratory are examining the diverse supercomputing workflow management systems in use in the United States and around the world to help supercomputers work together more effectively and efficiently.

    Because supercomputers have largely developed in isolation from each other, existing modeling and simulation, grid/data analysis, and optimization workflows meet highly specific needs and therefore cannot easily be transferred from one computing environment to another.

    Divergent workflow management systems can make it difficult for research scientists at national laboratories to collaborate with partners at universities and international supercomputing centers to create innovative workflow-based solutions that are the strength and promise of supercomputing.

    Led by Jay Jay Billings, team lead for the Scientific Software Development group in ORNL’s Computing and Computational Sciences Directorate, the scientists have proposed a “building blocks” approach in which individual components from multiple workflow management systems are combined in specialized workflows.

    Billings worked with Shantenu Jha of the Computational Science Initiative at Brookhaven National Laboratory and Rutgers University, and Jha presented their research at the 2017 Workshop on Open Source Supercomputing in Denver in November 2017. Their article appears in the workshop’s proceedings.

    The researchers began by analyzing how existing workflow management systems work—the tasks and data they process, the order of execution, and the components involved. Factors that can be used to define workflow management systems include whether a workflow is long or short running, runs internal cycles or in linear fashion with an endpoint, and requires humans to complete. Long used to understand business processes, the workflow concept was introduced in scientific contexts where automation was useful for research tasks such as setting up and running problems on supercomputers and then analyzing the resulting data.

    Viewed through the prism of today’s complex research endeavors, supercomputers’ workflows clearly have disconnects that can hamper scientific advancement. For example, Billings pointed out that a project might draw on multiple facilities’ work while acquiring data from experimental equipment, performing modeling and simulation on supercomputers, and conducting data analysis using grid computers or supercomputers. Workflow management systems with few common building blocks would require installation of one or more additional workflow management systems—a burdensome level of effort that also causes work to slow down.

    “Poor or nonexistent interoperability is almost certainly a consequence of the ‘Wild West’ state of the field,” Billings said. “And lack of interoperability limits reusability, so it may be difficult to replicate data analysis to verify research results or adapt the workflow for new problems.”

    The open building blocks workflows concept being advanced by ORNL’s Scientific Software Development group will enable supercomputers around the world to work together to address larger scientific problems that require workflows to run on multiple systems for complete execution.
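
    Purely as a hypothetical sketch of the building-blocks idea – not the ORNL implementation – workflow components can be modeled as small interchangeable functions sharing a common interface, so that blocks originating in different systems compose into one workflow:

    ```python
    from typing import Callable, Dict, List

    Block = Callable[[Dict], Dict]   # every block maps a context dict to a dict

    def acquire(ctx: Dict) -> Dict:
        ctx["data"] = f"raw data from {ctx['instrument']}"
        return ctx

    def simulate(ctx: Dict) -> Dict:
        ctx["model"] = f"simulation fitted to {ctx['data']}"
        return ctx

    def analyze(ctx: Dict) -> Dict:
        ctx["result"] = f"analysis of {ctx['model']}"
        return ctx

    def run_workflow(blocks: List[Block], ctx: Dict) -> Dict:
        for block in blocks:   # linear here; a DAG composes the same way
            ctx = block(ctx)
        return ctx

    # "neutron beamline" is a made-up instrument name for illustration.
    print(run_workflow([acquire, simulate, analyze], {"instrument": "neutron beamline"}))
    ```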

    Future work includes testing the hypothesis that the group’s approach is more scalable, more sustainable and a better practice.

    This research is supported by DOE and ORNL’s Laboratory Directed Research and Development program.

    ORNL is managed by UT–Battelle for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov/.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 8:56 pm on September 5, 2017 Permalink | Reply
    Tags: HPC Wire, IBM Advances Web-based Quantum Programming, Jupyter-based Data Science Experience notebook environment, Quantum Information Software developer Kit (QISKit)

    From HPC Wire: “IBM Advances Web-based Quantum Programming”

    HPC Wire

    September 5, 2017
    Alex Woodie

    IBM Research is pairing its Jupyter-based Data Science Experience notebook environment with its cloud-based quantum computer, IBM Q, in hopes of encouraging a new class of entrepreneurial user to solve intractable problems that even exceed the capabilities of the best AI systems.

    Big Blue has been providing scientists, researchers, and developers with free access to IBM Q processors for over a year. The favored way to access these quantum systems is through the Quantum Information Software developer Kit (QISKit), a software development environment designed to allow users to develop and deploy quantum algorithms via a Python interface.

    In a blog post today, IBM announced that it has issued more than 20 new QISKit notebooks this week, and the bulk of them are targeted at quantum researchers to perform various types of experiments. But among the new notebooks is one designed to help developers conduct quantum experiments through the Data Science Experience (DSX), which is IBM’s cloud-based data science notebook offering that’s targeted at commercial data scientists.

    The hope is that the new DSX offering could open the door to a new class of developer, specifically “entrepreneurial-minded programmers and developers” who are eager to experiment with quantum computing’s potential, but who aren’t necessarily interested in quantum computing for quantum computing’s sake.

    Jay M. Gambetta, who’s a manager for Theory of Quantum Computing and Innovation at IBM, says the new DSX option is an excellent choice for a developer who’s just getting started with QISKit.

    “You can skip all the installation and environment creation steps on your computer, and instead use this Web-hosted Jupyter notebook environment for running the Quantum programs,” Gambetta tells Datanami (HPCwire‘s sister pub) via email. “It also provides a platform where you can invite fellow researchers to collaborate on the notebooks you have developed or simply share your work within the community.”

    While DSX helps data scientists script and solve big data problems using the latest machine learning (it includes Apache Spark MLlib and IBM’s own SystemML libraries), the capability to add quantum computing to the mix gives the environment something not readily available elsewhere.

    It also gives data scientists the potential to solve intractable problems that exceed the capabilities of today’s mammoth distributed clusters. For example, an IBM spokesperson says a quantum computer could yield the solution to the traveling salesman problem. “A new Jupyter notebook gives developers the chance to explore this age-old problem,” the spokesperson says.

    IBM is pairing its Jupyter-based Data Science Experience software (shown) with its cloud-based quantum computer.

    Gambetta says it’s still an open research problem which applications have the potential to benefit from approximate quantum computing. “With quantum programming so new there is much to be learnt and implemented,” he says. “But for now we are starting with putting a few examples on DSX.”

    IBM delivered public access to a quantum computer two years ago as an enablement tool for scientific research. In March of this year, it launched IBM Q on its cloud with an eye toward allowing both scientists and business users to experiment with the novel processing approach.

    Quantum computers are designed to store and calculate on bits of data in multiple states simultaneously – superposition – and to correlate those bits through what’s called “entanglement,” as opposed to the on or off of today’s binary systems. The hope is that the quantum approach opens computational powers that exceed the capability of today’s “classic” approach.

    On its website, IBM compares the current state-of-the-art in artificial intelligence with one possible quantum future:

    “While technologies like AI can find patterns buried in vast amounts of existing data, quantum computers will deliver solutions to important problems where patterns cannot be found and the number of possibilities that you need to explore to get to the answer are too enormous ever to be processed by classical computers.”

    IBM today offers public access to two quantum computers on its Bluemix cloud under the IBM Q banner, including a 16-qubit processor based on its original design, and a 17-qubit processor that’s a prototype of the commercial quantum systems that IBM hopes to build and sell.

    The new Jupyter notebooks in QISKit bring quantum computers to bear on a range of specific scientific problems, including:

    -Running tomography and decoherence experiments for studying the state of the quantum system;
    -New tools for visualizing quantum states;
    -A notebook that demonstrates how to calculate the equilibrium bond lengths in Hydrogen (H2), and Lithium hydride (LiH);
    -And a new notebook that showcases MaxCut problems, defined by finding the minimum (cost of something), and the maximum (the profit of that something), as well as the Traveling Salesman challenge of the “perfect route.”
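
    As a classical point of reference for that last notebook – our baseline sketch, not IBM’s implementation – the “perfect route” can be found by brute force for tiny instances, and the factorial blow-up is exactly what makes heuristic and quantum approaches interesting:

    ```python
    from itertools import permutations

    # Made-up symmetric distance matrix for a 4-city tour.
    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 3],
            [10, 4, 3, 0]]

    n = len(dist)
    cost, tour = min(
        (sum(dist[t[i]][t[(i + 1) % n]] for i in range(n)), t)
        for t in permutations(range(n))   # n! candidate tours
    )
    print(f"cheapest tour {tour} costs {cost}")
    ```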

    “Whether you’re a quantum research scientist, trying to study the fundamentals of quantum states, a developer who wants to understand how you’d actually solve problems in chemistry or optimization, or someone who is curious how to build quantum games, you’re gonna love what we just put out,” writes Talia Gershon, a research staff member at the Thomas J. Watson Research Center in Yorktown Heights, New York.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 4:28 pm on July 19, 2017 Permalink | Reply
    Tags: HPC Wire, Trinity supercomputer

    From HPC Wire: “Trinity Supercomputer’s Haswell and KNL Partitions Are Merged” 

    HPC Wire

    July 19, 2017
    No writer credit found

    LANL Cray XC40 Trinity supercomputer

    Trinity supercomputer’s two partitions – one based on Intel Xeon Haswell processors and the other on Xeon Phi Knights Landing – have been fully integrated and are now available for use on classified work in the National Nuclear Security Administration (NNSA)’s Stockpile Stewardship Program, according to an announcement today. The KNL partition had been undergoing testing and was available for non-classified science work.

    “The main benefit of doing open science was to find any remaining issues with the system hardware and software before Trinity is turned over for production computing in the classified environment,” said Trinity project director Jim Lujan. “In addition, some great science results were realized.”

    “Knights Landing is a multicore processor that has 68 compute cores on one piece of silicon, called a die. This allows for improved electrical efficiency that is vital for getting to exascale, the next frontier of supercomputing, and is three times as power-efficient as the Haswell processors,” Archer noted.

    The Trinity project is managed and operated by Los Alamos National Laboratory and Sandia National Laboratories under the New Mexico Alliance for Computing at Extreme Scale (ACES) partnership.

    In June 2017, the ACES team took the classified Trinity-Haswell system down and merged it with the KNL partition. The full system, sited at LANL, was back up for production use the first week of July.

    The Knights Landing processors were accepted for use in December 2016 and since then they have been used for open science work in the unclassified network, permitting nearly unprecedented large-scale science simulations. Presumably the merge is the last step in the Trinity contract beyond maintenance.

    Trinity, based on a Cray XC40, now has 301,952 Xeon and 678,912 Xeon Phi processors along with two pebibytes (PiB) of memory. Besides blending the Haswell and KNL processors, Trinity benefits from the introduction of solid-state storage (burst buffers). This is changing the ratio of disk and tape necessary to satisfy bandwidth and capacity requirements, and it drastically improves the usability of the systems for application input/output. With its new solid-state storage burst buffer and capacity-based campaign storage, Trinity enables users to iterate more frequently, ultimately reducing the amount of time to produce a scientific result.


    “With this merge completed, we have now successfully released one of the most capable supercomputers in the world to the Stockpile Stewardship Program,” said Bill Archer, Los Alamos Advanced Simulation and Computing (ASC) program director. “Trinity will enable unprecedented calculations that will directly support the mission of the national nuclear security laboratories, and we are extremely excited to be able to deliver this capability to the complex.”

    Trinity Timeline:

    June 2015, Trinity first arrived at Los Alamos, Haswell partition installation began.
    February 12 to April 8, 2016, approximately 60 days of computing access made available for open science using the Haswell-only partition.
    June 2016, Knights Landing components of Trinity began installation.
    July 5, 2016, Trinity’s classified side began serving the Advanced Technology Computing Campaign (ATCC-1).
    February 8, 2017, Trinity Open Science (unclassified) early access shakeout began on the Knights Landing partition before integration with the Haswell partition in the classified network.
    July 2017, Intel Haswell and Intel Knights Landing partitions were merged, transitioning to classified computing.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 2:33 pm on July 13, 2017 Permalink | Reply
    Tags: HPC Wire, Intel “Skylake” Xeon Scalable processors

    From HPC Wire: “Intel Skylake: Xeon Goes from Chip to Platform” 

    HPC Wire


    July 13, 2017
    Doug Black

    With yesterday’s New York unveiling of the new “Skylake” Xeon Scalable processors, Intel made multiple runs at multiple competitive threats and strategic markets. Skylake will carry Intel’s flag in the fight for leadership in the emerging advanced data center encompassing highly demanding network workloads, cloud computing, real time analytics, virtualized infrastructures, high-performance computing and artificial intelligence.

    Most interesting, Skylake takes a big step toward accommodating what one industry analyst has called “the wild west of technology disaggregation,” life in the post-CPU-centric era.

    “What surprised me most is how much platform goodness Intel brought to the table,” said industry watcher Patrick Moorhead, Moor Insights & Strategy, soon after the launch announcement. “I wasn’t expecting so many enhancements outside of the CPU chip itself.”

    In fact, Moorhead said, Skylake turns Xeon into a platform, one that “consists of CPUs, chipset, internal and external accelerators, SSD flash and software stacks.”

    The successor to the Intel Xeon processor E5 and E7 product lines, Skylake has up to 28 high-performance cores and, according to Intel, provides platform features with significant performance increases, including:

    Artificial Intelligence: Delivers 2.2x higher deep learning training and inference compared to the previous generation, according to Intel, and 113x deep learning performance gains compared to a three-year-old non-optimized server system when combined with software optimizations accelerating delivery of AI-based services.
    Networking: Delivers up to 2.5x increased IPSec forwarding rate for networking applications compared to the previous generation when using Intel QuickAssist and the Data Plane Development Kit (DPDK).
    Virtualization: Operates up to approximately 4.2x more virtual machines versus a four-year-old system for faster service deployment, server utilization, lower energy costs and space efficiency.
    High Performance Computing: Provides up to a 2x FLOPs/clock improvement with Intel AVX-512 (the 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for the x86 instruction set architecture) as well as integrated Intel Omni-Path Architecture ports, delivering improved compute capability, I/O flexibility and memory bandwidth, Intel said.
    Storage: Processes up to 5x more IOPS while reducing latency by up to 70 percent versus out-of-the-box NVMe SSDs when combined with Intel Optane SSDs and Storage Performance Development Kit, making data more accessible for advanced analytics.


    Overall, Intel said, Skylake delivers a performance increase of up to 1.65x versus the previous generation of Intel processors, and up to 5x on OLTP warehouse workloads versus the current install base.

    The company also introduced Intel Select Solutions, aimed at simplifying deployment of data center and network infrastructure, with initial solutions delivery on Canonical Ubuntu, Microsoft SQL 16 and VMware vSAN 6.6. Intel said this is an expansion of the Intel Builders ecosystem collaborations and will offer Intel-verified configurations for specific workloads, such as machine learning inference, and is then sold and marketed as a package by OEMs and ODMs under the “Select Solution” sub-brand.

    Intel said the Xeon Scalable platform is supported by hundreds of ecosystem partners, more than 480 Intel Builders and 7,000-plus software vendors, including support from Amazon, AT&T, BBVA, Google, Microsoft, Montefiore, Technicolor and Telefonica.

    But it’s Intel’s support for multiple processing architectures that drew the most attention.

    Moorhead said Skylake enables heterogeneous compute in several ways. “First off, Intel provides the host processor, a Xeon, as you can’t boot to an accelerator. Inside of Xeon, they provide accelerators like AVX-512. Inside Xeon SoCs, Intel has added FPGAs. The PCH contains a QAT accelerator. Intel also has PCIe accelerator cards for QAT and FPGAs.”

    In the end, Moorhead said, the Skylake announcement is directed at datacenter managers “who want to run their apps and do inference on the same machines using the new Xeons.” He cited Amazon’s support for this approach, “so it has merit.”

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 2:16 pm on July 13, 2017 Permalink | Reply
    Tags: HPC Wire

    From HPC Wire: “Satellite Advances, NSF Computation Power Rapid Mapping of Earth’s Surface” 

    HPC Wire

    July 13, 2017
    Ken Chiacchia
    Tiffany Jolley

    New satellite technologies have completely changed the game in mapping and geographical data gathering, reducing costs and placing a new emphasis on time series and timeliness in general, according to Paul Morin, director of the Polar Geospatial Center at the University of Minnesota.

    In the second plenary session of the PEARC conference in New Orleans on July 12, Morin described how access to the DigitalGlobe satellite constellation, the NSF XSEDE network of supercomputing centers and the Blue Waters supercomputer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign have enabled his group to map Antarctica—an area of 5.4 million square miles, compared with the 3.7 million square miles of the “lower 48” United States—at 1-meter resolution in two years.

    U Illinois Blue Waters Cray supercomputer

    Nine months later, then-president Barack Obama announced a joint White House initiative involving the NSF and the National Geospatial Intelligence Agency (NGIA) in which Morin’s group mapped a similar area in the Arctic including the entire state of Alaska in two years.

    “If I wrote this story in a single proposal I wouldn’t have been able to write [any proposals] afterward,” Morin said. “It’s that absurd.” But the leaps in technology have made what used to be multi-decadal mapping projects—when they could be done at all—into annual events, with even more frequent updates soon to come.

    The inaugural Practice and Experience in Advanced Research Computing (PEARC) conference—with the theme Sustainability, Success and Impact—stresses key objectives for those who manage, develop and use advanced research computing throughout the U.S. and the world. Organizations supporting this new HPC conference include the Advancing Research Computing on Campuses: Best Practices Workshop (ARCC), the Extreme Science and Engineering Development Environment (XSEDE), the Science Gateways Community Institute, the Campus Research Computing (CaRC) Consortium, the Advanced CyberInfrastructure Research and Education Facilitators (ACI-REF) consortium, the National Center for Supercomputing Applications’ Blue Waters project, ESnet, Open Science Grid, Compute Canada, the EGI Foundation, the Coalition for Academic Scientific Computation (CASC) and Internet2.

    Follow the Poop

    One project made possible with the DigitalGlobe constellation—a set of Hubble-like multispectral orbiting telescopes “pointed the other way”—was a University of Minnesota census of emperor penguin populations in Antarctica.

    “What’s the first thing you do if you get access to a bunch of sub-meter-resolution [orbital telescopes covering] Antarctica?” Morin asked. “You point them at penguins.”

    Thanks in part to a lack of predators, the birds over-winter on the ice, huddling in colonies for warmth. Historically these colonies were discovered by accident; Morin’s project enabled the first continent-wide survey to find and estimate the population size of all the colonies.

    The researchers realized that they had a relatively easy way to spot the colonies in the DigitalGlobe imagery: Because the penguins eat beta-carotene-rich krill, their excrement stains the ice red.

    “You can identify their location by looking for poo,” Morin said. The project enabled the first complete population count of emperor penguins: 595,000 birds, ±14%.
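
    Conceptually the detection reduces to simple band arithmetic. A hypothetical NumPy sketch of the idea, on synthetic data (the real survey used classification of DigitalGlobe multispectral imagery):

    ```python
    import numpy as np

    # Tiny synthetic 3x3 scene: reflectance per band.
    red   = np.array([[0.9, 0.2, 0.1],
                      [0.8, 0.1, 0.1],
                      [0.1, 0.1, 0.1]])
    green = np.full((3, 3), 0.1)
    blue  = np.full((3, 3), 0.1)

    redness = red / (red + green + blue + 1e-9)   # fraction of signal in red
    guano_mask = redness > 0.5                    # threshold chosen arbitrarily
    print(guano_mask)   # True where the ice is stained red
    ```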

    “We started to realize we were onto something,” he added. His group began to wonder if they could leverage the sub-meter-resolution, multispectral, stereo view of the constellation’s WorldView I, II and III satellites to derive the topography of the Antarctic, and later the Arctic. One challenge, he knew, would be finding the computational power to extract topographic data from the stereo images in a reasonable amount of time. He found his answer at the NSF and the NGIA.

    “We proposed to a science agency and a combat support agency that we were going to map the topography of 30 degrees of the globe in 24 months.”

    Blue Waters on the Ice

    Morin and his collaborators found themselves in the middle of a seismic shift in topographic technology.

    “Eight years ago, people were doing [this] from the ground,” with a combination of land-based surveys and accurate but expensive LIDAR mapping from aircraft, he said. These methods made sense in places where population and industrial density made the cost worthwhile. But it had left the Antarctic and Arctic largely unmapped.

    Deriving topographic information from the photographs posed a computational problem well beyond the capabilities of a campus cluster. The group did initial computations at the Ohio Supercomputer Center, but needed to expand for the final data analysis.

    Ohio Supercomputer Center

    Ohio Oakley HP supercomputer

    Ohio Ruby HP supercomputer

    Ohio Dell Owens supercomputer

    From 2014 to 2015, Morin used XSEDE resources, most notably Gordon at San Diego Supercomputer Center and XSEDE’s Extended Collaborative Support Service to carry out his initial computations.

    SDSC home built Gordon-Simons supercomputer

    XSEDE then helped his group acquire an allocation on Blue Waters, an NSF-funded Cray Inc. system at NCSA at the University of Illinois with 49,000 CPUs and a peak performance of 13.3 petaFLOPS.

    Collecting the equivalent area of California daily, a now-expanded group of subject experts made use of the polar-orbiting satellites and Blue Waters to derive elevation data. They completed a higher-resolution map of Alaska—the earlier version of which had taken the U.S. Geological Survey 50 years—in a year. While the initial images are licensed for U.S. government use only, the group was able to release the resulting topographic data for public use.

    Mapping Change

    Thanks to the one-meter resolution of their initial analysis, the group quickly found they could identify many man-made structures on the surface. They could also spot vegetation changes such as clearcutting. They could even quantify vegetation regrowth after replanting.

    “We’re watching individual trees growing here.”

    Another set of images he showed in his PEARC17 presentation were before-and-after topographic maps of Nuugaatsiaq, Greenland, which was devastated by a tsunami last month. The Greenland government is using the images, which show both human structures and the landslide that caused the 10-meter tsunami, to plan recovery efforts.

    The activity of the regions’ ice sheets was a striking example of the technology’s capabilities.

    “Ice is a mineral that flows,” Morin said, and so the new topographic data offer much more frequent information about ice-sheet changes driven by climate change than previously available. “We not only have an image of the ice but we know exactly how high it is.”

    Morin also showed an image of the Larsen Ice Shelf revealing a crack that had appeared in the glacier. The real news, though, was that the crack—which created an iceberg the size of the big island of Hawaii—was less than 24 hours old. It had appeared sometime after midnight on July 12.

    “We [now] have better topography for Siberia than we have for Montana,” he noted.

    New Directions

    While the large, high-resolution satellites have already transformed the field, innovations are coming that could create another shift, Morin said.

    “This is not your father’s topography,” he noted. “Everything has changed; everything is time sensitive; everything is on demand.” In an interview later that morning, he added, “XSEDE, Blue Waters and NSF have changed how earth science happens now.”

    One advance won’t require new technology: just a little more time. While the current topographic dataset is at 1-meter resolution, the data can go tighter with more computation. The satellite images actually have a 30-centimeter resolution, which would allow for the project to shift from imaging objects the size of automobiles to those the size of a coffee table.

    At that point, he said, “instead of [just the] presence or absence of trees we’ll be able to tell what species of tree. It doesn’t take recollection of imagery; it just takes reprocessing.”
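
    The reprocessing cost scales simply with resolution; a quick calculation shows why the jump from 1-meter to 30-centimeter posting is computationally significant:

    ```python
    # Pixels (and compute) per unit area scale with the square of resolution.
    coarse_m, fine_m = 1.0, 0.3
    pixel_factor = (coarse_m / fine_m) ** 2
    print(f"~{pixel_factor:.0f}x more pixels per unit area")   # ~11x
    ```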

    The new, massive constellation of CubeSats such as the Planet company’s toaster-sized Dove satellites now being launched promises an even more disruptive advance. A swarm of these satellites will provide much more frequent coverage of the entire Earth’s surface than possible with the large telescopes.

    “The quality isn’t as good, but right now we’re talking about coverage,” Morin said. His group’s work has taken advantage of a system that allows mapping of a major portion of the Earth in a year. “What happens when we have monthly coverage?”

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


     