Tagged: Moore’s Law

  • richardmitnick 10:34 am on April 11, 2019
    Tags: Moore's Law

    From Science Node: “The end of an era” 


    10 Apr, 2019
    Alisa Alering

    For the last fifty years, computer technology has been getting faster and cheaper. Now that extraordinary progress is coming to an end. What happens next?

    John Shalf, department head for Computer Science at Berkeley Lab, has a few ideas. He’s going to share them in his keynote at ISC High Performance 2019 in Frankfurt, Germany (June 16-20), but he gave Science Node a sneak preview.

    Moore’s Law is based on Gordon Moore’s 1965 prediction that the number of transistors on a microchip doubles every two years, while the cost is halved. His prediction proved true for several decades. What’s different now?

    Double trouble. From 1965 to 2004, the number of transistors on a microchip doubled every two years while cost decreased. Now that you can’t get more transistors on a chip, high-performance computing is in need of a new direction. Data courtesy Dataquest/Intel.

    The end of Dennard scaling happened in 2004, when we couldn’t crank up the clock frequencies anymore on chips, so we moved to exponentially increasing parallelism in order to continue performance scaling. It was not an ideal solution, but it enabled us to continue some semblance of performance scaling. Now we’ve gotten to the point where we can’t squeeze any more transistors onto the chip.

    If you can’t cram any more transistors on the chip, then we can’t continue to scale the number of cores as a means to scale performance. And we’ll get no power improvement: with the end of Moore’s Law, to get ten times more performance we would need ten times more power in the future. Capital equipment cost won’t improve either, meaning that if I spend $100 million and get a 100-petaflop machine today, then if I spend $100 million ten years from now, I’ll get the same machine.
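
    A quick back-of-the-envelope projection makes the contrast concrete. The sketch below is illustrative only: the two-year doubling period and the $100 million / 100 petaflop figures are taken from the discussion above, and everything else is an assumption. It compares what a fixed budget would buy under Moore-era scaling versus the flat post-Moore scenario Shalf describes.

    # Illustrative comparison of Moore-era scaling vs. the flat post-Moore case.
    # Doubling period and the $100M / 100 PF starting point come from the text;
    # the projection itself is a back-of-the-envelope sketch, not a forecast.

    def moore_factor(years, doubling_period=2.0):
        """Relative performance per dollar if capability doubles every doubling_period years."""
        return 2 ** (years / doubling_period)

    budget_musd = 100      # fixed $100 million budget
    today_pflops = 100     # ~100 petaflops today, per the interview

    for years in (0, 2, 4, 6, 8, 10):
        moore = today_pflops * moore_factor(years)
        flat = today_pflops  # post-Moore: the same money buys the same machine
        print(f"+{years:2d} yr: Moore-era ~{moore:6.0f} PF vs. post-Moore ~{flat:3.0f} PF for ${budget_musd}M")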

    That sounds fairly dire. Is there anything we can do?

    There are three dimensions we can pursue: one is new architectures and packaging, another is CMOS transistor replacements using new materials, and the third is new models of computation that are not necessarily digital.

    Let’s break it down. Tell me about architectures.

    John Shalf, of Lawrence Berkeley National Laboratory, wants to consider all options—from new materials and specialization to industry partnerships—when it comes to imagining the future of high-performance computing. Courtesy John Shalf.

    We need to change course and learn from our colleagues in other industries. Our friends in the phone business and in mega data centers are already pointing out the solution. Architectural specialization is one of the biggest sources of improvement in the iPhone. The A8 chip, introduced in 2014, had 29 different discrete accelerators. We’re now at the A11, and it has nearly 40 different discrete hardware accelerators. Each new generation of chips is slowly squeezing out the CPUs in favor of special-function accelerators for different parts of the workload.

    And for the mega-data center, Google is making its own custom chip. They weren’t seeing the kind of performance improvements they needed from Intel or Nvidia, so they’re building their own custom chips tailored to improve the performance for their workloads. So are Facebook and Amazon. The only ones absent from this trend are HPC.

    With Moore’s Law tapering off, the only way to get a leg up in performance is to go back to customization. The embedded-systems world and the ARM ecosystem are an example where, even though the chips are custom, the components—the little circuit designs on those chips—are reusable across many different disciplines. The new commodity is going to be these little IP blocks we arrange on the chip. We may need to add some IP blocks that are useful for scientific applications, but there’s a lot of IP reuse in that embedded ecosystem and we need to learn how to tap into that.

    How do new materials fit in?

    We’ve been using silicon for the past several decades because it is inexpensive and ubiquitous, and has many years of development effort behind it. We have developed an entire scalable manufacturing infrastructure around it, so it continues to be the most cost-effective route for mass-manufacture of digital devices. It’s pretty amazing, to use one material system for that long. But now we need to look at some new transistor that can continue to scale performance beyond what we’re able to wring out of silicon. Silicon is, frankly, not that great of a material when it comes to electron mobility.

    _________________________________________________________
    The Materials Project

    The current pace of innovation is extremely slow because the primary means available for characterizing new materials is to read a lot of papers. One solution might be Kristin Persson’s Materials Project, originally invented to advance the exploration of battery materials.

    By scaling materials computations over supercomputing clusters, research can be targeted to the most promising compounds, helping to remove guesswork from materials design. The hope is that reapplying this technology to also discover better electronic materials will speed the pace of discovery for new electronic devices.
    In 2016, an eight-laboratory consortium was formed to push this idea at the DOE “Big Ideas Summit,” where grass-roots ideas from the labs are presented to the highest levels of DOE leadership. Read the whitepaper and elevator pitch here.

    After the ‘Beyond Moore’s Law’ project was invited back for the 2017 Big Ideas Summit, the DOE created a Microelectronics BRN (Basic Research Needs) Workshop. The initial report from that meeting has been released, and the DOE’s FY20 budget includes a line item for microelectronics research.
    _________________________________________________________

    The problem is, we know from prior experience that once you demonstrate a new device concept in the laboratory, it takes a fairly consistent ten years to commercialize it: ten years from lab to fab. Although there are some promising directions, nobody has yet demonstrated something in the lab that is clearly superior to silicon transistors. With no CMOS replacement imminent, that means we’re already ten years too late! We need to develop tools and processes to accelerate the pace of discovery of more efficient microelectronic devices to replace CMOS, and of the materials that make them possible.

    So, until we find a new material for the perfect chip, can we solve the problem with new models of computing? What about quantum computing?

    New models would include quantum and neuromorphic computing. These models expand computing into new directions, but they’re best at computing problems that are done poorly using digital computing.

    I like to use the example of ‘quantum Excel.’ Say I balance my checkbook by creating a spreadsheet with formulas, and it tells me how balanced my checkbook is. If I were to use a quantum computer for that—and it would be many, many, many years in the future where we’d have enough qubits to do it, but let’s just imagine—quantum Excel would be the superposition of all possible balanced checkbooks.

    And a neuromorphic computer would say, ‘Yes, it looks correct,’ and then you’d ask it again and it would say, ‘It looks correct within an 80% confidence interval.’ Neuromorphic is great at pattern recognition, but it wouldn’t be as good for running partial differential equations and computing exact arithmetic.

    We really need to go back to the basics. We need to go back to ‘What are the application requirements?’

    Clearly there are a lot of challenges. What’s exciting about this time right now?

    The Summit supercomputer at Oak Ridge National Laboratory operates at a top speed of 200 petaflops and is currently the world’s fastest computer. But the end of Moore’s Law means that to get 10x that performance in the future, we also would need 10x more power. Courtesy Carlos Jones/ORNL.

    Computer architecture has become very, very important again. The previous era of exponential scaling created a much narrower space for innovation because the focus was general-purpose computing, the universal machine. The problems we now face open the door again for mathematicians and computer architects to collaborate and solve big problems together. And I think that’s very exciting. Those kinds of collaborations lead to really fun, creative, and innovative solutions to scientific problems of worldwide importance.

    The real issue is that our economic model for acquiring supercomputing systems will be deeply disrupted. Originally, systems were designed by mathematicians to solve important mathematical problems. However, the exponential improvement rates of Moore’s law ensured that the most general purpose machines that were designed for the broadest range of problems would have a superior development budget and, over time, would ultimately deliver more cost-effective performance than specialized solutions.

    The end of Moore’s Law spells the end of general purpose computing as we know it. Continuing with this approach dooms us to modest or even non-existent performance improvements. But the cost of customization using current processes is unaffordable.

    We must reconsider our relationship with industry to re-enable specialization targeted at our relatively small HPC market. Developing a self-sustaining business model is paramount. The embedded ecosystem (including the ARM ecosystem) provides one potential path forward, but there is also the possibility of leveraging the emerging open source hardware ecosystem and even packaging technologies such as Chiplets to create cost-effective specialization.

    We must consider all options for business models and all options for partnerships across agencies or countries to ensure an affordable and sustainable path forward for the future of scientific and technical computing.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 1:07 pm on December 3, 2018
    Tags: MESO devices, Moore's Law, Multiferroics

    From UC Berkeley: “New quantum materials could take computing devices beyond the semiconductor era” 


    December 3, 2018
    Robert Sanders
    rlsanders@berkeley.edu

    MESO devices, based on magnetoelectric and spin-orbit materials, could someday replace the ubiquitous semiconductor transistor, today represented by CMOS. MESO uses up-and-down magnetic spins in a multiferroic material to store binary information and conduct logic operations. (Intel graphic)

    Researchers from Intel Corp. and UC Berkeley are looking beyond current transistor technology and preparing the way for a new type of memory and logic circuit that could someday be in every computer on the planet.

    In a paper appearing online Dec. 3 in advance of publication in the journal Nature, the researchers propose a way to turn relatively new types of materials, multiferroics and topological materials, into logic and memory devices that will be 10 to 100 times more energy-efficient than foreseeable improvements to current microprocessors, which are based on CMOS (complementary metal–oxide–semiconductor).

    The magneto-electric spin-orbit, or MESO, devices will also pack five times more logic operations into the same space as CMOS, continuing the trend toward more computations per unit area, a central tenet of Moore’s Law.

    The new devices will boost technologies that require intense computing power with low energy use, specifically highly automated, self-driving cars and drones, both of which require ever increasing numbers of computer operations per second.

    “As CMOS develops into its maturity, we will basically have very powerful technology options that see us through. In some ways, this could continue computing improvements for another whole generation of people,” said lead author Sasikanth Manipatruni, who leads hardware development for the MESO project at Intel’s Components Research group in Hillsboro, Oregon. MESO was invented by Intel scientists, and Manipatruni designed the first MESO device.

    Transistor technology, invented 70 years ago, is used today in everything from cellphones and appliances to cars and supercomputers. Transistors shuffle electrons around inside a semiconductor and store them as binary bits 0 and 1.

    Single crystals of the multiferroic material bismuth-iron-oxide. The bismuth atoms (blue) form a cubic lattice with oxygen atoms (yellow) at each face of the cube and an iron atom (gray) near the center. The somewhat off-center iron interacts with the oxygen to form an electric dipole (P), which is coupled to the magnetic spins of the atoms (M) so that flipping the dipole with an electric field (E) also flips the magnetic moment. The collective magnetic spins of the atoms in the material encode the binary bits 0 and 1, and allow for information storage and logic operations.

    In the new MESO devices, the binary bits are the up-and-down magnetic spin states in a multiferroic, a material first created in 2001 by Ramamoorthy Ramesh, a UC Berkeley professor of materials science and engineering and of physics and a senior author of the paper.

    “The discovery was that there are materials where you can apply a voltage and change the magnetic order of the multiferroic,” said Ramesh, who is also a faculty scientist at Lawrence Berkeley National Laboratory. “But to me, ‘What would we do with these multiferroics?’ was always a big question. MESO bridges that gap and provides one pathway for computing to evolve.”

    In the Nature paper, the researchers report that they have reduced the voltage needed for multiferroic magneto-electric switching from 3 volts to 500 millivolts, and predict that it should be possible to reduce this to 100 millivolts: one-fifth to one-tenth that required by CMOS transistors in use today. Lower voltage means lower energy use: the total energy to switch a bit from 1 to 0 would be one-tenth to one-thirtieth of the energy required by CMOS.
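
    To see why the voltage numbers above matter so much, the sketch below applies the usual capacitive rule of thumb that switching energy scales roughly as E ≈ ½·C·V². This is a simplification for illustration only: the capacitance value is hypothetical, and the one-tenth to one-thirtieth energy ratio quoted in the article comes from the device physics in the Nature paper, not from this formula.

    # Rule-of-thumb illustration: for a capacitive (CMOS-like) switch, the
    # energy per switching event is roughly E = 0.5 * C * V^2, so energy falls
    # quadratically with operating voltage. The capacitance below is
    # hypothetical; real MESO switching energy follows different physics.

    C_F = 0.1e-15  # assumed 0.1 femtofarad node capacitance

    def switch_energy(voltage_v, capacitance_f=C_F):
        return 0.5 * capacitance_f * voltage_v ** 2

    e_3v = switch_energy(3.0)
    for label, v in [("3 V (CMOS-like)", 3.0),
                     ("500 mV (demonstrated switching voltage)", 0.5),
                     ("100 mV (projected)", 0.1)]:
        e = switch_energy(v)
        print(f"{label}: {e:.2e} J per switch (reduction vs. 3 V: {e_3v / e:.0f}x)")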

    “A number of critical techniques need to be developed to allow these new types of computing devices and architectures,” said Manipatruni, who combined the functions of magneto-electrics and spin-orbit materials to propose MESO. “We are trying to trigger a wave of innovation in industry and academia on what the next transistor-like option should look like.”

    Internet of things and AI

    The need for more energy-efficient computers is urgent. The Department of Energy projects that, with the computer chip industry expected to expand to several trillion dollars in the next few decades, energy use by computers could skyrocket from 3 percent of all U.S. energy consumption today to 20 percent, nearly as much as today’s transportation sector. Without more energy-efficient transistors, the incorporation of computers into everything – the so-called internet of things – would be hampered. And without new science and technology, Ramesh said, America’s lead in making computer chips could be upstaged by semiconductor manufacturers in other countries.

    “Because of machine learning, artificial intelligence and IOT, the future home, the future car, the future manufacturing capability is going to look very different,” said Ramesh, who until recently was the associate director for Energy Technologies at Berkeley Lab. “If we use existing technologies and make no more discoveries, the energy consumption is going to be large. We need new science-based breakthroughs.”

    Paper co-author Ian Young, a UC Berkeley Ph.D., started a group at Intel eight years ago, along with Manipatruni and Dmitri Nikonov, to investigate alternatives to transistors, and five years ago they began focusing on multiferroics and spin-orbit materials, so-called “topological” materials with unique quantum properties.

    “Our analysis brought us to this type of material, magneto-electrics, and all roads led to Ramesh,” said Manipatruni.

    Multiferroics and spin-orbit materials

    Multiferroics are materials whose atoms exhibit more than one “collective state.” In ferromagnets, for example, the magnetic moments of all the iron atoms in the material are aligned to generate a permanent magnet. In ferroelectric materials, on the other hand, the positive and negative charges of atoms are offset, creating electric dipoles that align throughout the material and create a permanent electric moment.

    MESO is based on a multiferroic material consisting of bismuth, iron and oxygen (BiFeO3) that is both magnetic and ferroelectric. Its key advantage, Ramesh said, is that these two states – magnetic and ferroelectric – are linked or coupled, so that changing one affects the other. By manipulating the electric field, you can change the magnetic state, which is critical to MESO.

    The key breakthrough came with the rapid development of topological materials with spin-orbit effect, which allow for the state of the multiferroic to be read out efficiently. In MESO devices, an electric field alters or flips the dipole electric field throughout the material, which alters or flips the electron spins that generate the magnetic field. This capability comes from spin-orbit coupling, a quantum effect in materials, which produces a current determined by electron spin direction.

    In another paper that appeared earlier this month in Science Advances, UC Berkeley and Intel experimentally demonstrated voltage-controlled magnetic switching using the magneto-electric material bismuth-iron-oxide (BiFeO3), a key requirement for MESO.

    “We are looking for revolutionary and not evolutionary approaches for computing in the beyond-CMOS era,” Young said. “MESO is built around low-voltage interconnects and low-voltage magneto-electrics, and brings innovation in quantum materials to computing.”

    Other co-authors of the Nature paper are Chia-Ching Lin, Tanay Gosavi and Huichu Liu of Intel and Bhagwati Prasad, Yen-Lin Huang and Everton Bonturim of UC Berkeley. The work was supported by Intel.

    RELATED INFORMATION

    Beyond CMOS computing with spin and polarization Nature Physics

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded in the wake of the gold rush by leaders of the newly established 31st state, the University of California’s flagship campus at Berkeley has become one of the preeminent universities in the world. Its early guiding lights, charged with providing education (both “practical” and “classical”) for the state’s people, gradually established a distinguished faculty (with 22 Nobel laureates to date), a stellar research library, and more than 350 academic programs.


     
  • richardmitnick 12:33 pm on December 3, 2018
    Tags: Beyond Moore's Law, Moore's Law, Ramamoorthy Ramesh

    From Lawrence Berkeley National Lab: “Berkeley Lab Takes a Quantum Leap in Microelectronics” Ramamoorthy Ramesh 


    December 3, 2018
    Julie Chao
    JHChao@lbl.gov
    (510) 486-6491

    (Courtesy Ramamoorthy Ramesh)
    A Q&A with Ramamoorthy Ramesh on the need for next-generation computer chips

    Ramamoorthy Ramesh, a Lawrence Berkeley National Laboratory (Berkeley Lab) scientist in the Materials Sciences Division, leads a major Lab research initiative called “Beyond Moore’s Law,” which aims to develop next-generation microelectronics and computing architectures.

    Moore’s Law – which holds that the number of transistors on a chip will double about every two years and has held true in the industry for the last four decades – is coming to an inevitable end as physical limitations are reached. Major innovations will be required to sustain advances in computing. Working with industry leaders, Berkeley Lab’s approach spans fundamental materials discovery, materials physics, device development, algorithms, and systems architecture.

    In collaboration with scientists at Intel Corp., Ramesh proposes a new memory-in-logic device for replacing or augmenting conventional transistors. The work is detailed in a new Nature paper described in this UC Berkeley news release [blog post will follow]. Here Ramesh discusses the need for a quantum leap in microelectronics and how Berkeley Lab plans to play a role.

    Q. Why is the end of Moore’s Law such an urgent problem?

    If we look around, at the macro level there are two big global phenomena happening in electronics. One is the Internet of Things. It basically means every building, every car, every manufacturing capability is going to be fully accessorized with microelectronics. So, they’re all going to be interconnected. While the exact size of this market (in terms of number of units and their dollar value) is being debated, there is agreement that it is growing rapidly.

    The second big revolution is artificial intelligence/machine learning. This field is in its nascent stages and will find applications in diverse technology spaces. However, these applications are currently limited by the memory wall and the limitations imposed by the efficiency of computing. Thus, we will need more powerful chips that consume much lower energy. Driven by these emerging applications, there is the potential for the microelectronics market to grow exponentially.

    Semiconductors have been progressively shrinking and becoming faster, but they are consuming more and more power. If we don’t do anything to curb their energy consumption, the total energy consumption of microelectronics will jump from 4 percent to about 20 percent of primary energy. As a point of reference, today transportation consumes 24 percent of U.S. energy, manufacturing another 24 percent, and buildings 38 percent; that’s almost 90 percent. This could become almost like transportation. So, we said, that’s a big number. We need to go to a totally new technology and reduce energy consumption by several orders of magnitude.

    Q. So energy consumption is the main driver for the need for semiconductor innovation?

    No, there are two other factors. One is national security. Microelectronics and computing systems are a critical part of our national security infrastructure. And the other is global competitiveness. China has been investing hundreds of billions of dollars into making these fabs. Previously only U.S. companies made them. For two years, the fastest computer in the world was built in China. So this is a strategic issue for the U.S.

    Q. What is Berkeley Lab doing to address the problem?

    Berkeley Lab is pursuing a “co-design” framework using exemplar demonstration pathways. In our co-design framework, the four key components are: (1) computational materials discovery and device-scale modeling (led by Kristin Persson and Lin-Wang Wang), (2) materials synthesis and materials physics (led by Peter Fischer), (3) scale-up of synthesis pathways (led by Patrick Naulleau), and (4) circuit architecture and algorithms (led by John Shalf). These components are all working together to identify the key elements of an “attojoule” (10⁻¹⁸ joules) logic-in-memory switch, where attojoule refers to the energy consumption per logic operation.

    One key outcome of the Berkeley Lab co-design framework is to understand the fundamental scientific issues that will impact the attojoule device, which will be about six orders of magnitude lower in energy compared to today’s state-of-the-art CMOS transistors, which work at around 50 picojoules (1 picojoule = 10⁻¹² joules) per logic operation.
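
    The arithmetic behind “six orders of magnitude” is simple to check. The sketch below assumes an attojoule-class target of roughly 50 aJ per operation so that it lines up with the 50 pJ figure quoted for today’s CMOS; the exact target value is an assumption for illustration.

    import math

    # Orders-of-magnitude check for the energy-per-operation figures above.
    cmos_energy_j = 50e-12     # ~50 picojoules per logic op (today's CMOS, as quoted)
    target_energy_j = 50e-18   # ~50 attojoules per logic op (assumed attojoule-class target)

    ratio = cmos_energy_j / target_energy_j
    print(f"Reduction factor: {ratio:.0e} (~{math.log10(ratio):.0f} orders of magnitude)")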

    This paper presents the key elements of a pathway by which such an attojoule switch can be designed and fabricated using magnetoelectric multiferroics and more broadly, using quantum materials. There are still scientific as well as technological challenges.

    Berkeley Lab’s capabilities and facilities are well suited to tackle these challenges. We have nanoscience and x-ray facilities such as the Molecular Foundry and Advanced Light Source, big scientific instruments, which will be critical and allow us to rapidly explore new materials and understand their electronic, magnetic, and chemical properties.

    Another is the Materials Project, which enables discovery of new materials using a computational approach. Plus there is our ongoing work on deep UV lithography, which is carried out under the aegis of the Center for X-Ray Optics. This provides us with a perfect framework to address how we can do device processing at large scales.

    All of this will be done in collaboration with faculty and students at UC Berkeley and our partners in industry, as this paper illustrated.

    Q. What is the timeline?

    It will take a decade. There’s still a lot of work to be done. Your computer today operates at 3 volts. This device in the Nature paper proposes something at 100 millivolts. We need to understand the physics a lot better. That’s why a place like Berkeley Lab is so important.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Bringing Science Solutions to the World

    In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

    Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

    Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

    A U.S. Department of Energy National Laboratory Operated by the University of California.


     
  • richardmitnick 1:44 pm on July 24, 2018
    Tags: Beyond silicon: $1.5 billion U.S. program aims to spur new types of computer chips, Moore's Law, Using carbon nanotubes

    From AAAS: “Beyond silicon: $1.5 billion U.S. program aims to spur new types of computer chips” 


    Jul. 24, 2018
    Robert F. Service

    A wafer contains hundreds of tiny computer chips made from carbon nanotubes, which switch faster and more efficiently than transistors made from silicon. Credit: Stanford Engineering.

    Silicon computer chips have been on a roll for half a century, getting ever more powerful. But the pace of innovation is slowing. Today the U.S. military’s Defense Advanced Research Projects Agency (DARPA) announced dozens of new grants totaling $75 million in a program that aims to reinvigorate the chip industry with basic research into new designs and materials, such as carbon nanotubes. Over the next few years, the DARPA program, which supports both academic and industry scientists, will grow to $300 million per year up to a total of $1.5 billion over 5 years.

    “It’s a critical time to do this,” says Erica Fuchs, a computer science policy expert at Carnegie Mellon University in Pittsburgh, Pennsylvania.

    In 1965, Intel co-founder Gordon Moore made the observation that would become his eponymous “law”: The number of transistors on chips was doubling every 2 years, a time frame later cut to every 18 months. But the gains from miniaturizing the chips are dwindling. Today, chip speeds are stuck in place, and each new generation of chips brings only a 30% improvement in energy efficiency, says Max Shulaker, an electrical engineer at the Massachusetts Institute of Technology in Cambridge. Fabricators are approaching physical limits of silicon, says Gregory Wright, a wireless communications expert at Nokia Bell Labs in Holmdel, New Jersey. Electrons are confined to patches of silicon just 100 atoms wide, he says, forcing complex designs that prevent electrons from leaking out and causing errors. “We’re running out of room,” he says.

    Moreover, only a handful of companies can afford the multibillion-dollar fabrication plants that make the chips, stifling innovation in a field once dominated by small startups, says Valeria Bertacco, a computer scientist at the University of Michigan in Ann Arbor. And some big companies are going down separate paths, designing specialized chips for specific tasks, Fuchs says. That has reduced the incentive for them to pay for shared, precompetitive basic research. The number of companies involved in the Semiconductor Research Corporation in Durham, North Carolina, which backs such work, dropped from 80 in 1996 to less than half that in 2013, according to a study by Fuchs and her colleagues.

    DARPA is now trying to fill the gap, with grants to researchers such as Shulaker. He is fashioning 3D chips with transistors made of carbon nanotubes, which switch on and off faster and more efficiently than silicon transistors. Companies today already make 3D chips with silicon as a way to pack logic and memory functions closer together to speed up processing. But the chips are slowed down by bulky and sparse wiring that carries information between the chip layers. And because 2D silicon chip layers must be fabricated separately at more than 1000°C, there is no way to build up 3D chips in an integrated fabrication process without melting the lower layers.

    Carbon nanotube transistors, which can be made nearly at room temperature, offer a better path to dense, integrated 3D chips, Shulaker says. Even though his team’s 3D chips will have features 10 times larger than state-of-the-art silicon devices, their speed and energy efficiency is expected to be 50 times better—a potential boon for power-hungry data centers.

    The DARPA program is also supporting research into flexible chip architectures. Daniel Bliss, a wireless communications expert at Arizona State University in Tempe, and his colleagues want to improve wireless communications with chips that can be reconfigured on the fly to carry out specialized tasks. Bliss is working on radio chips that mix and filter signals using software rather than hardware—an advance that would allow a larger number of devices to transmit and receive signals without interference. This could improve mobile and satellite communications, as well as enable a rapid growth in the internet of things, where myriad devices communicate with one another, he says.

    Another DARPA grant, for researchers at Stanford University in Palo Alto, California, will go to improving computer tools used in chipmaking. These tools verify novel chip designs with a form of artificial intelligence called machine learning. They would help automate the largely manual process of detecting design bugs in chips made up of billions of transistors, and could speed up the ability of companies to design, test, and fabricate new chip architectures.

    If even a fraction of the new projects succeed, the DARPA project “will completely revolutionize how we design electronics,” says Subhasish Mitra, a Stanford electrical and computer engineer, and a researcher on the 3D carbon nanotube and circuit validation projects. He says it will also spur engineers to look beyond silicon, which has dominated research for decades. “When I was a student, life was boring,” Mitra says. “It was clear that silicon would move forward along a known path. Now, it’s absolutely clear that’s not what the future is.”

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    The American Association for the Advancement of Science is an international non-profit organization dedicated to advancing science for the benefit of all people.

     
  • richardmitnick 3:37 pm on April 14, 2018
    Tags: Moore's Law

    From LBNL: “Valleytronics Discovery Could Extend Limits of Moore’s Law” 


    April 13, 2018
    John German
    jdgerman@lbl.gov
    (510) 486-6601

    Valleytronics utilizes different local energy extrema (valleys) with selection rules to store 0s and 1s. In SnS, these extrema have different shapes and responses to different polarizations of light, allowing the 0s and 1s to be directly recognized. This schematic illustrates the variation of electron energy in different states, represented by curved surfaces in space. The two valleys of the curved surface are shown. No image credit.

    Research appearing today in Nature Communications finds useful new information-handling potential in samples of tin(II) sulfide (SnS), a candidate “valleytronics” transistor material that might one day enable chipmakers to pack more computing power onto microchips.

    The research was led by Jie Yao of the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and Shuren Lin of UC Berkeley’s Department of Materials Science and Engineering and included scientists from Singapore and China. The research team used the unique capabilities of Berkeley Lab’s Molecular Foundry, a DOE Office of Science user facility.

    For several decades, improvements in conventional transistor materials have been sufficient to sustain Moore’s Law – the historical pattern of microchip manufacturers packing more transistors (and thus more information storage and handling capacity) into a given volume of silicon. Today, however, chipmakers are concerned that they might soon reach the fundamental limits of conventional materials. If they can’t continue to pack more transistors into smaller spaces, they worry that Moore’s Law would break down, preventing future circuits from becoming smaller and more powerful than their predecessors.

    That’s why researchers worldwide are on the hunt for new materials that can compute in smaller spaces, primarily by taking advantage of the additional degrees of freedom that the materials offer – in other words, using a material’s unique properties to perform more computations in the same space. Spintronics, for example, is a concept for transistors that harnesses the up and down spins of electrons in materials as the on/off transistor states.

    Valleytronics, another emerging approach, utilizes the highly selective response of candidate crystalline materials under specific illumination conditions to denote their on/off states – that is, using the materials’ band structures so that the information of 0s and 1s is stored in separate energy valleys of electrons, which are dependent on the crystal structures of the materials.

    In this new study, the research team has shown that tin(II) sulfide (SnS) is able to absorb different polarizations of light and then selectively reemit light of different colors at different polarizations. This is useful for concurrently accessing both the usual electronic and valleytronic degrees of freedom, which would substantially increase the computing power and data storage density of circuits made with the material.

    “We show a new material with distinctive energy valleys that can be directly identified and separately controlled,” said Yao. “This is important because it provides us a platform to understand how valley signatures are carried by electrons and how information can be easily stored and processed between the valleys, which are of both scientific and engineering significance.”

    Lin, the first author of the paper, said the material is different from previously investigated candidate valleytronics materials because it possesses such selectivity at room temperature without additional biases apart from the excitation light source, which alleviates the previously stringent requirements in controlling the valleys. Compared to its predecessor materials, SnS is also much easier to process.

    With this finding, researchers will be able to develop operational valleytronic devices, which may one day be integrated into electronic circuits. The unique coupling between light and valleys in this new material may also pave the way toward future hybrid electronic/photonic chips.

    Berkeley Lab’s “Beyond Moore’s Law” initiative leverages the basic science capabilities and unique user facilities of Berkeley Lab and UC Berkeley to evaluate promising candidates for next-generation electronics and computing technologies. Its objective is to build close partnerships with industry to accelerate the time it typically takes to move from the discovery of a technology to its scale-up and commercialization.

    See the full article here .

    Please help promote STEM in your local schools.


    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California


     
  • richardmitnick 9:05 pm on March 19, 2018
    Tags: Moore's Law

    From LLNL: “Breaking the Law: Lawrence Livermore, Department of Energy look to shatter Moore’s Law through quantum computing” 



    March 19, 2018
    Jeremy Thomas
    thomas244@llnl.gov
    925-422-5539

    Lawrence Livermore National Laboratory physicist Jonathan DuBois, who heads the Lab’s Quantum Coherent Device Physics (QCDP) group, examines a prototype quantum computing device designed to solve quantum simulation problems. The device is kept inside a refrigerated vacuum tube (gold-plated to provide solid thermal matching) at temperatures colder than outer space. Photos by Carrie Martin/LLNL.

    The laws of quantum physics impact daily life in rippling undercurrents few people are aware of, from the batteries in our smartphones to the energy generated from solar panels. As the Department of Energy and its national laboratories explore the frontiers of quantum science, such as calculating the energy levels of a single atom or how molecules fit together, more powerful tools are a necessity.

    “The problem basically gets worse the larger the physical system gets — if you get beyond a simple molecule we have no way of resolving those kinds of energy differences,” said Lawrence Livermore National Laboratory (LLNL) physicist Jonathan DuBois, who heads the Lab’s Quantum Coherent Device Physics (QCDP) group. “From a physics perspective, we’re getting more and more amazing, highly controlled physics experiments, and if you tried to simulate what they were doing on a classical computer, it’s almost at the point where it would be kind of impossible.”

    In classical computing, Moore’s Law postulates that the number of transistors in an integrated circuit doubles approximately every two years. However, there are indications that Moore’s Law is slowing down and will eventually hit a wall. That’s where quantum computing comes in. Besides busting through the barriers of Moore’s Law, some are banking on quantum computing as the next evolutionary step in computers. It’s on the priority list for the National Nuclear Security Administration’s Advanced Simulation and Computing (ASC) program, which is investigating quantum computing, among other emerging technologies, through its “Beyond Moore’s Law” project. At LLNL, staff scientists DuBois and Eric Holland are leading the effort to develop a comprehensive co-design strategy for near-term application of quantum computing technology to outstanding grand challenge problems in the NNSA mission space.

    Whereas the desktop computers we’re all familiar with store information in binary form as either a one or a zero (on or off), in a quantum system information can be stored in superpositions, meaning that for a brief moment, mere nanoseconds, data in a quantum bit can exist as both one and zero before being projected into a classical binary state. Theoretically, these machines could solve certain complex problems much faster than any computers ever created before. While classical computers perform functions in serial (generating one answer at a time), quantum computers could potentially perform functions and store data in a highly parallelized way, exponentially increasing speed, performance and storage capacity.
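
    The bit-versus-qubit distinction above can be made concrete with a few lines of linear algebra. The toy calculation below is purely illustrative and is not a model of the LLNL device: a single-qubit state is a normalized two-component complex vector, and measuring it projects the superposition onto a classical 0 or 1 with probabilities given by the squared amplitudes.

    import numpy as np

    # Toy single-qubit example, for illustration only.
    # A classical bit is either 0 or 1; a qubit state is a*|0> + b*|1>.
    a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)        # equal superposition
    qubit = np.array([a, b], dtype=complex)

    probs = np.abs(qubit) ** 2                   # measurement probabilities |a|^2, |b|^2
    outcome = np.random.default_rng().choice([0, 1], p=probs)
    print(f"P(0)={probs[0]:.2f}, P(1)={probs[1]:.2f}, measured -> {outcome}")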

    LLNL recently brought on line a full capability quantum computing lab and testbed facility under the leadership of quantum coherent device group member Eric Holland. Researchers are performing tests on a prototype quantum device birthed under the Lab’s Quantum Computing Strategic Initiative. The initiative, now in its third year, is funded by Laboratory Directed Research & Development (LDRD) and aims to design, fabricate, characterize and build quantum coherent devices. The building and demonstration piece is made possible by DOE’s Advanced Scientific Computing Research (ASCR), a program managed by DOE’s Office of Science that is actively engaged in exploring if and how quantum computation could be useful for DOE applications.

    LLNL researchers are developing algorithms for solving quantum simulation problems on the prototype device, which looks deceptively simple and very strange. It’s a cylindrical metal box, with a sapphire chip suspended in it. The box is kept inside a refrigerated vacuum tube (gold-plated to provide solid thermal matching) at temperatures colder than outer space — negative 460 degrees Fahrenheit. It’s highly superconductive and faces zero resistance in the vacuum, thus extending the lifetime of the superposition state.

    “It’s a perfect electrical conductor, so if you can send an excitation inside here, you’ll get electromagnetic (EM) modes inside the box,” DuBois explained. “We’re using the space inside the box, the quantized EM fields, to store and manipulate quantum information, and the little chip couples to fields and manipulates them, determining the fine splitting in energies between different quantum states. These energy differences are what you use to make changes in quantum space.”

    To “talk” to the box, researchers use an arbitrary waveform generator, which creates an oscillating signal; the timing of the signal determines what computation is being done in the system. DuBois said the physicists are essentially building a quantum solver for Schrödinger’s equation, the basis for almost all physics and the determining factor for the dynamics of a quantum computing system.

    “It turns out that’s actually very hard to solve, and the bigger the system is, the size of what you need to keep track of blows up exponentially,” DuBois said. “The argument here is we can build a system that does that naturally — nature is basically keeping track of all those degrees of freedom for us, and so if we can control it carefully we can get it to basically emulate the quantum dynamics of some problem we’re interested in, a charge transfer in quantum chemistry or biology problem or scattering problem in nuclear physics.”
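
    DuBois’s point about exponential blow-up is easy to quantify: an n-qubit state vector has 2^n complex amplitudes, and simulating its dynamics classically means repeatedly applying exp(-iHt) to that vector. The sketch below is a generic illustration with a random Hermitian stand-in for H, not a model of any LLNL problem; it shows both the growth in state size and a single short time step.

    import numpy as np
    from scipy.linalg import expm

    # State-vector size grows exponentially with the number of qubits.
    for n in (2, 10, 20, 30, 40):
        print(f"{n:2d} qubits -> {2**n:,} complex amplitudes to track")

    # Tiny Schrodinger-style time step on 3 "qubits" with a random Hermitian H.
    n = 3
    dim = 2 ** n
    rng = np.random.default_rng(0)
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    H = (A + A.conj().T) / 2                  # Hermitian stand-in Hamiltonian
    psi0 = np.zeros(dim, dtype=complex)
    psi0[0] = 1.0                             # start in |000>
    psi_t = expm(-1j * H * 0.1) @ psi0        # evolve for time t = 0.1
    print("norm preserved:", np.isclose(np.linalg.norm(psi_t), 1.0))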

    Finding out how the device will work is part of the mission of DOE’s Advanced Quantum-Enabled Simulation (AQuES) Testbed Pathfinder program, which is analyzing several different approaches to creating a functional, useful quantum computer for basic science and use in areas such as determining nuclear scattering rates, the electronic structure in molecules or condensed matter or understanding the energy levels in solar panels. In 2017, DOE awarded $1.5 million over three years to a team including DuBois and Lawrence Berkeley National Laboratory physicists Irfan Siddiqi and Jonathan Carter. The team wants to determine the underlying technology for a quantum system, develop a practical, usable quantum computer and build quantum capabilities at the national labs to solve real-world problems.

    The science of quantum computing, according to DuBois, is “at a turning point.” Within the three-year timeframe, he said, the team should be able to assess what type of quantum system is worth pursuing as a testbed system. The researchers first want to demonstrate control over a quantum computer and solve specific quantum dynamics problems. Then, they want to set up a user facility or cloud-based system that any user could log into and solve complex quantum physics problems.

    “There are multiple competing approaches to quantum computing; trapping ions, semiconducting systems, etc., and all have their quirks — none of them are really at the point where it’s actually a quantum computer,” DuBois said. “The hardware side, which is what this is, the question is, ‘what are the first technologies that we can deploy that will help bridge the gap between what actually exists in the lab and how people are thinking of these systems as theoretical objects?'”

    Quantum computers have come a long way since the first superconducting quantum bit, or “qubit,” was created in 1999. In the nearly 20 years since, quantum systems have improved exponentially, as evidenced by the life span of the qubit’s superposition, or how long it takes the qubit to decay into 0 or 1. In 1999 that figure was a nanosecond. Currently, systems are up to tens to hundreds of milliseconds, which may not sound like much, but every year the lifetime of the quantum bit has doubled.

    For the Testbed project, LLNL’s first generation quantum device will be roughly 20 qubits, DuBois said, large enough to be interesting, but small enough to be useful. A system of that size could potentially reduce the time it takes for most current supercomputing systems to perform quantum dynamics calculations from about a day down to mere microseconds, DuBois said. To get to that point, LLNL and LBNL physicists will need to understand how to design systems that can extend the quantum state.

    “It needs to last long enough to be quantum and it needs to be controllable,” DuBois said. “There’s a spectrum to that; the bigger the space is, the more powerful it has to be. Then there’s how controllable it would be. The finest level of control would be to change the value to anything I want. That’s what we’re aiming for, but there’s a competition involved. We want to hit that sweet spot.”

    See the full article here .

    Please help promote STEM in your local schools.


    Stem Education Coalition

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration


     
  • richardmitnick 8:59 am on July 29, 2016
    Tags: Moore's Law

    From SUNY Buffalo: “Vortex laser offers hope for Moore’s Law” 


    July 28, 2016
    Cory Nealon

    The image above shows a vortex laser on a chip. Because the laser beam travels in a corkscrew pattern, encoding information into different vortex twists, it can carry 10 times or more information than conventional lasers. Credit: University at Buffalo.

    The optics advancement may solve an approaching data bottleneck by helping to boost computing power and information transfer rates tenfold.

    Described in a study published today (July 28, 2016) by the journal Science, the optics advancement could become a central component of next generation computers designed to handle society’s growing demand for information sharing.

    It may also be a salve to those fretting over the predicted end of Moore’s Law, the idea that researchers will find new ways to continue making computers smaller, faster and cheaper.

    “To transfer more data while using less energy, we need to rethink what’s inside these machines,” says Liang Feng, PhD, assistant professor in the Department of Electrical Engineering at the University at Buffalo’s School of Engineering and Applied Sciences, and the study’s co-lead author.

    The other co-lead author is Natalia M. Litchinitser, PhD, professor of electrical engineering at UB.

    Additional authors are: Pei Miao and Zhifeng Zhang, PhD candidates at UB; Jingbo Sun, PhD, assistant research professor of electrical engineering at UB; Wiktor Walasik, PhD, postdoctoral researcher at UB; and Stefano Longhi, PhD, professor at the Polytechnic University of Milan in Italy.

    For decades, researchers have been able to cram evermore components onto silicon-based computer chips. Their success explains why today’s smartphones have more computing power than the world’s most powerful computers of the 1980s, which cost millions in today’s dollars and were the size of a large file cabinet.

    But researchers are running into a bottleneck in which existing technology may no longer meet society’s demand for data. Predictions vary, but many suggest this could happen within the next five years.

    Researchers are addressing the matter in numerous ways including optical communications, which uses light to carry information. Examples of optical communications vary from old lighthouses to modern fiber optic cables used to watch television and browse the internet.

    Lasers are a central part of today’s optical communication systems. Researchers have been manipulating lasers in various ways, most commonly by funneling different signals into one path, to carry more information. But these techniques — specifically, wavelength-division multiplexing and time-division multiplexing — are also reaching their limits.

    The UB-led research team is pushing laser technology forward using another light manipulation technique called orbital angular momentum, which distributes the laser in a corkscrew pattern with a vortex at the center.

    Vortex lasers are usually too large to work in today’s computers, but the UB-led team was able to shrink the vortex laser to the point where it is compatible with computer chips. Because the laser beam travels in a corkscrew pattern, encoding information into different vortex twists, it can carry 10 times or more information than conventional lasers, which move linearly.
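
    A rough way to see where a tenfold gain could come from: if each distinguishable vortex twist can act as an independent channel, much like a wavelength in wavelength-division multiplexing, total throughput scales with the number of usable twists. The numbers in the sketch below are assumptions for illustration, not parameters of the UB device.

    # Rough OAM-multiplexing throughput illustration (assumed numbers).
    base_rate_gbps = 10     # hypothetical per-channel data rate
    usable_twists = 10      # hypothetical number of distinguishable OAM modes

    total_gbps = base_rate_gbps * usable_twists
    print(f"1 channel: {base_rate_gbps} Gb/s; {usable_twists} OAM channels: {total_gbps} Gb/s (~{usable_twists}x)")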

    The vortex laser is one component of many, such as advanced transmitters and receivers, which will ultimately be needed to continue building more powerful computers and datacenters.

    The research was supported with grants from the U.S. Army Research Office, the U.S. Department of Energy and National Science Foundation.

    See the full article here .

    Please help promote STEM in your local schools.


    Stem Education Coalition


    UB is a premier, research-intensive public university and a member of the Association of American Universities. As the largest, most comprehensive institution in the 64-campus State University of New York system, our research, creative activity and people positively impact the world.

     
  • richardmitnick 9:03 am on July 25, 2016
    Tags: Final International Technology Roadmap for Semiconductors (ITRS), Moore's Law

    From ars technica: “Transistors will stop shrinking in 2021, but Moore’s law will live on” 


    25/7/2016
    Sebastian Anthony

    A 22nm Haswell wafer, with a pin for scale. No image credit.

    Transistors will stop shrinking after 2021, but Moore’s law will probably continue, according to the final International Technology Roadmap for Semiconductors (ITRS).

    The ITRS—which has been produced almost annually by a collaboration of most of the world’s major semiconductor companies since 1993—is about as authoritative as it gets when it comes to predicting the future of computing. The 2015 roadmap will, however, be its last.

    The most interesting aspect of the ITRS is that it tries to predict what materials and processes we might be using in the next 15 years. The idea is that, by collaborating on such a roadmap, the companies involved can sink their R&D money into the “right” technologies.

    For example, despite all the fuss surrounding graphene and carbon nanotubes a few years back, the 2011 ITRS predicted that it would still be at least 10 to 15 years before they were actually used in memory or logic devices. Germanium and III-V semiconductors, though, were predicted to be only five to 10 years away. Thus, if you were deciding where to invest your R&D money, you might opt for III-V rather than nanotubes (which appears to be what Intel and IBM are doing).

    The latest and last ITRS focuses on two key areas: that it will no longer be economically viable to shrink transistors after 2021—and, pray tell, what might be done to keep Moore’s law going despite transistors reaching their minimal limit. (Remember, Moore’s law simply predicts a doubling of transistor density within a given integrated circuit, not the size or performance of those transistors.)
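
    Put another way: if the smallest transistor footprint really is frozen after 2021, continued density doubling has to come from somewhere other than shrinking, which is exactly why the roadmap turns to gate-all-around and vertical, 3D-stacked devices below. The sketch is illustrative only; the two-year doubling cadence is the classic Moore’s-law figure, and the layer counts are not ITRS projections.

    # Illustrative only: density doubling at a fixed transistor footprint
    # implies stacking more device layers rather than shrinking features.
    doubling_period_years = 2

    for year in (2023, 2025, 2027, 2029):
        density_factor = 2 ** ((year - 2021) / doubling_period_years)
        print(f"{year}: {density_factor:.0f}x the 2021 density -> ~{density_factor:.0f} stacked layers at fixed footprint")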

    The first problem has been known about for a long while. Basically, starting at around the 65nm node in 2006, the economic gains from moving to smaller transistors have been slowly dribbling away. Previously, moving to a smaller node meant you could cram tons more chips onto a single silicon wafer, at a reasonably small price increase. With recent nodes like 22 or 14nm, though, there are so many additional steps required that it costs a lot more to manufacture a completed wafer—not to mention additional costs for things like package-on-package (PoP) and through-silicon vias (TSV) packaging.

    This is the primary reason that the semiconductor industry has been whittled from around 20 leading-edge logic-manufacturing companies in 2000, down to just four today: Intel, TSMC, GlobalFoundries, and Samsung. (IBM recently left the business by selling its fabs to GloFo.)

    A diagram showing future transistor topologies, from Applied Materials (which makes the machines that actually create the various layers/features on a die). Gate-all-around is shown at the top.

    The second problem—how to keep increasing transistor density—has a couple of likely solutions. First, ITRS expects that chip makers and designers will begin to move away from FinFET in 2019, towards gate-all-around transistor designs. Then, a few years later, these transistors will become vertical, with the channel fashioned out of some kind of nanowire. This will allow for a massive increase in transistor density, similar to recent advances in 3D V-NAND memory.

    The gains won’t last for long though, according to ITRS: by 2024 (so, just eight years from now), we will once again run up against a thermal ceiling. Basically, there is a hard limit on how much heat can be dissipated from a given surface area. So, as chips get smaller and/or denser, it eventually becomes impossible to keep the chip cool. The only real solution is to completely rethink chip packaging and cooling. To begin with, we’ll probably see microfluidic channels that increase the effective surface area for heat transfer. But after that, as we stack circuits on top of each other, we’ll need something even fancier. Electronic blood, perhaps?

    The final ITRS is one of the most beastly reports I’ve ever seen, spanning seven different sections and hundreds of pages and diagrams. Suffice it to say I’ve only touched on a tiny portion of the roadmap here. There are large sections on heterogeneous integration, and also some important bits on connectivity (semiconductors play a key role in modulating optical and radio signals).

    Here’s what ASML’s EUV lithography machine may eventually look like. Pretty large, eh?

    I’ll leave you with one more important short-term nugget, though. We are fast approaching the cut-off date for choosing which lithography and patterning techs will be used for commercial 7nm and 5nm logic chips.

    As you may know, extreme ultraviolet (EUV) has been waiting in the wings for years now, never quite reaching full readiness due to its extremely high power usage and some resolution concerns. In the mean time, chip makers have fallen back on increasing levels of multiple patterning—multiple lithographic exposures, which increase manufacturing time (and costs).

    Now, however, directed self-assembly (DSA)—where the patterns assemble themselves—is also getting very close to readiness. If either technology wants to be used over multiple patterning for 7nm logic, the ITRS says they will need to prove their readiness in the next few months.

    See the full article here .

    Please help promote STEM in your local schools.

    Stem Education Coalition
    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     