Tagged: Computer technology

  • richardmitnick 12:15 pm on January 30, 2019
    Tags: Computer technology, Intel Xeon W-3175X processor released

    From insideHPC: “Intel steps up with 28 core Xeon Processor for High-End Workstations” 

    From insideHPC

    January 30, 2019

    Intel Corporation has announced the release of the Intel Xeon W-3175X processor in January 2019. The Intel Xeon W-3175X is a 28-core workstation powerhouse built for select, highly-threaded and computing-intensive applications such as architectural and industrial design and professional content creation. (Credit: Tim Herman/Intel Corporation)

    Today Intel announced that its new Intel Xeon W-3175X processor is now available. This unlocked 28-core workstation powerhouse is built for select, highly-threaded and computing-intensive applications such as architectural and industrial design and professional content creation.

    “Built for handling heavily threaded applications and tasks, the Intel Xeon W-3175X processor delivers uncompromising single- and all-core world-class performance for the most advanced professional creators and their demanding workloads. With the most cores and threads, CPU PCIe lanes, and memory capacity of any Intel desktop processor, the Intel Xeon W-3175X processor has the features that matter for massive mega-tasking projects such as film editing and 3D rendering.”

    Other key features and capabilities:

    Intel Mesh Architecture, which delivers low latency and high data bandwidth between CPU cores, cache, memory and I/O while increasing the number of cores per processor – a critical need for the demanding, highly-threaded workloads of creators and experts.
    Intel Extreme Tuning Utility, a precision toolset that helps experienced overclockers optimize their experience with unlocked processors.
    Intel Extreme Memory Profile, which simplifies the overclocking experience by removing the guesswork of memory overclocking.
    Intel AVX-512 ratio offset and memory controller trim voltage control that allow for optimization of overclocking frequencies regardless of SSE or AVX workloads, and allow maximization of memory overclocking.
    Intel Turbo Boost Technology 2.0 that delivers frequencies up to 4.3 GHz.
    Up to 68 platform PCIe lanes, 38.5 MB Intel Smart Cache, 6-channel DDR4 memory support with up to 512 GB at 2666 MHz, and ECC and standard RAS support power peripherals and high-speed tools.
    Intel C621 chipset based systems designed to support the Intel Xeon W-3175X processor allow professional content creators to achieve a new level of performance.
    Asetek 690LX-PN all-in-one liquid cooler, a custom created solution sold separately by Asetek, helps ensure the processor runs smoothly at both stock settings and while overclocking.

    The Intel Xeon W-3175X processor is available from system integrators that develop purpose-built desktop workstations.

    Intel Xeon W-3175X Specifications:

    Base Clock Speed (GHz): 3.1
    Intel Turbo Boost Technology 2.0 Maximum Single Core Turbo Frequency (GHz): 4.3
    Cores/Threads: 28/56
    TDP: 255W
    Intel Smart Cache: 38.5 MB
    Unlocked: Yes
    Platform PCIe Lanes: Up to 68
    Memory Support: Six Channels, DDR4-2666
    Standard RAS Support: Yes
    ECC Support: Yes
    RCP Pricing (USD, 1,000-unit quantities): $2,999
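
    For a rough sense of what six channels of DDR4-2666 implies, here is a minimal back-of-the-envelope sketch in Python. The 8-bytes-per-transfer figure assumes standard 64-bit DDR4 channels (an assumption, not a number from Intel's announcement), and real sustained bandwidth will fall below this theoretical peak.

    ```python
    # Theoretical peak memory bandwidth for a 6-channel DDR4-2666 platform.
    # Assumes standard 64-bit (8-byte) DDR4 channels; sustained bandwidth
    # in practice is lower than this peak.
    channels = 6
    transfers_per_sec = 2666e6    # DDR4-2666: 2666 mega-transfers per second
    bytes_per_transfer = 8        # 64-bit channel width

    peak_bw = channels * transfers_per_sec * bytes_per_transfer
    print(f"Theoretical peak bandwidth: {peak_bw / 1e9:.0f} GB/s")  # ~128 GB/s
    ```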

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 11:04 am on January 2, 2019
    Tags: Computer technology, Physicists record “lifetime” of graphene qubits

    From MIT News: “Physicists record ‘lifetime’ of graphene qubits” 


    From MIT News

    December 31, 2018
    Rob Matheson

    Researchers from MIT and elsewhere have recorded the “temporal coherence” of a graphene qubit — how long it maintains a special state that lets it represent two logical states simultaneously — marking a critical step forward for practical quantum computing. Stock image

    First measurement of its kind could provide stepping stone to practical quantum computing.

    Researchers from MIT and elsewhere have recorded, for the first time, the “temporal coherence” of a graphene qubit — meaning how long it can maintain a special state that allows it to represent two logical states simultaneously. The demonstration, which used a new kind of graphene-based qubit, represents a critical step forward for practical quantum computing, the researchers say.

    Superconducting quantum bits (or simply qubits) are artificial atoms that use various methods to produce bits of quantum information, the fundamental component of quantum computers. Similar to traditional binary circuits in computers, qubits can maintain one of two states corresponding to the classic binary bits, a 0 or 1. But these qubits can also be a superposition of both states simultaneously, which could allow quantum computers to solve complex problems that are practically impossible for traditional computers.

    The amount of time that these qubits stay in this superposition state is referred to as their “coherence time.” The longer the coherence time, the greater the ability for the qubit to compute complex problems.

    Recently, researchers have been incorporating graphene-based materials into superconducting quantum computing devices, which promise faster, more efficient computing, among other perks. Until now, however, there had been no recorded coherence for these advanced qubits, so there was no way of knowing whether they are feasible for practical quantum computing.

    In a paper published today in Nature Nanotechnology, the researchers demonstrate, for the first time, a coherent qubit made from graphene and exotic materials. These materials enable the qubit to change states through voltage, much like transistors in today’s traditional computer chips — and unlike most other types of superconducting qubits. Moreover, the researchers put a number to that coherence, clocking it at 55 nanoseconds before the qubit returns to its ground state.

    The work combined expertise from co-authors William D. Oliver, a physics professor of the practice and Lincoln Laboratory Fellow whose work focuses on quantum computing systems, and Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT who researches innovations in graphene.

    “Our motivation is to use the unique properties of graphene to improve the performance of superconducting qubits,” says first author Joel I-Jan Wang, a postdoc in Oliver’s group in the Research Laboratory of Electronics (RLE) at MIT. “In this work, we show for the first time that a superconducting qubit made from graphene is temporally quantum coherent, a key requisite for building more sophisticated quantum circuits. Ours is the first device to show a measurable coherence time — a primary metric of a qubit — that’s long enough for humans to control.”

    There are 14 other co-authors, including Daniel Rodan-Legrain, a graduate student in Jarillo-Herrero’s group who contributed equally to the work with Wang; MIT researchers from RLE, the Department of Physics, the Department of Electrical Engineering and Computer Science, and Lincoln Laboratory; and researchers from the Laboratory of Irradiated Solids at the École Polytechnique and the Advanced Materials Laboratory of the National Institute for Materials Science.

    A pristine graphene sandwich

    Superconducting qubits rely on a structure known as a “Josephson junction,” where an insulator (usually an oxide) is sandwiched between two superconducting materials (usually aluminum). In traditional tunable qubit designs, a current loop creates a small magnetic field that causes electrons to hop back and forth between the superconducting materials, causing the qubit to switch states.

    But this flowing current consumes a lot of energy and causes other issues. Recently, a few research groups have replaced the insulator with graphene, an atom-thick layer of carbon that’s inexpensive to mass produce and has unique properties that might enable faster, more efficient computation.

    To fabricate their qubit, the researchers turned to a class of materials called van der Waals materials — atomically thin materials that can be stacked like Legos on top of one another, with little to no resistance or damage. These materials can be stacked in specific ways to create various electronic systems. Despite their near-flawless surface quality, only a few research groups have ever applied van der Waals materials to quantum circuits, and none have previously been shown to exhibit temporal coherence.

    For their Josephson junction, the researchers sandwiched a sheet of graphene between two layers of a van der Waals insulator called hexagonal boron nitride (hBN). Importantly, graphene takes on the superconductivity of the superconducting materials it touches. The selected van der Waals materials can be made to usher electrons around using voltage, instead of the traditional current-based magnetic field. Therefore, so can the graphene — and so can the entire qubit.

    When voltage gets applied to the qubit, electrons bounce back and forth between two superconducting leads connected by graphene, changing the qubit from ground (0) to excited or superposition state (1). The bottom hBN layer serves as a substrate to host the graphene. The top hBN layer encapsulates the graphene, protecting it from any contamination. Because the materials are so pristine, the traveling electrons never interact with defects. This represents the ideal “ballistic transport” for qubits, where a majority of electrons move from one superconducting lead to another without scattering with impurities, making a quick, precise change of states.

    How voltage helps

    The work can help tackle the qubit “scaling problem,” Wang says. Currently, only about 1,000 qubits can fit on a single chip. Having qubits controlled by voltage will be especially important as millions of qubits start being crammed on a single chip. “Without voltage control, you’ll also need thousands or millions of current loops, and that takes up a lot of space and leads to energy dissipation,” he says.

    Additionally, voltage control means greater efficiency and a more localized, precise targeting of individual qubits on a chip, without “cross talk.” That happens when a little bit of the magnetic field created by the current interferes with a qubit it’s not targeting, causing computation problems.

    For now, the researchers’ qubit has a brief lifetime. For reference, conventional superconducting qubits that hold promise for practical application have documented coherence times of a few tens of microseconds, a few hundred times greater than the researchers’ qubit.
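
    To put those two timescales side by side, a quick sketch; the 20-microsecond figure is an assumed representative value from the “few tens of microseconds” range quoted above.

    ```python
    # Rough comparison of the coherence times quoted in the article.
    graphene_t2 = 55e-9        # 55 nanoseconds (the new graphene qubit)
    conventional_t2 = 20e-6    # assumed ~20 µs, within "a few tens of µs"

    ratio = conventional_t2 / graphene_t2
    print(f"Conventional qubits stay coherent ~{ratio:.0f}x longer")  # ~364x
    ```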

    But the researchers are already addressing several issues that cause this short lifetime, most of which require structural modifications. They’re also using their new coherence-probing method to further investigate how electrons move ballistically around the qubits, with aims of extending the coherence of qubits in general.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


     
  • richardmitnick 1:07 pm on December 3, 2018
    Tags: Computer technology, MESO devices, Multiferroics

    From UC Berkeley: “New quantum materials could take computing devices beyond the semiconductor era” 


    From UC Berkeley

    December 3, 2018
    Robert Sanders
    rlsanders@berkeley.edu

    MESO devices, based on magnetoelectric and spin-orbit materials, could someday replace the ubiquitous semiconductor transistor, today represented by CMOS. MESO uses up-and-down magnetic spins in a multiferroic material to store binary information and conduct logic operations. (Intel graphic)

    Researchers from Intel Corp. and UC Berkeley are looking beyond current transistor technology and preparing the way for a new type of memory and logic circuit that could someday be in every computer on the planet.

    In a paper appearing online Dec. 3 in advance of publication in the journal Nature, the researchers propose a way to turn relatively new types of materials, multiferroics and topological materials, into logic and memory devices that will be 10 to 100 times more energy-efficient than foreseeable improvements to current microprocessors, which are based on CMOS (complementary metal–oxide–semiconductor).

    The magneto-electric spin-orbit or MESO devices will also pack five times as many logic operations into the same space as CMOS, continuing the trend toward more computations per unit area, a central tenet of Moore’s Law.

    The new devices will boost technologies that require intense computing power with low energy use, specifically highly automated, self-driving cars and drones, both of which require ever increasing numbers of computer operations per second.

    “As CMOS develops into its maturity, we will basically have very powerful technology options that see us through. In some ways, this could continue computing improvements for another whole generation of people,” said lead author Sasikanth Manipatruni, who leads hardware development for the MESO project at Intel’s Components Research group in Hillsboro, Oregon. MESO was invented by Intel scientists, and Manipatruni designed the first MESO device.

    Transistor technology, invented 70 years ago, is used today in everything from cellphones and appliances to cars and supercomputers. Transistors shuffle electrons around inside a semiconductor and store them as binary bits 0 and 1.

    2
    Single crystals of the multiferroic material bismuth-iron-oxide. The bismuth atoms (blue) form a cubic lattice with oxygen atoms (yellow) at each face of the cube and an iron atom (gray) near the center. The somewhat off-center iron interacts with the oxygen to form an electric dipole (P), which is coupled to the magnetic spins of the atoms (M) so that flipping the dipole with an electric field (E) also flips the magnetic moment. The collective magnetic spins of the atoms in the material encode the binary bits 0 and 1, and allow for information storage and logic operations.

    In the new MESO devices, the binary bits are the up-and-down magnetic spin states in a multiferroic, a material first created in 2001 by Ramamoorthy Ramesh, a UC Berkeley professor of materials science and engineering and of physics and a senior author of the paper.

    “The discovery was that there are materials where you can apply a voltage and change the magnetic order of the multiferroic,” said Ramesh, who is also a faculty scientist at Lawrence Berkeley National Laboratory. “But to me, ‘What would we do with these multiferroics?’ was always a big question. MESO bridges that gap and provides one pathway for computing to evolve.”

    In the Nature paper, the researchers report that they have reduced the voltage needed for multiferroic magneto-electric switching from 3 volts to 500 millivolts, and predict that it should be possible to reduce this to 100 millivolts: one-fifth to one-tenth that required by CMOS transistors in use today. Lower voltage means lower energy use: the total energy to switch a bit from 1 to 0 would be one-tenth to one-thirtieth of the energy required by CMOS.

    “A number of critical techniques need to be developed to allow these new types of computing devices and architectures,” said Manipatruni, who combined the functions of magneto-electrics and spin-orbit materials to propose MESO. “We are trying to trigger a wave of innovation in industry and academia on what the next transistor-like option should look like.”

    Internet of things and AI

    The need for more energy-efficient computers is urgent. The Department of Energy projects that, with the computer chip industry expected to expand to several trillion dollars in the next few decades, energy use by computers could skyrocket from 3 percent of all U.S. energy consumption today to 20 percent, nearly as much as today’s transportation sector. Without more energy-efficient transistors, the incorporation of computers into everything – the so-called internet of things – would be hampered. And without new science and technology, Ramesh said, America’s lead in making computer chips could be upstaged by semiconductor manufacturers in other countries.

    “Because of machine learning, artificial intelligence and IOT, the future home, the future car, the future manufacturing capability is going to look very different,” said Ramesh, who until recently was the associate director for Energy Technologies at Berkeley Lab. “If we use existing technologies and make no more discoveries, the energy consumption is going to be large. We need new science-based breakthroughs.”

    Paper co-author Ian Young, a UC Berkeley Ph.D., started a group at Intel eight years ago, along with Manipatruni and Dmitri Nikonov, to investigate alternatives to transistors, and five years ago they began focusing on multiferroics and spin-orbit materials, so-called “topological” materials with unique quantum properties.

    “Our analysis brought us to this type of material, magneto-electrics, and all roads led to Ramesh,” said Manipatruni.

    Multiferroics and spin-orbit materials

    Multiferroics are materials whose atoms exhibit more than one “collective state.” In ferromagnets, for example, the magnetic moments of all the iron atoms in the material are aligned to generate a permanent magnet. In ferroelectric materials, on the other hand, the positive and negative charges of atoms are offset, creating electric dipoles that align throughout the material and create a permanent electric moment.

    MESO is based on a multiferroic material consisting of bismuth, iron and oxygen (BiFeO₃) that is both magnetic and ferroelectric. Its key advantage, Ramesh said, is that these two states – magnetic and ferroelectric – are linked or coupled, so that changing one affects the other. By manipulating the electric field, you can change the magnetic state, which is critical to MESO.

    The key breakthrough came with the rapid development of topological materials with spin-orbit effect, which allow the state of the multiferroic to be read out efficiently. In MESO devices, an applied electric field alters or flips the electric dipoles throughout the material, which in turn alters or flips the electron spins that generate the magnetic field. This capability comes from spin-orbit coupling, a quantum effect in materials, which produces a current determined by electron spin direction.

    In another paper that appeared earlier this month in Science Advances, UC Berkeley and Intel experimentally demonstrated voltage-controlled magnetic switching using the magneto-electric material bismuth-iron-oxide (BiFeO₃), a key requirement for MESO.

    “We are looking for revolutionary and not evolutionary approaches for computing in the beyond-CMOS era,” Young said. “MESO is built around low-voltage interconnects and low-voltage magneto-electrics, and brings innovation in quantum materials to computing.”

    Other co-authors of the Nature paper are Chia-Ching Lin, Tanay Gosavi and Huichu Liu of Intel and Bhagwati Prasad, Yen-Lin Huang and Everton Bonturim of UC Berkeley. The work was supported by Intel.

    RELATED INFORMATION

    Beyond CMOS computing with spin and polarization (Nature Physics)

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded in the wake of the gold rush by leaders of the newly established 31st state, the University of California’s flagship campus at Berkeley has become one of the preeminent universities in the world. Its early guiding lights, charged with providing education (both “practical” and “classical”) for the state’s people, gradually established a distinguished faculty (with 22 Nobel laureates to date), a stellar research library, and more than 350 academic programs.


     
  • richardmitnick 12:33 pm on December 3, 2018
    Tags: Beyond Moore’s Law, Computer technology, Ramamoorthy Ramesh

    From Lawrence Berkeley National Lab: “Berkeley Lab Takes a Quantum Leap in Microelectronics” 


    From Lawrence Berkeley National Lab

    December 3, 2018
    Julie Chao
    JHChao@lbl.gov
    (510) 486-6491

    (Courtesy Ramamoorthy Ramesh)
    A Q&A with Ramamoorthy Ramesh on the need for next-generation computer chips

    Ramamoorthy Ramesh, a Lawrence Berkeley National Laboratory (Berkeley Lab) scientist in the Materials Sciences Division, leads a major Lab research initiative called “Beyond Moore’s Law,” which aims to develop next-generation microelectronics and computing architectures.

    Moore’s Law – which holds that the number of transistors on a chip will double about every two years and has held true in the industry for the last four decades – is coming to an inevitable end as physical limitations are reached. Major innovations will be required to sustain advances in computing. Working with industry leaders, Berkeley Lab’s approach spans fundamental materials discovery, materials physics, device development, algorithms, and systems architecture.

    In collaboration with scientists at Intel Corp., Ramesh proposes a new memory-in-logic device for replacing or augmenting conventional transistors. The work is detailed in a new Nature paper described in this UC Berkeley news release [blog post will follow]. Here Ramesh discusses the need for a quantum leap in microelectronics and how Berkeley Lab plans to play a role.

    Q. Why is the end of Moore’s Law such an urgent problem?

    If we look around, at the macro level there are two big global phenomena happening in electronics. One is the Internet of Things. It basically means every building, every car, every manufacturing capability is going to be fully accessorized with microelectronics. So, they’re all going to be interconnected. While the exact size of this market (in terms of number of units and their dollar value) is being debated, there is agreement that it is growing rapidly.

    The second big revolution is artificial intelligence/machine learning. This field is in its nascent stages and will find applications in diverse technology spaces. However, these applications are currently limited by the memory wall and the limitations imposed by the efficiency of computing. Thus, we will need more powerful chips that consume much lower energy. Driven by these emerging applications, there is the potential for the microelectronics market to grow exponentially.

    Semiconductors have been progressively shrinking and becoming faster, but they are consuming more and more power. If we don’t do anything to curb their energy consumption, the total energy consumption of microelectronics will jump from 4 percent to about 20 percent of primary energy. As a point of reference, today transportation consumes 24 percent of U.S. energy, manufacturing another 24 percent, and buildings 38 percent; that’s almost 90 percent. This could become almost like transportation. So, we said, that’s a big number. We need to go to a totally new technology and reduce energy consumption by several orders of magnitude.

    Q. So energy consumption is the main driver for the need for semiconductor innovation?

    No, there are two other factors. One is national security. Microelectronics and computing systems are a critical part of our national security infrastructure. And the other is global competitiveness. China has been investing hundreds of billions of dollars into making these fabs. Previously only U.S. companies made them. For two years, the fastest computer in the world was built in China. So this is a strategic issue for the U.S.

    Q. What is Berkeley Lab doing to address the problem?

    Berkeley Lab is pursuing a “co-design” framework using exemplar demonstration pathways. In our co-design framework, the four key components are: (1) computational materials discovery and device-scale modeling (led by Kristin Persson and Lin-wang Wang), (2) materials synthesis and materials physics (led by Peter Fischer), (3) scale-up of synthesis pathways (led by Patrick Naulleau), and (4) circuit architecture and algorithms (led by John Shalf). These components are all working together to identify the key elements of an “attojoule” (10⁻¹⁸ joule) logic-in-memory switch, where attojoule refers to the energy consumption per logic operation.

    One key outcome of the Berkeley Lab co-design framework is to understand the fundamental scientific issues that will impact the attojoule device, which will be about six orders of magnitude lower in energy compared to today’s state-of-the-art CMOS transistors, which work at around 50 picojoules (1 pJ = 10⁻¹² joules) per logic operation.

    This paper presents the key elements of a pathway by which such an attojoule switch can be designed and fabricated using magnetoelectric multiferroics and more broadly, using quantum materials. There are still scientific as well as technological challenges.

    Berkeley Lab’s capabilities and facilities are well suited to tackle these challenges. We have nanoscience and x-ray facilities such as the Molecular Foundry and Advanced Light Source, big scientific instruments, which will be critical and allow us to rapidly explore new materials and understand their electronic, magnetic, and chemical properties.

    Another is the Materials Project, which enables discovery of new materials using a computational approach. Plus there is our ongoing work on deep UV lithography, which is carried out under the aegis of the Center for X-Ray Optics. This provides us with a perfect framework to address how we can do device processing at large scales.

    All of this will be done in collaboration with faculty and students at UC Berkeley and our partners in industry, as this paper illustrated.

    Q. What is the timeline?

    It will take a decade. There’s still a lot of work to be done. Your computer today operates at 3 volts. This device in the Nature paper proposes something at 100 millivolts. We need to understand the physics a lot better. That’s why a place like Berkeley Lab is so important.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Bringing Science Solutions to the World

    In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

    Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

    Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

    A U.S. Department of Energy National Laboratory Operated by the University of California.


     
  • richardmitnick 4:52 pm on June 17, 2016
    Tags: Computer technology, World’s First 1000-Processor Chip

    From UC Davis: “World’s First 1,000-Processor Chip” 


    UC Davis

    June 17, 2016
    Andy Fell

    This microchip with 1,000 processor cores was designed by graduate students in the UC Davis Department of Electrical and Computer Engineering. The chip is thought to be the fastest designed in a university lab. No image credit.

    A microchip containing 1,000 independent programmable processors has been designed by a team at the University of California, Davis, Department of Electrical and Computer Engineering. The energy-efficient “KiloCore” chip has a maximum computation rate of 1.78 trillion instructions per second and contains 621 million transistors. The KiloCore was presented at the 2016 Symposium on VLSI Technology and Circuits in Honolulu on June 16.

    “To the best of our knowledge, it is the world’s first 1,000-processor chip and it is the highest clock-rate processor ever designed in a university,” said Bevan Baas, professor of electrical and computer engineering, who led the team that designed the chip architecture. While other multiple-processor chips have been created, none exceed about 300 processors, according to an analysis by Baas’ team. Most were created for research purposes and few are sold commercially. The KiloCore chip was fabricated by IBM using their 32 nm CMOS technology.

    Each processor core can run its own small program independently of the others, which is a fundamentally more flexible approach than so-called Single-Instruction-Multiple-Data approaches utilized by processors such as GPUs; the idea is to break an application up into many small pieces, each of which can run in parallel on different processors, enabling high throughput with lower energy use, Baas said.
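
    As a loose software analogy for that model (not the KiloCore toolchain itself, just an illustration), each worker in the sketch below runs its own small program on its own input, in contrast to SIMD, where every lane executes the same instruction stream.

    ```python
    # MIMD-style task parallelism: each worker runs a different small program,
    # unlike SIMD, where all lanes execute the same instruction stream.
    # Purely illustrative; this is not how KiloCore is actually programmed.
    from multiprocessing import Pool

    def tokenize(text):    # one "core" splits text into words...
        return text.split()

    def checksum(text):    # ...while another computes a simple checksum
        return sum(map(ord, text)) % 256

    def run_stage(stage_and_data):
        stage, data = stage_and_data
        return stage(data)

    if __name__ == "__main__":
        jobs = [(tokenize, "independent programs on independent cores"),
                (checksum, "independent programs on independent cores")]
        with Pool(2) as pool:
            print(pool.map(run_stage, jobs))
    ```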

    Because each processor is independently clocked, it can shut itself down to further save energy when not needed, said graduate student Brent Bohnenstiehl, who developed the principal architecture. Cores operate at an average maximum clock frequency of 1.78 GHz, and they transfer data directly to each other rather than using a pooled memory area that can become a bottleneck for data.

    The chip is the most energy-efficient “many-core” processor ever reported, Baas said. For example, the 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 Watts, low enough to be powered by a single AA battery. The KiloCore chip executes instructions more than 100 times more efficiently than a modern laptop processor.
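
    Those figures imply a per-instruction energy cost, which follows directly from the numbers quoted above:

    ```python
    # Energy per instruction implied by the quoted figures.
    instructions_per_sec = 115e9    # 115 billion instructions per second
    power_watts = 0.7               # dissipated power

    joules_per_instruction = power_watts / instructions_per_sec
    print(f"{joules_per_instruction * 1e12:.1f} pJ per instruction")  # ~6.1 pJ
    ```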

    Applications already developed for the chip include wireless coding/decoding, video processing, encryption, and others involving large amounts of parallel data such as scientific data applications and datacenter record processing.

    The team has completed a compiler and automatic program mapping tools for use in programming the chip.

    Additional team members are Aaron Stillmaker, Jon Pimentel, Timothy Andreas, Bin Liu, Anh Tran and Emmanuel Adeagbo, all graduate students at UC Davis. The fabrication was sponsored by the Department of Defense and ARL/ARO Grant W911NF-13-1-0090; with support from NSF Grants 0903549, 1018972, 1321163, and CAREER Award 0546907; and SRC GRC Grants 1971 and 2321.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


    The University of California, Davis, is a major public research university located in Davis, California, just west of Sacramento. It encompasses 5,300 acres of land, making it the second largest UC campus in terms of land ownership, after UC Merced.

     
  • richardmitnick 12:14 pm on March 11, 2016
    Tags: Computer technology

    From physicsworld.com: “Einstein meets the dark sector in a new numerical code that simulates the universe” 

    physicsworld.com

    Mar 10, 2016
    Keith Cooper

    A powerful numerical code that uses [Albert] Einstein’s general theory of relativity to describe how large-scale structures form in the universe has been created by physicists in Switzerland and South Africa. The program promises to help researchers to better incorporate dark matter and dark energy into huge computer simulations of how the universe has evolved over time.

    At the largest length scales, the dynamics of the universe are dominated by gravity. The force binds galaxies together into giant clusters and, in turn, holds these clusters tight within the grasp of immense haloes of dark matter. The cold dark matter (CDM) model assumes that dark matter comprises slow-moving particles. This means that non-relativistic Newtonian physics should be sufficient to describe the effects of gravity on the assembly of large-scale structure in the universe.

    Universe map: 2MASS Extended Source Catalog (XSC)

    However, if dark matter moves at speeds approaching that of light, the Newtonian description breaks down and Einstein’s general theory of relativity must be incorporated into the simulation – something that has proven difficult to do.

    Upcoming galaxy surveys, such as those to be performed by the Large Synoptic Survey Telescope in Chile or the European Space Agency’s Euclid mission, will observe the universe on a wider scale and to a higher level of precision than ever before.

    The LSST camera, built at SLAC, and the building in Chile that will house the telescope

    ESA/Euclid spacecraft

    Computer simulations based on Newtonian assumptions may not be able to reproduce this level of precision, making observational results difficult to interpret. More importantly, we don’t know enough about what dark matter and dark energy are, to be able to conclusively say which treatment of gravity is most appropriate for them.

    Evolving geometry

    Now, Julian Adamek of the Observatoire de Paris and colleagues have developed a numerical code called “gevolution”, which provides a framework for introducing the effects of general relativity into complex simulations of the cosmos. “We wanted to provide a tool that describes the evolution of the geometry of space–time,” Adamek told physicsworld.com.

    General relativity describes gravity as the warp created in space–time by the mass of an object. This gives the cosmos a complex geometry, rather than the linear space described by Newtonian gravity. The gevolution code is able to compute the Friedmann–Lemaître–Robertson–Walker metric that solves Einstein’s field equations to describe [spacetime’s] complex geometry and how particles move through that geometry. The downside is that it sucks up a lot of resources: 115,000 central-processing-unit (CPU) hours compared to 25,000 CPU hours for a similarly sized Newtonian simulation.
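
    For reference, the unperturbed FLRW line element reads as follows, in conformal time τ with scale factor a(τ) and spatial curvature K (K = 0 for a flat universe); codes such as gevolution evolve small perturbations on top of this background.

    ```latex
    ds^2 = a^2(\tau)\left[ -d\tau^2 + \frac{dr^2}{1 - K r^2}
         + r^2\left( d\theta^2 + \sin^2\theta \, d\phi^2 \right) \right]
    ```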

    Other uncertainties

    Not everyone is convinced that the code is urgently required, and Joachim Harnois-Déraps of the Institute for Astronomy at the Royal Observatory in Edinburgh points out that there are other challenges facing physicists running cosmological simulations. “There are many places where things could go wrong in simulations.”

    Harnois-Déraps cites inaccuracies in modelling the nonlinear clustering of matter in the universe, as well as feedback from supermassive black holes in active galaxies blowing matter out from galaxies and redistributing it. A recent study led by Markus Haider of the University of Innsbruck in Austria, for example, showed that jets from black holes could be sufficient to blow gas all the way into the voids within the cosmic web of matter that spans the universe.

    “Central and shining”

    “In my opinion, the bulk of our effort should instead go into improving our knowledge about these dominant sources of uncertainty,” says Harnois-Déraps who, despite his scepticism, hails gevolution as a great achievement in coding. “If suddenly a scenario arises where general relativity is needed, the gevolution numerical code would be central and shining.”

    Indeed, Adamek views the gevolution code as a tool, ready and waiting should it be required. Newtonian physics works surprisingly well for the current standard model of cold dark matter and dark energy as the cosmological constant. However, should dark matter prove to have relativistic properties, or if dark energy is a dynamic, changing field rather than a constant, then Newtonian approximations will have to make way for the more precise predictions of general relativity.

    “The Newtonian approach works well in some cases,” says Adamek, “but there might be other situations where we’re better off using the correct gravitational field.”

    The research is described in Nature Physics.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.

     
  • richardmitnick 10:40 am on December 10, 2015
    Tags: Computer technology

    From Stanford: “Stanford-led skyscraper-style chip design boosts electronic performance by factor of a thousand” 

    Stanford University

    December 9, 2015
    Ramin Skibba

    A multi-campus team led by Stanford engineers Subhasish Mitra and H.-S. Philip Wong has developed a revolutionary high-rise architecture for computing.

    In modern computer systems, processor and memory chips are laid out like single-story structures in a suburb. But suburban layouts waste time and energy. A new skyscraper-like design, based on materials more advanced than silicon, provides the next computing platform.

    For decades, engineers have designed computer systems with processors and memory chips laid out like single-story structures in a suburb. Wires connect these chips like streets, carrying digital traffic between the processors that compute data and the memory chips that store it.

    But suburban-style layouts create long commutes and regular traffic jams in electronic circuits, wasting time and energy.

    That is why researchers from three other universities are working with Stanford engineers, including Associate Professor Subhasish Mitra and Professor H.-S. Philip Wong, to create a revolutionary new high-rise architecture for computing.

    In Rebooting Computing, a special issue of the IEEE Computer journal, the team describes its new approach as Nano-Engineered Computing Systems Technology, or N3XT.

    N3XT will break data bottlenecks by integrating processors and memory like floors in a skyscraper and by connecting these components with millions of “vias,” which play the role of tiny electronic elevators. The N3XT high-rise approach will move more data, much faster, using far less energy, than would be possible using low-rise circuits.

    “We have assembled a group of top thinkers and advanced technologies to create a platform that can meet the computing demands of the future,” Mitra said.

    Shifting electronics from a low-rise to a high-rise architecture will demand huge investments from industry – and the promise of big payoffs for making the switch.

    “When you combine higher speed with lower energy use, N3XT systems outperform conventional approaches by a factor of a thousand,” Wong said.

    To enable these advances, the N3XT team uses new nano-materials that allow its designs to do what can’t be done with silicon – build high-rise computer circuits.

    “With N3XT the whole is indeed greater than the sum of its parts,” said co-author and Stanford electrical engineering Professor Kunle Olukotun, who is helping optimize how software and hardware interact.

    New transistor and memory materials

    Engineers have previously tried to stack silicon chips but with limited success, said Mohamed M. Sabry Aly, a postdoctoral research fellow at Stanford and first author of the paper.

    Fabricating a silicon chip requires temperatures close to 1,800 degrees Fahrenheit, making it extremely challenging to build a silicon chip atop another without damaging the first layer. The current approach to what are called 3-D, or stacked, chips is to construct two silicon chips separately, then stack them and connect them with a few thousand wires.

    But conventional 3-D silicon chips are still prone to traffic jams, and it takes a lot of energy to push data through the relatively few connecting wires.

    The N3XT team is taking a radically different approach: building layers of processors and memory directly atop one another, connected by millions of electronic elevators that can move more data over shorter distances than traditional wires, using less energy. The N3XT approach is to immerse computation and memory storage into an electronic super-device.

    The key is the use of non-silicon materials that can be fabricated at much lower temperatures than silicon, so that processors can be built on top of memory without the new layer damaging the layer below.

    N3XT high-rise chips are based on carbon nanotube (CNT) transistors. Transistors are fundamental units of a computer processor, the tiny on-off switches that create digital zeroes and ones. CNT transistors are faster and more energy-efficient than silicon ones. Moreover, in the N3XT architecture, they can be fabricated and placed over and below other layers of memory.

    Among the N3XT scholars working at this nexus of computation and memory are Christos Kozyrakis and Eric Pop of Stanford, Jeffrey Bokor and Jan Rabaey of the University of California, Berkeley, Igor Markov of the University of Michigan, and Franz Franchetti and Larry Pileggi of Carnegie Mellon University.

    Team members also envision using data storage technologies that rely on materials other than silicon, which can be manufactured on top of CNTs, using low-temperature fabrication processes.

    One such data storage technology is called resistive random-access memory, or RRAM. Resistance slows down electrons, creating a zero, while conductivity allows electrons to flow, creating a one. Tiny jolts of electricity switch RRAM memory cells between these two digital states. N3XT team members are also experimenting with a variety of nano-scale magnetic materials to store digital ones and zeroes.
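
    A toy model of that read-out rule, with invented resistance values and threshold (real RRAM cells are analog devices with much messier behavior):

    ```python
    # Toy RRAM read-out: a high-resistance cell reads as 0, a low-resistance
    # (conductive) cell reads as 1. All values here are invented.
    HIGH_R = 1e6            # ohms: resistive state -> 0
    LOW_R = 1e3             # ohms: conductive state -> 1
    READ_THRESHOLD = 1e5    # ohms: decision boundary for the sense circuit

    def read_bit(cell_resistance):
        return 1 if cell_resistance < READ_THRESHOLD else 0

    cells = [HIGH_R, LOW_R, LOW_R, HIGH_R]
    print([read_bit(r) for r in cells])   # [0, 1, 1, 0]
    ```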

    Just as skyscrapers have ventilation systems, N3XT high-rise chip designs incorporate thermal cooling layers. This work, led by Stanford mechanical engineers Kenneth Goodson and Mehdi Asheghi, ensures that the heat rising from the stacked layers of electronics does not degrade overall system performance.

    Proof of principle

    Mitra and Wong have already demonstrated a working prototype of a high-rise chip. At the International Electron Devices Meeting in December 2014 they unveiled a four-layered chip made up of two layers of RRAM memory sandwiched between two layers of CNTs.

    In their N3XT paper they ran simulations showing how their high-rise approach was a thousand times more efficient in carrying out many important and highly demanding industrial software applications.

    Stanford computer scientist and N3XT co-author Chris Ré, who recently won a “genius grant” from the John D. and Catherine T. MacArthur Foundation, said he joined the N3XT collaboration to make sure that computing doesn’t enter what some call a “dark data” era.

    “There are huge volumes of data that sit within our reach and are relevant to some of society’s most pressing problems from health care to climate change, but we lack the computational horsepower to bring this data to light and use it,” Ré said. “As we all hope in the N3XT project, we may have to boost horsepower to solve some of these pressing challenges.”

    Media Contact

    Tom Abate, Stanford Engineering: (650) 736-2245, tabate@stanford.edu

    Bjorn Carey, Stanford News Service: (650) 725-1944, bccarey@stanford.edu

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.


     
  • richardmitnick 5:07 pm on September 28, 2015
    Tags: Computer technology

    From AAAS: “Light-based memory chip is first to permanently store data” 

    AAAS

    25 September 2015
    Robert F. Service

    Intense light pulses (pink) write data in a patch of GST, which can be read out as digital 1s and 0s with lower intensity light (red). C. Rios et al., Nature Photonics, Advance Online Publication (2015)

    Today’s electronic computer chips work at blazing speeds. But an alternate version that stores, manipulates, and moves data with photons of light instead of electrons would make today’s chips look like proverbial horses and buggies. Now, one team of researchers reports that it has created the first permanent optical memory on a chip, a critical step in that direction.

    “I am very positive about the work,” says Valerio Pruneri, a laser physicist at the Institute of Photonic Sciences in Barcelona, Spain, who was not involved in the research. “It’s a great demonstration of a new concept.”

    Interest in so-called photonic chips goes back decades, and it’s easy to see why. When electrons move through the basic parts of a computer chip—logic circuits that manipulate data, memory circuits that store it, and metal wires that ferry it along—they bump into one another, slowing down and generating heat that must be siphoned away. That’s not the case with photons, which travel together with no resistance, and do so at, well, light speed. Researchers have already made photon-friendly chips, with optical lines that replace metal wires and optical memory circuits. But the parts have some serious drawbacks. The memory circuits, for example, can store data only if they have a steady supply of power. When the power is turned off, the data disappear, too.

    Now, researchers led by Harish Bhaskaran, a nanoengineering expert at the University of Oxford in the United Kingdom, and electrical engineer Wolfram Pernice at the Karlsruhe Institute of Technology in Germany, have hit on a solution to the disappearing memory problem using a material at the heart of rewritable CDs and DVDs. That material—abbreviated GST—consists of a thin layer of an alloy of germanium, antimony, and tellurium. When zapped with an intense pulse of laser light, GST film changes its atomic structure from an ordered crystalline lattice to an “amorphous” jumble. These two structures reflect light in different ways, and CDs and DVDs use this difference to store data. To read out the data—stored as patterns of tiny spots with a crystalline or amorphous order—a CD or DVD drive shines low-intensity laser light on a disk and tracks the way the light bounces off.

    In their work with GST, the researchers noticed that the material affected not only how light reflects off the film, but also how much of it is absorbed. When a transparent material lay underneath the GST film, spots with a crystalline order absorbed more light than did spots with an amorphous structure.

    Next, the researchers wanted to see whether they could use this property to permanently store data on a chip and later read it out. To do so, they used standard chipmaking technology to outfit a chip with a silicon nitride device, known as a waveguide, which contains and channels pulses of light. They then placed a nanoscale patch of GST atop this waveguide. To write data in this layer, the scientists piped an intense pulse of light into the waveguide. The high intensity of the light’s electromagnetic field melted the GST, turning its crystalline atomic structure amorphous. A second, slightly less intense pulse could then cause the material to revert back to its original crystalline structure.

    When the researchers wanted to read the data, they beamed in less intense pulses of light and measured how much light was transmitted through the waveguide. If little light was absorbed, they knew their data spot on the GST had an amorphous order; if more was absorbed, that meant it was crystalline.

    Bhaskaran, Pernice, and their colleagues also took steps to dramatically increase the amount of data they could store and read. For starters, they sent multiple wavelengths of light through the waveguide at the same time, allowing them to write and read multiple bits of data simultaneously, something you can’t do with electrical data storage devices. And, as they report this week in Nature Photonics, by varying the intensity of their data-writing pulses, they were also able to control how much of each GST patch turned crystalline or amorphous at any one time. With this method, they could make one patch 90% amorphous but just 10% crystalline, and another 80% amorphous and 20% crystalline. That made it possible to store data in eight different such combinations, not just the usual binary 1s and 0s that would be used for 100% amorphous or crystalline spots. This dramatically boosts the amount of data each spot can store, Bhaskaran says.
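
    Eight distinguishable levels per spot is worth three binary bits, since log₂(8) = 3. The sketch below quantizes a normalized transmission reading into eight levels; the 0-to-1 normalization and the sample values are invented for illustration.

    ```python
    import math

    print(math.log2(8))   # 3.0 bits per GST spot, versus 1 bit for binary spots

    # Illustrative multilevel read-out: quantize a transmission reading,
    # normalized to the range [0, 1), into one of eight levels.
    def read_level(transmission, levels=8):
        return min(int(transmission * levels), levels - 1)

    print(read_level(0.93))   # level 7
    print(read_level(0.12))   # level 0
    ```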

    Photonic memories still have a long way to go if they ever hope to catch up to their electronic counterparts. At a minimum, their storage density will have to climb orders of magnitude to be competitive. Ultimately, Bhaskaran says, if a more advanced photonic memory can be integrated with photonic logic and interconnections, the resulting chips have the potential to run at 50 to 100 times the speed of today’s computer processors.

    See the full article here.

    The American Association for the Advancement of Science is an international non-profit organization dedicated to advancing science for the benefit of all people.

    Please help promote STEM in your local schools.
    Stem Education Coalition

     
  • richardmitnick 9:22 am on August 13, 2015
    Tags: Computer technology, Discrimination

    From The Conversation: “Big data algorithms can discriminate, and it’s not clear what to do about it” 

    The Conversation

    August 13, 2015
    Jeremy Kun

    “This program had absolutely nothing to do with race…but multi-variable equations.”

    That’s what Brett Goldstein, a former policeman for the Chicago Police Department (CPD) and current Urban Science Fellow at the University of Chicago’s School for Public Policy, said about a predictive policing algorithm he deployed at the CPD in 2010. His algorithm tells police where to look for criminals based on where people have been arrested previously. It’s a “heat map” of Chicago, and the CPD claims it helps them allocate resources more effectively.

    Chicago police also recently collaborated with Miles Wernick, a professor of electrical engineering at Illinois Institute of Technology, to algorithmically generate a “heat list” of 400 individuals it claims have the highest chance of committing a violent crime. In response to criticism, Wernick said the algorithm does not use “any racial, neighborhood, or other such information” and that the approach is “unbiased” and “quantitative.” By deferring decisions to poorly understood algorithms, industry professionals effectively shed accountability for any negative effects of their code.

    But do these algorithms discriminate, treating low-income and black neighborhoods and their inhabitants unfairly? It’s the kind of question many researchers are starting to ask as more and more industries use algorithms to make decisions. It’s true that an algorithm itself is quantitative – it boils down to a sequence of arithmetic steps for solving a problem. The danger is that these algorithms, which are trained on data produced by people, may reflect the biases in that data, perpetuating structural racism and negative biases about minority groups.

    There are a lot of challenges to figuring out whether an algorithm embodies bias. First and foremost, many practitioners and “computer experts” still don’t publicly admit that algorithms can easily discriminate. More and more evidence supports that not only is this possible, but it’s happening already. The law is unclear on the legality of biased algorithms, and even algorithms researchers don’t precisely understand what it means for an algorithm to discriminate.

    Is bias baked in? Justin Ruckman, CC BY

    Being quantitative doesn’t protect against bias

    Both Goldstein and Wernick claim their algorithms are fair by appealing to two things. First, the algorithms aren’t explicitly fed protected characteristics such as race or neighborhood as an attribute. Second, they say the algorithms aren’t biased because they’re “quantitative.” Their argument is an appeal to abstraction. Math isn’t human, and so the use of math can’t be immoral.

    Sadly, Goldstein and Wernick are repeating a common misconception about data mining, and mathematics in general, when it’s applied to social problems. The entire purpose of data mining is to discover hidden correlations. So if race is disproportionately (but not explicitly) represented in the data fed to a data-mining algorithm, the algorithm can infer race and use race indirectly to make an ultimate decision.

    For another example of the way an algorithm can produce a biased outcome based on what it learns from the people who use it, look at how Google search suggests finishing a query that starts with the phrase “transgenders are”:

    Taken from Google.com on 2015-08-10.

    Autocomplete features are generally a tally: count up all the searches seen so far and display the most common completions of a given partial query. While such an algorithm may be neutral on its face, it is designed to find trends in the data it is fed. Carelessly trusting an algorithm allows dominant trends to cause harmful discrimination, or at least to produce distasteful results.
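    A tally of this sort takes only a few lines to write, which is part of why its bias is easy to overlook. Here is a hypothetical sketch of such an autocomplete (the query log and example queries are invented for illustration):

    ```python
    # A tally-based autocomplete, as described above: count every observed
    # query and suggest the most common completions of a given prefix.
    from collections import Counter

    query_log = Counter()  # in practice, filled from real user searches

    def record(query):
        query_log[query] += 1

    def suggest(prefix, k=3):
        matches = {q: n for q, n in query_log.items() if q.startswith(prefix)}
        return [q for q, _ in Counter(matches).most_common(k)]

    # Whatever users type most becomes the suggestion -- the algorithm is
    # "neutral," but it amplifies the dominant trend in its input.
    for q in ["cats are great"] * 5 + ["cats are liquid"] * 2:
        record(q)
    print(suggest("cats are"))  # ['cats are great', 'cats are liquid']
    ```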

    Beyond biased data, such as Google autocompletes, there are other pitfalls, too. Moritz Hardt, a researcher at Google, describes what he calls the sample size disparity. The idea is as follows. If you want to predict, say, whether an individual will click on an ad, most algorithms optimize to reduce error based on the previous activity of users.

    But if a small fraction of users consists of a racial minority that tends to behave in a different way from the majority, the algorithm may decide it’s better to be wrong for all the minority users and lump them in the “error” category in order to be more accurate on the majority. So an algorithm with 85% accuracy on US participants could err on the entire black sub-population and still seem very good.
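    The arithmetic behind the sample size disparity is easy to check. The sketch below uses a hypothetical 85/15 population split and a classifier that is right on every majority user and wrong on every minority user:

    ```python
    # Sample size disparity in miniature: a classifier wrong on the entire
    # minority can still report high overall accuracy.
    majority, minority = 8500, 1500          # hypothetical 85/15 split

    correct_majority = majority              # perfect on the majority...
    correct_minority = 0                     # ...wrong on every minority user

    accuracy = (correct_majority + correct_minority) / (majority + minority)
    print(f"overall accuracy:  {accuracy:.0%}")                    # 85%
    print(f"minority accuracy: {correct_minority / minority:.0%}") # 0%
    ```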

    Hardt goes on to say that it’s hard to determine why data points are erroneously classified. Algorithms rarely come equipped with an explanation for why they behave the way they do, and the easy (and dangerous) course of action is not to ask questions.

    Those smiles might not be so broad if they realized they’d be treated differently by the algorithm. Men image via http://www.shutterstock.com

    Extent of the problem

    While researchers clearly understand the theoretical dangers of algorithmic discrimination, it’s difficult to cleanly measure the scope of the issue in practice. No company or public institution is willing to publicize its data and algorithms for fear of being labeled racist or sexist, or maybe worse, having a great algorithm stolen by a competitor.

    Even when the Chicago Police Department was hit with a Freedom of Information Act request, it did not release its algorithms or heat list, claiming that disclosure would pose a credible threat to police officers and to the people on the list. This makes it difficult for researchers to identify problems and potentially provide solutions.

    Legal hurdles

    Existing discrimination law in the United States isn’t helping. At best, it’s unclear on how it applies to algorithms; at worst, it’s a mess. Solon Barocas, a postdoc at Princeton, and Andrew Selbst, a law clerk for the Third Circuit US Court of Appeals, argued together that US hiring law fails to address claims about discriminatory algorithms in hiring.

    The crux of the argument is called the “business necessity” defense, in which the employer argues that a practice that has a discriminatory effect is justified by being directly related to job performance. According to Barocas and Selbst, if a company algorithmically decides whom to hire, and that algorithm is blatantly racist but even mildly successful at predicting job performance, this would count as business necessity – and not as illegal discrimination. In other words, the law seems to support using biased algorithms.

    What is fairness?

    Maybe an even deeper problem is that nobody has agreed on what it means for an algorithm to be fair in the first place. Algorithms are mathematical objects, and mathematics is far more precise than law. We can’t hope to design fair algorithms without the ability to precisely demonstrate fairness mathematically. A good mathematical definition of fairness will model biased decision-making in any setting and for any subgroup, not just hiring bias or gender bias.

    And fairness seems to have two conflicting aspects when applied to a population versus an individual. For example, say there’s a pool of applicants to fill 10 jobs, and an algorithm decides to hire candidates completely at random. From a population-wide perspective, this is as fair as possible: all races, genders and orientations are equally likely to be selected.

    But from an individual level, it’s as unfair as possible, because an extremely talented individual is unlikely to be chosen despite their qualifications. On the other hand, hiring based only on qualifications reinforces hiring gaps. Nobody knows if these two concepts are inherently at odds, or whether there is a way to define fairness that reasonably captures both. Cynthia Dwork, a Distinguished Scientist at Microsoft Research, and her colleagues have been studying the relationship between the two, but even Dwork admits they have just scratched the surface.
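    A quick simulation makes the tension concrete (the numbers are hypothetical). Hiring 10 of 100 applicants uniformly at random gives every applicant, and therefore every group, the same 10% chance of selection, yet the single most qualified applicant is passed over 90% of the time:

    ```python
    # Population fairness vs. individual fairness: hire 10 of 100 applicants
    # uniformly at random, many times over.
    import random

    random.seed(0)
    applicants = list(range(100))   # suppose applicant 0 is the most qualified
    trials, best_hired = 10_000, 0

    for _ in range(trials):
        if 0 in random.sample(applicants, 10):
            best_hired += 1

    # Every applicant -- and so every group -- is selected at the same rate,
    # but the top candidate almost always loses out.
    print(f"best applicant hired in {best_hired / trials:.0%} of trials")  # ~10%
    ```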


    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered directly to the public.
    Our team of professional editors works with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high-quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues, and hopefully to allow for a better quality of public discourse and conversation.

     
  • richardmitnick 10:05 am on February 8, 2015 Permalink | Reply
    Tags: , Computer technology,   

    From NOVA: “Powerful and Efficient ‘Neuromorphic’ Chip Works Like a Brain” 

    PBS NOVA

    08 Aug 2014
    Allison Eck

    Compared with biological computers—also known as brains—today’s computer chips are simplistic energy hogs, which is why some computer scientists have been exploring neuromorphic computing: trying to emulate neurons with silicon. Yesterday, researchers at IBM announced a new neuromorphic processor, dubbed TrueNorth, in an article published in the journal Science.

    At one million “neurons,” TrueNorth is about as complex as a bee’s brain. Experts are saying this little device (about the size of a postage stamp) is the newest and most promising development in “neuromorphic” computing. Despite its 5.4 billion transistors, the entire system consumes only 70 milliwatts of power, a strikingly low amount. The clock speed on the chip is slow, measured in megahertz—today’s computer chips zip along at the gigahertz level—but its vast parallel circuitry allows it to perform 46 billion synaptic operations per second per watt of energy.

    At one million “neurons,” a computer chip dubbed TrueNorth mimics the organization of the brain and is the next step in “neuromorphic” computing.

    Here’s John Markoff, writing for The New York Times:

    The chip’s electronic “neurons” are able to signal others when a type of data — light, for example — passes a certain threshold. Working in parallel, the neurons begin to organize the data into patterns suggesting the light is growing brighter, or changing color or shape.

    The processor may thus be able to recognize that a woman in a video is picking up a purse, or control a robot that is reaching into a pocket and pulling out a quarter. Humans are able to recognize these acts without conscious thought, yet today’s computers and robots struggle to interpret them.
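    The threshold behavior Markoff describes is, at its core, what an “integrate-and-fire” neuron does. Here is a minimal, generic sketch of one such unit (an illustration of the general idea, not IBM’s actual design): it accumulates input over time and emits a spike only when its potential crosses a threshold:

    ```python
    # A minimal leaky integrate-and-fire neuron: it accumulates input and
    # "signals others" only when a threshold is crossed (a generic sketch,
    # not IBM's TrueNorth design).
    def run_neuron(inputs, threshold=1.0, leak=0.9):
        potential, spikes = 0.0, []
        for t, x in enumerate(inputs):
            potential = potential * leak + x   # integrate, with leak
            if potential >= threshold:         # threshold crossed:
                spikes.append(t)               # emit a spike downstream
                potential = 0.0                # and reset
        return spikes

    # A brightening "light": weak input produces no spikes; stronger input does.
    signal = [0.1] * 5 + [0.5] * 5
    print(run_neuron(signal))  # spikes appear once the input grows strong
    ```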

    Despite the promise, some scientists are skeptical about TrueNorth’s potential, claiming that it’s not that much more impressive than what a cell phone camera can already do. Still others see it as overhyped or just one of many possible neuromorphic strategies.

    Jonathan Webb, writing for BBC News:

    Prof Steve Furber is a computer engineer at the University of Manchester who works on a similarly ambitious brain simulation project called SpiNNaker. That initiative uses a more flexible strategy, where the connections between neurons are not hard-wired.

    He told BBC News that “time will tell” which strategy succeeds in different applications.

    Proponents argue that the chip is endlessly scalable, meaning additional units can be assembled into bigger, more powerful machines. And if its processing power improves over generations, as that of traditional silicon chips did, then TrueNorth’s neuromorphic successors could lead to cell phones powered by extremely powerful, energy-efficient processors, the sort that could make today’s smartphone CPUs look like those in early PCs.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     