Tagged: IBM Corporation

  • richardmitnick 9:39 am on August 18, 2015 Permalink | Reply
    Tags: Computer chip technology, IBM Corporation

    From Wired: “IBM’s ‘Rodent Brain’ Chip Could Make Our Phones Hyper-Smart”


    08.17.15
    Cade Metz

    At a lab near San Jose, IBM has built the digital equivalent of a rodent brain—roughly speaking. It spans 48 of the company’s experimental TrueNorth chips, a new breed of processor that mimics the brain’s biological building blocks. Image credit: IBM

    Dharmendra Modha walks me to the front of the room so I can see it up close. About the size of a bathroom medicine cabinet, it rests on a table against the wall, and thanks to the translucent plastic on the outside, I can see the computer chips and the circuit boards and the multi-colored lights on the inside. It looks like a prop from a ’70s sci-fi movie, but Modha describes it differently. “You’re looking at a small rodent,” he says.

    He means the brain of a small rodent—or, at least, the digital equivalent. The chips on the inside are designed to behave like neurons—the basic building blocks of biological brains. Modha says the system in front of us spans 48 million of these artificial nerve cells, roughly the number of neurons packed into the head of a rodent.

    Modha oversees the cognitive computing group at IBM, the company that created these “neuromorphic” chips. For the first time, he and his team are sharing their unusual creations with the outside world, running a three-week “boot camp” for academics and government researchers at an IBM R&D lab on the far side of Silicon Valley. Plugging their laptops into the digital rodent brain at the front of the room, this eclectic group of computer scientists is exploring the particulars of IBM’s architecture and beginning to build software for the chip dubbed TrueNorth.

    Some researchers who got their hands on the chip at an engineering workshop in Colorado the previous month have already fashioned software that can identify images, recognize spoken words, and understand natural language. Basically, they’re using the chip to run “deep learning” algorithms, the same algorithms that drive the internet’s latest AI services, including the face recognition on Facebook and the instant language translation on Microsoft’s Skype. But the promise is that IBM’s chip can run these algorithms in smaller spaces with considerably less electrical power, letting us shoehorn more AI onto phones and other tiny devices, including hearing aids and, well, wristwatches.

    “What does a neuro-synaptic architecture give us? It lets us do things like image classification at a very, very low power consumption,” says Brian Van Essen, a computer scientist at the Lawrence Livermore National Laboratory who’s exploring how deep learning could be applied to national security. “It lets us tackle new problems in new environments.”

    The TrueNorth is part of a widespread movement to refine the hardware that drives deep learning and other AI services. Companies like Google, Facebook, and Microsoft are now running their algorithms on machines backed with GPUs (chips originally built to render computer graphics), and they’re moving towards FPGAs (chips you can program for particular tasks). For Peter Diehl, a PhD student in the cortical computation group at ETH Zurich and the University of Zurich, TrueNorth outperforms GPUs and FPGAs in certain situations because it consumes so little power.

    The main difference, says Jason Mars, a professor of computer science at the University of Michigan, is that the TrueNorth dovetails so well with deep-learning algorithms. These algorithms mimic neural networks in much the same way IBM’s chips do, recreating the neurons and synapses in the brain. One maps well onto the other. “The chip gives you a highly efficient way of executing neural networks,” says Mars, who declined an invitation to this month’s boot camp but has closely followed the progress of the chip.

    That said, the TrueNorth suits only part of the deep learning process—at least as the chip exists today—and some question how big an impact it will have. Though IBM is now sharing the chips with outside researchers, it’s years away from the market. For Modha, however, this is as it should be. As he puts it: “We’re trying to lay the foundation for significant change.”
    The Brain on a Phone

    Peter Diehl recently took a trip to China, where his smartphone didn’t have access to the `net, an experience that cast the limitations of today’s AI in sharp relief. Without the internet, he couldn’t use a service like Google Now, which applies deep learning to speech recognition and natural language processing, because most of the computing takes place not on the phone but on Google’s distant servers. “The whole system breaks down,” he says.

    Deep learning, you see, requires enormous amounts of processing power—processing power that’s typically provided by the massive data centers that your phone connects to over the `net rather than locally on an individual device. The idea behind TrueNorth is that it can help move at least some of this processing power onto the phone and other personal devices, something that can significantly expand the AI available to everyday people.

    To understand this, you have to understand how deep learning works. It operates in two stages. First, companies like Google and Facebook must train a neural network to perform a particular task. If they want to automatically identify cat photos, for instance, they must feed the neural net lots and lots of cat photos. Then, once the model is trained, another neural network must actually execute the task. You provide a photo and the system tells you whether it includes a cat. The TrueNorth, as it exists today, aims to facilitate that second stage.
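
    As a hypothetical sketch of those two stages, here is a toy perceptron standing in for a real deep network; every function name and the task are illustrative, not IBM's or Google's API. The point is the split: training is the expensive part, and inference is a cheap, self-contained step that could run on a device.

```python
# Toy illustration of deep learning's two stages: training (done in a
# data center in practice) and inference (the part TrueNorth targets).
# A single perceptron, not a deep net -- just the smallest model that
# cleanly separates the two phases.

def train(samples, labels, epochs=20, lr=0.1):
    """Stage 1: fit weights on labeled data (the expensive part)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x0, x1), y in zip(samples, labels):
            pred = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x0
            w[1] += lr * err * x1
            b += lr * err
    return w, b

def infer(model, x):
    """Stage 2: execute the trained model on new input (cheap, local)."""
    w, b = model
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Learn logical OR, then run inference without touching the training data.
model = train([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 1, 1, 1])
print(infer(model, (1, 0)))  # -> 1
```

    Once `train` has run, `infer` needs only the small bundle of weights, which is why the execution stage can move onto a phone while training stays in the data center.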

    Once a model is trained in a massive computer data center, the chip helps you execute the model. And because it’s small and uses so little power, it can fit onto a handheld device. This lets you do more at a faster speed, since you don’t have to send data over a network. If it becomes widely used, it could take much of the burden off data centers. “This is the future,” Mars says. “We’re going to see more of the processing on the devices.”

    Neurons, Axons, Synapses, Spikes

    Google recently discussed its efforts to run neural networks on phones, but for Diehl, the TrueNorth could take this concept several steps further. The difference, he explains, is that the chip dovetails so well with deep learning algorithms. Each chip mimics about a million neurons, and these can communicate with each other via something similar to a synapse, the connections between neurons in the brain.

    The setup is quite different from what you find in chips on the market today, including GPUs and FPGAs. Whereas these chips are wired to execute particular “instructions,” the TrueNorth juggles “spikes,” much simpler pieces of information analogous to the pulses of electricity in the brain. Spikes, for instance, can show the changes in someone’s voice as they speak—or changes in color from pixel to pixel in a photo. “You can think of it as a one-bit message sent from one neuron to another,” says Rodrigo Alvarez-Icaza, one of the chip’s chief designers.
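
    One standard way to picture spike-based computation is the textbook leaky integrate-and-fire neuron (a common teaching model, not TrueNorth's actual circuit): incoming one-bit spikes charge up a potential, the potential leaks between events, and the neuron fires its own one-bit spike when a threshold is crossed.

```python
# Leaky integrate-and-fire neuron: a classic textbook model of spiking
# hardware (illustrative only -- not TrueNorth's real circuit).
def lif_run(input_spikes, weight=0.5, leak=0.9, threshold=1.0):
    """Feed a train of one-bit input spikes; return the output spike train."""
    potential = 0.0
    out = []
    for s in input_spikes:
        potential = potential * leak + weight * s  # leak, then integrate
        if potential >= threshold:
            out.append(1)       # fire a one-bit spike...
            potential = 0.0     # ...and reset
        else:
            out.append(0)
    return out

# Three rapid input spikes push the potential over threshold; the later,
# sparse inputs leak away without ever causing an output spike.
print(lif_run([1, 1, 1, 0, 0, 1, 0, 0]))  # -> [0, 0, 1, 0, 0, 0, 0, 0]
```

    Note there are no "instructions" here in the CPU sense: the only traffic is one-bit events, which is what makes the architecture so frugal with power.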

    The upshot is a much simpler architecture that consumes less power. Though the chip contains 5.4 billion transistors, it draws about 70 milliwatts of power. A standard Intel computer processor, by comparison, includes 1.4 billion transistors and consumes about 35 to 140 watts. Even the ARM chips that drive smartphones consume several times more power than the TrueNorth.
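
    The back-of-envelope arithmetic behind that comparison, using the article's own figures:

```python
# Quick check of the power comparison: 70 mW for TrueNorth vs the
# 35-140 W range cited for a standard desktop CPU.
truenorth_w = 0.070                  # TrueNorth draw, in watts
cpu_low_w, cpu_high_w = 35.0, 140.0  # desktop CPU range, in watts

ratio_low = round(cpu_low_w / truenorth_w)
ratio_high = round(cpu_high_w / truenorth_w)
print(ratio_low, ratio_high)  # -> 500 2000
```

    In other words, the conventional processor draws roughly 500 to 2,000 times the power, despite having about a quarter as many transistors.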

    Of course, using such a chip also requires a new breed of software. That’s what researchers like Diehl are exploring at the TrueNorth boot camp, which began in early August and runs for another week at IBM’s research lab in San Jose, California. In some cases, researchers are translating existing code into the “spikes” that the chip can read (and back again). But they’re also working to build native code for the chip.

    Parting Gift

    Like these researchers, Modha discusses the TrueNorth mainly in biological terms. Neurons. Axons. Synapses. Spikes. And certainly, the chip mirrors such wetware in some ways. But the analogy has its limits. “That kind of talk always puts up warning flags,” says Chris Nicholson, the co-founder of deep learning startup Skymind. “Silicon operates in a very different way than the stuff our brains are made of.”

    Modha admits as much. When he started the project in 2008, backed by $53.5M in funding from Darpa, the research arm of the Department of Defense, the aim was to mimic the brain in a more complete way using an entirely different breed of chip material. But at one point, he realized this wasn’t going to happen anytime soon. “Ambitions must be balanced with reality,” he says.

    In 2010, while laid up in bed with the swine flu, he realized that the best way forward was a chip architecture that loosely mimicked the brain—an architecture that could eventually recreate the brain in more complete ways as new hardware materials were developed. “You don’t need to model the fundamental physics and chemistry and biology of the neurons to elicit useful computation,” he says. “We want to get as close to the brain as possible while maintaining flexibility.”

    This is TrueNorth. It’s not a digital brain. But it is a step toward a digital brain. And with IBM’s boot camp, the project is accelerating. The machine at the front of the room is really 48 separate machines, each built around its own TrueNorth processor. Next week, as the boot camp comes to a close, Modha and his team will separate them and let all those academics and researchers carry them back to their own labs, which span over 30 institutions on five continents. “Humans use technology to transform society,” Modha says, pointing to the room of researchers. “These are the humans.”

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 6:54 am on July 28, 2015 Permalink | Reply
    Tags: IBM Corporation

    From Ars Technica: “Inside the world’s quietest room” 


    Jul 28, 2015
    Sebastian Anthony

    In a hole, on some bedrock a few miles outside central Zurich, there lived a spin-polarised scanning electron microscope. Not a nasty, dirty, wet hole: it was a nanotech hole, and that means quiet. And electromagnetically shielded. And vibration-free. And cool.

    When you want to carry out experiments at the atomic scale—when you want to pick up a single atom and move it to the other end of a molecule—it requires incredibly exacting equipment. That equipment, though, is worthless without an equally exacting laboratory to put it in. If you’re peering down the (figurative) barrel of a microscope at a single atom, you need to make sure there are absolutely no physical vibrations at all, or you’ll just get a blurry image. Similarly, atoms really don’t like to sit still: you don’t want to spend a few hours setting up a transmission electron microscope (TEM), only to have a temperature fluctuation or EM field imbue the atoms with enough energy to start jumping around of their own accord.

    One solution, as you have probably gathered from the introduction to this story, is to build a bunker deep underground, completely from scratch, with every facet of the project simulated, designed, and built with a singular purpose in mind: to block out the outside world entirely. That’s exactly what IBM Research did back in 2011, when it opened the Binnig and Rohrer Nanotechnology Center.


    The Center, which is located just outside Zurich in Rüschlikon, cost about €80 million (£60 million, $90 million) to build, which includes equipment costs of around €27 million (£20 million, $30 million). IBM constructed and owns the building, but IBM Research and ETH Zurich have shared use of the building and equipment. ETH and IBM collaborate on a lot of research, especially on nanoscale stuff.

    The entrance hall to the Binnig and Rohrer Nanotechnology Center.


    Deep below the Center there are six quiet rooms—or, to put it another way, rooms that are almost completely devoid of any kind of noise, from acoustic waves to physical vibrations to electromagnetic radiation. Each room is dedicated to a different nanometre-scale experiment: in one room, I was shown a Raman microscope, which is used for “fingerprinting” molecules; in another, a giant TEM, which is like an optical microscope, but it uses a beam of electrons instead of light to resolve details as small as 0.09nm. Every room is eerily deadened and quiet, a stillness belied by the hulking silhouette of a multi-million-pound apparatus sitting in the middle of it. After investigating a few rooms, I notice that my phone is uncharacteristically lifeless. “That’s the nickel-iron box that encases every room,” my guide informs me.

    It’s impossible to go into every design feature of the noise-free rooms, but I’ll run through the most important and the most interesting. For a start, the rooms are built directly on the bedrock, significantly reducing vibrations from a nearby road and an underground train. Then, the walls of each room are clad with the aforementioned nickel-iron alloy, screening against most external electromagnetic fields, including those produced by experiments in nearby rooms. There are dozens of external sources of EM radiation, but the strongest are generated by mobile phone masts, overhead power lines, and the (electric) underground train, all of which would play havoc with IBM’s nanoscale experiments.

    Internally, most rooms are divided in two: there’s a small antechamber, which is where the human controller sits, and then the main space with the actual experiment/equipment. Humans generate around 100 watts of heat, and not inconsiderable amounts of noise and vibration, so it’s best to keep them away from experiments while they’re running.

    To provide even more isolation, there are two separate floors in each room: one suspended floor for the scientists to walk on, and another separate floor that only the equipment sits on. The latter isn’t actually a floor: it’s a giant (up-to-68-ton) concrete block that rests on active air suspension. Any vibrations that make it through the bedrock, or that come from large trucks rumbling by, are damped in real time by the air suspension.

    We’re not done yet! To minimise acoustic noise (i.e. sound), the rooms are lined with acoustically absorbent material. Furthermore, if an experiment has noisy ancillary components (a vacuum pump, electrical transformer, etc.), they are placed in another room away from the main apparatus, so that they’re physically and audibly isolated.

    And finally, there’s some very clever air conditioning that’s quiet, generates minimal air flux, and is capable of keeping the temperature in the rooms very stable. In every room, the suspended floor (the human-designated bit) is perforated with holes. Cold air slowly ekes out of these holes, rises to the ceiling, and is then sucked out. The air flow was hardly noticeable, except for on my ankles: in a moment of unwarranted hipness earlier that morning, I had decided to wear boat shoes without socks.

    That’s about it for the major, physical features of IBM Research’s quiet rooms, but there are two other bits that are pretty neat. First, the whole place is lit with LEDs, driven by a DC power supply that is far enough away that its EM emissions don’t interfere. Second, each room is equipped with three pairs of Helmholtz coils, oriented to cover the X, Y, and Z axes. These coils are tuned to cancel out any residual magnetic fields, such as the Earth’s, that haven’t already been damped by the various other shields.
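
    For a sense of scale, the standard Helmholtz-coil formula gives the current needed to null the Earth's roughly 50 µT field. The coil dimensions below are invented purely for illustration; only the formula itself is standard.

```python
import math

# Back-of-envelope: current a Helmholtz pair needs to cancel the Earth's
# magnetic field. The geometry here is assumed for illustration; the
# standard formula is B = (4/5)^(3/2) * mu0 * N * I / R.
MU0 = 4 * math.pi * 1e-7      # vacuum permeability, T*m/A
B_EARTH = 50e-6               # Earth's field, ~50 microtesla
N_TURNS = 100                 # assumed turns per coil
RADIUS = 1.0                  # assumed coil radius, metres

current = B_EARTH * RADIUS / ((4 / 5) ** 1.5 * MU0 * N_TURNS)
print(f"{current:.2f} A")     # roughly half an ampere
```

    The fields left over after the passive nickel-iron shielding are thousands of times weaker than the Earth's, so the actual trim currents in each axis pair would be correspondingly tiny.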

    Labelled images of IBM’s noise-free labs, showing various important features

    Just how quiet are the rooms?

    So, after all that effort—each of the six rooms cost about €1.4 million to build, before equipment—just how quiet are the rooms below the Binnig and Rohrer Nanotechnology Center? Let’s break it down by the type of noise.

    The temperature at waist height in the rooms is set to 21 degrees Celsius, with a stability of 0.1°C per hour (i.e. it would take an hour for the temperature to rise to 21.1°C).

    Electromagnetic fields produced by AC sources are damped to less than 3 nT (nanotesla)—or about 1,500 times weaker than the magnetic field produced by a fridge magnet. From DC sources, it’s damped to 20 nT.

    The vibration damping is probably the most impressive: for the equipment on the concrete pedestals, movement is reduced to less than 300 nm/s at 1 Hz, and less than 10 nm/s above 100 Hz. These figures are well below the specs of NIST’s Advanced Measurement Laboratory in Maryland, USA.

    Somewhat ironically for the world’s quietest rooms, the weakest link is acoustic noise. Even though the rooms themselves are shielded from outside noises, and the acoustically absorbent material does a good job of stopping internal sound waves dead, there’s no avoiding the quiet hum of some of the machines or the slight susurration of the ventilation system.

    The acoustic noise level in the rooms is always below 30 dB, dipping down as low as 21 dB if there isn’t a noisy experiment running. In human terms, the rooms were definitely quiet, but not so quiet that I could feel my sanity slipping away, or anything crazy like that. I was a little bit disappointed that I couldn’t hear my various internal organs shifting around, truth be told.

    Why did IBM build six of these rooms?

    “You’re only as good as your tools.” It’s a trite, overused statement, but in this case it perfectly describes why IBM and ETH Zurich spent so many millions of euros on the quiet rooms.

    Big machines like the TEM or spin-SEM need to be kept very still, with as little outside interference as possible: if you can’t stay within the machine’s nominal operational parameters, you’re not going to get much scientifically useful data out of it.

    On the flip side, however, if you surpass the machine’s optimal parameters—if you reduce the amount of vibration, noise, etc. beyond the “recommended specs”—then you can produce images and graphs with more resolution than even the manufacturer thought possible.

    IBM Research’s spin-SEM, for example, used to be located in the basement of the main building, on a normal concrete floor. After being relocated to the quiet rooms, the lead scientist who uses the spin-SEM said its resolution is 2-3 times better (an utterly huge gain, in case you were wondering).

    For much the same reason, my guide said that “several tooling manufacturers” have contacted IBM Research to ask if they can test their equipment in the noise-free labs: they want to see just how well it will perform under near-perfect conditions.

    The best story, though, I saved for last. Back in the ’80s and ’90s, before the Center was built, the IBM researchers didn’t have a specialised nanotechnology facility: they just worked in their labs, which were usually located down in the basement. When Gerd Binnig and Heinrich Rohrer invented the scanning tunnelling microscope (STM)—an achievement that would later net them a Nobel prize—they worked in the dead of night to minimise vibrations from the nearby road and other outside interference.

    After the new building was finished—which, incidentally, is named after Binnig and Rohrer—my guide spoke to some IBM retirees who had just finished inspecting the noise-free rooms. “We wish we’d had these rooms back in the ’80s and ’90s, so that we didn’t have to work at 3am,” they said.

    See the full article here.


     
  • richardmitnick 9:24 pm on June 30, 2012 Permalink | Reply
    Tags: IBM Corporation, Smarter Planet

    From the WCG Clean Energy Project at Harvard: The IBM Contribution 

    The Clean Energy Project (CEP2) at Harvard University gives us a look into IBM’s contribution to the betterment of society via World Community Grid (WCG).

    Watch this short video.

    You can visit the WCG web site (link is above), download the BOINC software agent, and attach to the Clean Energy Project. We would love to have you.

    From the project:

    Mission
    The mission of The Clean Energy Project is to find new materials for the next generation of solar cells and later, energy storage devices. By harnessing the immense power of World Community Grid, researchers can calculate the electronic properties of hundreds of thousands of organic materials – thousands of times more than could ever be tested in a lab – and determine which candidates are most promising for developing affordable solar energy technology.

     
  • richardmitnick 11:36 am on March 27, 2012 Permalink | Reply
    Tags: IBM Corporation

    From the Wall Street Journal: “Rutgers University, IBM Open Supercomputer Center” 

    As a proud alumnus of Rutgers University, I could not let this go by.

    This is copyright protected material, so just a highlight or two.

    By HEATHER HADDON
    March 27, 2012

    “Rutgers University and International Business Machines Corp. will cut the ribbon Tuesday on a technology center in New Jersey that houses a $3.3 million supercomputer—stacks of processors that can digest massive quantities of data in a fraction of the time that a desktop unit would take.


    Named “IBM Blue Gene/P,” the machine, about the size of two refrigerators, will be one of the most powerful computers in the Northeast, with thousands of central processing units, or CPUs. IBM hopes in the coming year it will make the prestigious “TOP 500” list of the world’s most powerful computers, determined by a group of academic and government researchers.

    The supercomputer has similar analytical capabilities to “Watson,” the IBM computer that competed on the TV game show “Jeopardy.”

    A genome analysis that would take a year on a desktop, for example, could wrap up in a day on this computer, said Michael Pazzani, vice president for research and economic development at Rutgers in Piscataway, N.J., where the technology center is located.

    ‘This is the first step in a multiyear plan that involves IBM and Rutgers. We hope to become one of the top 10 academic computing centers in the world,’ Mr. Pazzani said.”

    See the full article here.

     