Tagged: Artificial Intelligence

  • richardmitnick 4:29 pm on December 27, 2015 Permalink | Reply
    Tags: Artificial Intelligence, Walter Pitts

    From Nautilus: “The Man Who Tried to Redeem the World with Logic” 

    Nautilus

    February 5, 2015 [This just appeared]
    By Amanda Gefter
    Illustration by Julia Breckenreid

    Walter Pitts rose from the streets to MIT, but couldn’t escape himself.

    Walter Pitts was used to being bullied. He’d been born into a tough family in Prohibition-era Detroit, where his father, a boiler-maker, had no trouble raising his fists to get his way. The neighborhood boys weren’t much better. One afternoon in 1935, they chased him through the streets until he ducked into the local library to hide. The library was familiar ground, where he had taught himself Greek, Latin, logic, and mathematics—better than home, where his father insisted he drop out of school and go to work. Outside, the world was messy. Inside, it all made sense.

    Not wanting to risk another run-in that night, Pitts stayed hidden until the library closed for the evening. Alone, he wandered through the stacks of books until he came across Principia Mathematica, a three-volume tome written by Bertrand Russell and Alfred Whitehead between 1910 and 1913, which attempted to reduce all of mathematics to pure logic. Pitts sat down and began to read. For three days he remained in the library until he had read each volume cover to cover—nearly 2,000 pages in all—and had identified several mistakes. Deciding that Bertrand Russell himself needed to know about these, the boy drafted a letter to Russell detailing the errors. Not only did Russell write back, he was so impressed that he invited Pitts to study with him as a graduate student at Cambridge University in England. Pitts couldn’t oblige him, though—he was only 12 years old. But three years later, when he heard that Russell would be visiting the University of Chicago, the 15-year-old ran away from home and headed for Illinois. He never saw his family again.

    In 1923, the year that Walter Pitts was born, a 25-year-old Warren McCulloch was also digesting the Principia. But that is where the similarities ended—McCulloch could not have come from a more different world. Born into a well-to-do East Coast family of lawyers, doctors, theologians, and engineers, McCulloch attended a private boys’ academy in New Jersey, then studied mathematics at Haverford College in Pennsylvania, then philosophy and psychology at Yale. In 1923 he was at Columbia, where he was studying “experimental aesthetics” and was about to earn his medical degree in neurophysiology. But McCulloch was a philosopher at heart. He wanted to know what it means to know. [Sigmund] Freud had just published The Ego and the Id, and psychoanalysis was all the rage. McCulloch didn’t buy it—he felt certain that somehow the mysterious workings and failings of the mind were rooted in the purely mechanical firings of neurons in the brain.

    Though they started at opposite ends of the socioeconomic spectrum, McCulloch and Pitts were destined to live, work, and die together. Along the way, they would create the first mechanistic theory of the mind, the first computational approach to neuroscience, the logical design of modern computers, and the pillars of artificial intelligence [AI]. But this is more than a story about a fruitful research collaboration. It is also about the bonds of friendship, the fragility of the mind, and the limits of logic’s ability to redeem a messy and imperfect world.

    Walter Pitts (1923-1969): Walter Pitts’ life passed from homeless runaway, to MIT neuroscience pioneer, to withdrawn alcoholic. Estate of Francis Bello / Science Source

    Standing face to face, they were an unlikely pair. McCulloch, 42 years old when he met Pitts, was a confident, gray-eyed, wild-bearded, chain-smoking philosopher-poet who lived on whiskey and ice cream and never went to bed before 4 a.m. Pitts, 18, was small and shy, with a long forehead that prematurely aged him, and a squat, duck-like, bespectacled face. McCulloch was a respected scientist. Pitts was a homeless runaway. He’d been hanging around the University of Chicago, working a menial job and sneaking into Russell’s lectures, where he met a young medical student named Jerome Lettvin. It was Lettvin who introduced the two men. The moment they spoke, they realized they shared a hero in common: Gottfried Leibniz. The 17th-century philosopher had attempted to create an alphabet of human thought, each letter of which represented a concept and could be combined and manipulated according to a set of logical rules to compute all knowledge—a vision that promised to transform the imperfect outside world into the rational sanctuary of a library.

    McCulloch explained to Pitts that he was trying to model the brain with a Leibnizian logical calculus. He had been inspired by the Principia, in which Russell and Whitehead tried to show that all of mathematics could be built from the ground up using basic, indisputable logic. Their building block was the proposition—the simplest possible statement, either true or false. From there, they employed the fundamental operations of logic, like the conjunction (“and”), disjunction (“or”), and negation (“not”), to link propositions into increasingly complicated networks. From these simple propositions, they derived the full complexity of modern mathematics.

    Which got McCulloch thinking about neurons. He knew that each of the brain’s nerve cells only fires after a minimum threshold has been reached: Enough of its neighboring nerve cells must send signals across the neuron’s synapses before it will fire off its own electrical spike. It occurred to McCulloch that this set-up was binary—either the neuron fires or it doesn’t. A neuron’s signal, he realized, is a proposition, and neurons seemed to work like logic gates, taking in multiple inputs and producing a single output. By varying a neuron’s firing threshold, it could be made to perform “and,” “or,” and “not” functions.
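    To make the threshold idea concrete, here is a minimal sketch in Python, in the spirit of the McCulloch-Pitts unit rather than taken from their paper; the weights and thresholds are illustrative choices showing how one binary neuron can behave as an “and,” “or,” or “not” gate.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style unit: fire (1) if the weighted sum
    of binary inputs reaches the threshold, otherwise stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# The same kind of unit behaves as different logic gates as the threshold varies.
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)   # fires only if both inputs fire
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)   # fires if at least one input fires
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)   # an inhibitory input silences the unit

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), " NOT 1:", NOT(1))
```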

    Fresh from reading a new paper by a British mathematician named Alan Turing which proved the possibility of a machine that could compute any function (so long as it was possible to do so in a finite number of steps), McCulloch became convinced that the brain was just such a machine—one which uses logic encoded in neural networks to compute. Neurons, he thought, could be linked together by the rules of logic to build more complex chains of thought, in the same way that the Principia linked chains of propositions to build complex mathematics.

    As McCulloch explained his project, Pitts understood it immediately, and knew exactly which mathematical tools could be used. McCulloch, enchanted, invited the teen to live with him and his family in Hinsdale, a rural suburb on the outskirts of Chicago. The Hinsdale household was a bustling, free-spirited bohemia. Chicago intellectuals and literary types constantly dropped by the house to discuss poetry, psychology, and radical politics while Spanish Civil War and union songs blared from the phonograph. But late at night, when McCulloch’s wife Rook and the three children went to bed, McCulloch and Pitts alone would pour the whiskey, hunker down, and attempt to build a computational brain from the neuron up.

    Before Pitts’ arrival, McCulloch had hit a wall: There was nothing stopping chains of neurons from twisting themselves into loops, so that the output of the last neuron in a chain became the input of the first—a neural network chasing its tail. McCulloch had no idea how to model that mathematically. From the point of view of logic, a loop smells a lot like paradox: the consequent becomes the antecedent, the effect becomes the cause. McCulloch had been labeling each link in the chain with a time stamp, so that if the first neuron fired at time t, the next one fired at t+1, and so on. But when the chains circled back, t+1 suddenly came before t.

    Pitts knew how to tackle the problem. He used modulo mathematics, which deals with numbers that circle back around on themselves like the hours of a clock. He showed McCulloch that the paradox of time t+1 coming before time t wasn’t a paradox at all, because in his calculations “before” and “after” lost their meaning. Time was removed from the equation altogether. If one were to see a lightning bolt flash across the sky, the eyes would send a signal to the brain, shuffling it through a chain of neurons. Starting with any given neuron in the chain, you could retrace the signal’s steps and figure out just how long ago lightning struck. Unless, that is, the chain is a loop. In that case, the information encoding the lightning bolt just spins in circles, endlessly. It bears no connection to the time at which the lightning actually occurred. It becomes, as McCulloch put it, “an idea wrenched out of time.” In other words, a memory.
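    Here is a rough sketch of the loop-as-memory idea stepped through discrete time (an illustrative construction, not Pitts’ actual modulo calculation): once a brief stimulus pushes the looped unit over its threshold, its own output keeps re-exciting it, so the signal persists long after the flash is gone.

```python
def step(state, external_input, threshold=1):
    """One discrete time step of a single self-exciting unit:
    it fires if its own previous output plus any external input
    reaches the threshold."""
    return 1 if (state + external_input) >= threshold else 0

state = 0
for t in range(10):
    stimulus = 1 if t == 2 else 0   # a brief flash at t = 2, then nothing
    state = step(state, stimulus)
    print(f"t={t}  stimulus={stimulus}  unit fires={state}")
# After t = 2 the unit keeps firing with no further input: the signal
# circulates in the loop, "an idea wrenched out of time."
```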

    By the time Pitts finished calculating, he and McCulloch had on their hands a mechanistic model of the mind, the first application of computation to the brain, and the first argument that the brain, at bottom, is an information processor. By stringing simple binary neurons into chains and loops, they had shown that the brain could implement every possible logical operation and compute anything that could be computed by one of Turing’s hypothetical machines. Thanks to those ouroboric loops, they had also found a way for the brain to abstract a piece of information, hang on to it, and abstract it yet again, creating rich, elaborate hierarchies of lingering ideas in a process we call “thinking.”

    McCulloch and Pitts wrote up their findings in a now-seminal paper, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” published in the Bulletin of Mathematical Biophysics. Their model was vastly oversimplified for a biological brain, but it succeeded at showing a proof of principle. Thought, they said, need not be shrouded in Freudian mysticism or engaged in struggles between ego and id. “For the first time in the history of science,” McCulloch announced to a group of philosophy students, “we know how we know.”

    Pitts had found in McCulloch everything he had needed—acceptance, friendship, his intellectual other half, the father he never had. Although he had only lived in Hinsdale for a short time, the runaway would refer to McCulloch’s house as home for the rest of his life. For his part, McCulloch was just as enamored. In Pitts he had found a kindred spirit, his “bootlegged collaborator,” and a mind with the technical prowess to bring McCulloch’s half-formed notions to life. As he put it in a letter of reference about Pitts, “Would I had him with me always.” (1)

    Pitts was soon to make a similar impression on one of the towering intellectual figures of the 20th century, the mathematician, philosopher, and founder of cybernetics, Norbert Wiener. In 1943, Lettvin brought Pitts into Wiener’s office at the Massachusetts Institute of Technology (MIT). Wiener didn’t introduce himself or make small talk. He simply walked Pitts over to a blackboard where he was working out a mathematical proof. As Wiener worked, Pitts chimed in with questions and suggestions. According to Lettvin, by the time they reached the second blackboard, it was clear that Wiener had found his new right-hand man. Wiener would later write that Pitts was “without question the strongest young scientist whom I have ever met … I should be extremely astonished if he does not prove to be one of the two or three most important scientists of his generation, not merely in America but in the world at large.”

    So impressed was Wiener that he promised Pitts a Ph.D. in mathematics at MIT, despite the fact that he had never graduated from high school—something that the strict rules at the University of Chicago prohibited. It was an offer Pitts couldn’t refuse. By the fall of 1943, Pitts had moved into a Cambridge apartment, was enrolled as a special student at MIT, and was studying under one of the most influential scientists in the world. It was quite a long way from blue-collar Detroit.

    Wiener wanted Pitts to make his model of the brain more realistic. Despite the leaps Pitts and McCulloch had made, their work had made barely a ripple among brain scientists—in part because the symbolic logic they’d employed was hard to decipher, but also because their stark and oversimplified model didn’t capture the full messiness of the biological brain. Wiener, however, understood the implications of what they’d done, and knew that a more realistic model would be game-changing. He also realized that it ought to be possible for Pitts’ neural networks to be implemented in man-made machines, ushering in his dream of a cybernetic revolution. Wiener figured that if Pitts was going to make a realistic model of the brain’s 100 billion interconnected neurons, he was going to need statistics on his side. And statistics and probability theory were Wiener’s area of expertise. After all, it had been Wiener who discovered a precise mathematical definition of information: The higher the probability, the higher the entropy and the lower the information content.
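    As a rough numerical illustration of the definition attributed to Wiener here (stated in its familiar logarithmic form, with made-up probabilities): the less probable a message is, the more information its arrival carries.

```python
import math

def information_bits(p):
    """Self-information of an event with probability p, in bits: -log2(p).
    Rare (low-probability) events carry more information than near-certain ones."""
    return -math.log2(p)

for p in (0.99, 0.5, 0.01):
    print(f"p = {p:<4}  ->  {information_bits(p):.2f} bits")
# p = 0.99  ->  0.01 bits  (almost certain, almost no news)
# p = 0.5   ->  1.00 bits  (a fair coin flip)
# p = 0.01  ->  6.64 bits  (a surprise)
```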

    As Pitts began his work at MIT, he realized that although genetics must encode for gross neural features, there was no way our genes could pre-determine the trillions of synaptic connections in the brain—the amount of information it would require was untenable. It must be the case, he figured, that we all start out with essentially random neural networks—highly probable states containing negligible information (a thesis that continues to be debated to the present day). He suspected that by altering the thresholds of neurons over time, randomness could give way to order and information could emerge. He set out to model the process using statistical mechanics. Wiener excitedly cheered him on, because he knew if such a model were embodied in a machine, that machine could learn.

    “I now understand at once some seven-eighths of what Wiener says, which I am told is something of an achievement,” Pitts wrote in a letter to McCulloch in December of 1943, some three months after he’d arrived. His work with Wiener was “to constitute the first adequate discussion of statistical mechanics, understood in the most general possible sense, so that it includes for example the problem of deriving the psychological, or statistical, laws of behavior from the microscopic laws of neurophysiology … Doesn’t it sound fine?”

    That winter, Wiener brought Pitts to a conference he organized in Princeton with the mathematician and physicist John von Neumann, who was equally impressed with Pitts’ mind. Thus formed the beginnings of the group who would become known as the cyberneticians, with Wiener, Pitts, McCulloch, Lettvin, and von Neumann its core. And among this rarified group, the formerly homeless runaway stood out. “None of us would think of publishing a paper without his corrections and approval,” McCulloch wrote. “[Pitts] was in no uncertain terms the genius of our group,” said Lettvin. “He was absolutely incomparable in the scholarship of chemistry, physics, of everything you could talk about history, botany, etc. When you asked him a question, you would get back a whole textbook … To him, the world was connected in a very complex and wonderful fashion.”(2)

    The following June, 1945, von Neumann penned what would become a historic document entitled First Draft of a Report on the EDVAC, the first published description of a stored-program binary computing machine—the modern computer. The EDVAC’s predecessor, the ENIAC, which took up 1,800 square feet of space in Philadelphia, was more like a giant electronic calculator than a computer. It was possible to reprogram the thing, but it took several operators several weeks to reroute all the wires and switches to do it. Von Neumann realized that it might not be necessary to rewire the machine every time you wanted it to perform a new function. If you could take each configuration of the switches and wires, abstract them, and encode them symbolically as pure information, you could feed them into the computer the same way you’d feed it data, only now the data would include the very programs that manipulate the data. Without having to rewire a thing, you’d have a universal Turing machine.

    To accomplish this, von Neumann suggested modeling the computer after Pitts and McCulloch’s neural networks. In place of neurons, he suggested vacuum tubes, which would serve as logic gates, and by stringing them together exactly as Pitts and McCulloch had discovered, you could carry out any computation. To store the programs as data, the computer would need something new: a memory. That’s where Pitts’ loops came into play. “An element which stimulates itself will hold a stimulus indefinitely,” von Neumann wrote in his report, echoing Pitts and employing his modulo mathematics. He detailed every aspect of this new computational architecture. In the entire report, he cited only a single paper: “A Logical Calculus” by McCulloch and Pitts.

    By 1946, Pitts was living on Beacon Street in Boston with Oliver Selfridge, an MIT student who would become “the father of machine perception”; Hyman Minsky, the future economist; and Lettvin. He was teaching mathematical logic at MIT and working with Wiener on the statistical mechanics of the brain. The following year, at the Second Cybernetic Conference, Pitts announced that he was writing his doctoral dissertation on probabilistic three-dimensional neural networks. The scientists in the room were floored. “Ambitious” was hardly the word to describe the mathematical skill that it would take to pull off such a feat. And yet, everyone who knew Pitts was sure that he could do it. They would be waiting with bated breath.

    In a letter to the philosopher Rudolf Carnap, McCulloch catalogued Pitts’ achievements. “He is the most omniverous of scientists and scholars. He has become an excellent dye chemist, a good mammalogist, he knows the sedges, mushrooms and the birds of New England. He knows neuroanatomy and neurophysiology from their original sources in Greek, Latin, Italian, Spanish, Portuguese, and German for he learns any language he needs as soon as he needs it. Things like electrical circuit theory and the practical soldering in of power, lighting, and radio circuits he does himself. In my long life, I have never seen a man so erudite or so really practical.” Even the media took notice. In June 1954, Fortune magazine ran an article featuring the 20 most talented scientists under 40; Pitts was featured, next to Claude Shannon and James Watson. Against all odds, Walter Pitts had skyrocketed into scientific stardom.

    Some years earlier, in a letter to McCulloch, Pitts wrote “About once a week now I become violently homesick to talk all evening and all night to you.” Despite his success, Pitts had become homesick—and home meant McCulloch. He was coming to believe that if he could work with McCulloch again, he would be happier, more productive, and more likely to break new ground. McCulloch, too, seemed to be floundering without his bootlegged collaborator.

    Suddenly, the clouds broke. In 1952, Jerry Wiesner, associate director of MIT’s Research Laboratory of Electronics, invited McCulloch to head a new project on brain science at MIT. McCulloch jumped at the opportunity—because it meant he would be working with Pitts again. He traded his full professorship and his large Hinsdale home for a research associate title and a crappy apartment in Cambridge, and couldn’t have been happier about it. The plan for the project was to use the full arsenal of information theory, neurophysiology, statistical mechanics, and computing machines to understand how the brain gives rise to the mind. Lettvin, along with the young neuroscientist Patrick Wall, joined McCulloch and Pitts at their new headquarters in Building 20 on Vassar Street. They posted a sign on the door: Experimental Epistemology.

    With Pitts and McCulloch together again, and with Wiener and Lettvin in the mix, everything seemed poised for progress and revolution. Neuroscience, cybernetics, artificial intelligence, computer science—it was all on the brink of an intellectual explosion. The sky—or the mind—was the limit.

    There was just one person who wasn’t happy about the reunion: Wiener’s wife. Margaret Wiener was, by all accounts, a controlling, conservative prude—and she despised McCulloch’s influence on her husband. McCulloch hosted wild get-togethers at his family farm in Old Lyme, Connecticut, where ideas roamed free and everyone went skinny-dipping. It had been one thing when McCulloch was in Chicago, but now he was coming to Cambridge and Margaret wouldn’t have it. And so she invented a story. She sat Wiener down and informed him that when their daughter, Barbara, had stayed at McCulloch’s house in Chicago, several of “his boys” had seduced her. Wiener immediately sent an angry telegram to Wiesner: “Please inform [Pitts and Lettvin] that all connection between me and your projects is permanently abolished. They are your problem. Wiener.” He never spoke to Pitts again. And he never told him why.(3)

    For Pitts, this marked the beginning of the end. Wiener, who had taken on a fatherly role in his life, now abandoned him inexplicably. For Pitts, it wasn’t merely a loss. It was something far worse than that: It defied logic.

    And then there were the frogs. In the basement of Building 20 at MIT, along with a garbage can full of crickets, Lettvin kept a group of them. At the time, biologists believed that the eye was like a photographic plate that passively recorded dots of light and sent them, dot for dot, to the brain, which did the heavy lifting of interpretation. Lettvin decided to put the idea to the test, opening up the frogs’ skulls and attaching electrodes to single fibers in their optic nerves.

    Pitts with Lettvin: Pitts with Jerome Lettvin and one subject of their experiments on visual perception (1959). Wikipedia

    Together with Pitts, McCulloch, and the Chilean biologist and philosopher Humberto Maturana, he subjected the frogs to various visual experiences—brightening and dimming the lights, showing them color photographs of their natural habitat, magnetically dangling artificial flies—and recorded what the eye measured before it sent the information off to the brain. To everyone’s surprise, it didn’t merely record what it saw, but filtered and analyzed information about visual features like contrast, curvature, and movement. “The eye speaks to the brain in a language already highly organized and interpreted,” they reported in the now-seminal paper “What the Frog’s Eye Tells the Frog’s Brain,” published in 1959.

    The results shook Pitts’ worldview to its core. Instead of the brain computing information digital neuron by digital neuron using the exacting implement of mathematical logic, messy, analog processes in the eye were doing at least part of the interpretive work. “It was apparent to him after we had done the frog’s eye that even if logic played a part, it didn’t play the important or central part that one would have expected,” Lettvin said. “It disappointed him. He would never admit it, but it seemed to add to his despair at the loss of Wiener’s friendship.”

    The spate of bad news aggravated a depressive streak that Pitts had been struggling with for years. “I have a kind of personal woe I should like your advice on,” Pitts had written to McCulloch in one of his letters. “I have noticed in the last two or three years a growing tendency to a kind of melancholy apathy or depression. [Its] effect is to make the positive value seem to disappear from the world, so that nothing seems worth the effort of doing it, and whatever I do or what happens to me ceases to matter very greatly …”

    In other words, Pitts was struggling with the very logic he had sought in life. Pitts wrote that his depression might be “common to all people with an excessively logical education who work in applied mathematics: It is a kind of pessimism resulting from an inability to believe in what people call the Principle of Induction, or the principle of the Uniformity of Nature. Since one cannot prove, or even render probable a priori, that the sun should rise tomorrow, we cannot really believe it shall.”

    Now, alienated from Wiener, Pitts’ despair turned lethal. He began drinking heavily and pulled away from his friends. When he was offered his Ph.D., he refused to sign the paperwork. He set fire to his dissertation along with all of his notes and his papers. Years of work—important work that everyone in the community was eagerly awaiting—he burnt it all, priceless information reduced to entropy and ash. Wiesner offered Lettvin increased support for the lab if he could recover any bits of the dissertation. But it was all gone.

    Pitts remained employed by MIT, but this was little more than a technicality; he hardly spoke to anyone and would frequently disappear. “We’d go hunting for him night after night,” Lettvin said. “Watching him destroy himself was a dreadful experience.” In a way Pitts was still 12 years old. He was still beaten, still a runaway, still hiding from the world in musty libraries. Only now his books took the shape of a bottle.

    With McCulloch, Pitts had laid the foundations for cybernetics and artificial intelligence. They had steered psychiatry away from Freudian analysis and toward a mechanistic understanding of thought. They had shown that the brain computes and that mentation is the processing of information. In doing so, they had also shown how a machine could compute, providing the key inspiration for the architecture of modern computers. Thanks to their work, there was a moment in history when neuroscience, psychiatry, computer science, mathematical logic, and artificial intelligence were all one thing, following an idea first glimpsed by Leibniz—that man, machine, number, and mind all use information as a universal currency. What appeared on the surface to be very different ingredients of the world—hunks of metal, lumps of gray matter, scratches of ink on a page—were profoundly interchangeable.

    There was a catch, though: This symbolic abstraction made the world transparent but the brain opaque. Once everything had been reduced to information governed by logic, the actual mechanics ceased to matter—the tradeoff for universal computation was ontology. Von Neumann was the first to see the problem. He expressed his concern to Wiener in a letter that anticipated the coming split between artificial intelligence on one side and neuroscience on the other. “After the great positive contribution of Turing-cum-Pitts-and-McCulloch is assimilated,” he wrote, “the situation is rather worse than better than before. Indeed these authors have demonstrated in absolute and hopeless generality that anything and everything … can be done by an appropriate mechanism, and specifically by a neural mechanism—and that even one, definite mechanism can be ‘universal.’ Inverting the argument: Nothing that we may know or learn about the functioning of the organism can give, without ‘microscopic,’ cytological work any clues regarding the further details of the neural mechanism.”

    This universality made it impossible for Pitts to provide a model of the brain that was practical, and so his work was dismissed and more or less forgotten by the community of scientists working on the brain. What’s more, the experiment with the frogs had shown that a purely logical, purely brain-centered vision of thought had its limits. Nature had chosen the messiness of life over the austerity of logic, a choice Pitts likely could not comprehend. He had no way of knowing that while his ideas about the biological brain were not panning out, they were setting in motion the age of digital computing, the neural network approach to machine learning, and the so-called connectionist philosophy of mind. In his own mind, he had been defeated.

    On Saturday, April 21, 1969, his hand shaking with an alcoholic’s delirium tremens, Pitts sent a letter from his room at Beth Israel Hospital in Boston to McCulloch’s room down the road at the Cardiac Intensive Care Ward at Peter Bent Brigham Hospital. “I understand you had a light coronary; … that you are attached to many sensors connected to panels and alarms continuously monitored by a nurse, and cannot in consequence turn over in bed. No doubt this is cybernetical. But it all makes me most abominably sad.” Pitts himself had been in the hospital for three weeks, having been admitted with liver problems and jaundice. On May 14, 1969 Walter Pitts died alone in a boarding house in Cambridge, of bleeding esophageal varices, a condition associated with cirrhosis of the liver. Four months later, McCulloch passed away, as if the existence of one without the other were simply illogical, a reverberating loop wrenched open.

    References

    1. All letters retrieved from the McCulloch Papers, BM139, Series I: Correspondence 1931–1968, Folder “Pitts, Walter.”

    2. All Jerome Lettvin quotes taken from: Anderson, J.A. & Rosenfeld, E. Talking Nets: An Oral History of Neural Networks, MIT Press (2000).

    3. Conway, F. & Siegelman, J. Dark Hero of the Information Age: In Search of Norbert Wiener, the Father of Cybernetics, Basic Books, New York, NY (2006).

    Access to historical letters was provided by the American Philosophical Society.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 7:44 am on December 12, 2015 Permalink | Reply
    Tags: Artificial Intelligence

    From NYT: “Artificial-Intelligence Research Center Is Founded by Silicon Valley Investors” 

    The New York Times

    DEC. 11, 2015
    JOHN MARKOFF

    Elon Musk, the chief of Tesla, is among the investors behind a new artificial-intelligence research center. Credit Eric Piermont/Agence France-Presse — Getty Images

    A group of prominent Silicon Valley investors and technology companies said on Friday that they would establish an artificial-intelligence research center to develop “digital intelligence” that will benefit humanity.

    The investors — including Elon Musk, Peter Thiel and Reid Hoffman — said they planned to commit $1 billion to the project long term, but would initially spend only a small fraction of that amount in the first few years of the project. But, Mr. Musk said, “Everyone who is listed as a contributor has made a substantial commitment and this should be viewed as at least a billion-dollar project.”

    The organization, to be named OpenAI, will be established as a nonprofit, and will be based in San Francisco.

    Its long-range goal will be to create an “artificial general intelligence,” a machine capable of performing any intellectual task that a human being can, according to Mr. Musk. He also stressed that the focus was on building technologies that augment rather than replace humans.

    Mr. Musk, who is deploying A.I.-based technologies in some of his products like the Tesla automobile, said that he has had longstanding concerns about the possibility that artificial intelligence could be used to create machines that might turn on humanity.

    He began discussing the issue this year with Mr. Hoffman, Mr. Thiel and Sam Altman, president of the Y Combinator investment group.

    “We discussed what is the best thing we can do to ensure the future is good?” he said. “We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing A.I. in a way that is safe and is beneficial to humanity.”

    “Artificial intelligence is one of the great opportunities for improving the world today,” Mr. Hoffman said in an email. “The specific applications range from self-driving cars, to medical diagnosis and precision personalized medicine, to many other areas of data, analysis, decisioning across industries.”

    Other backers of the project include Jessica Livingston of Y Combinator; Greg Brockman, the former chief technology officer of Stripe, as well as Amazon Web Services, Amazon’s Cloud Services subsidiary; and Infosys, an Indian software and consulting firm. The research effort has also attracted a group of young artificial intelligence researchers.

    The founders said they were not yet ready to provide details on who had donated how much and the rate at which the project money would be spent. They will fund the development of the project on a year-by-year basis. They also said they were not yet ready to describe how quickly the project would grow in terms of funding or staffing.

    The announcement occurs in the same week that one of the main academic gatherings focusing on artificial intelligence, the Conference on Neural Information Processing Systems, is being held in Montreal.

    In recent years the event has grown as major technology corporations like Apple, Facebook, Google, IBM and Microsoft have started competing to hire the most talented researchers in the field. Salaries and hiring incentives have soared.

    The research director of OpenAI will be Ilya Sutskever, a Google expert on machine learning. Mr. Brockman will be the chief technology officer. The group will begin with seven researchers, including graduate researchers who have been standouts at universities like Stanford, the University of California, Berkeley, and New York University.

    “The people on the team have all been offered substantially more to work at other places,” Mr. Musk said.

    Mr. Altman added, “It is lucky for us the best people in any field generally care about what is best for the world.”

    In October 2014, Mr. Musk stirred controversy when, in an interview at M.I.T., he described artificial intelligence as our “biggest existential threat.” He also said, “With artificial intelligence we’re summoning the demon.”

    In October, he donated $10 million to the Future of Life Institute, a Cambridge, Mass., organization focused on developing positive ways for humanity to respond to challenges posed by advanced technologies.

    He said the new organization would be separate from the Future of Life Institute, and that while the new organization did have a broad research plan, it was not yet ready to offer a specific road map.

    In a statement, the group sounded an open-source theme — open-source software can be freely shared without intellectual property restrictions — and said it was committed to ensuring that advanced artificial intelligence tools remained publicly available. “Since our research is free from financial obligations, we can better focus on a positive human impact,” the group said. “We believe A.I. should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”

    Mr. Musk said he remained concerned that A.I. advances might work against, rather than benefit, humanity.

    “There is always some risk that in actually trying to advance A.I. we may create the thing we are concerned about,” he said.

    In the last two years there has been a race to set up research facilities focused both on advancing A.I. and on assessing its impact.

    In 2014, Paul Allen, Microsoft’s co-founder, established the nonprofit Allen Institute for Artificial Intelligence, which says its mission is “to contribute to humanity through high-impact A.I. research and engineering.”

    Also in 2014, the Microsoft A.I. researcher Eric Horvitz gave an undisclosed amount as a gift to Stanford to study the impact of the technology over the next century.

    Last month, the Toyota Corporation said that it would invest $1 billion in a five-year research effort in artificial intelligence and robotics technologies to be based in a laboratory near Stanford.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 8:07 am on September 3, 2015 Permalink | Reply
    Tags: Artificial Intelligence

    From Nautilus: “Don’t Worry, Smart Machines Will Take Us With Them” 

    Nautilus

    September 3, 2015
    Stephen Hsu

    When it comes to artificial intelligence [AI], we may all be suffering from the fallacy of availability: thinking that creating intelligence is much easier than it is, because we see examples all around us. In a recent poll, machine intelligence experts predicted that computers would gain human-level ability around the year 2050, and superhuman ability less than 30 years after [1]. But, like a tribe on a tropical island littered with World War II debris imagining that the manufacture of aluminum propellers or steel casings would be within their power, our confidence is probably inflated.

    AI can be thought of as a search problem over an effectively infinite, high-dimensional landscape of possible programs. Nature solved this search problem by brute force, effectively performing a huge computation involving trillions of evolving agents of varying information processing capability in a complex environment (the Earth). It took billions of years to go from the first tiny DNA replicators to Homo sapiens. What evolution accomplished required tremendous resources. While silicon-based technologies are increasingly capable of simulating a mammalian or even human brain, we have little idea of how to find the tiny subset of all possible programs running on this hardware that would exhibit intelligent behavior.

    But there is hope. By 2050, there will be another rapidly evolving and advancing intelligence besides that of machines: our own. The cost to sequence a human genome has fallen below $1,000, and powerful methods have been developed to unravel the genetic architecture of complex traits such as human cognitive ability. Technologies already exist which allow genomic selection of embryos during in vitro fertilization—an embryo’s DNA can be sequenced from a single extracted cell. Recent advances such as CRISPR allow highly targeted editing of genomes, and will eventually find their uses in human reproduction.

    It is easy to forget that the computer revolution was led by a handful of geniuses: individuals with truly unusual cognitive ability.

    The potential for improved human intelligence is enormous. Cognitive ability is influenced by thousands of genetic loci, each of small effect. If all were simultaneously improved, it would be possible to achieve, very roughly, about 100 standard deviations [σ] of improvement, corresponding to an IQ of over 1,000. We can’t imagine what capabilities this level of intelligence represents, but we can be sure it is far beyond our own. Cognitive engineering, via direct edits to embryonic human DNA, will eventually produce individuals who are well beyond all historical figures in cognitive ability. By 2050, this process will likely have begun.
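    Here is a back-of-the-envelope sketch of where a figure like “100 standard deviations” could come from under the simple additive model the essay assumes; the locus count, allele frequency, and effect size below are invented for illustration, not estimates. With N equal, additive effects, the population spread grows like the square root of N, while carrying every favorable variant moves an individual by an amount proportional to N, leaving a gap of roughly the square root of N standard deviations.

```python
import math

# Toy additive model (numbers invented for illustration, not real estimates):
# N loci, each carrying a favorable variant with probability 0.5 and a
# small equal effect when the variant is present.
N = 10_000          # assumed number of relevant loci
p = 0.5             # assumed frequency of the favorable variant
effect = 1.0        # assumed effect per favorable variant (arbitrary units)

# Population standard deviation of the trait under this model.
sd = effect * math.sqrt(N * p * (1 - p))

# Gap between a typical individual (the mean) and one carrying every
# favorable variant, expressed in standard deviations.
mean_score = N * p * effect
max_score = N * effect
gap_in_sds = (max_score - mean_score) / sd

print(f"gap = {gap_in_sds:.0f} standard deviations")   # ~100 for N = 10,000
```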

    These two threads—smarter people and smarter machines—will inevitably intersect. Just as machines will be much smarter in 2050, we can expect that the humans who design, build, and program them will also be smarter. Naively, one would expect the rate of advance of machine intelligence to outstrip that of biological intelligence. Tinkering with a machine seems easier than modifying a living species, one generation at a time. But advances in genomics—both in our ability to relate complex traits to the underlying genetic codes, and the ability to make direct edits to genomes—will allow rapid advances in biologically-based cognition. Also, once machines reach human levels of intelligence, our ability to tinker starts to be limited by ethical considerations. Rebooting an operating system is one thing, but what about a sentient being with memories and a sense of free will?

    Therefore, the answer to the question “Will AI or genetic modification have the greater impact in the year 2050?” is yes. Considering one without the other neglects an important interaction.

    A titan at teatime: John von Neumann talking to graduate students during afternoon tea. Alfred Eisenstaedt/The LIFE Picture Collection/Getty Images

    It has happened before. It is easy to forget that the computer revolution was led by a handful of geniuses: individuals with truly unusual cognitive ability. Alan Turing and John von Neumann both contributed to the realization of computers whose program is stored in memory and can be modified during execution. This idea appeared originally in the form of the Turing Machine, and was given practical realization in the so-called von Neumann architecture of the first electronic computers, such as the EDVAC.

    The Bombe code-breaking machine Turing devised at Bletchley Park during the Second World War. Photo: Getty Images

    EDVAC

    While this computing design seems natural, even obvious, to us now, it was at the time a significant conceptual leap.

    Turing and von Neumann were special, and far beyond peers of their era. Both played an essential role in the Allied victory in WWII. Turing famously broke the German Enigma codes, but not before conceptualizing the notion of “mechanized thought” in his Turing Machine, which was to become the main theoretical construct in modern computer science. Before the war, von Neumann placed the new quantum theory on a rigorous mathematical foundation. As a frequent visitor to Los Alamos he made contributions to hydrodynamics and computation that were essential to the United States’ nuclear weapons program. His close colleague, the Nobel Laureate Hans A. Bethe, established the singular nature of his abilities, and the range of possibilities for human cognition, when he said “I always thought von Neumann’s brain indicated that he was from another species, an evolution beyond man.”

    Today, we need geniuses like von Neumann and Turing more than ever before. That’s because we may already be running into the genetic limits of intelligence. In a 1983 interview, Noam Chomsky was asked whether genetic barriers to further progress have become obvious in some areas of art and science [2]. He answered:

    You could give an argument that something like this has happened in quite a few fields … I think it has happened in physics and mathematics, for example … In talking to students at MIT, I notice that many of the very brightest ones, who would have gone into physics twenty years ago, are now going into biology. I think part of the reason for this shift is that there are discoveries to be made in biology that are within the range of an intelligent human being. This may not be true in other areas.

    AI research also pushes even very bright humans to their limits. The frontier machine intelligence architecture of the moment uses deep neural nets: multilayered networks of simulated neurons inspired by their biological counterparts. Silicon brains of this kind, running on huge clusters of GPUs (graphics processing units made cheap by research and development and economies of scale in the video game industry), have recently surpassed human performance on a number of narrowly defined tasks, such as image or character recognition. We are learning how to tune deep neural nets using large samples of training data, but the resulting structures are mysterious to us. The theoretical basis for this work is still primitive, and it remains largely an empirical black art. The neural networks researcher and physicist Michael Nielsen puts it this way [3]:

    … in neural networks there are large numbers of parameters and hyper-parameters, and extremely complex interactions between them. In such extraordinarily complex systems it’s exceedingly difficult to establish reliable general statements. Understanding neural networks in their full generality is a problem that, like quantum foundations, tests the limits of the human mind.

    The detailed inner workings of a complex machine intelligence (or of a biological brain) may turn out to be incomprehensible to our human minds—or at least the human minds of today. While one can imagine a researcher “getting lucky” by stumbling on an architecture or design whose performance surpasses her own capability to understand it, it is hard to imagine systematic improvements without deeper comprehension.
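    For readers who have not seen one, here is a bare-bones sketch of the kind of “multilayered network of simulated neurons” described above: each layer is just a weighted sum followed by a simple nonlinearity, and nothing in this toy says how the weights should be tuned (the layer sizes and random weights are placeholders).

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer of simulated neurons: a weighted sum of the inputs
    followed by a simple nonlinearity (ReLU). The weights are random here;
    in practice they are tuned on large samples of training data."""
    w = rng.normal(scale=0.1, size=(x.shape[-1], n_out))
    b = np.zeros(n_out)
    return np.maximum(0.0, x @ w + b)

x = rng.normal(size=(1, 784))     # e.g. a flattened 28x28 image
h1 = layer(x, 256)                # first hidden layer
h2 = layer(h1, 128)               # second hidden layer
logits = h2 @ rng.normal(scale=0.1, size=(128, 10))   # 10 output scores
print(logits.shape)               # (1, 10): one score per class
```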

    Minds building minds: Alan Turing (right) at work on an early computer, c. 1951. SSPL/Getty Images

    But perhaps we will experience a positive feedback loop: Better human minds invent better machine learning methods, which in turn accelerate our ability to improve human DNA and create even better minds. In my own work, I use methods from machine learning (so-called compressed sensing, or convex optimization in high dimensional geometry) to extract predictive models from genomic data. Thanks to recent advances, we can predict a phase transition in the behavior of these learning algorithms, representing a sudden increase in their effectiveness. We expect this transition to happen within about a decade, when we reach a critical threshold of about 1 million human genomes worth of data. Several entities, including the U.S. government’s Precision Medicine Initiative and the private company Human Longevity Inc. (founded by Craig Venter), are pursuing plans to genotype 1 million individuals or more.
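    Here is a hedged sketch of the general kind of sparse-regression setup alluded to above (L1-penalized linear modeling on genotype matrices), using synthetic data and scikit-learn rather than anything from the author’s actual pipeline: most simulated variants have no effect, a few do, and the penalty drives most estimated coefficients to exactly zero.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy sizes, not realistic ones.
n_people, n_variants, n_causal = 1_000, 2_000, 20

# Synthetic genotypes (0/1/2 minor-allele counts) and a sparse set of true effects.
genotypes = rng.binomial(2, 0.3, size=(n_people, n_variants)).astype(float)
true_effects = np.zeros(n_variants)
causal = rng.choice(n_variants, n_causal, replace=False)
true_effects[causal] = rng.normal(size=n_causal)

phenotype = genotypes @ true_effects + rng.normal(scale=1.0, size=n_people)

# The L1 penalty forces most coefficients to exactly zero, which is what
# makes recovering a sparse predictor from a limited number of samples possible.
model = Lasso(alpha=0.05).fit(genotypes, phenotype)
print("variants kept by the model:", int(np.sum(model.coef_ != 0)))
```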

    The feedback loop between algorithms and genomes will result in a rich and complex world, with myriad types of intelligences at play: the ordinary human (rapidly losing the ability to comprehend what is going on around them); the enhanced human (the driver of change over the next 100 years, but perhaps eventually surpassed); and all around them vast machine intellects, some alien (evolved completely in silico) and some strangely familiar (hybrids). Rather than the standard science-fiction scenario of relatively unchanged, familiar humans interacting with ever-improving computer minds, we will experience a future with a diversity of both human and machine intelligences. For the first time, sentient beings of many different types will interact collaboratively to create ever greater advances, both through standard forms of communication and through new technologies allowing brain interfaces. We may even see human minds uploaded into cyberspace, with further hybridization to follow in the purely virtual realm. These uploaded minds could combine with artificial algorithms and structures to produce an unknowable but humanlike consciousness. Researchers have recently linked mouse and monkey brains together, allowing the animals to collaborate—via an electronic connection—to solve problems. This is just the beginning of “shared thought.”

    It may seem incredible, or even disturbing, to predict that ordinary humans will lose touch with the most consequential developments on planet Earth, developments that determine the ultimate fate of our civilization and species. Yet consider the early 20th-century development of quantum mechanics. The first physicists studying quantum mechanics in Berlin—men like Albert Einstein and Max Planck—worried that human minds might not be capable of understanding the physics of the atomic realm. Today, no more than a fraction of a percent of the population has a good understanding of quantum physics, although it underlies many of our most important technologies: Some have estimated that 10-30 percent of modern gross domestic product is based on quantum mechanics. In the same way, ordinary humans of the future will come to accept machine intelligence as everyday technological magic, like the flat screen TV or smartphone, but with no deeper understanding of how it is possible.

    New gods will arise, as mysterious and familiar as the old.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 9:43 am on August 31, 2015 Permalink | Reply
    Tags: Air Pollution, Artificial Intelligence

    From MIT Tech Review: “How Artificial Intelligence Can Fight Air Pollution in China” 

    MIT Technology Review

    August 31, 2015
    Will Knight

    A woman wearing a face mask makes her way along a street in Beijing on January 16, 2014.

    IBM is testing a new way to alleviate Beijing’s choking air pollution with the help of artificial intelligence. The Chinese capital, like many other cities across the country, is surrounded by factories, many fueled by coal, that emit harmful particulates. But pollution levels can vary depending on factors such as industrial activity, traffic congestion, and weather conditions.

    The IBM researchers are testing a computer system capable of learning to predict the severity of air pollution in different parts of the city several days in advance by combining large quantities of data from several different models—an extremely complex computational challenge. The system could eventually offer specific recommendations on how to reduce pollution to an acceptable level—for example, by closing certain factories or temporarily restricting the number of drivers on the road. A comparable system is also being developed for a city in the Hebei province, a badly affected area in the north of the country.

    “We have built a prototype system which is able to generate high-resolution air quality forecasts, 72 hours ahead of time,” says Xiaowei Shen, director of IBM Research China. “Our researchers are currently expanding the capability of the system to provide medium- and long-term [forecasts] (up to 10 days ahead) as well as pollutant source tracking, ‘what-if’ scenario analysis, and decision support on emission reduction actions.”

    The project, dubbed Green Horizon, is an example of how broadly IBM hopes to apply its research on using advanced machine learning to extract insights from huge amounts of data—something the company calls “cognitive computing.” The project also highlights an application of the technology that IBM would like to export to other countries where pollution is a growing problem.

    IBM is currently pushing artificial intelligence in many different industries, from health care to consulting. The cognitive computing effort encompasses natural language processing and statistical techniques originally developed for the Watson computer system, which competed on the game show Jeopardy!, along with many other approaches to machine learning (see “Why IBM Just Bought Millions of Medical Images” and “IBM Pushes Deep Learning with a Watson Upgrade”).

    Predicting pollution is challenging. IBM uses data supplied by the Beijing Environmental Protection Bureau to refine its models, and Shen says the predictions have a resolution of a kilometer and are 30 percent more precise than those derived through conventional approaches. He says the system uses “adaptive machine learning” to determine the best combination of models to use.
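    The article does not say how IBM’s system actually combines its models, but a minimal stand-in for the general idea (learning weights for several forecasters against past observations) might look like the following; the forecasts and observations here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: three imperfect forecasters predicting a pollution level, plus the truth.
truth = 80 + 30 * rng.random(200)                      # observed pollutant concentrations
forecasts = np.stack([
    truth + rng.normal(0, 10, truth.size),             # model A: unbiased but noisy
    0.7 * truth + rng.normal(0, 5, truth.size),        # model B: biased low
    truth + 15 + rng.normal(0, 8, truth.size),         # model C: biased high
], axis=1)

# Fit blending weights (plus an intercept) by least squares on the past data,
# one simple way of "learning the best combination of models."
X = np.hstack([forecasts, np.ones((truth.size, 1))])
weights, *_ = np.linalg.lstsq(X, truth, rcond=None)
blended = X @ weights

print("mean error of each model:", np.round(np.abs(forecasts - truth[:, None]).mean(axis=0), 1))
print("mean error of the blend: ", round(float(np.abs(blended - truth).mean()), 1))
```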

    Pollution is a major public health issue in China, accounting for more than a million deaths each year, according to a study conducted by researchers at the University of California, Berkeley. It is also a major subject of public and political debate.

    China has committed to improving air quality by 10 percent by 2017 through the Airborne Pollution Prevention and Control Action Plan. This past April, an analysis of 360 Chinese cities by the charity Greenpeace East Asia, based in Beijing, showed that 351 of them had pollution levels exceeding China’s own air quality standards, although levels had improved since the period 12 months before. The average level of airborne particulates measured was more than two and a half times the limit recommended by the World Health Organization.

    See the full article here.

    IBM

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 11:39 am on August 6, 2015 Permalink | Reply
    Tags: Artificial Intelligence

    From MIT Tech Review: “Teaching Machines to Understand Us” 

    MIT Technology Review

    August 6, 2015
    Tom Simonite

    The first time Yann LeCun revolutionized artificial intelligence, it was a false dawn. It was 1995, and for almost a decade, the young Frenchman had been dedicated to what many computer scientists considered a bad idea: that crudely mimicking certain features of the brain was the best way to bring about intelligent machines. But LeCun had shown that this approach could produce something strikingly smart—and useful. Working at Bell Labs, he made software that roughly simulated neurons and learned to read handwritten text by looking at many different examples. Bell Labs’ corporate parent, AT&T, used it to sell the first machines capable of reading the handwriting on checks and written forms. To LeCun and a few fellow believers in artificial neural networks, it seemed to mark the beginning of an era in which machines could learn many other skills previously limited to humans. It wasn’t.

    “This whole project kind of disappeared on the day of its biggest success,” says LeCun. On the same day he celebrated the launch of bank machines that could read thousands of checks per hour, AT&T announced it was splitting into three companies dedicated to different markets in communications and computing. LeCun became head of research at a slimmer AT&T and was directed to work on other things; in 2002 he would leave Bell Labs, soon to become a professor at New York University. Meanwhile, researchers elsewhere found that they could not apply LeCun’s breakthrough to other computing problems. The brain-inspired approach to AI went back to being a fringe interest.

    LeCun, now a stocky 55-year-old with a ready smile and a sideways sweep of dark hair touched with gray, never stopped pursuing that fringe interest. And remarkably, the rest of the world has come around. The ideas that he and a few others nurtured in the face of over two decades of apathy and sometimes outright rejection have in the past few years produced striking results in areas like face and speech recognition. Deep learning, as the field is now known, has become a new battleground between Google and other leading technology companies that are racing to use it in consumer services. One such company is Facebook, which hired LeCun from NYU in December 2013 and put him in charge of a new artificial–intelligence research group, FAIR, that today has 50 researchers but will grow to 100. LeCun’s lab is Facebook’s first significant investment in fundamental research, and it could be crucial to the company’s attempts to become more than just a virtual social venue. It might also reshape our expectations of what machines can do.

    Facebook and other companies, including Google, IBM, and Microsoft, have moved quickly to get into this area in the past few years because deep learning is far better than previous AI techniques at getting computers to pick up skills that challenge machines, like understanding photos. Those more established techniques require human experts to laboriously program certain abilities, such as how to detect lines and corners in images. Deep-learning software figures out how to make sense of data for itself, without any such programming. Some systems can now recognize images or faces about as accurately as humans.

    Now LeCun is aiming for something much more powerful. He wants to deliver software with the language skills and common sense needed for basic conversation. Instead of having to communicate with machines by clicking buttons or entering carefully chosen search terms, we could just tell them what we want as if we were talking to another person. “Our relationship with the digital world will completely change due to intelligent agents you can interact with,” he predicts. He thinks deep learning can produce software that understands our sentences and can respond with appropriate answers, clarifying questions, or suggestions of its own.

    Agents that answer factual questions or book restaurants for us are one obvious—if not exactly world-changing—application. It’s also easy to see how such software might lead to more stimulating video-game characters or improve online learning. More provocatively, LeCun says systems that grasp ordinary language could get to know us well enough to understand what’s good for us. “Systems like this should be able to understand not just what people would be entertained by but what they need to see regardless of whether they will enjoy it,” he says. Such feats aren’t possible using the techniques behind the search engines, spam filters, and virtual assistants that try to understand us today. They often ignore the order of words and get by with statistical tricks like matching and counting keywords. Apple’s Siri, for example, tries to fit what you say into a small number of categories that trigger scripted responses. “They don’t really understand the text,” says LeCun. “It’s amazing that it works at all.” Meanwhile, systems that seem to have mastered complex language tasks, such as IBM’s Jeopardy! winner Watson, do it by being super-specialized to a particular format. “It’s cute as a demonstration, but not work that would really translate to any other situation,” he says.

    In contrast, deep-learning software may be able to make sense of language more the way humans do. Researchers at Facebook, Google, and elsewhere are developing software that has shown progress toward understanding what words mean. LeCun’s team has a system capable of reading simple stories and answering questions about them, drawing on faculties like logical deduction and a rudimentary understanding of time.

    However, as LeCun knows firsthand, artificial intelligence is notorious for blips of progress that stoke predictions of big leaps forward but ultimately change very little. Creating software that can handle the dazzling complexities of language is a bigger challenge than training it to recognize objects in pictures. Deep learning’s usefulness for speech recognition and image detection is beyond doubt, but it’s still just a guess that it will master language and transform our lives more radically. We don’t yet know whether deep learning will prove to be another of those blips or the start of something much bigger.

    Deep history

    The roots of deep learning reach back further than LeCun’s time at Bell Labs. He and a few others who pioneered the technique were actually resuscitating a long-dead idea in artificial intelligence.

    When the field got started, in the 1950s, biologists were just beginning to develop simple mathematical theories of how intelligence and learning emerge from signals passing between neurons in the brain. The core idea—still current today—was that the links between neurons are strengthened if those cells communicate frequently. The fusillade of neural activity triggered by a new experience adjusts the brain’s connections so that the brain can make sense of that experience more easily the second time around.
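    That strengthen-with-use rule can be written down in a couple of lines. The sketch below is a deliberately minimal toy (random on/off activity and an invented learning rate) meant only to illustrate the principle, not any particular model of the brain:

    ```python
    import numpy as np

    # Toy Hebbian update: a connection gets stronger whenever the cell on each
    # end of it happens to be active at the same moment.
    rng = np.random.default_rng(0)
    pre = rng.integers(0, 2, size=(1000, 5))    # 5 "input" cells, firing (1) or silent (0)
    post = rng.integers(0, 2, size=(1000, 5))   # 5 "output" cells, over 1,000 moments

    weights = np.zeros((5, 5))                  # strength of every pre -> post link
    learning_rate = 0.01

    for p, q in zip(pre, post):
        weights += learning_rate * np.outer(p, q)   # co-active pairs are strengthened

    print(weights.round(2))   # links between frequently co-active cells end up largest
    ```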

    In 1956, the psychologist Frank Rosenblatt used those theories to invent a way of making simple simulations of neurons in software and hardware. The New York Times announced his work with the headline “Electronic ‘Brain’ Teaches Itself.” Rosenblatt’s perceptron, as he called his design, could learn how to sort simple images into categories—for instance, triangles and squares. Rosenblatt usually implemented his ideas on giant machines thickly tangled with wires, but they established the basic principles at work in artificial neural networks today.

    One computer he built had eight simulated neurons, made from motors and dials connected to 400 light detectors. Each of the neurons received a share of the signals from the light detectors, combined them, and, depending on what they added up to, spit out either a 1 or a 0. Together those digits amounted to the perceptron’s “description” of what it saw. Initially the results were garbage. But Rosenblatt used a method called supervised learning to train a perceptron to generate results that correctly distinguished different shapes. He would show the perceptron an image along with the correct answer. Then the machine would tweak how much attention each neuron paid to its incoming signals, shifting those “weights” toward settings that would produce the right answer. After many examples, those tweaks endowed the computer with enough smarts to correctly categorize images it had never seen before. Today’s deep-learning networks use sophisticated algorithms and have millions of simulated neurons, with billions of connections between them. But they are trained in the same way.
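    Rosenblatt’s training loop is simple enough to sketch in a few lines of modern code. This is not a reconstruction of his motors-and-dials machine, just an illustrative perceptron that learns to separate two toy classes of points; the data, learning rate, and variable names are all invented for the example:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy data: points above the line y = x belong to class 1, points below to class 0.
    X = rng.uniform(-1, 1, size=(200, 2))
    y = (X[:, 1] > X[:, 0]).astype(int)

    weights = np.zeros(2)
    bias = 0.0
    learning_rate = 0.1

    for epoch in range(20):
        for xi, target in zip(X, y):
            prediction = int(weights @ xi + bias > 0)   # the neuron spits out 1 or 0
            error = target - prediction                 # compare with the correct answer
            weights += learning_rate * error * xi       # shift the weights toward settings
            bias += learning_rate * error               # that produce the right answer

    correct = sum(int(weights @ xi + bias > 0) == t for xi, t in zip(X, y))
    print(f"{correct} of {len(X)} training points classified correctly")
    ```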

    Rosenblatt predicted that perceptrons would soon be capable of feats like greeting people by name, and his idea became a linchpin of the nascent field of artificial intelligence. Work focused on making perceptrons with more complex networks, arranged into a hierarchy of multiple learning layers. Passing images or other data successively through the layers would allow a perceptron to tackle more complex problems. Unfortunately, Rosenblatt’s learning algorithm didn’t work on multiple layers. In 1969 the AI pioneer Marvin Minsky, who had gone to high school with Rosenblatt, published a book-length critique of perceptrons that killed interest in neural networks at a stroke. Minsky claimed that getting more layers working wouldn’t make perceptrons powerful enough to be useful. Artificial-intelligence researchers abandoned the idea of making software that learned. Instead, they turned to using logic to craft working facets of intelligence—such as an aptitude for chess. Neural networks were shoved to the margins of computer science.

    Nonetheless, LeCun was mesmerized when he read about perceptrons as an engineering student in Paris in the early 1980s. “I was amazed that this was working and wondering why people abandoned it,” he says. He spent days at a research library near Versailles, hunting for papers published before perceptrons went extinct. Then he discovered that a small group of researchers in the United States were covertly working on neural networks again. “This was a very underground movement,” he says. In papers carefully purged of words like “neural” and “learning” to avoid rejection by reviewers, they were working on something very much like Rosenblatt’s old problem of how to train neural networks with multiple layers.

    LeCun joined the underground after he met its central figures in 1985, including a wry Brit named Geoff Hinton, who now works at Google and the University of Toronto. They immediately became friends, mutual admirers—and the nucleus of a small community that revived the idea of neural networking. They were sustained by a belief that using a core mechanism seen in natural intelligence was the only way to build artificial intelligence. “The only method that we knew worked was a brain, so in the long run it had to be that systems something like that could be made to work,” says Hinton.

    LeCun’s success at Bell Labs came about after he, Hinton, and others perfected a learning algorithm for neural networks with multiple layers. It was known as backpropagation, and it sparked a rush of interest from psychologists and computer scientists. But after LeCun’s check-reading project ended, backpropagation proved tricky to adapt to other problems, and a new way to train software to sort data was invented by a Bell Labs researcher down the hall from LeCun. It didn’t involve simulated neurons and was seen as mathematically more elegant. Very quickly it became a cornerstone of Internet companies such as Google, Amazon, and LinkedIn, which use it to train systems that block spam or suggest things for you to buy.
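    Backpropagation itself fits in a short sketch. The toy network below learns the XOR function, a problem a single-layer perceptron cannot solve; the layer sizes, learning rate, and iteration count are arbitrary choices for this illustration and have nothing to do with LeCun’s check-reading system:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # XOR: the classic task a single-layer perceptron cannot learn.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(2)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input  -> hidden layer
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer
    lr = 1.0

    for step in range(10000):
        # Forward pass
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # Backward pass: push the output error back through the layers
        d_out = (output - y) * output * (1 - output)
        d_hid = (d_out @ W2.T) * hidden * (1 - hidden)

        W2 -= lr * hidden.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_hid
        b1 -= lr * d_hid.sum(axis=0)

    print(output.round(2).ravel())   # should approach [0, 1, 1, 0]
    ```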

    After LeCun got to NYU in 2003, he, Hinton, and a third collaborator, University of Montreal professor Yoshua Bengio, formed what LeCun calls “the deep-learning conspiracy.” To prove that neural networks would be useful, they quietly developed ways to make them bigger, train them with larger data sets, and run them on more powerful computers. LeCun’s handwriting recognition system had had five layers of neurons, but now they could have 10 or many more. Around 2010, what was now dubbed deep learning started to beat established techniques on real-world tasks like sorting images. Microsoft, Google, and IBM added it to speech recognition systems. But neural networks were still alien to most researchers and not considered widely useful. In early 2012 LeCun wrote a fiery letter—initially published anonymously—after a paper claiming to have set a new record on a standard vision task was rejected by a leading conference. He accused the reviewers of being “clueless” and “negatively biased.”

    Everything changed six months later. Hinton and two grad students used a network like the one LeCun made for reading checks to rout the field in the leading contest for image recognition. Known as the ImageNet Large Scale Visual Recognition Challenge, it asks software to identify 1,000 types of objects as diverse as mosquito nets and mosques. The Toronto entry correctly identified the object in an image within five guesses about 85 percent of the time, more than 10 percentage points better than the second-best system. The deep-learning software’s initial layers of neurons optimized themselves for finding simple things like edges and corners, with the layers after that looking for successively more complex features like basic shapes and, eventually, dogs or people.

    LeCun recalls seeing the community that had mostly ignored neural networks pack into the room where the winners presented a paper on their results. “You could see right there a lot of senior people in the community just flipped,” he says. “They said, ‘Okay, now we buy it. That’s it, now—you won.’”

    Academics working on computer vision quickly abandoned their old methods, and deep learning suddenly became one of the main strands in artificial intelligence. Google bought a company founded by Hinton and the two others behind the 2012 result, and Hinton started working there part time on a research team known as Google Brain. Microsoft and other companies created new projects to investigate deep learning. In December 2013, Facebook CEO Mark Zuckerberg stunned academics by showing up at the largest neural-network research conference, hosting a party where he announced that LeCun was starting FAIR (though he still works at NYU one day a week).

    LeCun still harbors mixed feelings about the 2012 research that brought the world around to his point of view. “To some extent this should have come out of my lab,” he says. Hinton shares that assessment. “It was a bit unfortunate for Yann that he wasn’t the one who actually made the breakthrough system,” he says. LeCun’s group had done more work than anyone else to prove out the techniques used to win the ImageNet challenge. The victory could have been his had student graduation schedules and other commitments not prevented his own group from taking on ImageNet, he says. LeCun’s hunt for deep learning’s next breakthrough is now a chance to even the score.

    LeCun at Bell Labs in 1993, with a computer that could read the handwriting on checks.

    Facebook’s New York office is a three-minute stroll up Broadway from LeCun’s office at NYU, on two floors of a building constructed as a department store in the early 20th century. Workers are packed more densely into the open plan than they are at Facebook’s headquarters in Menlo Park, California, but they can still be seen gliding on articulated skateboards past notices for weekly beer pong. Almost half of LeCun’s team of leading AI researchers works here, with the rest at Facebook’s California campus or an office in Paris. Many of them are trying to make neural networks better at understanding language. “I’ve hired all the people working on this that I could,” says LeCun.

    A neural network can “learn” words by spooling through text and calculating how each word it encounters could have been predicted from the words before or after it. By doing this, the software learns to represent every word as a vector that indicates its relationship to other words—a process that uncannily captures concepts in language. The difference between the vectors for “king” and “queen” is the same as for “husband” and “wife,” for example. The vectors for “paper” and “cardboard” are close together, and those for “large” and “big” are even closer.
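    That vector arithmetic is easy to demonstrate once you have the numbers. The sketch below fakes a tiny three-dimensional embedding table by hand purely to show the idea; real systems learn vectors with hundreds of dimensions from billions of words, and none of these toy values come from an actual model:

    ```python
    import numpy as np

    # Hand-made 3-D "embeddings" (real models learn hundreds of dimensions from text).
    # Here dimension 0 loosely stands for royalty, dimension 1 for gender,
    # dimension 2 for marriage-relatedness.
    vectors = {
        "king":    np.array([0.9,  0.9, 0.1]),
        "queen":   np.array([0.9, -0.9, 0.1]),
        "husband": np.array([0.1,  0.9, 0.8]),
        "wife":    np.array([0.1, -0.9, 0.8]),
    }

    def closest(v):
        """Return the word whose vector points in the most similar direction."""
        cosine = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return max(vectors, key=lambda word: cosine(vectors[word], v))

    # The difference between "king" and "queen" matches "husband" minus "wife"...
    print(vectors["king"] - vectors["queen"])      # [0.  1.8 0. ]
    print(vectors["husband"] - vectors["wife"])    # [0.  1.8 0. ]

    # ...so vector arithmetic can answer an analogy: king - husband + wife -> ?
    print(closest(vectors["king"] - vectors["husband"] + vectors["wife"]))   # queen
    ```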

    The same approach works for whole sentences (Hinton says it generates “thought vectors”), and Google is looking at using it to bolster its automatic translation service. A recent paper from researchers at a Chinese university and Microsoft’s Beijing lab used a version of the vector technique to make software that beats some humans on IQ-test questions requiring an understanding of synonyms, antonyms, and analogies.

    LeCun’s group is working on going further. “Language in itself is not that complicated,” he says. “What’s complicated is having a deep understanding of language and the world that gives you common sense. That’s what we’re really interested in building into machines.” LeCun means common sense as Aristotle used the term: the ability to understand basic physical reality. He wants a computer to grasp that the sentence “Yann picked up the bottle and walked out of the room” means the bottle left with him. Facebook’s researchers have invented a deep-learning system called a memory network that displays what may be the early stirrings of common sense.

    A memory network is a neural network with a memory bank bolted on to store facts it has learned so they don’t get washed away every time it takes in fresh data. The Facebook AI lab has created versions that can answer simple common-sense questions about text they have never seen before. For example, when researchers gave a memory network a very simplified summary of the plot of Lord of the Rings, it could answer questions such as “Where is the ring?” and “Where was Frodo before Mount Doom?” It could interpret the simple world described in the text despite having never previously encountered many of the names or objects, such as “Frodo” or “ring.”
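    Facebook’s actual system isn’t reproduced here, but the flavor of the task can be mocked up with a crude stand-in: store each sentence as a “memory,” then answer a where-question by scanning backward for the most recent sentence that mentions the thing asked about. A real memory network learns this lookup with neural attention rather than hand-written rules; everything below, including the story text, is invented for illustration:

    ```python
    # A crude, hand-written stand-in for the behavior described above. A real
    # memory network learns which stored sentence to attend to; this toy just
    # scans its memories for the most recent mention of the thing asked about
    # and returns the supporting sentence rather than a parsed location.
    story = [
        "Frodo took the ring",
        "Frodo moved to the Shire",
        "Frodo travelled to Mount Doom",
        "Frodo dropped the ring there",
    ]

    def answer(question, memories):
        subject = question.rstrip("?").split()[-1].lower()   # "Where is the ring?" -> "ring"
        for sentence in reversed(memories):                  # the most recent fact wins
            if subject in sentence.lower().split():
                return sentence
        return "I don't know"

    print(answer("Where is the ring?", story))   # Frodo dropped the ring there
    print(answer("Where is Frodo?", story))      # Frodo dropped the ring there
    ```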

    The software learned its rudimentary common sense by being shown how to answer questions about a simple text in which characters do things in a series of rooms, such as “Fred moved to the bedroom and Joe went to the kitchen.” But LeCun wants to expose the software to texts that are far better at capturing the complexity of life and the things a virtual assistant might need to do. A virtual concierge called Moneypenny that Facebook is expected to release could be one source of that data. The assistant is said to be powered by a team of human operators who will help people do things like make restaurant reservations. LeCun’s team could have a memory network watch over Moneypenny’s shoulder before eventually letting it learn by interacting with humans for itself.

    Building something that can hold even a basic, narrowly focused conversation still requires significant work. For example, neural networks have shown only very simple reasoning, and researchers haven’t figured out how they might be taught to make plans, says LeCun. But results from the work that has been done with the technology so far leave him confident about where things are going. “The revolution is on the way,” he says.

    Some people are less sure. Deep-learning software so far has displayed only the simplest capabilities required for what we would recognize as conversation, says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence in Seattle. The logic and planning capabilities still needed, he says, are very different from the things neural networks have been doing best: digesting sequences of pixels or acoustic waveforms to decide which image category or word they represent. “The problems of understanding natural language are not reducible in the same way,” he says.

    Gary Marcus, a professor of psychology and neural science at NYU who has studied how humans learn language and recently started an artificial-intelligence company called Geometric Intelligence, thinks LeCun underestimates how hard it would be for existing software to pick up language and common sense. Training the software with large volumes of carefully annotated data is fine for getting it to sort images. But Marcus doubts it can acquire the trickier skills needed for language, where the meanings of words and complex sentences can flip depending on context. “People will look back on deep learning and say this is a really powerful technique—it’s the first time that AI became practical,” he says. “They’ll also say those things required a lot of data, and there were domains where people just never had enough.” Marcus thinks language may be one of those domains. For software to master conversation, it would need to learn more like a toddler who picks it up without explicit instruction, he suggests.

    Deep belief

    At Facebook’s headquarters in California, the West Coast members of LeCun’s team sit close to Mark Zuckerberg and Mike Schroepfer, the company’s CTO. Facebook’s leaders know that LeCun’s group is still some way from building something you can talk to, but Schroepfer is already thinking about how to use it. The future Facebook he describes retrieves and coördinates information, like a butler you communicate with by typing or talking as you might with a human one.

    “You can engage with a system that can really understand concepts and language at a much higher level,” says Schroepfer. He imagines being able to ask that you see a friend’s baby snapshots but not his jokes, for example. “I think in the near term a version of that is very realizable,” he says. As LeCun’s systems achieve better reasoning and planning abilities, he expects the conversation to get less one-sided. Facebook might offer up information that it thinks you’d like and ask what you thought of it. “Eventually it is like this super-intelligent helper that’s plugged in to all the information streams in the world,” says Schroepfer.

    The algorithms needed to power such interactions would also improve the systems Facebook uses to filter the posts and ads we see. And they could be vital to Facebook’s ambitions to become much more than just a place to socialize. As Facebook begins to host articles and video on behalf of media and entertainment companies, for example, it will need better ways for people to manage information. Virtual assistants and other spinouts from LeCun’s work could also help Facebook’s more ambitious departures from its original business, such as the Oculus group working to make virtual reality into a mass-market technology.

    None of this will happen if the recent impressive results meet the fate of previous big ideas in artificial intelligence. Blooms of excitement around neural networks have withered twice already. But while complaining that other companies or researchers are over-hyping their work is one of LeCun’s favorite pastimes, he says there’s enough circumstantial evidence to stand firm behind his own predictions that deep learning will deliver impressive payoffs. The technology is still providing more accuracy and power in every area of AI where it has been applied, he says. New ideas are needed about how to apply it to language processing, but the still-small field is expanding fast as companies and universities dedicate more people to it. “That will accelerate progress,” says LeCun.

    It’s still not clear that deep learning can deliver anything like the information butler Facebook envisions. And even if it can, it’s hard to say how much the world really would benefit from it. But we may not have to wait long to find out. LeCun guesses that virtual helpers with a mastery of language unprecedented for software will be available in just two to five years. He expects that anyone who doubts deep learning’s ability to master language will be proved wrong even sooner. “There is the same phenomenon that we were observing just before 2012,” he says. “Things are starting to work, but the people doing more classical techniques are not convinced. Within a year or two it will be the end.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 4:34 pm on January 18, 2015 Permalink | Reply
    Tags: , Artificial Intelligence, Connectomics,   

    From NYT: “Sebastian Seung’s Quest to Map the Human Brain” 


    The New York Times

    JAN. 8, 2015
    GARETH COOK

    Several distinct neurons in a mouse retina that have been mapped by volunteers playing a game developed by Sebastian Seung. Credit Photo illustration by Danny Jones. Original images from EyeWire.

    In 2005, Sebastian Seung suffered the academic equivalent of an existential crisis. More than a decade earlier, with a Ph.D. in theoretical physics from Harvard, Seung made a dramatic career switch into neuroscience, a gamble that seemed to be paying off. He had earned tenure from the Massachusetts Institute of Technology a year faster than the norm and was immediately named a full professor, an unusual move that reflected the sense that Seung was something of a superstar. His lab was underwritten with generous funding by the elite Howard Hughes Medical Institute. He was a popular teacher who traveled the world — Zurich; Seoul, South Korea; Palo Alto, Calif. — delivering lectures on his mathematical theories of how neurons might be wired together to form the engines of thought.

    And yet Seung, a man so naturally exuberant that he was known for staging ad hoc dance performances with Harvard Square’s street musicians, was growing increasingly depressed. He and his colleagues spent their days arguing over how the brain might function, but science offered no way to scan it for the answers. “It seemed like decades could go by,” Seung told me recently, “and you would never know one way or another whether any of the theories were correct.”

    That November, Seung sought the advice of David Tank, a mentor he met at Bell Laboratories who was attending the annual meeting of the Society for Neuroscience, in Washington. Over lunch in the dowdy dining room of a nearby hotel, Tank advised a radical cure. A former colleague in Heidelberg, Germany, had just built a device that imaged brain tissue with enough resolution to make out the connections between individual neurons. But drawing even a tiny wiring diagram required herculean efforts, as people traced the course of neurons through thousands of blurry black-and-white images. What the field needed, Tank said, was a computer program that could trace them automatically — a way to map the brain’s connections by the millions, opening a new area of scientific discovery. For Seung to tackle the problem, though, it would mean abandoning the work that had propelled him to the top of his discipline in favor of a highly speculative engineering project.

    Back in Cambridge, Seung spoke with two of his graduate students, who, like everyone else in the lab, thought the idea was terrible. Over the next few weeks, as the three of them talked and argued, Seung became convinced that the Heidelberg project was bound to be more interesting, and ultimately less risky, than continuing with the theoretical work he had lost faith in. “Make sure your passports are ready,” he said finally. “We are going to Germany next month.”

    Seung and his two students spent a good part of January 2006 in Germany, learning the finicky ways of high-resolution brain-image analysis from Winfried Denk, the scientist who built the device. The three returned to M.I.T. invigorated, but Seung’s decision looked, for quite a while, like an act of career suicide. Colleagues at M.I.T. whispered that Seung had gone off the rails, and in the more snobbish circles of theoretical neuroscience, the engineering project was seen as, in Seung’s words, “too blue-collar.” In 2010, the Hughes institute pulled the money that funded his lab, and he had to scramble. When his wife went into labor with their daughter in the middle of the night, he was working on a grant application; he wound up staying awake for 36 hours straight. (“Science,” Einstein once wrote, “is a wonderful thing if one does not have to earn one’s living at it.”) As the years passed, the advances out of the Seung lab were met with indifference, which was particularly hard on his graduate students. “Every time they had a success, they were depressed about it, because everyone else thought it was dumb,” Seung said. “It killed me.”

    ‘Think of what we could do if we could capture even a small fraction of the mental effort that goes into Angry Birds.’

    Last spring, eight years after he and his students packed a computer workstation into a piece of luggage and headed to Heidelberg, Seung published a paper in the prestigious journal Nature, demonstrating how the brain’s neural connections can be mapped — and discoveries made — using an ingenious mix of artificial intelligence and a competitive online game. Seung has also become the leading proponent of a plan, which he described in a 2012 book, to create a wiring diagram of all 100 trillion connections between the neurons of the human brain, an unimaginably vast and complex network known as the connectome.

    The race to map the connectome has hardly left the starting line, with only modest funding from the federal government and initial experiments confined to the brains of laboratory animals like fruit flies and mice. But it’s an endeavor heavy with moral and philosophical implications, because to map a human connectome would be, Seung has argued, to capture a person’s very essence: every memory, every skill, every passion. When the brain isn’t wired properly, it can lead to disorders like autism and schizophrenia — “connectopathies” that could be revealed in the map, perhaps suggesting treatments. And if science were to gain the power to record and store connectomes, then it would be natural to speculate, as Seung and others have, that technology might some day enable a recording to play again, thereby reanimating a human consciousness. The mapping of connectomes, its most zealous proponents believe, would confer nothing less than immortality.

    Last year, Seung was lured away from M.I.T. to join the Princeton Neuroscience Institute and Princeton’s Bezos Center for Neural Circuit Dynamics. These days, Seung, who is 48, has an office down the hall from his mentor Tank at the institute, a white building with strips of wraparound glazing that opened last year on the campus’s forested southern fringe. Outside Seung’s first-floor window are athletic fields, where afternoon pickup games of soccer occasionally lure him away. A few boxes lie around, half unpacked. Near a sycamore-veneer built-in desk designed by the building’s architect sits a jumbo jar of mixed nuts from Costco, a habit he picked up from his father, a professor of philosophy at the University of Texas, Austin. With connectome mapping, Seung explained last month, it is possible to start answering questions that theorists have puzzled over for decades, including the ones that prompted him to put aside his own work in frustration. He is planning, among other things, to prove that he can find a specific memory in the brain of a mouse and show how neural connections sustain it. “I am going back to settle old scores,” he said.

    In 1946, the Argentine man of letters Jorge Luis Borges wrote a short story about an empire, unnamed, that set out to construct a perfect map of its territory. A series of maps were drawn, only to be put aside in favor of more ambitious maps. Eventually, Borges wrote, “the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast map was Useless, and . . . delivered it up to the Inclemencies of Sun and Winters.”

    With time, Borges’s cautionary parable has become even more relevant for the would-be cartographers of the world, Seung among them. Technological progress has always brought novel ways of seeing the natural world and thus new ways of mapping it. The telescope was what allowed Galileo [Galilei] to sketch, in his book “The Starry Messenger,” a first map of Jupiter’s largest moons. The invention of the microscope, sometime in the late 16th century, led to Robert Hooke’s famous depiction of a flea, its body armored and spiked, as well as the discovery of the cell, an alien world unto itself. Today the pace of invention and the raw power of technology are shocking: A Nobel Prize was awarded last fall for the creation of a microscope with a resolution so extreme that it seems to defy the physical constraints of light itself.

    What has made the early 21st century a particularly giddy moment for scientific mapmakers, though, is the precipitous rise of information technology. Advances in computers have provided a cheap means to collect and analyze huge volumes of data, and Moore’s Law, which predicts regular doublings in computing power, has shown little sign of flagging. Just as important is the fact that machines can now do the grunt work of research automatically, handling samples, measuring and recording data. Set up a robotic system, feed the data to the cloud and the map will practically draw itself. It’s easy to forget Borges’s caution: The question is not whether a map can be made, but what insights it will bring. Will future generations cherish a cartographer’s work or shake their heads and deliver it up to the inclemencies?

    The ur-map of this big science is the one produced by the Human Genome Project, a stem-to-stern accounting of the DNA that provides every cell’s genetic instructions. The genome project was completed faster than anyone expected, thanks to Moore’s Law, and has become an essential scientific tool. In its wake has come a proliferation of projects in the same vein — the proteome (proteins), the foldome (folding of proteins) — each promising a complete description of something or other. (One online listing includes the antiome: “The totality of people who object to the propagation of omes.”) The Brain Initiative, the United States government’s 12-year, $4.5 billion brain-mapping effort, is a conscious echo of the genome project, but neuroscientists find themselves in a far more tenuous position at the outset. The brain might be mapped in a host of ways, and the initiative is pursuing many at once. In fact, Seung and his colleagues, who are receiving some of the funding, are working at the margins of contemporary neuroscience. Much of the field’s most exciting new technology has sought to track the brain’s activity — like functional M.R.I., with its images of parts of the brain “lighting up” — while the connectome would map the brain’s physical structure.

    Seung discussing his mapping game, EyeWire, at Princeton. Credit Dolly Faibyshev for The New York Times

    To explain what he finds so compelling about the substance of the brain, Seung points to stories of near death. In May 1999, a young doctor named Anna Bagenholm was skiing down a ravine near the Arctic Circle in Norway when a rock snagged her skis, spinning her halfway around and knocking her onto her back. She sped headfirst down the slope, still on her skis, toward a stream covered with ice. It was a sunny day, unusually warm, and when she hit the ice, she went straight through. Rushing meltwater ballooned her clothes and dragged her farther under the ice. She found an air pocket, and her friends fought to free her, but the current was too strong and the ice too hard. They gripped her feet so they wouldn’t lose her. Bagenholm’s body went limp. Her heart stopped.

    By the time a mountain-rescue team freed her, pulling her body through a hole they cut downstream, she had been under for more than an hour. At that point she was clinically dead. The rescue team began CPR, winched her up into a waiting helicopter and ferried her to Tromso University Hospital, a one-hour flight, her body still showing no signs of life. Her temperature measured 57 degrees. Doctors slowly warmed her, and suddenly her heart started. She spent a month in intensive care but recovered remarkably well. Months later, Bagenholm returned to work and was even skiing again.

    What preserved Bagenholm’s memories and abilities, over hours, in a state of clinical death? Scientists believe that every thought, every sensation, is a set of tiny electrical impulses coursing through the brain’s interconnected neurons. But when a little girl learns a word, for example, her brain makes a record by altering the connections themselves. When she learns to ride a bike or sing “Happy Birthday,” a new constellation of connections takes shape. As she grows, every memory — a friend’s name, the feel of skis on virgin powder, a Beethoven sonata — is recorded this way. Taken together, these connections constitute her connectome, the brain’s permanent record of her personality, talents, intelligence, memories: the sum of all that constitutes her “self.” Even after the cold arrested Bagenholm’s heart and hushed her crackling neuronal net to a whisper, the connectome endured.

    What makes the connectome’s relationship to our identity so difficult to understand, Seung told me, is that we associate our “self” with motion. We walk. We sing. We experience thoughts and feelings that bloom into consciousness and then fade. “Psyche” is derived from the Greek “to blow,” evoking the vital breath that defines life. “It seems like a fallacy to talk about our self as some wiring diagram that doesn’t change very quickly,” Seung said. “The connectome is just meat, and people rebel at that.”

    Seung told me to imagine a river, the roiling waters of the Colorado. That, he said, is our experience from moment to moment. Over time, the water leaves its mark on the riverbed, widening bends, tracing patterns in the rock and soil. In a sense, the Grand Canyon is a memory of where the Colorado has been. And of course, that riverbed shapes the flow of the waters today. There are two selves then, river and riverbed. The river is all tumult and drama. The river demands attention. Yet it’s the riverbed that Seung wants to know.

    When Seung was just shy of his 5th birthday, his father took him to their local barbershop, a screen-door joint in Austin where the vending machine served Coke in bottles. While Seung’s father was getting his hair cut, the barber stopped and pointed out an endearing scene: Little Sebastian was pretending to read the paper. “No,” his father said, “I think he’s really reading it.” The barber went over to investigate, and sure enough, the boy was happy to explain what was happening that day in The Austin American-Statesman. Seung had taught himself to read, in part by asking his father to call out store names and street signs. At 5, he told his father — a man who escaped North Korea on his own as a teenager — that he would no longer be needing toys for Christmas.

    Growing up, Seung’s primary passions were soccer, mathematics and nonfiction (with an exception made for Greek myths). As a teenager, he was inspired by Carl Sagan’s Cosmos. He took graduate-level physics courses as a 17-year-old Harvard sophomore and went directly into Harvard’s Ph.D. program in theoretical physics. During a 1989 summer internship at Bell Laboratories, though, Seung fell under the spell of a gregarious Israeli named Haim Sompolinsky, who introduced him to a problem in theoretical neuroscience: How can a network of neurons generate something like an “Aha!” moment, when learning leads to sudden understanding? This brought Seung to his own “Aha!” moment: At the fuzzy border between neuroscience and mathematics, he spied a new scientific terrain, thrilling and largely unexplored, giving him the same feeling physicists must have had when the atom first began to yield its secrets.

    Seung became part of a cadre of physicists who deployed sophisticated mathematical techniques to develop an idea dating back as far as Plato and Aristotle, that meaning emerges from the links between things — in this case, the links between neurons. In the 19th century, William James and other psychologists articulated mental processes as associations; for example, seeing a Labrador retriever prompts thoughts of a childhood pet, which leads to musings about a friend who lived next door. As the century closed, the Spanish neuroscientist Santiago Ramón y Cajal was creating illustrations of neurons — long, slim stems and spectacular branches that connected to other neurons with long stems of their own — when people began to wonder whether they were seeing the physical pathways of thought itself.

    The next turn came in more recent decades as a cross-disciplinary group of researchers, including Seung, hit on a new way of thinking that is described as connectionism. The basic idea (which borrows from computer science) is that simple units, connected in the right way, can give rise to surprising abilities (memory, recognition, reasoning). In computer chips, transistors and other basic electronic components are wired together to make powerful processors. In the brain, neurons are wired together — and rewired. Every time a girl sees her dog (wagging tail, chocolate brown fur), a certain set of neurons fire; this churn of activity is like Seung’s Colorado River. When these neurons fire together, the connections between them grow stronger, forming a memory — a part of Seung’s riverbed, the connectome that shapes thought. The notion is deeply counterintuitive: It’s natural to think of a network functioning as a river system does, a set of streams that can carry messages, but downright odd to suggest that there are parts of the riverbed that encode “Labrador retriever.”

    A typical human neuron has thousands of connections; a neuron can be as narrow as one ten-thousandth of a millimeter and yet stretch from one side of the head to the other. Only once have scientists ever managed to map the complete wiring diagram of an animal — a transparent worm called C. elegans, one millimeter long with just 302 neurons — and the work required a stunning display of resolve. Beginning in 1970 and led by the South African Nobel laureate Sydney Brenner, it involved painstakingly slicing the worm into thousands of sections, each one-thousandth the width of a human hair, to be photographed under an electron microscope.

    That was the easy part. To pull a wiring diagram from the stack of images required identifying each neuron and then following it through the sections, a task akin to tracing the full length of every strand of pasta in a bowl of spaghetti and meatballs, using pens and thousands of blurry black-and-white photos. For C. elegans, this process alone consumed more than a dozen years. When Seung started, he estimated that it would take a single tracer roughly a million years to finish a cubic millimeter of human cortex — meaning that tracing an entire human brain would consume roughly one trillion years of labor. He would need a little help.
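    Those figures are easy to sanity-check. Assuming roughly a million tracer-years per cubic millimeter, as Seung estimated, and a human brain volume on the order of 1.2 million cubic millimeters (about 1.2 liters, a round figure assumed here), the total lands near the trillion-year mark quoted above:

    ```python
    years_per_mm3 = 1e6          # Seung's estimate: one tracer, one cubic millimeter
    brain_volume_mm3 = 1.2e6     # assumed: ~1.2 liters of human brain tissue

    total_tracer_years = years_per_mm3 * brain_volume_mm3
    print(f"{total_tracer_years:.1e} tracer-years")   # 1.2e+12, roughly a trillion years
    ```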

    In 2012, Seung started EyeWire, an online game that challenges the public to trace neuronal wiring — now using computers, not pens — in the retina of a mouse’s eye. Seung’s artificial-intelligence algorithms process the raw images, then players earn points as they mark, paint-by-numbers style, the branches of a neuron through a three-dimensional cube. The game has attracted 165,000 players in 164 countries. In effect, Seung is employing artificial intelligence as a force multiplier for a global, all-volunteer army that has included Lorinda, a Missouri grandmother who also paints watercolors, and Iliyan (a.k.a. @crazyman4865), a high-school student in Bulgaria who once played for nearly 24 hours straight. Computers do what they can and then leave the rest to what remains the most potent pattern-recognition technology ever discovered: the human brain.

    Two neurons, mapped by EyeWire players, making contact at a synapse. Credit Photo illustration by Danny Jones. Original images from EyeWire.

    Ultimately, Seung still hopes that artificial intelligence will be able to handle the entire job. But in the meantime, he is working to recruit more help. In August, South Korea’s largest telecom company announced a partnership with EyeWire, running nationwide ads to bring in more players. In the next few years, Seung hopes to go bigger by enticing a company to turn EyeWire into a game with characters and a story line that people play purely for fun. “Think of what we could do,” Seung said, “if we could capture even a small fraction of the mental effort that goes into Angry Birds.”

    The Janelia Research Campus features a serpentine “landscape building” constructed into the side of a hill northwest of Washington. The facility, funded by the Howard Hughes Medical Institute, is nearly 1,000 feet long, and most of the exterior walls are glass, the unusual design a result of a “view preservation” stricture put in place in perpetuity by the previous owners of the land. From the top of the hill, you can see little sign of the $500 million building, except for a pair of humming silver exhaust silos and a modest glass entryway, all rising inexplicably from a field of wild grasses where plovers have begun to nest.

    Janelia Research Campus

    Over the summer, I went to Janelia to meet Seung, who wore a gray polo shirt, blue shorts and a pair of Crocs. He was there to talk about possible collaborations and learn about the technology that others in the field are developing. Inside, he introduced me to Harald Hess, an acknowledged genius at creating new scientific tools. (Hess helped build a prototype in his living room of the extreme-resolution microscope — the one that earned a longtime colleague a Nobel this year.) Hess led us down a wide, arcing service corridor, the ceiling hung with exposed pipes, the wall lined with pallets of fruit-fly food. He unlocked a door and then ushered Seung into a room with white plastic curtains hanging from the 20-foot ceilings. He parted one with a kshreeek of releasing Velcro and said, “This is our ‘act of God’-proof room.”

    The room contained a pair of hulking beige devices, labeled “MERLIN” in black letters — each part of a new brain-imaging system. The system combines slicing and imaging: An electron microscope takes a picture of the brain sample from above, then a beam of ions moves across the top, vaporizing material and revealing the next layer of brain tissue for the microscope. It is, however, a “temperature-sensitive beast,” said Shan Xu, a scientist at Janelia. If the room warms by even a fraction of a degree, the metal can expand imperceptibly, skewing the ion beam, wrecking the sample and forcing the team to start over. Xu was once within days of completing a monthslong run when a July heat wave caused the air-conditioning to hiccup. All the work was lost. Xu has since designed elaborate fail-safes, including a system that can (and does) wake him up in the middle of the night; Janelia has also invested several hundred thousand dollars in backup climate control. “We’ve learned more about utilities than you would ever want to know,” Hess said.

    Here at Janelia, connectome science will face its most demanding test. Gerry Rubin, Janelia’s director, said his team hopes to have a complete catalog of high-resolution images of the fruit-fly brain in a year or two and a completely traced wiring diagram within a decade. Rubin is a veteran of genome mapping and saw how technological advances enabled a project that critics originally derided as prohibitively difficult and expensive. He is betting that the story of the connectome will follow the same arc. Ken Hayworth, a scientist in Hess’s lab, is developing a way to cleanly cut larger brains into cubes; he calls it “the hot knife.” In other labs, Jeff Lichtman of Harvard and Clay Reid of the Allen Institute for Brain Science are building their own ultrafast imaging systems. Denk, Seung’s longtime collaborator in Heidelberg, is working on a new device to slice and image a mouse’s entire brain, a volume orders of magnitude larger than what has been tried to date. Seung, meanwhile, is improving his tracing software and setting up new experiments — with his mentor Tank and Richard Axel, a Nobel laureate at Columbia — to find memories in the connectome. Still, Rubin admitted, “if we can’t do the fly in 10 years, there is no prayer for the field.”

    At the end of a long day, Seung and I sat on a pair of blue bar stools, sharing some peanuts and sipping on beers at Janelia’s in-house watering hole. Seung was feeling daunted. Even at Janelia, which plans to spend roughly $50 million and has some of the best tool-builders on the planet, the connectome of a fruit fly looks to be a decade away. A fruit fly! Will he live to see the first human connectome? “It could be possible,” he said, “if we assume that I exercise and eat right.”

    Years ago, Seung officiated at his best friend’s wedding, and during the invocation he told the gathering, “My father says that success is never achieved in just one generation.” As he has grown older and had a child of his own, he has felt his perspective shift. When Seung was in his 20s, science for him was solving puzzles, an extension of the math problems he did for fun as a child alone in his room on Saturdays after soccer. Now he finds great satisfaction in encouraging younger scientists, in helping them avoid dead ends that he has already explored. He wants to do something that will allow the community to progress, to build “strong foundations, steppingstones that the next generation can be sure of.”

    The grounds of Janelia have a monastic feel, and while talking with Seung, I couldn’t help thinking of the people who built Europe’s great cathedrals — the carpenters and masons who labored knowing that the work would not be completed until after their deaths. From the bar, we could see through a glass wall to a patio lined with smooth river rocks and a fieldstone wall. A spare shrub garden was set with a trickling stainless-steel fountain, illuminated by a bank of sapphire lights. “I don’t know how much I’ll accomplish in my lifetime,” Seung said. “But the brain is mysterious, and I want to spend my life in the presence of mystery. It’s as simple as that.”

    As connectomics has gained traction, though, there are the first hints that it may be of interest to more than just monkish academics. In September, at a Brain Initiative conference in the Eisenhower building on the White House grounds, it was announced that Google had started its own connectome project. Tom Dean, a Google research scientist and the former chairman of the Brown University computer-science department, told me he has been assembling a team to improve the artificial intelligence: four engineers in Mountain View, Calif., and a group based in Seattle. To begin, Dean said, Google will be working most closely with the Allen Institute, which is trying to understand how the brain of a mouse processes images from the eye. Yet Dean said they also want to serve as a clearinghouse for Seung and others, applying different variations of artificial intelligence to brain imagery coming out of different labs, to see what works best. Eventually, Dean said, he hopes for a Google Earth of the brain, weaving together many different kinds of maps, across many scales, allowing scientists to behold an entire brain and then zoom in to see the firing of a single neuron, “like lightning in a thunderstorm.”

    ‘The brain is mysterious, and I want to spend my life in the presence of mystery.’

    It’s possible now to see a virtuous cycle that could build the connectome. The artificial intelligence used at Google, and in EyeWire, is known as deep learning because it takes its central principles from the way networks of neurons function. Over the last few years, deep learning has become a precious commercial tool, bringing unexpected leaps in image and voice recognition, and now it is being deployed to map the brain. This could, in the coming decades, lead to more insights about neural networks, improving deep learning itself — the premise of a new project funded by Iarpa, a blue-sky research arm of the American intelligence community, and perhaps one reason for Google’s interest. Better deep learning, in turn, could be used to accelerate the mapping and understanding of the brain, and so on.

    Even so, the shadow of Borges remains. The first connectome project began in the 1960s with the same intuition that later drove Seung: Sydney Brenner wanted a way to understand how behavior emerges from a biological system and thought that having a complete map of an animal’s nervous system would be essential. Brenner settled on the worm C. elegans for simplicity’s sake; it is small and prospers in a laboratory dish. The results were published in 1986 at book length, taking over the entirety of Philosophical Transactions of the Royal Society of London, science’s oldest journal, the outlet for a young Isaac Newton. Biologists were electrified and still sometimes refer to that 340-page edition of the journal as “the book.”

    Yet nearly three decades later, Brenner’s diagram continues to mystify. Scientists know roughly what individual neurons in C. elegans do and can say, for example, which neurons fire to send the worm wriggling forward or backward. But more complex questions remain unanswered. How does the worm remember? What is constant in the minds of worms? What makes each one individual? In part, these disappointments were a problem of technology, which has made connectome mapping so onerous that until recently nobody considered doing more. In science, it is a great accomplishment to make the first map, but far more useful to have 10, or a million, that can be compared with one another. “C. elegans was a classic case of being too far ahead of your time,” says Gerry Rubin of Janelia.

    The difficulties of interpreting the worm connectome can also be attributed to the fact that it has been particularly difficult to see the worm’s wiring in action, to measure the activity of the worm’s neurons. Without enough activity data, a wiring diagram is fundamentally inscrutable — a problem akin to trying to read the hieroglyphs of ancient Egypt before the Rosetta Stone, with its parallel text in ancient Greek. A connectome is not an answer, but a clue, like a hieroglyphic stele pulled up from the sand, promising insight into an empire but sadly lacking a key.

    In 2000, President Bill Clinton and Prime Minister Tony Blair of Britain held a news conference to announce a complete draft of the human genome, which Clinton called the “most wondrous map ever produced by humankind.” The map has indeed proved full of wonder — modern biology would be impossible without it — but in the years since, it has also become clear how incomplete the cartography is. The genome project identified roughly 20,000 genes, but cells also use a system of switches that turn genes off and on, and this system, called epigenetics, determines what work a cell can do and shapes what diseases a person might be prone to. Recent estimates put the number of switches in the hundreds of thousands, perhaps a million. An international consortium is now trying to map the epigenome, and no one can say when it will be finished.

    Eve Marder, a prominent neuroscientist at Brandeis University, cautions against expecting too much from the connectome. She studies neurons that control the stomachs of crabs and lobsters. In these relatively simple systems of 30 or so neurons, she has shown that neuromodulators — signaling chemicals that wash across regions of the brain, omitted from Seung’s static map — can fundamentally change how a circuit functions. If this is true for the stomach of a crustacean, the mind reels to consider what may be happening in the brain of a mouse, not to mention a human.

    The history of science is a narrative full of characters convinced that they had found the path to understanding everything, only to have the universe unveil a Sisyphean twist. Physicists sought matter’s basic building blocks and discovered atoms, but then found that atoms had their own building blocks, which had their own pieces, which has brought us, today, to string theory, the discipline’s equivalent of a land war in Asia. After the genome delivered up the text of humanity’s genetic code, biologists realized that our genetic machinery is so filled with feedback, and layers built on layers, that their work had only begun. Critics of Seung’s vision therefore see it as naïve, a faith that he can crest the mountain in front of him and not find more imposing peaks beyond. “If we want to understand the brain,” Marder says, “the connectome is absolutely necessary and completely insufficient.”

    Seung agrees but has never seen that as an argument for abandoning the enterprise. Science progresses when its practitioners find answers — this is the way of glory — but also when they make something that future generations rely on, even if they take it for granted. That, for Seung, would be more than good enough. “Necessary,” he said, “is still a pretty strong word, right?”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 9:32 am on December 5, 2014 Permalink | Reply
    Tags: , Artificial Intelligence,   

    From livescience: “Artificial Intelligence: Friendly or Frightening?” 

    Livescience

    December 04, 2014
    Tanya Lewis

    It’s a Saturday morning in June at the Royal Society in London. Computer scientists, public figures and reporters have gathered to witness or take part in a decades-old challenge. Some of the participants are flesh and blood; others are silicon and binary. Thirty human judges sit down at computer terminals, and begin chatting. The goal? To determine whether they’re talking to a computer program or a real person.

    The event, organized by the University of Reading, was a rendition of the so-called Turing test, developed 65 years ago by British mathematician and cryptographer Alan Turing as a way to assess whether a machine is capable of intelligent behavior indistinguishable from that of a human. The recently released film The Imitation Game, about Turing’s efforts to crack the German Enigma code during World War II, is a reference to the scientist’s own name for his test.

    In the London competition, one computerized conversation program, or chatbot, with the personality of a 13-year-old Ukrainian boy named Eugene Goostman, rose above and beyond the other contestants. It fooled 33 percent of the judges into thinking it was a human being. At the time, contest organizers and the media hailed the performance as an historic achievement, saying the chatbot was the first machine to “pass” the Turing test. [Infographic: History of Artificial Intelligence]

    Decades of research and speculative fiction have led to today’s computerized assistants such as Apple’s Siri.
    Credit: by Karl Tate, Infographics Artist

    When people think of artificial intelligence (AI) — the study of the design of intelligent systems and machines — talking computers like Eugene Goostman often come to mind. But most AI researchers are focused less on producing clever conversationalists and more on developing intelligent systems that make people’s lives easier — from software that can recognize objects and animals, to digital assistants that cater to, and even anticipate, their owners’ needs and desires.

    But several prominent thinkers, including the famed physicist Stephen Hawking and billionaire entrepreneur Elon Musk, warn that the development of AI should be cause for concern.

    Thinking machines

    The notion of intelligent automata, as friend or foe, dates back to ancient times.

    “The idea of intelligence existing in some form that’s not human seems to have a deep hold in the human psyche,” said Don Perlis, a computer scientist who studies artificial intelligence at the University of Maryland, College Park.

    Reports of people worshipping mythological human likenesses and building humanoid automatons date back to the days of ancient Greece and Egypt, Perlis told Live Science. AI has also featured prominently in pop culture, from the sentient computer HAL 9000 in Stanley Kubrick’s 2001: A Space Odyssey to Arnold Schwarzenegger’s robot character in The Terminator films. [A Brief History of Artificial Intelligence]

    Since the field of AI was officially founded in the mid-1950s, people have been predicting the rise of conscious machines, Perlis said. Inventor and futurist Ray Kurzweil, recently hired to be a director of engineering at Google, refers to a point in time known as “the singularity,” when machine intelligence exceeds human intelligence. Based on the exponential growth of technology according to Moore’s Law (which states that computing processing power doubles approximately every two years), Kurzweil has predicted the singularity will occur by 2045.
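
    For a rough sense of what that two-year doubling implies, here is a back-of-the-envelope calculation in Python. It is purely illustrative: the 2045 date is Kurzweil’s forecast, not something this arithmetic derives.

        # Illustrative arithmetic only: assumes a clean two-year doubling,
        # which real hardware trends only approximate.
        base_year, target_year = 2014, 2045
        doublings = (target_year - base_year) / 2
        print(f"about {doublings:.1f} doublings, a factor of roughly {2 ** doublings:,.0f}")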

    But cycles of hype and disappointment — the so-called “winters of AI” — have characterized the history of artificial intelligence, as grandiose predictions failed to come to fruition. The University of Reading Turing test is just the latest example: Many scientists dismissed the Eugene Goostman performance as a parlor trick; they said the chatbot had gamed the system by assuming the persona of a teenager who spoke English as a foreign language. (In fact, many researchers now believe it’s time to develop an updated Turing test.)

    Nevertheless, a number of prominent science and technology experts have expressed worry that humanity is not doing enough to prepare for the rise of artificial general intelligence, if and when it does occur. Earlier this week, Hawking issued a dire warning about the threat of AI.

    “The development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC, in response to a question about his new voice recognition system, which uses artificial intelligence to predict intended words. (Hawking has a form of the neurological disease amyotrophic lateral sclerosis, ALS or Lou Gehrig’s disease, and communicates using specialized speech software.)

    And Hawking isn’t alone. Musk told an audience at MIT that AI is humanity’s “biggest existential threat.” He also once tweeted, “We need to be super careful with AI. Potentially more dangerous than nukes.”

    In March, Musk, Facebook CEO Mark Zuckerberg and actor Ashton Kutcher jointly invested $40 million in the company Vicarious FPC, which aims to create a working artificial brain. At the time, Musk told CNBC that he’d like to “keep an eye on what’s going on with artificial intelligence,” adding, “I think there’s potentially a dangerous outcome there.”

    Fears of AI turning into sinister killing machines, like Arnold Schwarzenegger’s character from the “Terminator” films, are nothing new.
    Credit: Warner Bros.

    But despite the fears of high-profile technology leaders, the rise of conscious machines — known as “strong AI” or “general artificial intelligence” — is likely a long way off, many researchers argue.

    “I don’t see any reason to think that as machines become more intelligent … which is not going to happen tomorrow — they would want to destroy us or do harm,” said Charlie Ortiz, head of AI at the Burlington, Massachusetts-based software company Nuance Communications. “Lots of work needs to be done before computers are anywhere near that level,” he said.

    Machines with benefits

    Artificial intelligence is a broad and active area of research, but it’s no longer the sole province of academics; increasingly, companies are incorporating AI into their products.

    And there’s one name that keeps cropping up in the field: Google. From smartphone assistants to driverless cars, the Bay Area-based tech giant is gearing up to be a major player in the future of artificial intelligence.

    Google has been a pioneer in the use of machine learning — computer systems that can learn from data, as opposed to blindly following instructions. In particular, the company uses a set of machine-learning algorithms, collectively referred to as “deep learning,” that allow a computer to do things such as recognize patterns from massive amounts of data.
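
    To make the distinction between learning from data and following explicit instructions concrete, here is a toy classifier in Python whose decision rule is fitted from labelled examples by gradient descent. It is a minimal sketch (a single trainable layer on synthetic data) and bears no relation to Google’s actual systems; deep learning stacks many such layers and fits them to vastly larger datasets.

        import numpy as np

        # Toy "learning from data" sketch: no rule for separating the two
        # clusters is written by hand; the weights are fitted from examples.
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(-2, 1, (200, 2)),   # class 0 samples
                       rng.normal(+2, 1, (200, 2))])  # class 1 samples
        y = np.concatenate([np.zeros(200), np.ones(200)])

        w, b = np.zeros(2), 0.0
        for _ in range(500):                          # gradient descent
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted probabilities
            w -= 0.1 * (X.T @ (p - y)) / len(y)
            b -= 0.1 * np.mean(p - y)

        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        print(f"training accuracy: {np.mean((p > 0.5) == y):.2f}")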

    For example, in June 2012, Google created a neural network of 16,000 computers that trained itself to recognize a cat by looking at millions of cat images from YouTube videos, The New York Times reported. (After all, what could be more uniquely human than watching cat videos?)

    The project, called Google Brain, was led by Andrew Ng, an artificial intelligence researcher at Stanford University who is now the chief scientist for the Chinese search engine Baidu, which is sometimes referred to as “China’s Google.”

    Today, deep learning is a part of many products at Google and at Baidu, including speech recognition, Web search and advertising, Ng told Live Science in an email.

    Current computers can already complete many tasks typically performed by humans. But possessing humanlike intelligence remains a long way off, Ng said. “I think we’re still very far from the singularity. This isn’t a subject that most AI researchers are working toward.”

    Gary Marcus, a cognitive psychologist at NYU who has written extensively about AI, agreed. “I don’t think we’re anywhere near human intelligence [for machines],” Marcus told Live Science. In terms of simulating human thinking, “we are still in the piecemeal era.”

    Instead, companies like Google focus on making technology more helpful and intuitive. And nowhere is this more evident than in the smartphone market.

    Artificial intelligence in your pocket

    In the 2013 movie Her, actor Joaquin Phoenix’s character falls in love with his smartphone operating system, “Samantha,” a computer-based personal assistant who becomes sentient. The film is obviously a product of Hollywood, but experts say that the movie gets at least one thing right: Technology will take on increasingly personal roles in people’s daily lives, and will learn human habits and predict people’s needs.

    Anyone with an iPhone is probably familiar with Apple’s digital assistant Siri, first introduced as a feature on the iPhone 4S in October 2011. Siri can answer simple questions, conduct Web searches and perform other basic functions. Microsoft’s equivalent is Cortana, a digital assistant available on Windows phones. And Google has Google Now, available in the Chrome Web browser and on Android and iOS devices, which bills itself as providing “the information you want, when you need it.”

    For example, Google Now can show traffic information during your daily commute, or give you shopping list reminders while you’re at the store. You can ask the app questions, such as “should I wear a sweater tomorrow?” and it will give you the weather forecast. And, perhaps a bit creepily, you can ask it to “show me all my photos of dogs” (or “cats,” “sunsets” or even a person’s name), and the app will find photos that fit that description, even if you haven’t labeled them as such.

    Given how much personal data from users Google stores in the form of emails, search histories and cloud storage, the company’s deep investments in artificial intelligence may seem disconcerting. For example, AI could make it easier for the company to deliver targeted advertising, which some users already find unpalatable. And AI-based image recognition software could make it harder for users to maintain anonymity online.

    But the company, whose motto is “Don’t be evil,” claims it can address potential concerns about its work in AI by conducting research in the open and collaborating with other institutions, company spokesman Jason Freidenfelds told Live Science. In terms of privacy concerns, specifically, he said, “Google goes above and beyond to make sure your information is safe and secure,” calling data security a “top priority.”

    While a phone that can learn your commute, answer your questions or recognize what a dog looks like may seem sophisticated, it still pales in comparison with a human being. In some areas, AI is no more advanced than a toddler. Yet, when asked, many AI researchers admit that the day when machines rival human intelligence will ultimately come. The question is, are people ready for it?

    In the 2014 film Transcendence, actor Johnny Depp’s character uploads his mind into a computer, but his hunger for power soon threatens the autonomy of his fellow humans.
    Credit: Warner Bros.

    Taking AI seriously

    Hollywood isn’t known for its scientific accuracy, but the film’s themes don’t fall on deaf ears. In April, when Transcendence was released, Hawking and fellow physicist Frank Wilczek, cosmologist Max Tegmark and computer scientist Stuart Russell published an op-ed in The Huffington Post warning of the dangers of AI.

    “It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction,” Hawking and others wrote in the article. “But this would be a mistake, and potentially our worst mistake ever.”

    Undoubtedly, AI could have many benefits, such as helping to eradicate war, disease and poverty, the scientists wrote. Creating intelligent machines would be one of the biggest achievements in human history, they noted, but it “might also be [the] last.” Considering that the singularity may be the best or worst thing to happen to humanity, not enough research is being devoted to understanding its impacts, they said.

    As the scientists wrote, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 3:08 pm on August 15, 2014 Permalink | Reply
    Tags: , Artificial Intelligence, , ,   

    From Harvard: “A self-organizing thousand-robot swarm” 

    Harvard School of Engineering and Applied Sciences

    August 14, 2014
    Caroline Perry

    Following simple programmed rules, autonomous robots arrange themselves into vast, complex shapes

    The first thousand-robot flash mob has assembled at Harvard University.

    “Form a sea star shape,” directs a computer scientist, sending the command to 1,024 little bots simultaneously via an infrared light. The robots begin to blink at one another and then gradually arrange themselves into a five-pointed star. “Now form the letter K.”

    The ‘K’ stands for Kilobots, the name given to these extremely simple robots, each just a few centimeters across, standing on three pin-like legs. Instead of one highly complex robot, a “kilo” of robots collaborate, providing a simple platform for the enactment of complex behaviors.

    Just as trillions of individual cells can assemble into an intelligent organism, or a thousand starlings can form a great flowing murmuration across the sky, the Kilobots demonstrate how complexity can arise from very simple behaviors performed en masse (see video). To computer scientists, they also represent a significant milestone in the development of collective artificial intelligence (AI).

    Given a two-dimensional image, the Kilobots follow simple rules to form the same shape. Visually, the effect is similar to a flock of birds wheeling across the sky. “At some level you no longer even see the individuals; you just see the collective as an entity to itself,” says Radhika Nagpal.

    (Image courtesy of Mike Rubenstein and Science/AAAS.)

    This self-organizing swarm was created in the lab of Radhika Nagpal, Fred Kavli Professor of Computer Science at the Harvard School of Engineering and Applied Sciences (SEAS) and a Core Faculty Member at the Wyss Institute for Biologically Inspired Engineering at Harvard University. The advance is described in the August 15 issue of Science.

    “The beauty of biological systems is that they are elegantly simple—and yet, in large numbers, accomplish the seemingly impossible,” says Nagpal. “At some level you no longer even see the individuals; you just see the collective as an entity to itself.”

    “Biological collectives involve enormous numbers of cooperating entities—whether you think of cells or insects or animals—that together accomplish a single task that is a magnitude beyond the scale of any individual,” says lead author Michael Rubenstein, a research associate at Harvard SEAS and the Wyss Institute.

    He cites, for example, the behavior of a colony of army ants. By linking together, they can form rafts and bridges to cross difficult terrain. Social amoebas do something similar at a microscopic scale: when food is scarce, they join together to create a fruiting body capable of escaping the local environment. In cuttlefish, color changes at the level of individual cells can help the entire organism blend into its surroundings. (And as Nagpal points out—with a smile—a school of fish in the movie Finding Nemo also collaborate when they form the shape of an arrow to point Nemo toward the jet stream.)

    “We are especially inspired by systems where individuals can self-assemble together to solve problems,” says Nagpal. Her research group made news in February 2014 with a group of termite-inspired robots that can collaboratively perform construction tasks using simple forms of coordination.

    But the algorithm that instructs those TERMES robots has not yet been demonstrated in a very large swarm. In fact, only a few robot swarms to date have exceeded 100 individuals, because of the algorithmic limitations on coordinating such large numbers, and the cost and labor involved in fabricating the physical devices.

    The research team overcame both of these challenges through thoughtful design.

    Most notably, the Kilobots require no micromanagement or intervention once an initial set of instructions has been delivered. Four robots mark the origin of a coordinate system, all the other robots receive a 2D image that they should mimic, and then, using very primitive behaviors—following the edge of a group, tracking a distance from the origin, and maintaining a sense of relative location—they take turns moving toward an acceptable position. With coauthor Alejandro Cornejo, a postdoctoral fellow at Harvard SEAS and the Wyss Institute, they provided a mathematical proof that these individual behaviors lead to the correct global result.
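
    A drastically simplified, grid-based sketch in Python may help fix the idea of a shape growing outward from a handful of seed robots. It is not the published Kilobot algorithm (there is no real edge-following, motion, noise or distributed message passing here); it only mimics the gradient-from-the-origin intuition, and the target shape and all names are made up for illustration.

        from collections import deque

        # Toy sketch, NOT the published Kilobot algorithm: robots on a grid
        # join a target shape one at a time, attaching next to the cell with
        # the lowest hop-count "gradient" from four seed robots.
        SHAPE = {(x, y) for x in range(5) for y in range(5)} - {(3, 4), (4, 4)}
        SEEDS = {(0, 0), (1, 0), (0, 1), (1, 1)}   # four robots mark the origin

        def neighbors(c):
            x, y = c
            return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

        def gradient(occupied, seeds):
            """Hop distance from the seeds through occupied cells (BFS)."""
            dist, queue = {s: 0 for s in seeds}, deque(seeds)
            while queue:
                c = queue.popleft()
                for n in neighbors(c):
                    if n in occupied and n not in dist:
                        dist[n] = dist[c] + 1
                        queue.append(n)
            return dist

        occupied = set(SEEDS)
        while occupied != SHAPE:
            dist = gradient(occupied, SEEDS)
            frontier = [c for c in SHAPE - occupied
                        if any(n in occupied for n in neighbors(c))]
            # Grow outward from the seeds, layer by layer.
            occupied.add(min(frontier, key=lambda c: min(
                dist[n] for n in neighbors(c) if n in dist)))

        for row in range(4, -1, -1):
            print("".join("#" if (x, row) in occupied else "." for x in range(5)))

    In the real system the same outward growth has to emerge from purely local sensing and communication, which is exactly what the mathematical proof addresses.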

    The Kilobots also correct their own mistakes. If a traffic jam forms or a robot moves off-course—errors that become much more common in a large group—nearby robots sense the problem and cooperate to fix it.

    In a swarm of a thousand simple robots, errors like traffic jams (second from left) and imprecise positioning (far right) are common, so the algorithm incorporates rules that can help correct for these. (Photo courtesy of Mike Rubenstein and Science/AAAS.)

    To keep the cost of the Kilobot down, each robot moves using two vibrating motors that allow it to slide across a surface on its rigid legs. An infrared transmitter and receiver allow it to communicate with a few of its neighbors and measure their proximity—but the robots are myopic and have no access to a bird’s-eye view. These design decisions come with tradeoffs, as Rubenstein explains: “These robots are much simpler than many conventional robots, and as a result, their abilities are more variable and less reliable,” he says. “For example, the Kilobots have trouble moving in a straight line, and the accuracy of distance sensing can vary from robot to robot.”
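
    Those proximity readings are all a newcomer has for working out where it sits relative to robots that have already settled. As a purely illustrative sketch (standard least-squares trilateration, not the Kilobots’ actual firmware), here is how a position might be recovered from noisy distance measurements to three already-localized neighbors:

        import numpy as np

        # Illustrative only: estimate a robot's position from noisy distances
        # to three neighbours whose coordinates are already known.
        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # neighbour positions (cm)
        true_pos = np.array([4.0, 3.0])
        rng = np.random.default_rng(0)
        d = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.2, 3)

        # Subtract the first range equation from the others to linearize,
        # then solve the small least-squares system for (x, y).
        A = 2 * (anchors[1:] - anchors[0])
        b = (d[0] ** 2 - d[1:] ** 2
             + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
        est, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("estimated:", est.round(2), "true:", true_pos)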

    Yet, at scale, the smart algorithm overcomes these individual limitations and guarantees—both physically and mathematically—that the robots can complete a human-specified task, in this case assembling into a particular shape. That’s an important demonstration for the future of distributed robotics, says Nagpal.

    “Increasingly, we’re going to see large numbers of robots working together, whether it’s hundreds of robots cooperating to achieve environmental cleanup or a quick disaster response, or millions of self-driving cars on our highways,” she says. “Understanding how to design ‘good’ systems at that scale will be critical.”

    For now, the Kilobots provide an essential test bed for AI algorithms.

    The thousand-Kilobot swarm provides a valuable platform for testing future collective AI algorithms. (Photo courtesy of Mike Rubenstein and Science/AAAS.)

    “We can simulate the behavior of large swarms of robots, but a simulation can only go so far,” says Nagpal. “The real-world dynamics—the physical interactions and variability—make a difference, and having the Kilobots to test the algorithm on real robots has helped us better understand how to recognize and prevent the failures that occur at these large scales.”

    The Kilobot robot design and software, originally created in Nagpal’s group at Harvard, are available open-source for non-commercial use. The Kilobots have also been licensed by Harvard’s Office of Technology Development to K-Team, a manufacturer of small mobile robots.

    This research was supported in part by the Wyss Institute and by the National Science Foundation (CCF-0926148, CCF-0643898).

    See the full article here.

    Through research and scholarship, the Harvard School of Engineering and Applied Sciences (SEAS) will create collaborative bridges across Harvard and educate the next generation of global leaders. By harnessing the power of engineering and applied sciences we will address the greatest challenges facing our society.

    Specifically, that means that SEAS will provide to all Harvard College students an introduction to and familiarity with engineering and technology, as this is essential knowledge in the 21st century.

    Moreover, our concentrators will be immersed in the liberal arts environment and be able to understand the societal context for their problem solving, capable of working seamlessly with others, including those in the arts, the sciences, and the professional schools. They will focus on the fundamental engineering and applied science disciplines for the 21st century; we will not teach legacy 20th century engineering disciplines.

    Instead, our curriculum will be rigorous but inviting to students, and be infused with active learning, interdisciplinary research, entrepreneurship and engineering design experiences. For our concentrators and graduate students, we will educate “T-shaped” individuals – with depth in one discipline but capable of working seamlessly with others, including arts, humanities, natural science and social science.

    To address current and future societal challenges, knowledge from fundamental science, art, and the humanities must all be linked through the application of engineering principles with the professions of law, medicine, public policy, design and business practice.

    In other words, solving important issues requires a multidisciplinary approach.

    With the combined strengths of SEAS, the Faculty of Arts and Sciences, and the professional schools, Harvard is ideally positioned to both broadly educate the next generation of leaders who understand the complexities of technology and society and to use its intellectual resources and innovative thinking to meet the challenges of the 21st century.

    Ultimately, we will provide to our graduates a rigorous quantitative liberal arts education that is an excellent launching point for any career and profession.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     