Tagged: AI

  • richardmitnick 6:04 pm on June 20, 2017 Permalink | Reply
    Tags: AI, Cyber security

    From SA: “World’s Most Powerful Particle Collider Taps AI to Expose Hack Attacks” 

    Scientific American

    June 19, 2017
    Jesse Emspak

    A general view of the CERN Computer / Data Center and server farm. Credit: Dean Mouhtaropoulos/Getty Images

    Thousands of scientists worldwide tap into CERN’s computer networks each day in their quest to better understand the fundamental structure of the universe. Unfortunately, they are not the only ones who want a piece of this vast pool of computing power, which serves the world’s largest particle physics laboratory. The hundreds of thousands of computers in CERN’s grid are also a prime target for hackers who want to hijack those resources to make money or attack other computer systems. But rather than engaging in a perpetual game of hide-and-seek with these cyber intruders via conventional security systems, CERN scientists are turning to artificial intelligence to help them outsmart their online opponents.

    Current detection systems typically spot attacks on networks by scanning incoming data for known viruses and other types of malicious code. But these systems are relatively useless against new and unfamiliar threats. Given how quickly malware changes these days, CERN is developing new systems that use machine learning to recognize and report abnormal network traffic to an administrator. For example, a system might learn to flag traffic that requires an uncharacteristically large amount of bandwidth, uses the incorrect procedure when it tries to enter the network (much like using the wrong secret knock on a door) or seeks network access via an unauthorized port (essentially trying to get in through a door that is off-limits).
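
    The article does not describe CERN’s actual models, but the general idea of learning what “normal” traffic looks like and flagging outliers can be sketched in a few lines. The example below uses scikit-learn’s IsolationForest on invented flow features (bytes per second, destination port, protocol-error counts); the features, numbers and thresholds are illustrative assumptions, not CERN’s framework.

    ```python
    # Hypothetical sketch of anomaly detection on network flows; the feature
    # choices (bytes/sec, destination port, protocol errors) are illustrative
    # assumptions, not CERN's actual feature set.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Simulated "normal" traffic: modest bandwidth, common ports, few errors.
    normal_flows = np.column_stack([
        rng.normal(5e5, 1e5, 1000),         # bytes per second
        rng.choice([80, 443, 22], 1000),    # destination port
        rng.poisson(0.1, 1000),             # protocol errors per flow
    ])

    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_flows)

    # A new flow: huge bandwidth, an odd port, many protocol errors.
    suspect = np.array([[5e7, 31337, 25]])
    if detector.predict(suspect)[0] == -1:      # -1 means "outlier"
        print("Anomalous flow detected: alert an administrator")
    ```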

    CERN’s cybersecurity department is training its AI software to learn the difference between normal and dubious behavior on the network, and then to alert staff via text message, e-mail or computer alert of any potential threat. The system could even be automated to shut down suspicious activity on its own, says Andres Gomez, lead author of a paper [Intrusion Prevention and Detection in Grid Computing – The ALICE Case] describing the new cybersecurity framework.

    CERN’s Jewel

    CERN—the European Organization for Nuclear Research, whose name is an acronym of its original French title and whose laboratory sits on the Franco-Swiss border—is opting for this new approach to protect a computer grid used by more than 8,000 physicists to quickly access and analyze large volumes of data produced by the Large Hadron Collider (LHC).

    LHC

    CERN/LHC Map

    CERN LHC Tunnel

    CERN LHC particles

    The LHC’s main job is to collide subatomic particles at high speed so that scientists can study how those particles interact. Particle detectors and other scientific instruments within the LHC gather information about these collisions, and CERN makes that information available to laboratories and universities worldwide for use in their own research projects.

    The LHC is expected to generate a total of about 50 petabytes of data (equal to 15 million high-definition movies) in 2017 alone, and demands more computing power and data storage than CERN itself can provide. In anticipation of that type of growth the laboratory in 2002 created its Worldwide LHC Computing Grid, which connects computers from more than 170 research facilities across more than 40 countries. CERN’s computer network functions somewhat like an electrical grid, which relies on a network of generating stations that create and deliver electricity as needed to a particular community of homes and businesses. In CERN’s case the community consists of research labs that require varying amounts of computing resources, based on the type of work they are doing at any given time.

    Grid Guardians

    One of the biggest challenges to defending a computer grid is the fact that security cannot interfere with the sharing of processing power and data storage. Scientists from labs in different parts of the world might end up accessing the same computers to do their research if demand on the grid is high or if their projects are similar. CERN also has to worry about whether the computers of the scientists connecting into the grid are free of viruses and other malicious software that could enter and spread quickly due to all the sharing. A virus might, for example, allow hackers to take over parts of the grid and use those computers either to mine the digital currency Bitcoin or to launch cyber attacks against other computers. “In normal situations, antivirus programs try to keep intrusions out of a single machine,” Gomez says. “In the grid we have to protect hundreds of thousands of machines that already allow” researchers outside CERN to use a variety of software programs they need for their different experiments. “The magnitude of the data you can collect and the very distributed environment make intrusion detection on [a] grid far more complex,” he says.

    Jarno Niemelä, a senior security researcher at F-Secure, a company that designs antivirus and computer security systems, says CERN’s use of machine learning to train its network defenses will give the lab much-needed flexibility in protecting its grid, especially when searching for new threats. Still, artificially intelligent intrusion detection is not without risks—and one of the biggest is whether Gomez and his team can develop machine-learning algorithms that can tell the difference between normal and harmful activity on the network without raising a lot of false alarms, Niemelä says.

    CERN’s AI cybersecurity upgrades are still in the early stages and will be rolled out over time. The first test will be protecting the portion of the grid used by ALICE (A Large Ion Collider Experiment)—a key LHC project to study the collisions of lead nuclei. If tests on ALICE are successful, CERN’s machine learning–based security could then be used to defend parts of the grid used by the institution’s six other detector experiments.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 7:53 am on June 10, 2017 Permalink | Reply
    Tags: AI, Common Crawl, Implicit Association Test (IAT), Princeton researchers discover why AI become racist and sexist, Word-Embedding Association Test (WEAT)

    From ars technica: “Princeton researchers discover why AI become racist and sexist” 

    Ars Technica

    April 19, 2017
    Annalee Newitz

    Study of language bias has implications for AI as well as human cognition.


    Ever since Microsoft’s chatbot Tay started spouting racist commentary after 24 hours of interacting with humans on Twitter, it has been obvious that our AI creations can fall prey to human prejudice. Now a group of researchers has figured out one reason why that happens. Their findings shed light on more than our future robot overlords, however. They’ve also worked out an algorithm that can actually predict human prejudices based on an intensive analysis of how people use English online.

    The implicit bias test

    Many AIs are trained to understand human language by learning from a massive corpus known as the Common Crawl. The Common Crawl is the result of a large-scale crawl of the Internet in 2014 that contains 840 billion tokens, or words. Princeton Center for Information Technology Policy researcher Aylin Caliskan and her colleagues wondered whether that corpus—created by millions of people typing away online—might contain biases that could be discovered by algorithm. To figure it out, they turned to an unusual source: the Implicit Association Test (IAT), which is used to measure often unconscious social attitudes.

    People taking the IAT are asked to put words into two categories. The longer it takes for the person to place a word in a category, the less they associate the word with the category. (If you’d like to take an IAT, there are several online at Harvard University.) IAT is used to measure bias by asking people to associate random words with categories like gender, race, disability, age, and more. Outcomes are often unsurprising: for example, most people associate women with family, and men with work. But that obviousness is actually evidence for the IAT’s usefulness in discovering people’s latent stereotypes about each other. (It’s worth noting that there is some debate among social scientists about the IAT’s accuracy.)

    Using the IAT as a model, Caliskan and her colleagues created the Word-Embedding Association Test (WEAT), which analyzes chunks of text to see which concepts are more closely associated than others. The “word-embedding” part of the test comes from a project at Stanford called GloVe, which packages words together into “vector representations,” basically lists of associated terms. So the word “dog,” if represented as a word-embedded vector, would be composed of words like puppy, doggie, hound, canine, and all the various dog breeds. The idea is to get at the concept of dog, not the specific word. This is especially important if you are working with social stereotypes, where somebody might be expressing ideas about women by using words like “girl” or “mother.” To keep things simple, the researchers represented each word as a 300-dimensional vector.

    To see how concepts get associated with each other online, the WEAT looks at a variety of factors to measure their “closeness” in text. At a basic level, Caliskan told Ars, this means how many words apart the two concepts are, but it also accounts for other factors like word frequency. After going through an algorithmic transform, closeness in the WEAT is equivalent to the time it takes for a person to categorize a concept in the IAT. The further apart the two concepts, the more distantly they are associated in people’s minds.
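
    As a rough illustration of the idea (not the authors’ code), a WEAT-style score can be computed from word vectors using cosine similarity. The tiny hand-made four-dimensional vectors below are invented stand-ins for the 300-dimensional GloVe embeddings the researchers actually used:

    ```python
    # Rough sketch of a WEAT-style association score using cosine similarity.
    # The toy 4-dimensional vectors are made up; the real test uses
    # 300-dimensional GloVe embeddings trained on the Common Crawl.
    import numpy as np

    def cos(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    def assoc(w, A, B):
        # How much more strongly word w associates with attribute set A than B.
        return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

    def weat_effect_size(X, Y, A, B):
        # Standardised difference of association between the two target sets.
        diffs = [assoc(w, A, B) for w in X + Y]
        return (np.mean([assoc(x, A, B) for x in X])
                - np.mean([assoc(y, A, B) for y in Y])) / np.std(diffs)

    # Toy embeddings (illustrative only).
    emb = {
        "she":    np.array([0.9, 0.1, 0.3, 0.0]),
        "he":     np.array([0.1, 0.9, 0.3, 0.0]),
        "family": np.array([0.8, 0.2, 0.1, 0.1]),
        "career": np.array([0.2, 0.8, 0.1, 0.1]),
    }

    print(weat_effect_size([emb["she"]], [emb["he"]],
                           [emb["family"]], [emb["career"]]))
    # A positive score: "she" sits closer to "family" than "he" does.
    ```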

    The WEAT worked beautifully to discover biases that the IAT had found before. “We adapted the IAT to machines,” Caliskan said. And what that tool revealed was that “if you feed AI with human data, that’s what it will learn. [The data] contains biased information from language.” That bias will affect how the AI behaves in the future, too. As an example, Caliskan made a video (see above) where she shows how the Google Translate AI actually mistranslates words into the English language based on stereotypes it has learned about gender.

    Imagine an army of bots unleashed on the Internet, replicating all the biases that they learned from humanity. That’s the future we’re looking at if we don’t build some kind of corrective for the prejudices in these systems.

    A problem that AI can’t solve

    Though Caliskan and her colleagues found that language was full of biases based on prejudice and stereotypes, it was also full of latent truths. In one test, they found strong associations between the concept of woman and the concept of nursing. This reflects a truth about reality: nursing is a majority-female profession.

    “Language reflects facts about the world,” Caliskan told Ars. She continued:

    “Removing bias or statistical facts about the world will make the machine model less accurate. But you can’t easily remove bias, so you have to learn how to work with it. We are self-aware, we can decide to do the right thing instead of the prejudiced option. But machines don’t have self awareness. An expert human might be able to aid in [the AIs’] decision-making process so the outcome isn’t stereotyped or prejudiced for a given task.”

    The solution to the problem of human language is… humans. “I can’t think of many cases where you wouldn’t need a human to make sure that the right decisions are being made,” concluded Caliskan. “A human would know the edge cases for whatever the application is. Once they test the edge cases they can make sure it’s not biased.”

    So much for the idea that bots will be taking over human jobs. Once we have AIs doing work for us, we’ll need to invent new jobs for humans who are testing the AIs’ results for accuracy and prejudice. Even when chatbots get incredibly sophisticated, they are still going to be trained on human language. And since bias is built into language, humans will still be necessary as decision-makers.

    In a recent paper for Science about their work, the researchers say the implications are far-reaching. “Our findings are also sure to contribute to the debate concerning the Sapir-Whorf hypothesis,” they write. “Our work suggests that behavior can be driven by cultural history embedded in a term’s historic use. Such histories can evidently vary between languages.” If you watched the movie Arrival, you’ve probably heard of Sapir-Whorf—it’s the hypothesis that language shapes consciousness. Now we have an algorithm that suggests this may be true, at least when it comes to stereotypes.

    Caliskan said her team wants to branch out and try to find as-yet-unknown biases in human language. Perhaps they could look for patterns created by fake news or look into biases that exist in specific subcultures or geographical locations. They would also like to look at other languages, where bias is encoded very differently than it is in English.

    “Let’s say in the future, someone suspects there’s a bias or stereotype in a certain culture or location,” Caliskan mused. “Instead of testing with human subjects first, which takes time, money, and effort, they can get text from that group of people and test to see if they have this bias. It would save so much time.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon
    Stem Education Coalition
    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotedness to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

     
  • richardmitnick 2:52 pm on May 30, 2017 Permalink | Reply
    Tags: AI, Creative Destruction Lab (CDL) at U of T’s Rotman School of Management

    From U Toronto: “U of T’s Creative Destruction Lab goes quantum” 

    University of Toronto

    May 26, 2017
    Chris Sorensen

    Startups participating in CDL’s new quantum machine learning stream will have cloud access to Vancouver-based D-Wave’s quantum computer (photo courtesy D-Wave Systems)

    Startup accelerator launches new quantum machine learning stream for startups.

    The roughly 25 startups lucky enough to be accepted to a new quantum machine learning stream at a University of Toronto accelerator are about to become part of a very exclusive club.

    The Creative Destruction Lab (CDL) at U of T’s Rotman School of Management said Thursday that it will provide the startups with access to the world’s only commercially available quantum computers, built by Vancouver’s D-Wave Systems, beginning in September.

    To date, only a handful of U.S.-based organizations have had the tens of millions of dollars needed to invest in D-Wave’s bleeding-edge technology. They include Google, Lockheed Martin and Los Alamos National Laboratory.

    “We’re removing the barriers to entry to what’s available in terms of quantum computing – and that will hopefully spawn new, interesting applications from early-stage startups,” said Daniel Mulet, an associate director at the CDL accelerator in Toronto, which focuses on scaling science-based startups with artificial intelligence, or AI, technologies.

    It’s yet another example of how U of T has emerged as a hotbed of computer science research that’s spawning a host of futuristic, AI-equipped companies, ranging from legal research firm ROSS Intelligence to medical startup Deep Genomics. In just the past few months, the university also helped launch the Vector Institute for Artificial Intelligence and saw star AI researcher Raquel Urtasun form a partnership with ride-sharing giant Uber, which plans to set up a driverless car lab in Toronto.

    Making D-Wave’s quantum machines available to CDL startups follows in the footsteps of other U of T efforts to ensure researchers have access to the latest and most powerful computing tools. The university is one of several members of the Southern Ontario Smart Computing Innovation Platform (SOSCIP), which offers researchers access to several powerful computing platforms, including IBM’s Watson.

    Mulet said CDL, which recently announced a bold cross-Canada expansion, is now hoping to lay the groundwork for the next phase of AI development by combining machine learning – computers capable of learning without explicit human instructions – with the nascent, but potentially game-changing field of quantum computing.

    “Canada, with companies like D-Wave, and groups like IQC and Perimeter, has all the elements to seed a quantum computing and quantum machine learning software industry,” he said, referring to the University of Waterloo’s Institute for Quantum Computing and the Perimeter Institute for Theoretical Physics. “We would like it to happen here before it happens somewhere else in the world.”

    For those unfamiliar with quantum computers, D-Wave’s machine will sound like something straight out of a science fiction movie. It’s a giant black box, about the size of a garden shed, that surrounds a core cooled to a temperature roughly 180 times colder than deep space. The heavily shielded, otherworldly interior is necessary to allow the quantum bits, or qubits, to exhibit their quantum properties.

    So what, exactly, is a quantum computer and what does it have to do with machine learning?

    The idea is to harness the mind-bending properties of quantum mechanics to achieve an exponential increase in computational power. That includes the quantum principle of superposition, which allows quantum particles to exist in more than one state simultaneously. One oft-used explanation (and the one cited by Prime Minister Justin Trudeau last year): classical computer bits are binary, with a value of either one or zero – on or off – whereas a quantum qubit can be both one and zero – on and off – at the same time.
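
    For readers who want the arithmetic behind that description, a single qubit can be simulated classically as a two-component vector, and a Hadamard gate puts it into an equal superposition of zero and one. This is a purely illustrative sketch and has nothing to do with D-Wave’s annealing hardware:

    ```python
    # Classical simulation of a single qubit, for illustration only.
    import numpy as np

    zero = np.array([1.0, 0.0])             # the |0> basis state
    hadamard = np.array([[1, 1],
                         [1, -1]]) / np.sqrt(2)

    state = hadamard @ zero                 # equal superposition of |0> and |1>
    probabilities = np.abs(state) ** 2      # measurement probabilities (Born rule)

    print(state)                            # [0.7071..., 0.7071...]
    print(probabilities)                    # [0.5, 0.5]: half the time 0, half the time 1
    ```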

    D-Wave co-founder Eric Ladizinsky offered a more visual explanation during a 2014 conference in London. Imagine, he said, trying to find an X scribbled inside one of the 50 million books in the U.S. Library of Congress. A traditional computer functions like a person trying to systematically open each book and flip through its pages, he continued, “but what if, somehow, I could put you in this magical state of quantum superposition, so you were in 50 million parallel realities and in each one you could try opening a different book?”

    “We are making a bet that in the next five years quantum speedup useful for machine learning will be achieved,” Mulet said. “When you apply that to creating intelligent systems, those systems become that much more powerful.”

    Vern Brownell, the CEO of D-Wave, said the partnership with CDL, and the prospect of building an ecosystem of quantum AI and machine learning startups, spoke directly to the company’s vision of “bringing quantum computing out of the research lab and into the real world.”

    While CDL won’t have one of D-Wave’s $15 million 2000Q computers on location at U of T, Mulet says the up to 40 individuals accepted to the program – the application deadline is July 24 – will have access to its computational power through the cloud. The startups will also receive training from the same teams that D-Wave dispatches to its large corporate customers, and will participate in an intensive “bootcamp” led by Peter Wittek, a Barcelona-based research scientist who wrote the first textbook on quantum machine learning.

    CDL said three Silicon Valley-based venture capital firms – Bloomberg Beta, Data Collective and Spectrum 28 – will offer to invest pre-seed capital in every company admitted to, or formed in, the program, so long as they meet certain basic criteria.

    Mulet added the arrangement with D-Wave, whose founder Geordie Rose is a CDL Fellow, was three years in the making.

    It should be noted, however, that D-Wave’s vision of quantum computing isn’t shared by everyone in the field. The company focuses on a particular approach known as quantum annealing, which can only be used to solve certain types of optimization problems – and even then D-Wave’s machines don’t always outperform traditional computers. By contrast, other researchers, including those at IBM, are striving to build a universal quantum machine that could handle various types of complex calculations that would take classical computers months or even years to solve.

    In the meantime, a growing number of researchers are experimenting with D-Wave’s machines. One example: Scientists at Volkswagen recently used a similar cloud-based version of D-Wave’s machine to figure out the fastest way to send 10,000 Beijing taxi cabs to the nearest airport without creating a traffic jam. Other problems that D-Wave claims can be tackled with its system include optimizing cancer radio therapy, developing new drug types and designing more efficient water networks.
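
    Problems like the taxi-routing example are typically phrased for annealers as QUBOs (quadratic unconstrained binary optimization problems). The toy sketch below brute-forces a three-variable QUBO classically rather than submitting it to a D-Wave machine; the costs and penalty weight are invented for illustration:

    ```python
    # Toy QUBO: choose exactly one of three routes, preferring the cheapest.
    # Brute-forced classically here; an annealer samples low-energy states instead.
    from itertools import product

    cost = [3.0, 1.0, 2.0]      # invented per-route costs
    penalty = 10.0              # discourages picking zero or several routes

    def energy(x):
        chosen_cost = sum(c * xi for c, xi in zip(cost, x))
        constraint = penalty * (sum(x) - 1) ** 2    # "exactly one route" constraint
        return chosen_cost + constraint

    best = min(product([0, 1], repeat=3), key=energy)
    print(best, energy(best))   # (0, 1, 0) 1.0 -> the cheapest route wins
    ```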

    D-Wave has also courted controversy in the past because it wasn’t always clear to researchers whether its machines truly demonstrated quantum properties.

    Mulet, however, says the academic debate surrounding D-Wave’s approach is less interesting to CDL than what its startups do with it. “We’re proponents of building impactful companies,” he said, adding that CDL plans to incorporate other types of quantum computers when they become commercially available. “So it doesn’t really matter where your science and technology comes from as long as it creates value for customers.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    U Toronto Campus

    Established in 1827, the University of Toronto has one of the strongest research and teaching faculties in North America, presenting top students at all levels with an intellectual environment unmatched in depth and breadth on any other Canadian campus.

     
  • richardmitnick 8:53 pm on May 20, 2017 Permalink | Reply
    Tags: AI

    From McGill: “How Montreal aims to become a world centre of artificial intelligence” 

    McGill University


    Montreal Gazette

    May 20, 2017
    Bertrand Marotte

    Doina Precup, associate professor in computer sciences at McGill University, is the recipient of a Google research award. “People didn’t really care for this type of research,” she says of the early days of AI. John Mahoney / Montreal Gazette

    It might seem like an ambitious goal, but key players in Montreal’s rapidly growing artificial-intelligence sector are intent on transforming the city into a Silicon Valley of AI.

    Certainly, the flurry of activity these days indicates that AI in the city is on a roll. Impressive amounts of cash have been flowing into academia, public-private partnerships, research labs and startups active in AI in the Montreal area.

    And hopes are high that a three-day conference starting May 24 — AI Forum — will help burnish Montreal’s reputation as one of the world’s emerging AI advanced research centres and top talent pools in the suddenly very hot tech trend.

    Topics and issues on the agenda include the evolution of AI in Montreal and the transformative impact AI can have on business, industry and the economy.

    For example, researchers at Microsoft Corp. have successfully developed a computing system able to decipher conversational speech as accurately as humans do. The technology makes the same, or fewer, errors than professional transcribers and could be a huge boon to major users of transcription services like law firms and the courts.

    Setting the goal of attaining the critical mass of a Silicon Valley is “a nice point of reference,” said tech entrepreneur Jean-François Gagné, co-founder and chief executive officer of Element AI, an artificial intelligence startup factory launched last year.

    “It’s ambitious,” allowed Gagné, one of the keynote speakers at the AI Forum, held in partnership with the annual C2 Montréal international gabfest.

    Jean-François Gagné is co-founder and chief executive officer of Element AI, an artificial intelligence startup factory launched in Montreal last year. John Mahoney / Montreal Gazette

    The idea is to create a “fluid, dynamic ecosystem” in Montreal where AI research, startup, investment and commercialization activities all mesh productively together, said Gagné, who founded Element with researcher Nicolas Chapados and Université de Montréal deep learning pioneer Yoshua Bengio.

    “Artificial intelligence is seen now as a strategic asset to governments and to corporations. The fight for resources is global,” he said.

    The rise of Montreal — and rival Toronto — as AI hubs owes a lot to provincial and federal government funding.

    Ottawa promised $213 million last September to fund AI and big data research at four Montreal post-secondary institutions. Quebec has earmarked $100 million over the next five years for the development of an AI “super-cluster” in the Montreal region.

    The provincial government also created a 12-member blue-chip committee to develop a strategic plan to make Quebec an AI hub, co-chaired by Claridge Investments Ltd. CEO Pierre Boivin and Université de Montréal rector Guy Breton.

    But private-sector money has also been flowing in, particularly from some of the established tech giants competing in an intense AI race for innovative breakthroughs and the best brains in the business.

    Bengio’s Montreal Institute for Learning Algorithms (MILA) got $4.5 million last November from Alphabet Inc.’s Google, an aggressive backer of research in machine learning.

    (Machine learning makes computers smarter and able to learn from data-based information rather than simply responding to static instructions. It involves the creation of computer neural networks that mimic human brain activity and can program themselves to solve complex problems rather than having to be programmed.)
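
    As a loose illustration of that parenthetical (a toy example, not anything MILA or Google is building), the sketch below trains a tiny neural network to reproduce the XOR function from examples rather than from hand-written rules:

    ```python
    # Toy illustration: a tiny neural network learns XOR from examples
    # by gradient descent, instead of following hand-written rules.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer weights
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer weights

    for _ in range(5000):
        h = np.tanh(X @ W1 + b1)                    # hidden activations
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))        # predicted probability
        dp = p - y                                  # prediction error
        dW2, db2 = h.T @ dp, dp.sum(0)
        dh = (dp @ W2.T) * (1 - h ** 2)             # backpropagate through tanh
        dW1, db1 = X.T @ dh, dh.sum(0)
        for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            param -= 0.1 * grad                     # nudge weights downhill

    print(p.round(2).ravel())                       # approximately [0. 1. 1. 0.]
    ```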

    Google has also launched a deep learning and AI research lab at its existing offices in Montreal. (Deep learning is a subfield of machine learning.)

    Microsoft has launched a new venture fund whose first investment — an undisclosed amount — is in Element AI.

    The Redmond, Wash.-based software giant also plans to double its AI R&D team in Montreal to about 90 people over the next year, said Microsoft Canada spokeswoman Lisa Gibson.

    Montreal-based AI startups are involved in a variety of niche areas, including medical diagnostics like radiology — machines are now able to detect cancerous tumours better than radiologists — translation and voice mimicry.

    Government support and a relatively low cost of living have helped establish Montreal as an emerging AI advanced research centre, says McGill’s Doina Precup. John Mahoney / Montreal Gazette

    Lyrebird, founded by three U de M PhD students, has developed speech synthesis software that can copy anyone’s voice and make it say anything. Possible applications include using fake famous voices in audio-book readings and creating idiosyncratic voices for automated personal assistants.

    Botler AI, founded by Iranian-born engineer Amir Moravej, uses AI to help immigrants navigate the labyrinthine immigration process. The product uses actual cases and government guidelines to help steer users seeking admission to Quebec’s foreign workers and student program.

    U de M and McGill University are the academic bedrocks on which Montreal’s AI sector has been built. About 150 AI researchers toil at the two institutions, making the city one of the world’s largest basic deep learning centres.

    “We stuck to academia, which helped us build big labs with a lot of graduate students,” said Doina Precup, associate professor in computer sciences at McGill and recipient of a Google research award.

    “The training and the research started much before (AI) was popular, since the early 2000s, when people didn’t really care for this type of research.”

    Government backing over the years and Montreal’s relatively low cost of living compared with places like San Francisco have also been a boon, said Precup.

    Montreal’s rich talent pool is a major reason Waterloo, Ont.-based language-recognition startup Maluuba decided to open a research lab in the city, said the company’s vice-president of product development, Mohamed Musbah.

    “It’s been incredible so far. The work being done in this space is putting Montreal on a pedestal around the world,” he said.

    Microsoft struck a deal this year to acquire Maluuba, which is working to crack one of the holy grails of deep learning: teaching machines to read like the human brain does. Among the company’s software developments are voice assistants for smartphones.

    Maluuba has also partnered with an undisclosed auto manufacturer to develop speech recognition applications for vehicles. Voice recognition applied to cars can include such things as asking for a weather report or making remote requests for the vehicle to unlock itself.

    CEO Jean-François Gagné consults with engineer Philippe Mathieu at Element AI offices in Montreal. “We want to be part of that conversation — shaping what AI is going to look like,” Gagné says of the startup. John Mahoney / Montreal Gazette

    Musbah doesn’t view Toronto — which holds bragging rights to also being a significant global AI centre — as a threat. “There’s a productive competitive relationship between Toronto and Montreal,” he said.

    “So far, (the rivalry) has been contributing positively to Canada” as well as to efforts to reverse the AI brain drain to the U.S. over the past several years and retain the best AI minds here at home, he said.

    Element AI aims to have 100 employees by the end of June, which will make it the largest private AI group in Canada, said Gagné.

    The organization, a private-sector/academia hybrid, wants to help companies get access to cutting-edge technology, invest in startups and generally act as a counterweight to the massive heft of titans like Facebook, Google, Apple, Amazon, Microsoft and China’s Baidu, he said.

    Regulatory and ethical concerns will be among the topics to be discussed at this month’s AI Forum. “We want to be part of that conversation, shaping what AI is going to look like,” said Gagné.

    Two issues he singles out as critical are the potential for loss of privacy and job disruption resulting from AI technology.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    All about McGill

    With some 300 buildings, more than 38,500 students and 250,000 living alumni, and a reputation for excellence that reaches around the globe, McGill has carved out a spot among the world’s greatest universities.
    Founded in Montreal, Quebec, in 1821, McGill is a leading Canadian post-secondary institution. It has two campuses, 11 faculties, 11 professional schools, 300 programs of study and some 39,000 students, including more than 9,300 graduate students. McGill attracts students from over 150 countries around the world, its 8,200 international students making up 21 per cent of the student body.

     
  • richardmitnick 1:39 pm on May 20, 2017 Permalink | Reply
    Tags: AI

    From aeon: “Creative blocks” – How close are we to AI?


    aeon

    Illustration by Sam Green

    The very laws of physics imply that artificial intelligence must be possible. What’s holding us up?

    Undated
    David Deutsch

    It is uncontroversial that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos. It is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances.

    But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially — the field of ‘artificial general intelligence’ or AGI — has made no progress whatever during the entire six decades of its existence.

    Why? Because, as an unknown sage once remarked, ‘it ain’t what we don’t know that causes trouble, it’s what we know for sure that just ain’t so’ (and if you know that sage was Mark Twain, then what you know ain’t so either). I cannot think of any other significant field of knowledge in which the prevailing wisdom, not only in society at large but also among experts, is so beset with entrenched, overlapping, fundamental errors. Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough.

    Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation. This entails that everything that the laws of physics require a physical object to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory. The first people to guess this and to grapple with its ramifications were the 19th-century mathematician Charles Babbage and his assistant Ada, Countess of Lovelace. It remained a guess until the 1980s, when I proved it using the quantum theory of computation.

    Babbage came upon universality from an unpromising direction. He had been much exercised by the fact that tables of mathematical functions (such as logarithms and cosines) contained mistakes. At the time they were compiled by armies of clerks, known as ‘computers’, which is the origin of the word. Being human, the computers were fallible. There were elaborate systems of error correction, but even proofreading for typographical errors was a nightmare. Such errors were not merely inconvenient and expensive: they could cost lives. For instance, the tables were extensively used in navigation. So, Babbage designed a mechanical calculator, which he called the Difference Engine. It would be programmed by initialising certain cogs. The mechanism would drive a printer, in order to automate the production of the tables. That would bring the error rate down to negligible levels, to the eternal benefit of humankind.
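
    The Difference Engine tabulated polynomial functions by the method of finite differences, which after an initial set-up needs nothing but repeated addition, which is exactly what cogs can do. A small sketch of that method (the polynomial here is arbitrary) shows the idea:

    ```python
    # Tabulate f(x) = x**2 + 2*x + 1 by repeated addition, as a difference
    # engine would: once the difference columns are primed, no multiplication
    # is ever needed.
    def initial_differences(values, order):
        # First value of f and of its first `order` finite differences.
        diffs = []
        for _ in range(order + 1):
            diffs.append(values[0])
            values = [b - a for a, b in zip(values, values[1:])]
        return diffs

    def f(x):
        return x**2 + 2*x + 1

    diffs = initial_differences([f(x) for x in range(4)], order=2)   # [1, 3, 2]

    table = []
    for _ in range(8):
        table.append(diffs[0])
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]     # each column absorbs the one to its right

    print(table)                         # [1, 4, 9, 16, 25, 36, 49, 64]
    print([f(x) for x in range(8)])      # identical: the additions reproduce f
    ```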

    Unfortunately, Babbage’s project-management skills were so poor that despite spending vast amounts of his own and the British government’s money, he never managed to get the machine built. Yet his design was sound, and has since been implemented by a team led by the engineer Doron Swade at the Science Museum in London.

    Slow but steady: a detail from Charles Babbage’s Difference Engine, assembled nearly 170 years after it was designed. Courtesy Science Museum

    Here was a cognitive task that only humans had been able to perform. Nothing else in the known universe even came close to matching them, but the Difference Engine would perform better than the best humans. And therefore, even at that faltering, embryonic stage of the history of automated computation — before Babbage had considered anything like AGI — we can see the seeds of a philosophical puzzle that is controversial to this day: what exactly is the difference between what the human ‘computers’ were doing and what the Difference Engine could do? What type of cognitive task, if any, could either type of entity perform that the other could not in principle perform too?

    One immediate difference between them was that the sequence of elementary steps (of counting, adding, multiplying by 10, and so on) that the Difference Engine used to compute a given function did not mirror those of the human ‘computers’. That is to say, they used different algorithms. In itself, that is not a fundamental difference: the Difference Engine could have been modified with additional gears and levers to mimic the humans’ algorithm exactly. Yet that would have achieved nothing except an increase in the error rate, due to increased numbers of glitches in the more complex machinery. Similarly, the humans, given different instructions but no hardware changes, would have been capable of emulating every detail of the Difference Engine’s method — and doing so would have been just as perverse. It would not have copied the Engine’s main advantage, its accuracy, which was due to hardware not software. It would only have made an arduous, boring task even more arduous and boring, which would have made errors more likely, not less.

    For humans, that difference in outcomes — the different error rate — would have been caused by the fact that computing exactly the same table with two different algorithms felt different. But it would not have felt different to the Difference Engine. It had no feelings. Experiencing boredom was one of many cognitive tasks at which the Difference Engine would have been hopelessly inferior to humans. Nor was it capable of knowing or proving, as Babbage did, that the two algorithms would give identical results if executed accurately. Still less was it capable of wanting, as he did, to benefit seafarers and humankind in general. In fact, its repertoire was confined to evaluating a tiny class of specialised mathematical functions (basically, power series in a single variable).

    Thinking about how he could enlarge that repertoire, Babbage first realised that the programming phase of the Engine’s operation could itself be automated: the initial settings of the cogs could be encoded on punched cards. And then he had an epoch-making insight. The Engine could be adapted to punch new cards and store them for its own later use, making what we today call a computer memory. If it could run for long enough — powered, as he envisaged, by a steam engine — and had an unlimited supply of blank cards, its repertoire would jump from that tiny class of mathematical functions to the set of all computations that can possibly be performed by any physical object. That’s universality.

    Babbage called this improved machine the Analytical Engine. He and Lovelace understood that its universality would give it revolutionary potential to improve almost every scientific endeavour and manufacturing process, as well as everyday life. They showed remarkable foresight about specific applications. They knew that it could be programmed to do algebra, play chess, compose music, process images and so on. Unlike the Difference Engine, it could be programmed to use exactly the same method as humans used to make those tables, to prove that the two methods must give the same answers, and to do the same error-checking and proofreading (using, say, optical character recognition) as well.

    But could the Analytical Engine feel the same boredom? Could it feel anything? Could it want to better the lot of humankind (or of Analytical Enginekind)? Could it disagree with its programmer about its programming? Here is where Babbage and Lovelace’s insight failed them. They thought that some cognitive functions of the human brain were beyond the reach of computational universality. As Lovelace wrote, ‘The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.’

    And yet ‘originating things’, ‘following analysis’, and ‘anticipating analytical relations and truths’ are all behaviours of brains and, therefore, of the atoms of which brains are composed. Such behaviours obey the laws of physics. So it follows inexorably from universality that, with the right program, an Analytical Engine would undergo them too, atom by atom and step by step. True, the atoms in the brain would be emulated by metal cogs and levers rather than organic material — but in the present context, inferring anything substantive from that distinction would be rank racism.

    Despite their best efforts, Babbage and Lovelace failed almost entirely to convey their enthusiasm about the Analytical Engine to others. In one of the great might-have-beens of history, the idea of a universal computer languished on the back burner of human thought. There it remained until the 20th century, when Alan Turing arrived with a spectacular series of intellectual tours de force, laying the foundations of the classical theory of computation, establishing the limits of computability, participating in the building of the first universal classical computer and, by helping to crack the Enigma code, contributing to the Allied victory in the Second World War.

    Alan Turing. Credit: GIZMODO

    Turing fully understood universality. In his 1950 paper ‘Computing Machinery and Intelligence’, he used it to sweep away what he called ‘Lady Lovelace’s objection’, and every other objection both reasonable and unreasonable. He concluded that a computer program whose repertoire included all the distinctive attributes of the human brain — feelings, free will, consciousness and all — could be written.

    This astounding claim split the intellectual world into two camps, one insisting that AGI was none the less impossible, and the other that it was imminent. Both were mistaken. The first, initially predominant, camp cited a plethora of reasons ranging from the supernatural to the incoherent. All shared the basic mistake that they did not understand what computational universality implies about the physical world, and about human brains in particular.

    But it is the other camp’s basic mistake that is responsible for the lack of progress. It was a failure to recognise that what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be. It cannot be programmed by any of the techniques that suffice for writing any other type of program. Nor can it be achieved merely by improving their performance at tasks that they currently do perform, no matter by how much.

    Why? I call the core functionality in question creativity: the ability to produce new explanations. For example, suppose that you want someone to write you a computer program to convert temperature measurements from Centigrade to Fahrenheit. Even the Difference Engine could have been programmed to do that. A universal computer like the Analytical Engine could achieve it in many more ways. To specify the functionality to the programmer, you might, for instance, provide a long list of all inputs that you might ever want to give it (say, all numbers from -89.2 to +57.8 in increments of 0.1) with the corresponding correct outputs, so that the program could work by looking up the answer in the list on each occasion. Alternatively, you might state an algorithm, such as ‘divide by five, multiply by nine, add 32 and round to the nearest 10th’. The point is that, however the program worked, you would consider it to meet your specification — to be a bona fide temperature converter — if, and only if, it always correctly converted whatever temperature you gave it, within the stated range.
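
    Deutsch’s two ways of specifying the converter, an exhaustive input-output list and a stated algorithm, can both be written down directly, and either one meets the behavioural specification over the stated range. A minimal sketch (the test value at the end is arbitrary):

    ```python
    # Two ways to meet the same behavioural specification for Celsius -> Fahrenheit.

    def convert_by_rule(celsius):
        # The stated algorithm: divide by five, multiply by nine, add 32,
        # round to the nearest tenth.
        return round(celsius / 5 * 9 + 32, 1)

    # The exhaustive list: every input from -89.2 to +57.8 in steps of 0.1,
    # paired with its correct output, consulted by lookup.
    lookup = {round(-89.2 + 0.1 * i, 1): round((-89.2 + 0.1 * i) / 5 * 9 + 32, 1)
              for i in range(1471)}

    def convert_by_lookup(celsius):
        return lookup[round(celsius, 1)]

    # Either program counts as a bona fide converter within the stated range.
    assert convert_by_rule(36.6) == convert_by_lookup(36.6) == 97.9
    ```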

    Now imagine that you require a program with a more ambitious functionality: to address some outstanding problem in theoretical physics — say the nature of Dark Matter — with a new explanation that is plausible and rigorous enough to meet the criteria for publication in an academic journal.

    Such a program would presumably be an AGI (and then some). But how would you specify its task to computer programmers? Never mind that it’s more complicated than temperature conversion: there’s a much more fundamental difficulty. Suppose you were somehow to give them a list, as with the temperature-conversion program, of explanations of Dark Matter that would be acceptable outputs of the program. If the program did output one of those explanations later, that would not constitute meeting your requirement to generate new explanations. For none of those explanations would be new: you would already have created them yourself in order to write the specification. So, in this case, and actually in all other cases of programming genuine AGI, only an algorithm with the right functionality would suffice. But writing that algorithm (without first making new discoveries in physics and hiding them in the program) is exactly what you wanted the programmers to do!

    I’m sorry Dave, I’m afraid I can’t do that: HAL, the computer intelligence from Stanley Kubrick’s 2001: A Space Odyssey. Courtesy MGM

    Traditionally, discussions of AGI have evaded that issue by imagining only a test of the program, not its specification — the traditional test having been proposed by Turing himself. It was that (human) judges be unable to detect whether the program is human or not, when interacting with it via some purely textual medium so that only its cognitive abilities would affect the outcome. But that test, being purely behavioural, gives no clue for how to meet the criterion. Nor can it be met by the technique of ‘evolutionary algorithms’: the Turing test cannot itself be automated without first knowing how to write an AGI program, since the ‘judges’ of a program need to have the target ability themselves. (For how I think biological evolution gave us the ability in the first place, see my book The Beginning of Infinity.)

    And in any case, AGI cannot possibly be defined purely behaviourally. In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs.

    The upshot is that, unlike any functionality that has ever been programmed to date, this one can be achieved neither by a specification nor a test of the outputs. What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not.

    Such a theory is beyond present-day knowledge. What we do know about epistemology implies that any approach not directed towards that philosophical breakthrough must be futile. Unfortunately, what we know about epistemology is contained largely in the work of the philosopher Karl Popper and is almost universally underrated and misunderstood (even — or perhaps especially — by philosophers). For example, it is still taken for granted by almost every authority that knowledge consists of justified, true beliefs and that, therefore, an AGI’s thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable. But an AGI programmer needs to know where the theories come from in the first place. The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’. But that is impossible. I myself remember, for example, observing on thousands of consecutive occasions that on calendars the first two digits of the year were ‘19’. I never observed a single exception until, one day, they started being ‘20’. Not only was I not surprised, I fully expected that there would be an interval of 17,000 years until the next such ‘19’, a period that neither I nor any other human being had previously experienced even once.

    How could I have ‘extrapolated’ that there would be such a sharp departure from an unbroken pattern of experiences, and that a never-yet-observed process (the 17,000-year interval) would follow? Because it is simply not true that knowledge comes from extrapolating repeated observations. Nor is it true that ‘the future is like the past’, in any sense that one could detect in advance without already knowing the explanation. The future is actually unlike the past in most ways. Of course, given the explanation, those drastic ‘changes’ in the earlier pattern of 19s are straightforwardly understood as being due to an invariant underlying pattern or law. But the explanation always comes first. Without that, any continuation of any sequence constitutes ‘the same thing happening again’ under some explanation.

    So, why is it still conventional wisdom that we get our theories by induction? For some reason, beyond the scope of this article, conventional wisdom adheres to a trope called the ‘problem of induction’, which asks: ‘How and why can induction nevertheless somehow be done, yielding justified true beliefs after all, despite being impossible and invalid respectively?’ Thanks to this trope, every disproof (such as that by Popper and David Miller back in 1988), rather than ending inductivism, simply causes the mainstream to marvel in even greater awe at the depth of the great ‘problem of induction’.

    In regard to how the AGI problem is perceived, this has the catastrophic effect of simultaneously framing it as the ‘problem of induction’, and making that problem look easy, because it casts thinking as a process of predicting that future patterns of sensory experience will be like past ones. That looks like extrapolation — which computers already do all the time (once they are given a theory of what causes the data). But in reality, only a tiny component of thinking is about prediction at all, let alone prediction of our sensory experiences. We think about the world: not just the physical world but also worlds of abstractions such as right and wrong, beauty and ugliness, the infinite and the infinitesimal, causation, fiction, fears, and aspirations — and about thinking itself.

    Now, the truth is that knowledge consists of conjectured explanations — guesses about what really is (or really should be, or might be) out there in all those worlds. Even in the hard sciences, these guesses have no foundations and don’t need justification. Why? Because genuine knowledge, though by definition it does contain truth, almost always contains error as well. So it is not ‘true’ in the sense studied in mathematics and logic. Thinking consists of criticising and correcting partially true guesses with the intention of locating and eliminating the errors and misconceptions in them, not generating or justifying extrapolations from sense data. And therefore, attempts to work towards creating an AGI that would do the latter are just as doomed as an attempt to bring life to Mars by praying for a Creation event to happen there.

    Currently one of the most influential versions of the ‘induction’ approach to AGI (and to the philosophy of science) is Bayesianism, unfairly named after the 18th-century mathematician Thomas Bayes, who was quite innocent of the mistake. The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values — the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour while those that are ‘punished’ by ‘experience’ are extinguished. As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI. It is ironic that mainstream psychology has largely renounced behaviourism, which has been recognised as both inadequate and inhuman, while computer science, thanks to philosophical misconceptions such as inductivism, still intends to manufacture human-type cognition on essentially behaviourist lines.
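
    For concreteness, the doctrine being criticised here treats an idea’s credence as a probability revised by Bayes’ theorem as evidence arrives. A toy update, with arbitrary numbers, looks like this:

    ```python
    # Toy Bayesian update: credence in a hypothesis revised after one piece
    # of evidence, using arbitrary numbers.  P(H|E) = P(E|H) * P(H) / P(E)
    prior = 0.5                   # initial credence in hypothesis H
    likelihood_if_true = 0.9      # P(evidence | H)
    likelihood_if_false = 0.2     # P(evidence | not H)

    evidence_prob = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    posterior = likelihood_if_true * prior / evidence_prob

    print(round(posterior, 3))    # 0.818: the credence rises after the evidence
    ```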

    Furthermore, despite the above-mentioned enormous variety of things that we create explanations about, our core method of doing so, namely Popperian conjecture and criticism, has a single, unified, logic. Hence the term ‘general’ in AGI. A computer program either has that yet-to-be-fully-understood logic, in which case it can perform human-type thinking about anything, including its own thinking and how to improve it, or it doesn’t, in which case it is in no sense an AGI. Consequently, another hopeless approach to AGI is to start from existing knowledge of how to program specific tasks — such as playing chess, performing statistical analysis or searching databases — and then to try to improve those programs in the hope that this will somehow generate AGI as a side effect, as happened to Skynet in the Terminator films.

    Nowadays, an accelerating stream of marvellous and useful functionalities for computers are coming into use, some of them sooner than had been foreseen even quite recently. But what is neither marvellous nor useful is the argument that often greets these developments, that they are reaching the frontiers of AGI. An especially severe outbreak of this occurred recently when a search engine called Watson, developed by IBM, defeated the best human player of a word-association database-searching game called Jeopardy. ‘Smartest machine on Earth’, the PBS documentary series Nova called it, and characterised its function as ‘mimicking the human thought process with software.’ But that is precisely what it does not do.

    The thing is, playing Jeopardy — like every one of the computational functionalities at which we rightly marvel today — is firmly among the functionalities that can be specified in the standard, behaviourist way that I discussed above. No Jeopardy answer will ever be published in a journal of new discoveries. The fact that humans perform that task less well by using creativity to generate the underlying guesses is not a sign that the program has near-human cognitive abilities. The exact opposite is true, for the two methods are utterly different from the ground up. Likewise, when a computer program beats a grandmaster at chess, the two are not using even remotely similar algorithms. The grandmaster can explain why it seemed worth sacrificing the knight for strategic advantage and can write an exciting book on the subject. The program can only prove that the sacrifice does not force a checkmate, and cannot write a book because it has no clue even what the objective of a chess game is. Programming AGI is not the same sort of problem as programming Jeopardy or chess.

    An AGI is qualitatively, not quantitatively, different from all other computer programs. The Skynet misconception likewise informs the hope that AGI is merely an emergent property of complexity, or that increased computer power will bring it forth (as if someone had already written an AGI program but it takes a year to utter each sentence). It is behind the notion that the unique abilities of the brain are due to its ‘massive parallelism’ or to its neuronal architecture, two ideas that violate computational universality. Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough.

    In 1950, Turing expected that by the year 2000, ‘one will be able to speak of machines thinking without expecting to be contradicted.’ In 1968, Arthur C. Clarke expected it by 2001. Yet today in 2012 no one is any better at programming an AGI than Turing himself would have been.

    This does not surprise people in the first camp, the dwindling band of opponents of the very possibility of AGI. But for the people in the other camp (the AGI-is-imminent one) such a history of failure cries out to be explained — or, at least, to be rationalised away. And indeed, unfazed by the fact that they could never induce such rationalisations from experience as they expect their AGIs to do, they have thought of many.

    The very term ‘AGI’ is an example of one. The field used to be called ‘AI’ — artificial intelligence. But ‘AI’ was gradually appropriated to describe all sorts of unrelated computer programs such as game players, search engines and chatbots, until the G for ‘general’ was added to make it possible to refer to the real thing again, but now with the implication that an AGI is just a smarter species of chatbot.

    Another class of rationalisations runs along the general lines of: AGI isn’t that great anyway; existing software is already as smart or smarter, but in a non-human way, and we are too vain or too culturally biased to give it due credit. This gets some traction because it invokes the persistently popular irrationality of cultural relativism, and also the related trope that ‘we humans pride ourselves on being the paragon of animals, but that pride is misplaced because they, too, have language, tools … and self-awareness.’

    Remember the significance attributed to Skynet’s becoming ‘self-aware’? That’s just another philosophical misconception, sufficient in itself to block any viable approach to AGI. The fact is that present-day software developers could straightforwardly program a computer to have ‘self-awareness’ in the behavioural sense — for example, to pass the ‘mirror test’ of being able to use a mirror to infer facts about itself — if they wanted to. As far as I am aware, no one has done so, presumably because it is a fairly useless ability as well as a trivial one.

    Perhaps the reason that self-awareness has its undeserved reputation for being connected with AGI is that, thanks to Kurt Gödel’s theorem and various controversies in formal logic in the 20th century, self-reference of any kind has acquired a reputation for woo-woo mystery. So has consciousness. And here we have the problem of ambiguous terminology again: the term ‘consciousness’ has a huge range of meanings. At one end of the scale there is the philosophical problem of the nature of subjective sensations (‘qualia’), which is intimately connected with the problem of AGI. At the other, ‘consciousness’ is simply what we lose when we are put under general anaesthetic. Many animals certainly have that.

    AGIs will indeed be capable of self-awareness — but that is because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves. This does not mean that apes who pass the mirror test have any hint of the attributes of ‘general intelligence’ of which AGI would be an artificial version. Indeed, Richard Byrne’s wonderful research into gorilla memes has revealed how apes are able to learn useful behaviours from each other without ever understanding what they are for: the explanation of how ape cognition works really is behaviouristic.

    Ironically, that group of rationalisations (AGI has already been done/is trivial/ exists in apes/is a cultural conceit) are mirror images of arguments that originated in the AGI-is-impossible camp. For every argument of the form ‘You can’t do AGI because you’ll never be able to program the human soul, because it’s supernatural’, the AGI-is-easy camp has the rationalisation, ‘If you think that human cognition is qualitatively different from that of apes, you must believe in a supernatural soul.’

    ‘Anything we don’t yet know how to program is called human intelligence,’ is another such rationalisation. It is the mirror image of the argument advanced by the philosopher John Searle (from the ‘impossible’ camp), who has pointed out that before computers existed, steam engines and later telegraph systems were used as metaphors for how the human mind must work. Searle argues that the hope for AGI rests on a similarly insubstantial metaphor, namely that the mind is ‘essentially’ a computer program. But that’s not a metaphor: the universality of computation follows from the known laws of physics.

    Some, such as the mathematician Roger Penrose, have suggested that the brain uses quantum computation, or even hyper-quantum computation relying on as-yet-unknown physics beyond quantum theory, and that this explains the failure to create AGI on existing computers. To explain why I, and most researchers in the quantum theory of computation, disagree that this is a plausible source of the human brain’s unique functionality is beyond the scope of this essay. (If you want to know more, read Litt et al’s 2006 paper ‘Is the Brain a Quantum Computer?’, published in the journal Cognitive Science.)

    That AGIs are people has been implicit in the very concept from the outset. If there were a program that lacked even a single cognitive ability that is characteristic of people, then by definition it would not qualify as an AGI. Using non-cognitive attributes (such as percentage carbon content) to define personhood would, again, be racist. But the fact that the ability to create new explanations is the unique, morally and intellectually significant functionality of people (humans and AGIs), and that they achieve this functionality by conjecture and criticism, changes everything.

    Currently, personhood is often treated symbolically rather than factually — as an honorific, a promise to pretend that an entity (an ape, a foetus, a corporation) is a person in order to achieve some philosophical or practical aim. This isn’t good. Never mind the terminology; change it if you like, and there are indeed reasons for treating various entities with respect, protecting them from harm and so on. All the same, the distinction between actual people, defined by that objective criterion, and other entities has enormous moral and practical significance, and is going to become vital to the functioning of a civilisation that includes AGIs.

    For example, the mere fact that it is not the computer but the running program that is a person raises unsolved philosophical problems that will become practical, political controversies as soon as AGIs exist. Once an AGI program is running in a computer, to deprive it of that computer would be murder (or at least false imprisonment or slavery, as the case may be), just like depriving a human mind of its body. But unlike a human body, an AGI program can be copied into multiple computers at the touch of a button. Are those programs, while they are still executing identical steps (ie before they have become differentiated due to random choices or different experiences), the same person or many different people? Do they get one vote, or many? Is deleting one of them murder, or a minor assault? And if some rogue programmer, perhaps illegally, creates billions of different AGI people, either on one computer or on many, what happens next? They are still people, with rights. Do they all get the vote?

    Furthermore, in regard to AGIs, like any other entities with creativity, we have to forget almost all existing connotations of the word ‘programming’. To treat AGIs like any other computer programs would constitute brainwashing, slavery, and tyranny. And cruelty to children, too, for ‘programming’ an already-running AGI, unlike all other programming, constitutes education. And it constitutes debate, moral as well as factual. To ignore the rights and personhood of AGIs would not only be the epitome of evil, but also a recipe for disaster: creative beings cannot be enslaved forever.

    Some people are wondering whether we should welcome our new robot overlords. Some hope to learn how we can rig their programming to make them constitutionally unable to harm humans (as in Isaac Asimov’s ‘laws of robotics’), or to prevent them from acquiring the theory that the universe should be converted into paper clips (as imagined by Nick Bostrom). None of these are the real problem. It has always been the case that a single exceptionally creative person can be thousands of times as productive — economically, intellectually or whatever — as most people; and that such a person could do enormous harm were he to turn his powers to evil instead of good.

    These phenomena have nothing to do with AGIs. The battle between good and evil ideas is as old as our species and will continue regardless of the hardware on which it is running. The issue is: we want the intelligences with (morally) good ideas always to defeat the evil intelligences, biological and artificial; but we are fallible, and our own conception of ‘good’ needs continual improvement. How should society be organised so as to promote that improvement? ‘Enslave all intelligence’ would be a catastrophically wrong answer, and ‘enslave all intelligence that doesn’t look like us’ would not be much better.

    One implication is that we must stop regarding education (of humans or AGIs alike) as instruction — as a means of transmitting existing knowledge unaltered, and causing existing values to be enacted obediently. As Popper wrote (in the context of scientific discovery, but it applies equally to the programming of AGIs and the education of children): ‘there is no such thing as instruction from without … We do not discover new facts or new effects by copying them, or by inferring them inductively from observation, or by any other method of instruction by the environment. We use, rather, the method of trial and the elimination of error.’ That is to say, conjecture and criticism. Learning must be something that newly created intelligences do, and control, for themselves.

    I do not highlight all these philosophical issues because I fear that AGIs will be invented before we have developed the philosophical sophistication to understand them and to integrate them into civilisation. It is for almost the opposite reason: I am convinced that the whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology, and that the philosophical progress that is essential to their future integration is also a prerequisite for developing them in the first place.

    The lack of progress in AGI is due to a severe logjam of misconceptions. Without Popperian epistemology, one cannot even begin to guess what detailed functionality must be achieved to make an AGI. And Popperian epistemology is not widely known, let alone understood well enough to be applied. Thinking of an AGI as a machine for translating experiences, rewards and punishments into ideas (or worse, just into behaviours) is like trying to cure infectious diseases by balancing bodily humours: futile because it is rooted in an archaic and wildly mistaken world view.

    Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose ‘thinking’ is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely creativity.

    Clearing this logjam will not, by itself, provide the answer. Yet the answer, conceived in those terms, cannot be all that difficult. For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 5:20 pm on March 30, 2017 Permalink | Reply
    Tags: AI, , Quantum computers use quantum bits (or qubits), , Rigetti Computing   

    From Futurism: “This Startup Plans to Revolutionize Quantum Computing Technology Faster Than Ever” 

    futurism-bloc

    Futurism

    3.30.17
    Dom Galeon

    Investor Interest

    Since Rigetti Computing launched three years ago, the Berkeley- and Fremont-based startup has attracted a host of investors — including the private American venture capital firm Andreessen Horowitz (also known as A16Z). As of this week, Rigetti Computing has raised a total of $64 million after successfully closing Series A and Series B funding rounds.

    The startup is attracting investors primarily because it promises to revolutionize quantum computing technology: “Rigetti has assembled an impressive team of scientists and engineers building the combination of hardware and software that has the potential to finally unlock quantum computing for computational chemistry, machine learning and much more,” Vijay Pande, a general partner at A16Z, said when the fundraising was announced.

    Quantum Problem Solving

    Quantum computers are expected to change computing forever in large part due to their speed and processing power. Instead of processing information the way existing systems do — relying on bits of 0s and 1s operating on miniature transistors — quantum computers use quantum bits (or qubits) that can be both a 0 and a 1 at the same time. This is thanks to a quantum phenomenon called superposition. In existing versions of quantum computers, this has been achieved using individual photons.
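
    To make superposition slightly more concrete: a single qubit can be described by two complex amplitudes, and the squared magnitude of each amplitude gives the probability of reading out 0 or 1. The short Python sketch below is only an illustration of that bookkeeping; it has nothing to do with Rigetti’s actual hardware or software stack.

        import numpy as np

        # A qubit state is a length-2 complex vector [amp_0, amp_1] with unit norm.
        zero = np.array([1, 0], dtype=complex)

        # The Hadamard gate puts the qubit into an equal superposition of 0 and 1.
        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
        state = H @ zero

        probs = np.abs(state) ** 2                        # -> [0.5, 0.5]
        samples = np.random.choice([0, 1], size=10, p=probs)
        print(probs, samples)                             # each readout gives a definite 0 or 1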

    “Quantum computing will enable people to tackle a whole new set of problems that were previously unsolvable,” said Chad Rigetti, the startup’s founder and CEO. “This is the next generation of advanced computing technology. The potential to make a positive impact on humanity is enormous.” This translates to computing systems that are capable of handling problems deemed too difficult for today’s computers. Such applications could be found everywhere from advanced medical research to improved encryption and cybersecurity.

    How is Rigetti Computing planning to revolutionize the technology? For starters, they’re building a quantum computing platform for artificial intelligence and computational chemistry. This can help overcome the logistical challenges that currently plague quantum computer development. They also have an API for quantum computing in the cloud, called Forest, which recently opened for private beta testing.

    Rigetti expects it will be at least two more years before their technology can be applied to real world problems. But for interested investors, investing in such a technological game-changer sooner rather than later makes good business sense.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Futurism covers the breakthrough technologies and scientific discoveries that will shape humanity’s future. Our mission is to empower our readers and drive the development of these transformative technologies towards maximizing human potential.

     
  • richardmitnick 2:36 pm on January 17, 2017 Permalink | Reply
    Tags: AI, , It's a bad time to be a physicist, Physicists run to Silicon Valley, ,   

    From WIRED: “Move Over, Coders—Physicists Will Soon Rule Silicon Valley” 

    Wired logo

    Wired

    1
    Oscar Boykin. Ariel Zambelich/WIRED

    It’s a bad time to be a physicist.

    At least, that’s what Oscar Boykin says. He majored in physics at the Georgia Institute of Technology and in 2002 he finished a physics PhD at UCLA. But four years ago, physicists at the Large Hadron Collider in Switzerland discovered the Higgs boson, a subatomic particle first predicted in the 1960s. As Boykin points out, everyone expected it. The Higgs didn’t mess with the theoretical models of the universe. It didn’t change anything or give physicists anything new to strive for. “Physicists are excited when there’s something wrong with physics, and we’re in a situation now where there’s not a lot that’s wrong,” he says. “It’s a disheartening place for a physicist to be in.” Plus, the pay isn’t too good.

    Boykin is no longer a physicist. He’s a Silicon Valley software engineer. And it’s a very good time to be one of those.

    Boykin works at Stripe, a $9-billion startup that helps businesses accept payments online. He helps build and operate software systems that collect data from across the company’s services, and he works to predict the future of these services, including when, where, and how fraudulent transactions will arrive. As a physicist, he’s ideally suited to the job, which requires both extreme math and abstract thought. And yet, unlike a physicist, he’s working in a field that now offers endless challenges and possibilities. Plus, the pay is great.

    If physics and software engineering were subatomic particles, Silicon Valley has turned into the place where the fields collide. Boykin works with three other physicists at Stripe. In December, when General Electric acquired the machine learning startup Wise.io, CEO Jeff Immelt boasted that he had just grabbed a company packed with physicists, most notably UC Berkeley astrophysicist Joshua Bloom. The open source machine learning software H2O, used by 70,000 data scientists across the globe, was built with help from Swiss physicist Arno Candel, who once worked at the SLAC National Accelerator Laboratory. Vijay Narayanan, Microsoft’s head of data science, is an astrophysicist, and several other physicists work under him.

    It’s not on purpose, exactly. “We didn’t go into the physics kindergarten and steal a basket of children,” says Stripe president and co-founder John Collison. “It just happened.” And it’s happening across Silicon Valley. Because structurally and technologically, the things that just about every internet company needs to do are more and more suited to the skill set of a physicist.

    The Naturals

    Of course, physicists have played a role in computer technology since its earliest days, just as they’ve played a role in so many other fields. John Mauchly, who helped design the ENIAC, one of the earliest computers, was a physicist. Dennis Ritchie, the father of the C programming language, was too.

    But this is a particularly ripe moment for physicists in computer tech, thanks to the rise of machine learning, where machines learn tasks by analyzing vast amounts of data. This new wave of data science and AI is something that suits physicists right down to their socks.

    Among other things, the industry has embraced neural networks, software that aims to mimic the structure of the human brain. But these neural networks are really just math on an enormous scale, mostly linear algebra and probability theory. Computer scientists aren’t necessarily trained in these areas, but physicists are. “The only thing that is really new to physicists is learning how to optimize these neural networks, training them, but that’s relatively straightforward,” Boykin says. “One technique is called ‘Newton’s method.’ Newton the physicist, not some other Newton.”
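
    Boykin’s point is easy to make concrete. One layer of a neural network is just a matrix multiplication followed by a simple nonlinearity, and training is numerical optimization; the snippet below (a generic illustration with made-up toy values, not Stripe’s code) shows a single forward pass and one Newton-method step on a one-parameter toy loss.

        import numpy as np

        # One network layer: a matrix multiply plus a nonlinearity -- plain linear algebra.
        x = np.random.randn(4)                     # input features
        W, b = np.random.randn(3, 4), np.zeros(3)  # layer weights and biases
        hidden = np.tanh(W @ x + b)                # layer output

        # Training is optimization. One Newton step on the toy loss L(w) = (w - 2)^2:
        grad = lambda w: 2 * (w - 2)               # first derivative
        curv = lambda w: 2.0                       # second derivative
        w = 5.0
        w = w - grad(w) / curv(w)                  # Newton's method lands on the minimum, w = 2
        print(hidden, w)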

    Chris Bishop, who heads Microsoft’s Cambridge research lab, felt the same way thirty years ago, when deep neural networks first started to show promise in the academic world. That’s what led him from physics into machine learning. “There is something very natural about a physicist going into machine learning,” he says, “more natural than a computer scientist.”

    The Challenge Space

    Ten years ago, Boykin says, so many of his old physics pals were moving into the financial world. That same flavor of mathematics was also enormously useful on Wall Street as a way of predicting where the markets would go. One key method was the Black-Scholes equation, a means of determining the value of a financial derivative. But Black-Scholes helped foment the great crash of 2008, and now, Boykin and other physicists say that far more of their colleagues are moving into data science and other kinds of computer tech.

    Earlier this decade, physicists arrived at the top tech companies to help build so-called Big Data software, systems that juggle data across hundreds or even thousands of machines. At Twitter, Boykin helped build one called Summingbird, and three guys who met in the physics department at MIT built similar software at a startup called Cloudant. Physicists know how to handle data—at MIT, Cloudant’s founders handled massive datasets from the Large Hadron Collider—and building these enormously complex systems requires its own breed of abstract thought. Then, once these systems were built, many of those physicists helped put the data they harnessed to use.

    In the early days of Google, one of the key people building the massively distributed systems in the company’s engine room was Yonatan Zunger, who has a PhD in string theory from Stanford. And when Kevin Scott joined Google’s ads team, charged with grabbing data from across Google and using it to predict which ads were most likely to get the most clicks, he hired countless physicists. Unlike many computer scientists, they were suited to the very experimental nature of machine learning. “It was almost like lab science,” says Scott, now chief technology officer at LinkedIn.

    Now that Big Data software is commonplace—Stripe uses an open source version of what Boykin helped build at Twitter—it’s helping machine learning models drive predictions inside so many other companies. That provides physicists with an even wider avenue into Silicon Valley. At Stripe, Boykin’s team also includes Roban Kramer (physics PhD, Columbia), Christian Anderson (physics master’s, Harvard), and Kelley Rivoire (physics bachelor’s, MIT). They come because they’re suited to the work. And they come because of the money. As Boykin says: “The salaries in tech are arguably absurd.” But they also come because there are so many hard problems to solve.

    Anderson left Harvard before getting his PhD because he came to view the field much as Boykin does—as an intellectual pursuit of diminishing returns. But that’s not the case on the internet. “Implicit in ‘the internet’ is the scope, the coverage of it,” Anderson says. “It makes opportunities much greater, but it also enriches the challenge space, the problem space. There is intellectual upside.”

    The Future

    Today, physicists are moving into Silicon Valley companies. But in the years to come, a similar phenomenon will spread much further. Machine learning will change not only how the world analyzes data but how it builds software. Neural networks are already reinventing image recognition, speech recognition, machine translation, and the very nature of software interfaces. As Microsoft’s Chris Bishop says, software engineering is moving from handcrafted code based on logic to machine learning models based on probability and uncertainty. Companies like Google and Facebook are beginning to retrain their engineers in this new way of thinking. Eventually, the rest of the computing world will follow suit.

    In other words, all the physicists pushing into the realm of the Silicon Valley engineer is a sign of a much bigger change to come. Soon, all the Silicon Valley engineers will push into the realm of the physicist.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 11:25 am on September 2, 2016 Permalink | Reply
    Tags: AI, , , Psychiatry,   

    From The Atlantic: “How Artificial Intelligence Could Help Diagnose Mental Disorders” 

    Atlantic Magazine

    The Atlantic Magazine

    Aug 23, 2016
    Joseph Frankel

    People convey meaning by what they say as well as how they say it: Tone, word choice, and the length of a phrase are all crucial cues to understanding what’s going on in someone’s mind. When a psychiatrist or psychologist examines a person, they listen for these signals to get a sense of their wellbeing, drawing on past experience to guide their judgment. Researchers are now applying that same approach, with the help of machine learning, to diagnose people with mental disorders.

    In 2015, a team of researchers developed an AI model that correctly predicted [Nature Partner Journal] which members of a group of young people would develop psychosis—a major feature of schizophrenia—by analyzing transcripts of their speech. This model focused on tell-tale verbal tics of psychosis: short sentences, confusing and frequent use of words like “this,” “that,” and “a,” as well as a muddled sense of meaning from one sentence to the next.
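
    The published model leaned on more sophisticated measures of semantic coherence, but the flavor of such surface cues is easy to sketch. The toy Python below (not NeuroLex’s product and not the 2015 study’s code; the speech_features helper is invented for this sketch) computes two of the cues the article mentions: average sentence length and how often determiners like “this,” “that,” and “a” appear.

        import re

        DETERMINERS = {"this", "that", "a"}

        def speech_features(transcript):
            """Crude surface features of the kind the psychosis study drew on."""
            sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
            words = re.findall(r"[a-z']+", transcript.lower())
            return {
                "avg_sentence_length": len(words) / max(len(sentences), 1),
                "determiner_rate": sum(w in DETERMINERS for w in words) / max(len(words), 1),
            }

        print(speech_features("I went there. That was it. A thing, this thing, happened."))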

    Now, Jim Schwoebel, an engineer and CEO of NeuroLex Diagnostics, wants to build on that work to make a tool for primary-care doctors to screen their patients for schizophrenia. NeuroLex’s product would take a recording from a patient during the appointment via a smartphone or other device (Schwoebel has a prototype Amazon Alexa app) mounted out of sight on a nearby wall.

    1
    Adriane Ohanesian / Reuters

    Using the same model from the psychosis paper, the product would then search a transcript of the patient’s speech for linguistic clues. The AI would present its findings as a number—like a blood-pressure reading—that a psychiatrist could take into account when making a diagnosis. And as the algorithm is “trained” on more and more patients, that reading could better reflect a patient’s state of mind.

    In addition to the schizophrenia screener, an idea that earned Schwoebel an award from the American Psychiatric Association, NeuroLex is hoping to develop a tool for psychiatric patients who are already being treated in hospitals. Rather than trying to help diagnose a mental disorder from a single sample, the AI would examine a patient’s speech over time to track their progress.

    For Schwoebel, this work is personal: he thinks this approach may help solve problems his older brother faced in seeking treatment for schizophrenia. Before his first psychotic break, Schwoebel’s brother would send short, one-word responses, or make cryptic references to going “there” or “here”—worrisome abnormalities that “all made sense” after his brother’s first psychotic episode, he said.

    According to Schwoebel, it took over 10 primary-care appointments before his brother was referred to a psychiatrist and eventually received a diagnosis. After that, he was put on one medication that didn’t work for him, and then another. In the years it took to get Schwoebel’s brother diagnosed and on an effective regimen, he experienced three psychotic breaks. For cases that call for medication, this led Schwoebel to wonder how to get a person on the right prescription, and at the right dose, faster.

    To find out, NeuroLex is planning a “pre-post study” on people who’ve been hospitalized for mental disorders “to see how their speech patterns change during a psychotic stay or a depressive stay in a hospital.” Ideally, the AI would analyze sample recordings from a person under a mental health provider’s care “to see which drugs are working the best” in order “to reduce the time in the hospital,” Schwoebel said.

    If a person’s speech shows fewer signs of depression or bipolar disorder after being given one medication, this tool could help show that it’s working. If there are no changes, the AI might suggest trying another medication sooner, sparing the patient undue suffering. And, once it’s gathered enough data, it could recommend a medication based on what worked for other people with similar speech profiles. Automated approaches to diagnosis have been anticipated in the greater field of medicine for decades: one company claims that its algorithm recognizes lung cancer with 50 percent more accuracy than human radiologists.

    The possibility of bolstering a mental health clinician’s judgment with a more “objective,” “quantitative” assessment appeals to the Massachusetts General Hospital psychiatrist Arshya Vahabzadeh, who has served as a mentor for a start-up accelerator Schwoebel cofounded. “Schizophrenia refers to a cluster of observable or elicitable symptoms” rather than a catchall diagnosis, he said. With a large enough data set, an AI might be able to split diagnoses like schizophrenia into sharper, more helpful categories based off the common patterns it perceives among patients. “I think the data will help us subtype some of these conditions in ways we couldn’t do before.”

    As with any medical intervention, AI aids “have to be researched and validated. That’s my big kind of asterisk,” he said, echoing a sentiment I heard from Schwoebel. And while the psychosis predictor study demonstrates that speech analysis can predict psychosis reasonably well, it’s still just one study. And no one has yet published a proof-of-concept for depression or bipolar disorder.

    Machine learning is a hot field, but it still has a ways to go—both in and outside of medicine. To take one example, Siri has struggled for years to handle questions and commands from Scottish users. For mental health care, small errors like these could be catastrophic. “If you tell me that a piece of technology is wrong 20 percent of the time”—or 80 percent accurate—“I’m not going to want to deploy it to a patient,” Vahabzadeh said.

    This risk becomes more disturbing when considering age, gender, ethnicity, race, or region. If an AI is trained on speech samples that are all from one demographic group, normal samples outside that group might result in false positives.

    “If you’re from a certain culture, you might speak softer and at a lower pitch,” which an AI “might interpret as depression when it’s not,” Schwoebel said.

    Still, Vahabzadeh believes technology like this could someday help clinicians treat more people, and treat them more efficiently. And that could be crucial, given the shortage of mental-health-care providers throughout the U.S., he says. “If humans aren’t going to be the cost-effective solution, we have to leverage tech in some way to extend and augment physicians’ reach.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 3:51 pm on May 22, 2016 Permalink | Reply
    Tags: AI, , ,   

    From SA: “Unveiling the Hidden Layers of Deep Learning” 

    Scientific American

    Scientific American

    May 20, 2016
    Amanda Montañez

    1
    Credit: Daniel Smilkov and Shan Carter

    In a recent Scientific American article entitled Springtime for AI: The Rise of Deep Learning, computer scientist Yoshua Bengio explains why complex neural networks are the key to true artificial intelligence as people have long envisioned it. It seems logical that the way to make computers as smart as humans is to program them to behave like human brains. However, given how little we know of how the brain functions, this task seems more than a little daunting. So how does deep learning work?

    This visualization by Jen Christiansen explains the basic structure and function of neural networks.

    1
    Graphic by Jen Christiansen; PUNCHSTOCK (faces)

    Evidently, these so-called “hidden layers” play a key role in breaking down visual components to decode the image as a whole. And we know there is an order to how the layers act: from input to output, each layer handles increasingly complex information. But beyond that, the hidden layers—as their name suggests—are shrouded in mystery.

    As part of a recent collaborative project called TensorFlow, Daniel Smilkov and Shan Carter created a neural network playground, which aims to demystify the hidden layers by allowing users to interact and experiment with them.

    2
    Visualization by Daniel Smilkov and Shan Carter
    Click the image to launch the interactive [in the original article].

    There is a lot going on in this visualization, and I was recently fortunate enough to hear Fernanda Viégas and Martin Wattenberg break some of it down in their keynote talk at OpenVisConf. (Fernanda and Martin were part of the team behind TensorFlow, which is a much more complex, open-source tool for using neural networks in real-world applications.)

    Rather than something as complicated as faces, the neural network playground uses blue and orange points scattered within a field to “teach” the machine how to find and echo patterns. The user can select different dot-arrangements of varying degrees of complexity, and manipulate the learning system by adding new hidden layers, as well as new neurons within each layer. Then, each time the user hits the “play” button, she can watch as the background color gradient shifts to approximate the arrangement of blue and orange dots. As the pattern becomes more complex, additional neurons and layers help the machine to complete the task more successfully.
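
    What the playground does in the browser can be reproduced with a few lines of off-the-shelf Python. The sketch below is only a rough analogue of the demo, not Smilkov and Carter’s code: it fits a small network with two hidden layers to the classic two-circles arrangement of “blue” and “orange” points, and adding neurons or layers to hidden_layer_sizes plays the same role as clicking them into the playground.

        from sklearn.datasets import make_circles
        from sklearn.neural_network import MLPClassifier

        # "Blue" and "orange" points: one class forms a ring around the other.
        X, y = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=0)

        # Two hidden layers of eight neurons each -- the playground's sliders, in code.
        net = MLPClassifier(hidden_layer_sizes=(8, 8), activation="tanh",
                            max_iter=2000, random_state=0)
        net.fit(X, y)
        print(net.score(X, y))  # extra neurons and layers help most on harder patterns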

    4
    The machine struggles to decode this more complex spiral pattern.

    Besides the neuron layers, the machine has other meaningful features, such as the connections among the neurons. The connections appear as either blue or orange lines: blue marks a positive weight (the neuron’s output is passed on with its sign unchanged) and orange a negative weight (the neuron’s contribution is inverted). Additionally, the thickness and opacity of each connection line indicate the strength of that connection’s weight, much like the connections in our brains strengthen as we advance through a learning process.

    Interestingly, as we get better at building neural networks for machines, we may end up revealing new information about how our own brains work. Visualizing and playing with the hidden layers seems like a great way to facilitate this process while also making the concept of deep learning accessible to a wider audience.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 6:55 am on May 17, 2016 Permalink | Reply
    Tags: AI, ,   

    From COSMOS: “AI learns Nobel prize-winning quantum experiment” 

    Cosmos Magazine bloc

    COSMOS

    17 May 2016
    Cathal O’Connell

    1
    Bose-Einstein condensate. NIST.
    ______________________________________________________________________

    Physicists using artificial intelligence to run a complex experiment could be putting themselves out of a job.

    A team of Australian physicists has employed a new research assistant in the form of an artificial intelligence (AI) algorithm to help set up experiments in quantum mechanics.

    For its first task, the algorithm took control of a delicate experiment to create a Bose-Einstein condensate – a weird state of matter that can form in certain atoms at ultracold temperatures.

    The algorithm didn’t need specific training and was able to learn on the job. It developed its own model of the process and tweaked the parameters to get them just right.

    “I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” said co-lead researcher Paul Wigley from the Australian National University in Canberra.

    The work is the latest example of scientists turning to AI as a collaborator in research.

    Jürgen Schmidhuber, a German computer scientist whose algorithms are central to Google’s speech recognition, is working on his ambition to build an optimal AI scientist, then retire (leaving the AI to replace him).

    Meanwhile, at the University of Vienna, physicists are using a computer program to devise new quantum experiments they could not have thought of themselves. They see it as a way to get past the non-intuitive nature of quantum mechanics.

    And now Australian physicists are letting a machine control their instruments to replicate an experiment that won the 2001 Nobel prize.

    A Bose-Einstein condensate is like atomic “groupthink” – a bunch of atoms behaves as if it were a single atom. This happens thanks to quantum effects in certain elements, but only when they are incredibly cold – typically less than a billionth of a degree above absolute zero (-273.15 ºC).

    Getting down to that temperature is a finicky business involving trapping the atoms between two laser beams.

    The Australian team developed their AI algorithm to control the lasers during cooling. And it achieved the condensation ten times faster than using a regular, non-AI program.

    “This is the first application of AI like this, where it’s controlling an experiment and optimising it on its own,” says Michael Hush, a physicist at the University of New South Wales in Sydney who co-led the work.

    In the experiment, the physicists trapped about 40 million rubidium atoms at the intersection of two laser beams. They then used magnetic fields to cool the atoms down to about five millionths of a degree above absolute zero. Pretty cold, but still too warm to condense.

    For the final and most delicate cooling stage, AI was put in the driver’s seat.

    It carefully tuned the power of the two lasers to allow the most energetic atoms to escape, but without losing hold of the coldest ones – and did it with surprising success.
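
    The general idea is an online optimiser: after each cooling run, the learner updates its picture of how the control settings map onto the quality of the condensate, then proposes the next settings to try. The loop below is only a schematic of that pattern: a crude random search stands in for the team’s far more sophisticated learner, and the made-up run_cooling_ramp function stands in for the real apparatus.

        import random

        def run_cooling_ramp(power_1, power_2):
            """Hypothetical stand-in for one experimental run: returns a condensate-quality score."""
            return -(power_1 - 0.3) ** 2 - (power_2 - 0.7) ** 2 + random.gauss(0, 0.01)

        best_settings, best_score = None, float("-inf")
        for trial in range(50):                           # each trial is one cooling ramp
            settings = (random.random(), random.random())
            score = run_cooling_ramp(*settings)
            if score > best_score:                        # keep whichever ramp worked best so far
                best_settings, best_score = settings, score

        print(best_settings, best_score)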

    1
    The orange cloud, just right of centre, is the Bose-Einstein condensate of supercold atoms – thanks to AI. Credit: Stuart Hay / ANU
    ________________________________________________________________

    “It did things a person wouldn’t guess, such as changing one laser’s power up and down, and compensating with another,” Wigley says.

    The AI could be used for any experiment that involves optimising parameters, such as achieving perfect focus in high-resolution microscopy.

    “The exciting thing about the AI is that it requires no prior knowledge of the system, making it quite general,” Wigley says.

    The paper is published in the journal Scientific Reports today.

    And no, the AI is not included on the list of authors.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     