Tagged: AI

  • richardmitnick 6:55 am on September 4, 2018
    Tags: AI

    From École Polytechnique Fédérale de Lausanne: “Artificial intelligence helps create at the right time” 

    From École Polytechnique Fédérale de Lausanne

    04.09.18
    Cécilia Carron

    Ana Manasovska is working on improving the semantic recognition and recommendation platform for the inventors. © 2018 Alain Herzog

    Student project (9/9). By using artificial intelligence to comb through the vast array of published research and detect the findings most relevant for invention, engineers can magnify their creative ability and invent faster and more disruptively than has been previously possible. This is the approach that Ana Manasovska helped develop as a Master’s student at EPFL, and the one used by creative Artificial Intelligence firm Iprova, based at EPFL’s Innovation Park, to come up with a wide range of inventions. Manasovska, whose Master’s research involved testing different phrase recognition methods, now works for the firm.

    Inventions like sensors for self-driving cars that can monitor passengers’ health, a geolocation system that can help smooth out passenger traffic on public transportation, and a smartphone feature for virtually painting the light ambience of a room involve pulling together data from several research fields in an inventive way. The ever-increasing amount of information in the world, spread across many different industries and markets, makes this an increasingly difficult task. To make it possible for inventors to sense the inventive signal in this ever-increasing noise, AI researchers and software developers at Iprova – the innovation creation firm that came up with the aforementioned inventions – have developed an artificial intelligence platform that includes sophisticated semantic analysis algorithms. Ana Manasovska helped create this program as part of her Master’s degree in computer science at EPFL, in association with the school’s Artificial Intelligence Laboratory. She now works for the company, which is based at EPFL’s Innovation Park, to further develop the software that makes it easier for engineers to invent faster and more disruptively.

    Millions of publications sifted

    Millions of research papers, industry news items and other articles are published around the world every year. One part of Iprova’s artificial intelligence platform works by performing a semantic analysis of the terms in published articles. Manasovska’s thesis on summarization methods contributed to this by testing various phrase recognition methods, which she did by representing individual phrases as vectors. If two phrases’ vectors lie close together in that space, their meanings are probably similar. This technique can be used to generate better summaries by measuring phrases’ semantic similarity.
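
    To make the vector idea concrete, here is a minimal sketch in Python of how phrase similarity can be scored once phrases have been embedded as vectors. The toy three-dimensional vectors below are hand-picked stand-ins; a real system such as Iprova’s would use high-dimensional embeddings from a trained language model.

    ```python
    import numpy as np

    def cosine_similarity(u, v):
        """Cosine of the angle between two vectors: close to 1.0 means the
        vectors (and so, ideally, the phrases) point the same way."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hand-picked 3-d stand-ins; a real system would use embeddings
    # produced by a trained model, not vectors keyed in by hand.
    phrase_vectors = {
        "sensor monitors driver fatigue":   np.array([0.90, 0.10, 0.30]),
        "device detects driver drowsiness": np.array([0.85, 0.15, 0.35]),
        "recipe for sourdough bread":       np.array([0.05, 0.90, 0.10]),
    }

    a, b, c = phrase_vectors.values()
    print(cosine_similarity(a, b))  # high: phrases probably mean similar things
    print(cosine_similarity(a, c))  # low: unrelated phrases
    ```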

    What Manasovska found when comparing the different methods is that the more complicated architectures weren’t necessarily better suited to this task. “Even with a simple architecture, we got excellent results in identifying phrases with similar meanings,” she says. “We also learned that the best way to generate the kinds of summaries that Iprova needs is to approach them from an inventor’s perspective. Most conventional summary-generation methods don’t do that, which is why we wanted to develop our own,” she adds.

    Today Manasovska is working on further improving the semantic recognition and recommendation platform. She works closely with inventors, aiming to find out what kind of data they need and how they plan to use it. She has developed programs allowing engineers to create inventions using input data that they wouldn’t have been able to get easily otherwise. An example of this is the linking of information from inventively relevant, but otherwise disparate, research fields, which opens up entirely new invention opportunities. “What I really like about my work is that it lets me stay on top of the latest developments in machine learning and natural language processing (NLP) – two fields that are advancing rapidly. I have the opportunity to use the latest technology and the power of data to help people spot relevant new findings more efficiently,” she says.

    “Traditional inventors were scientists or engineers with a deep understanding of a specific technical field. This only gave the inventor access to a limited amount of research insight. Even collaborative inventing through teamwork only provides insight into a handful of additional fields, since it’s just a team of specialists. With such approaches to invention, researchers can only dig deeper into specific areas rather than offering genuine innovation by taking the field in a different direction.

    Iprova does this on a massive scale – in real time – by using data from across the spectrum of human knowledge to make connections between ideas from different fields of study,” says Julian Nolan, CEO of Iprova. The company combines AI with a team of creative scientific minds – the invention developers – to accelerate the development of tomorrow’s products and services. Its customers include some of the best-known technology companies in Silicon Valley, Japan and Europe. Hundreds of patents have been filed based on its inventions, which are cited by companies including Google, Microsoft and Amazon.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    EPFL campus

    EPFL is Europe’s most cosmopolitan technical university, welcoming students, professors and staff of more than 120 nationalities. Both Swiss and international in outlook, it is guided by a constant drive for openness; its missions of teaching, research and partnership reach many circles: universities and engineering schools, developing and emerging countries, secondary schools and gymnasiums, industry and the economy, political circles and the general public.

     
  • richardmitnick 7:51 am on August 13, 2018
    Tags: AI, Computers can’t have needs, cravings or desires

    From aeon: “Robot says: Whatever” 

    From aeon

    8.13.18
    Margaret Boden

    Chief priest Bungen Oi holds a robot AIBO dog prior to its funeral ceremony at the Kofukuji temple in Isumi, Japan, on 26 April 2018. Photo by Nicolas Datiche/AFP/Getty

    What stands in the way of all-powerful AI isn’t a lack of smarts: it’s that computers can’t have needs, cravings or desires.

    In Henry James’s intriguing novella The Beast in the Jungle (1903), a young man called John Marcher believes that he is marked out from everyone else in some prodigious way. The problem is that he can’t pinpoint the nature of this difference. Marcher doesn’t even know whether it is good or bad. Halfway through the story, his companion May Bartram – a wealthy, New-England WASP, naturally – realises the answer. But by now she is middle-aged and terminally ill, and doesn’t tell it to him. On the penultimate page, Marcher (and the reader) learns what it is. For all his years of helpfulness and dutiful consideration towards May, detailed at length in the foregoing pages, not even she had ever really mattered to him.

    That no one really mattered to Marcher does indeed mark him out from his fellow humans – but not from artificial intelligence (AI) systems, for which nothing matters. Yes, they can prioritise: one goal can be marked as more important or more urgent than another. In the 1990s, the computer scientists Aaron Sloman and Ian Wright even came up with a computer model of a nursemaid in charge of several unpredictable and demanding babies, in order to illustrate aspects of Sloman’s theory about anxiety in humans who must juggle multiple goals. But this wasn’t real anxiety: the computer couldn’t care less.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Since 2012, Aeon has established itself as a unique digital magazine, publishing some of the most profound and provocative thinking on the web. We ask the big questions and find the freshest, most original answers, provided by leading thinkers on science, philosophy, society and the arts.

    Aeon has three channels, and all are completely free to enjoy:

    Essays – Longform explorations of deep issues written by serious and creative thinkers

    Ideas – Short provocations, maintaining Aeon’s high editorial standards but in a more nimble and immediate form. Our Ideas are published under a Creative Commons licence, making them available for republication.

    Video – A mixture of curated short documentaries and original Aeon productions

    Through our Partnership program, we publish pieces from university research groups, university presses and other selected cultural organisations.

    Aeon was founded in London by Paul and Brigid Hains. It now has offices in London, Melbourne and New York. We are a not-for-profit, registered charity operated by Aeon Media Group Ltd. Aeon is endorsed as a Deductible Gift Recipient (DGR) organisation in Australia and, through its affiliate Aeon America, registered as a 501(c)(3) charity in the US.

    We are committed to big ideas, serious enquiry and a humane worldview. That’s it.

     
  • richardmitnick 8:18 am on August 8, 2018
    Tags: "2062: The World that AI Made", AI, Toby Walsh

    From University of New South Wales: “Humanity confronts a defining question: how will AI change us?” 

    From University of New South Wales

    07 Aug 2018
    Penny Jones

    In his new book 2062: The World that AI Made, UNSW artificial intelligence expert Toby Walsh urges us to choose wisely as we define the effects on our lives of the Fourth Industrial Revolution.

    Scientia Professor Toby Walsh with Baxter.

    What will happen when we’ve built machines as intelligent as us? According to the experts, this incredible feat will be achieved in the year 2062 – a mere 44 years away – which raises the question: what will the world, our jobs, the economy, politics, war, and everyday life and death look like then?

    Fortunately, Toby Walsh, Scientia Professor of Artificial Intelligence (AI) at UNSW, has done the research for us.

    An avid sci-fi fan from childhood, Walsh, who also leads the Algorithmic Decision Theory group at Data61 – Australia’s Centre of Excellence for ICT Research, has long been fascinated by robots, machines and the future. In 2017, he published his first book, It’s Alive!, in which he tells the story of AI and how it is already affecting our societies, economies and interactions.

    “After I published It’s Alive!, people started asking me lots of questions about the social impact of AI, in particular the increasing concerns about how it’s encroaching into our lives,” he says. “That’s why I wrote my second book, 2062: The World that AI Made, which ignores the technology, and focuses instead on the reality and impact of where AI is going to take us.”

    According to Walsh (and, he says, the vast majority of his colleagues), this future looks less like the dystopian world of The Terminator and more like the sensitive world of Short Circuit.

    “Most of the movies from Hollywood featuring AI paint a very disturbing picture of the future. But there is one movie that seems to get it right,” Walsh continues.

    He is referring to the 2013 American sci-fi movie Her, where the protagonist falls in love with his intelligent computer operating system.

    “One thing Her does really well is to demonstrate our deepening relationship with machines,” Walsh explains.

    “As the Internet of Things gets more established and our devices become interconnected, things like your front door, washing machine, fridge and TV, will all be voice activated. You’ll just walk into a room and start talking and the room will obey your commands,” he says.

    While Walsh makes a series of predictions based on the way the technology is heading, he is very careful to emphasise that the future isn’t fixed. There is no technological determinism, and what happens next in AI is very much the product of the choices we make today. In other words, we must consciously choose the future we want.

    “We are at a critical junction in history where there’s a lot to play for. It’s rightly called the Fourth Industrial Revolution, and we need to start making choices so that it turns out to be a revolution that everyone can benefit from,” he says.

    The book covers topics including privacy, education, equality, politics, warfare and work, and although Walsh says there is absolutely no need to worry about a Hollywood version of the future where the robots rise up and take over the world, there are a few things we do need to address urgently.

    Employment is one area already seeing major disruption. From drivers to pilots and from medicine to journalism, AI is infiltrating every industry. Another concern is the impact AI will have on warfare.

    “Think about the implications of handing over the reins to decide who lives and who dies to machines,” says Walsh, who was one of more than 100 global tech leaders who signed an open letter in 2017 calling on the United Nations to ban killer robots.

    Another big concern, says Walsh, is that we may unconsciously build algorithms with many of the biases we are currently struggling with – racism, sexism, ageism, etc. “We have to be careful not to bake these into algorithms and take ourselves backwards,” he says.

    Walsh says he is a pessimist in the short term but an optimist in the long term. “I wrote 2062: The World that AI Made to stimulate the conversation I think the whole of society should be having. We are in for a period of struggle as the world changes but, ultimately, I think technology will deliver on things like climate change and the other crises we are experiencing,” he says.

    “If we make the right decisions now, we can build a future where we let the machines take the sweat and we can focus on the more important things in life, our families and art, for example. Just think about it. This could usher in the next Renaissance, a great flaring of creativity,” he says.

    2062: The World that AI Made by Toby Walsh was published this week and is available online and in all good book shops.

    Toby Walsh is also part of a panel discussion entitled “Good Robot/Bad Robot: Living with Intelligent Machines” at the Sydney Opera House on 12 August. Tickets are available here.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U NSW Campus

    Welcome to UNSW Australia (The University of New South Wales), one of Australia’s leading research and teaching universities. At UNSW, we take pride in the broad range and high quality of our teaching programs. Our teaching gains strength and currency from our research activities, strong industry links and our international nature; UNSW has a strong regional and global engagement.

    In developing new ideas and promoting lasting knowledge we are creating an academic environment where outstanding students and scholars from around the world can be inspired to excel in their programs of study and research. Partnerships with both local and global communities allow UNSW to share knowledge, debate and research outcomes. UNSW’s public events include concert performances, open days and public forums on issues such as the environment, healthcare and global politics. We encourage you to explore the UNSW website so you can find out more about what we do.

     
  • richardmitnick 7:08 am on July 21, 2018
    Tags: AI

    From Exascale Computing Project: “ECP Announces New Co-Design Center to Focus on Exascale Machine Learning Technologies” 

    From Exascale Computing Project

    07/20/18

    The Exascale Computing Project has initiated its sixth Co-Design Center, ExaLearn, to be led by Principal Investigator Francis J. Alexander, Deputy Director of the Computational Science Initiative at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory.

    Francis J. Alexander. BNL


    ExaLearn is a co-design center for Exascale Machine Learning (ML) Technologies and is a collaboration initially consisting of experts from eight multipurpose DOE labs.

    Brookhaven National Laboratory (Francis J. Alexander)
    Argonne National Laboratory (Ian Foster)
    Lawrence Berkeley National Laboratory (Peter Nugent)
    Lawrence Livermore National Laboratory (Brian van Essen)
    Los Alamos National Laboratory (Aric Hagberg)
    Oak Ridge National Laboratory (David Womble)
    Pacific Northwest National Laboratory (James Ang)
    Sandia National Laboratories (Michael Wolf)

    Rapid growth in the amount of data and computational power is driving a revolution in machine learning (ML) and artificial intelligence (AI). Beyond the highly visible successes in machine-based natural language translation, these new ML technologies have profound implications for computational and experimental science and engineering and the exascale computing systems that DOE is deploying to support those disciplines.

    To address these challenges, the ExaLearn co-design center will provide exascale ML software for use by ECP Applications projects, other ECP Co-Design Centers and DOE experimental facilities and leadership class computing facilities. The ExaLearn Co-Design Center will also collaborate with ECP PathForward vendors on the development of exascale ML software.

    The timeliness of ExaLearn’s proposed work ties into the critical national need to enhance economic development through science and technology. It is increasingly clear that advances in learning technologies have profound societal implications and that continued U.S. economic leadership requires a focused effort, both to increase the performance of those technologies and to expand their applications. Linking exascale computing and learning technologies represents a timely opportunity to address those goals.

    The practical end product will be a scalable and sustainable ML software framework that allows application scientists and the applied mathematics and computer science communities to engage in co-design for learning. The new knowledge and services to be provided by ExaLearn are imperative for the nation to remain competitive in computational science and engineering by making effective use of future exascale systems.

    “Our multi-laboratory team is very excited to have the opportunity to tackle some of the most important challenges in machine learning at the exascale,” Alexander said. “There is, of course, already a considerable investment by the private sector in machine learning. However, there is still much more to be done in order to enable advances in very important scientific and national security work we do at the Department of Energy. I am very happy to lead this effort on behalf of our collaborative team.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    About ECP

    The ECP is a collaborative effort of two DOE organizations – the Office of Science and the National Nuclear Security Administration. As part of the National Strategic Computing initiative, ECP was established to accelerate delivery of a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development to meet the scientific and national security mission needs of DOE in the early-2020s time frame.

    About the Office of Science

    DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit https://science.energy.gov/.

    About NNSA

    Established by Congress in 2000, NNSA is a semi-autonomous agency within the DOE responsible for enhancing national security through the military application of nuclear science. NNSA maintains and enhances the safety, security, and effectiveness of the U.S. nuclear weapons stockpile without nuclear explosive testing; works to reduce the global danger from weapons of mass destruction; provides the U.S. Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the United States and abroad. https://nnsa.energy.gov

    The Goal of ECP’s Application Development focus area is to deliver a broad array of comprehensive science-based computational applications that effectively utilize exascale HPC technology to provide breakthrough simulation and data analytic solutions for scientific discovery, energy assurance, economic competitiveness, health enhancement, and national security.

    Awareness of ECP and its mission is growing and resonating—and for good reason. ECP is an incredible effort focused on advancing areas of key importance to our country: economic competitiveness, breakthrough science and technology, and national security. And, fortunately, ECP has a foundation that bodes extremely well for the prospects of its success, with the demonstrably strong commitment of the US Department of Energy (DOE) and the talent of some of America’s best and brightest researchers.

    ECP is composed of about 100 small teams of domain, computer, and computational scientists, and mathematicians from DOE labs, universities, and industry. We are tasked with building applications that will execute well on exascale systems, enabled by a robust exascale software stack, and supporting necessary vendor R&D to ensure the compute nodes and hardware infrastructure are adept and able to do the science that needs to be done with the first exascale platforms.

     
  • richardmitnick 9:22 am on June 28, 2018
    Tags: AI, NASA FDL - Frontier Development Lab

    From SETI Institute: “NASA FDL Leverages Public/Private Partnership to Push New Boundaries of Space Science with Artificial Intelligence” 

    From SETI Institute

    Jun 26, 2018

    James Parr
    FDL Director
    Frontier Development Lab
    james@frontierdevelopmentlab.org
    http://www.frontierdevelopmentlab.org

    Rebecca McDonald
    Director of Communications
    SETI Institute
    189 Bernardo Ave., Suite 200
    Mountain View, CA 94043
    rmcdonald@seti.org
    http://www.seti.org

    AI Accelerator to Focus on Key Challenges in Space Resources, Astrobiology, Space Weather, and Exoplanets to Benefit the Space Program and Humanity.

    The NASA Frontier Development Lab (FDL) has announced it will apply artificial intelligence (AI) to four key space challenges. FDL is an AI/machine learning research accelerator powered by a public/private partnership between NASA, the SETI Institute, commercial leaders in AI, and pioneers in the private space industry.

    Entering its third year, FDL is building on a successful track record by expanding its focus to four key research areas: Space Resources, Astrobiology, Exoplanets, and Space Weather. The final results of FDL 2018 will be presented at Intel in Santa Clara on August 16th.

    “This year, we have 50 phenomenally talented researchers and mentors from AI and the space sciences tackling these critical challenges to the space program,” said Dr. Dan Rasky, Chief, Space Portal, NASA’s Ames Research Center in Silicon Valley. “We’re excited that NASA is able to convene a group like this. AI is a game changer for space exploration and we’re looking forward to some fascinating results.”

    There is mounting excitement around the potential for great progress within each of the designated challenge areas. According to Bill Diamond, President and CEO at the SETI Institute, “The NASA FDL researchers are taking on some fascinating challenges. For example, can we find a way to predict solar weather? Can we get better at discovering new exoplanets and possibly even new forms of life? Can we enable multiple rovers to work together to effectively explore for resources like water on the moon?” Diamond continued, “These are great questions. We are excited to see some answers begin to emerge in this year’s program.”

    NASA FDL is a compelling example of how public/private partnerships can yield significant results. Hosted at the SETI Institute, the NASA FDL program pairs researchers from the space sciences with data scientists for an intense eight-week concentrated sprint, supported by leaders in AI, such as Intel, Google, Kx Systems, IBM and NVIDIA, and key players in private space such as SpaceResources.lu, Lockheed Martin, KBRWyle and XPRIZE.

    NASA FDL has consistently demonstrated the potential of applied AI to create useful results in accelerated time periods. According to FDL Director James Parr, “We are proud of our achievements to date, and we expect even more from this year’s challenges. NASA FDL researchers have shared their results at numerous professional conferences, in both the AI and space science domains, and papers from 2016 and 2017 are being published in peer-reviewed journals. Moreover, functioning AI workflows from FDL are being deployed on NASA-funded activities – including detecting long-period comets.”

    The Frontier Development Lab is the latest NASA-sponsored activity to push the boundaries of the state of the art in computing – specifically applied artificial intelligence – to help close knowledge gaps in space science and exploration relevant to NASA and humankind.

    The space program and computing have a long history of advancement for mutual benefit. The push for miniaturization in the late 1960s helped catalyze the development of the microprocessor that took humanity to the Moon. The Apollo moonshots were also responsible for error-free software architectures that made computers reliable in deep space as well as credible everyday tools. More recently, NASA-originated ‘camera-on-a-chip’ technology resides inside many smartphones and, now, self-driving cars.

    To learn more about FDL, the 2018 challenge questions and to follow our 2018 program please visit the FDL website at http://www.frontierdevelopmentlab.org.

    About the NASA Frontier Development Lab (FDL)

    Hosted in Silicon Valley by the SETI Institute, the NASA FDL is an applied artificial intelligence research accelerator developed in partnership with NASA’s Ames Research Center. Founded in 2016, the NASA FDL aims to apply AI technologies to challenges in space exploration by pairing machine learning expertise with space science and exploration researchers from academia and industry. These interdisciplinary teams address tightly defined problems and the format encourages rapid iteration and prototyping to create outputs with meaningful application to the space program and humanity.

    See the full article here .


    five-ways-keep-your-child-safe-school-shootings

    Please help promote STEM in your local schools.

    Stem Education Coalition

    SETI Institute – 189 Bernardo Ave., Suite 100
    Mountain View, CA 94043
    Phone 650.961.6633 – Fax 650-961-7099

     
  • richardmitnick 2:37 pm on May 5, 2018
    Tags: AI, Cash prizes of US$12,000, $8,000 and $5,000 - not exactly inspirational, Hosted by Google-owned company Kaggle, Too much data for existing computing assets, TrackML challenge

    From Nature: “Particle physicists turn to AI to cope with CERN’s collision deluge” 

    From Nature

    04 May 2018
    No writer credit found

    The pixel detector at CERN’s CMS experiment records particles that emerge from collisions. Credit: CERN

    Physicists at the world’s leading atom smasher are calling for help. In the next decade, they plan to produce up to 20 times more particle collisions in the Large Hadron Collider (LHC) than they do now, but current detector systems aren’t fit for the coming deluge. So this week, a group of LHC physicists has teamed up with computer scientists to launch a competition to spur the development of artificial-intelligence techniques that can quickly sort through the debris of these collisions. Researchers hope these will help the experiment’s ultimate goal of revealing fundamental insights into the laws of nature.

    At the LHC at CERN, Europe’s particle-physics laboratory near Geneva, two bunches of protons collide head-on inside each of the machine’s detectors 40 million times a second. Every proton collision can produce thousands of new particles, which radiate from a collision point at the centre of each cathedral-sized detector. Millions of silicon sensors are arranged in onion-like layers and light up whenever a particle crosses them, producing one pixel of information each time. Collisions are recorded only when they produce potentially interesting by-products. When they are, the detector takes a snapshot that might include hundreds of thousands of pixels from the piled-up debris of up to 20 different pairs of protons. (Because particles move at or close to the speed of light, a detector cannot record a full movie of their motion.)

    From this mess, the LHC’s computers reconstruct tens of thousands of tracks in real time, before moving on to the next snapshot. “The name of the game is connecting the dots,” says Jean-Roch Vlimant, a physicist at the California Institute of Technology in Pasadena who is a member of the collaboration that operates the CMS detector at the LHC.

    The yellow lines depict reconstructed particle trajectories from collisions recorded by CERN’s CMS detector. Credit: CERN

    After future planned upgrades, each snapshot is expected to include particle debris from 200 proton collisions. Physicists currently use pattern-recognition algorithms to reconstruct the particles’ tracks. Although these techniques would be able to work out the paths even after the upgrades, “the problem is, they are too slow”, says Cécile Germain, a computer scientist at the University of Paris South in Orsay. Without major investment in new detector technologies, LHC physicists estimate that the collision rates will exceed the current capabilities by at least a factor of 10.

    Researchers suspect that machine-learning algorithms could reconstruct the tracks much more quickly. To help find the best solution, Vlimant and other LHC physicists teamed up with computer scientists including Germain to launch the TrackML challenge. For the next three months, data scientists will be able to download 400 gigabytes of simulated particle-collision data — the pixels produced by an idealized detector — and train their algorithms to reconstruct the tracks.

    Participants will be evaluated on the accuracy with which they do this. The top three performers of this phase, hosted by Google-owned company Kaggle, will receive cash prizes of US$12,000, $8,000 and $5,000. A second competition will then evaluate algorithms on the basis of speed as well as accuracy, Vlimant says.

    Prize appeal

    Such competitions have a long tradition in data science, and many young researchers take part to build up their CVs. “Getting well ranked in challenges is extremely important,” says Germain. Perhaps the most famous of these contests was the 2009 Netflix Prize. The entertainment company offered US$1 million to whoever worked out the best way to predict what films its users would like to watch, based on their previous ratings. TrackML isn’t the first challenge in particle physics, either: in 2014, teams competed to ‘discover’ the Higgs boson in a set of simulated data (the LHC discovered the Higgs, long predicted by theory, in 2012). Other science-themed challenges have involved data on anything from plankton to galaxies.

    From the computer-science point of view, the Higgs challenge was an ordinary classification problem, says Tim Salimans, one of the top performers in that race (after the challenge, Salimans went on to get a job at the non-profit effort OpenAI in San Francisco, California). But the fact that it was about LHC physics added to its lustre, he says. That may help to explain the challenge’s popularity: nearly 1,800 teams took part, and many researchers credit the contest for having dramatically increased the interaction between the physics and computer-science communities.

    TrackML is “incomparably more difficult”, says Germain. In the Higgs case, the reconstructed tracks were part of the input, and contestants had to do another layer of analysis to ‘find’ the particle. In the new problem, she says, you have to find something like 10,000 arcs of ellipse among 100,000 points. She thinks the winning technique might end up resembling those used by the program AlphaGo, which made history in 2016 when it beat a human champion at the complex game of Go. In particular, entrants might use reinforcement learning, in which an algorithm learns by trial and error on the basis of ‘rewards’ that it receives after each attempt.
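
    As a rough illustration of what “connecting the dots” means (and not of any actual TrackML entry, which would be far more sophisticated), the sketch below generates a toy two-dimensional event with straight-line tracks and groups the hits into track candidates by clustering on their polar angle. All parameters are invented for the example; real tracks curve in the magnetic field and are fit as helices in three dimensions.

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(0)

    # Toy event: 5 straight tracks radiating from the origin, 20 noisy
    # hits each. Real LHC tracks curve in the magnetic field, so real
    # solutions fit helices rather than straight lines.
    n_tracks, hits_per_track = 5, 20
    hits = []
    for theta in rng.uniform(0, np.pi, n_tracks):
        r = rng.uniform(1.0, 10.0, hits_per_track)    # radial hit positions
        x = r * np.cos(theta) + rng.normal(0, 0.02, hits_per_track)
        y = r * np.sin(theta) + rng.normal(0, 0.02, hits_per_track)
        hits.append(np.column_stack([x, y]))
    hits = np.vstack(hits)

    # "Connecting the dots": hits from one straight track share a polar
    # angle, so clustering in angle space groups them into candidates.
    phi = np.arctan2(hits[:, 1], hits[:, 0]).reshape(-1, 1)
    labels = DBSCAN(eps=0.02, min_samples=5).fit_predict(phi)

    n_found = len(set(labels) - {-1})
    print(f"found {n_found} track candidates (generated {n_tracks})")
    ```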

    Vlimant and other physicists are also beginning to consider more untested technologies, such as neuromorphic computing and quantum computing. “It’s not clear where we’re going,” says Vlimant, “but it looks like we have a good path.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

     
  • richardmitnick 4:00 pm on April 24, 2018
    Tags: AI

    From UC Santa Cruz: “Face recognition for galaxies: Artificial intelligence brings new tools to astronomy” 

    From UC Santa Cruz

    April 23, 2018
    Tim Stephens
    stephens@ucsc.edu

    A ‘deep learning’ algorithm trained on images from cosmological simulations has been surprisingly successful at classifying real galaxies in Hubble images

    A ‘deep learning’ algorithm trained on images from cosmological simulations is surprisingly successful at classifying real galaxies in Hubble images. Top row: high-resolution images from a computer simulation of a young galaxy going through three phases of evolution (before, during, and after the “blue nugget” phase). Middle row: the same simulated galaxy as it would appear if observed by the Hubble Space Telescope. Bottom row: Hubble Space Telescope images of distant young galaxies classified by a deep learning algorithm trained to recognize the three phases of galaxy evolution. The width of each image is approximately 100,000 light years. [Image credits: top two rows, Greg Snyder, Space Telescope Science Institute, and Marc Huertas-Company, Paris Observatory; bottom row, Hubble images from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS).]

    A machine learning method called “deep learning,” which has been widely used in face recognition and other image- and speech-recognition applications, has shown promise in helping astronomers analyze images of galaxies and understand how they form and evolve.

    In a new study, accepted for publication in The Astrophysical Journal, researchers used computer simulations of galaxy formation to train a deep learning algorithm, which then proved surprisingly good at analyzing images of galaxies from the Hubble Space Telescope.

    The researchers used output from the simulations to generate mock images of simulated galaxies as they would look in observations by the Hubble Space Telescope. The mock images were used to train the deep learning system to recognize three key phases of galaxy evolution previously identified in the simulations. The researchers then gave the system a large set of actual Hubble images to classify.

    The results showed a remarkable level of consistency in the neural network’s classifications of simulated and real galaxies.
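
    As a hedged illustration of the training setup described above (not the network used in the study, which the paper describes in detail), here is a minimal PyTorch sketch: a small convolutional network trained on mock images with simulation-supplied phase labels, which can then be applied to real Hubble cutouts. The layer sizes and the random stand-in data are purely illustrative.

    ```python
    import torch
    import torch.nn as nn

    # Three classes: before, during, and after the "blue nugget" phase.
    N_CLASSES = 3

    class GalaxyPhaseNet(nn.Module):
        """Small illustrative CNN; the published study used a larger network."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, N_CLASSES)

        def forward(self, x):
            x = self.features(x)          # (batch, 32, 16, 16) for 64x64 input
            return self.classifier(x.flatten(1))

    model = GalaxyPhaseNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Stand-ins for mock Hubble images rendered from the simulations,
    # with phase labels read off the simulation timeline.
    mock_images = torch.randn(8, 1, 64, 64)
    phase_labels = torch.randint(0, N_CLASSES, (8,))

    optimizer.zero_grad()
    loss = loss_fn(model(mock_images), phase_labels)
    loss.backward()
    optimizer.step()   # one training step; loop over the full set in practice

    # After training, the same network is applied to real Hubble cutouts.
    ```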

    “We were not expecting it to be all that successful. I’m amazed at how powerful this is,” said coauthor Joel Primack, professor emeritus of physics and a member of the Santa Cruz Institute for Particle Physics (SCIPP) at UC Santa Cruz. “We know the simulations have limitations, so we don’t want to make too strong a claim. But we don’t think this is just a lucky fluke.”

    Galaxies are complex phenomena, changing their appearance as they evolve over billions of years, and images of galaxies can provide only snapshots in time. Astronomers can look deeper into the universe and thereby “back in time” to see earlier galaxies (because of the time it takes light to travel cosmic distances), but following the evolution of an individual galaxy over time is only possible in simulations. Comparing simulated galaxies to observed galaxies can reveal important details of the actual galaxies and their likely histories.

    Blue nuggets

    In the new study, the researchers were particularly interested in a phenomenon seen in the simulations early in the evolution of gas-rich galaxies, when big flows of gas into the center of a galaxy fuel formation of a small, dense, star-forming region called a “blue nugget.” (Young, hot stars emit short “blue” wavelengths of light, so blue indicates a galaxy with active star formation, whereas older, cooler stars emit more “red” light.)

    In both simulated and observational data, the computer program found that the “blue nugget” phase only occurs in galaxies with masses within a certain range. This is followed by quenching of star formation in the central region, leading to a compact “red nugget” phase. The consistency of the mass range was an exciting finding, because it suggests the deep learning algorithm is identifying on its own a pattern that results from a key physical process happening in real galaxies.

    “It may be that in a certain size range, galaxies have just the right mass for this physical process to occur,” said coauthor David Koo, professor emeritus of astronomy and astrophysics at UC Santa Cruz.

    The researchers used state-of-the-art galaxy simulations (the VELA simulations) developed by Primack and an international team of collaborators, including Daniel Ceverino (University of Heidelberg), who ran the simulations, and Avishai Dekel (Hebrew University), who led analysis and interpretation of them and developed new physical concepts based on them. All such simulations are limited, however, in their ability to capture the complex physics of galaxy formation.

    In particular, the simulations used in this study did not include feedback from active galactic nuclei (injection of energy from radiation as gas is accreted by a central supermassive black hole). Many astronomers consider this process to be an important factor regulating star formation in galaxies. Nevertheless, observations of distant, young galaxies appear to show evidence of the phenomenon leading to the blue nugget phase seen in the simulations.

    CANDELS

    For the observational data, the team used images of galaxies obtained through the CANDELS project (Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey), the largest project in the history of the Hubble Space Telescope. First author Marc Huertas-Company, an astronomer at the Paris Observatory and Paris Diderot University, had already done pioneering work applying deep learning methods to galaxy classifications using publicly available CANDELS data.

    Koo, a CANDELS co-investigator, invited Huertas-Company to visit UC Santa Cruz to continue this work. Google has provided support for their work on deep learning in astronomy through gifts of research funds to Koo and Primack, allowing Huertas-Company to spend the past two summers in Santa Cruz, with plans for another visit in the summer of 2018.

    “This project was just one of several ideas we had,” Koo said. “We wanted to pick a process that theorists can define clearly based on the simulations, and that has something to do with how a galaxy looks, then have the deep learning algorithm look for it in the observations. We’re just beginning to explore this new way of doing research. It’s a new way of melding theory and observations.”

    For years, Primack has been working closely with Koo and other astronomers at UC Santa Cruz to compare his team’s simulations of galaxy formation and evolution with the CANDELS observations. “The VELA simulations have had a lot of success in terms of helping us understand the CANDELS observations,” Primack said. “Nobody has perfect simulations, though. As we continue this work, we will keep developing better simulations.”

    According to Koo, deep learning has the potential to reveal aspects of the observational data that humans can’t see. The downside is that the algorithm is like a “black box,” so it is hard to know what features in the data the machine is using to make its classifications. Network interrogation techniques can identify which pixels in an image contributed most to the classification, however, and the researchers tested one such method on their network.
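
    One widely used interrogation technique of this kind, shown below purely as an example (the article does not specify which method the researchers tested), is a gradient saliency map: differentiate the winning class score with respect to the input pixels and see which pixels matter most. The tiny model and random image are stand-ins.

    ```python
    import torch
    import torch.nn as nn

    # Any differentiable classifier works; a deliberately tiny stand-in model.
    model = nn.Sequential(
        nn.Conv2d(1, 4, kernel_size=3, padding=1),
        nn.Flatten(),
        nn.Linear(4 * 64 * 64, 3),
    )
    model.eval()

    image = torch.randn(1, 1, 64, 64, requires_grad=True)  # stand-in galaxy cutout
    score = model(image)[0].max()   # score of the highest-scoring class
    score.backward()                # d(score)/d(pixel) for every input pixel
    saliency = image.grad.abs().squeeze()   # (64, 64) map of per-pixel influence

    print("most influential pixel (row, col):", divmod(int(saliency.argmax()), 64))
    ```

    Pixels with large gradient magnitude are the ones whose small changes most affect the classification, which gives a rough window into what the “black box” is looking at.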

    “Deep learning looks for patterns, and the machine can see patterns that are so complex that we humans don’t see them,” Koo said. “We want to do a lot more testing of this approach, but in this proof-of-concept study, the machine seemed to successfully find in the data the different stages of galaxy evolution identified in the simulations.”

    In the future, he said, astronomers will have much more observational data to analyze as a result of large survey projects and new telescopes such as the Large Synoptic Survey Telescope, the James Webb Space Telescope, and the Wide-Field Infrared Survey Telescope.

    LSST telescope, currently under construction at Cerro Pachón Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    Deep learning and other machine learning methods could be powerful tools for making sense of these massive datasets.

    “This is the beginning of a very exciting time for using advanced artificial intelligence in astronomy,” Koo said.

    In addition to Primack, Koo, and Huertas-Company, the coauthors of the paper include Avishai Dekel at Hebrew University in Jerusalem (and a visiting researcher at UC Santa Cruz); Sharon Lapiner at Hebrew University; Daniel Ceverino at University of Heidelberg; Raymond Simons at Johns Hopkins University; Gregory Snyder at Space Telescope Science Institute; Mariangela Bernardi and H. Dominguez Sanchez at University of Pennsylvania; Zhu Chen at Shanghai Normal University; Christoph Lee at UC Santa Cruz; and Berta Margalef-Bentabol and Diego Tuccillo at the Paris Observatory.

    In addition to support from Google, this work was partly supported by grants from France-Israel PICS, US-Israel Binational Science Foundation, U.S. National Science Foundation, and Hubble Space Telescope. The VELA computer simulations were run on NASA’s Pleiades supercomputer and at DOE’s National Energy Research Scientific Computer Center (NERSC).

    NASA Pleiades Supercomputer at the NASA Advanced Supercomputing Center

    NERSC Cray XC40 Cori II supercomputer

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Shane Telescope at UCO Lick Observatory, UCSC

    Lick Automated Planet Finder telescope, Mount Hamilton, CA, USA

    UC Santa Cruz campus
    The University of California, Santa Cruz, opened in 1965 and grew, one college at a time, to its current (2008-09) enrollment of more than 16,000 students. Undergraduates pursue more than 60 majors supervised by divisional deans of humanities, physical & biological sciences, social sciences, and arts. Graduate students work toward graduate certificates, master’s degrees, or doctoral degrees in more than 30 academic fields under the supervision of the divisional and graduate deans. The dean of the Jack Baskin School of Engineering oversees the campus’s undergraduate and graduate engineering programs.

    UCSC is the home base for the Lick Observatory.

    Lick Observatory’s Great Lick 91-centimeter (36-inch) telescope housed in the South (large) Dome of main building

    Search for extraterrestrial intelligence expands at Lick Observatory
    New instrument scans the sky for pulses of infrared light
    March 23, 2015
    By Hilary Lebow
    The NIROSETI instrument saw first light on the Nickel 1-meter Telescope at Lick Observatory on March 15, 2015. (Photo by Laurie Hatch)

    Astronomers are expanding the search for extraterrestrial intelligence into a new realm with detectors tuned to infrared light at UC’s Lick Observatory. A new instrument, called NIROSETI, will soon scour the sky for messages from other worlds.

    “Infrared light would be an excellent means of interstellar communication,” said Shelley Wright, an assistant professor of physics at UC San Diego who led the development of the new instrument while at the University of Toronto’s Dunlap Institute for Astronomy & Astrophysics.

    Wright worked on an earlier SETI project at Lick Observatory as a UC Santa Cruz undergraduate, when she built an optical instrument designed by UC Berkeley researchers. The infrared project takes advantage of new technology not available for that first optical search.

    Infrared light would be a good way for extraterrestrials to get our attention here on Earth, since pulses from a powerful infrared laser could outshine a star, if only for a billionth of a second. Interstellar gas and dust are almost transparent to near-infrared light, so these signals can be seen from great distances. It also takes less energy to send information using infrared signals than with visible light.

    UCSC alumna Shelley Wright, now an assistant professor of physics at UC San Diego, discusses the dichroic filter of the NIROSETI instrument. (Photo by Laurie Hatch)

    Frank Drake, professor emeritus of astronomy and astrophysics at UC Santa Cruz and director emeritus of the SETI Institute, said there are several additional advantages to a search in the infrared realm.

    “The signals are so strong that we only need a small telescope to receive them. Smaller telescopes can offer more observational time, and that is good because we need to search many stars for a chance of success,” said Drake.

    The only downside is that extraterrestrials would need to be transmitting their signals in our direction, Drake said, though he sees this as a positive side to that limitation. “If we get a signal from someone who’s aiming for us, it could mean there’s altruism in the universe. I like that idea. If they want to be friendly, that’s who we will find.”

    Scientists have searched the skies for radio signals for more than 50 years and expanded their search into the optical realm more than a decade ago. The idea of searching in the infrared is not a new one, but instruments capable of capturing pulses of infrared light only recently became available.

    “We had to wait,” Wright said. “I spent eight years waiting and watching as new technology emerged.”

    Now that technology has caught up, the search will extend to stars thousands of light years away, rather than just hundreds. NIROSETI, or Near-Infrared Optical Search for Extraterrestrial Intelligence, could also uncover new information about the physical universe.

    “This is the first time Earthlings have looked at the universe at infrared wavelengths with nanosecond time scales,” said Dan Werthimer, UC Berkeley SETI Project Director. “The instrument could discover new astrophysical phenomena, or perhaps answer the question of whether we are alone.”

    NIROSETI will also gather more information than previous optical detectors by recording levels of light over time so that patterns can be analyzed for potential signs of other civilizations.

    “Searching for intelligent life in the universe is both thrilling and somewhat unorthodox,” said Claire Max, director of UC Observatories and professor of astronomy and astrophysics at UC Santa Cruz. “Lick Observatory has already been the site of several previous SETI searches, so this is a very exciting addition to the current research taking place.”

    NIROSETI will be fully operational by early summer and will scan the skies several times a week on the Nickel 1-meter telescope at Lick Observatory, located on Mt. Hamilton east of San Jose.

    The NIROSETI team also includes Geoffrey Marcy and Andrew Siemion from UC Berkeley; Patrick Dorval, a Dunlap undergraduate, and Elliot Meyer, a Dunlap graduate student; and Richard Treffers of Starman Systems. Funding for the project comes from the generous support of Bill and Susan Bloomfield.

     
  • richardmitnick 5:24 pm on April 1, 2018
    Tags: AI, Computer searches telescope data for evidence of distant planets

    From MIT: “Computer searches telescope data for evidence of distant planets” 

    MIT News

    March 29, 2018
    Larry Hardesty

    A young sun-like star encircled by its planet-forming disk of gas and dust.
    Image: NASA/JPL-Caltech

    Machine-learning system uses physics principles to augment data from NASA crowdsourcing project.

    As part of an effort to identify distant planets hospitable to life, NASA has established a crowdsourcing project in which volunteers search telescopic images for evidence of debris disks around stars, which are good indicators of exoplanets.

    Using the results of that project, researchers at MIT have now trained a machine-learning system to search for debris disks itself. The scale of the search demands automation: There are nearly 750 million possible light sources in the data accumulated through NASA’s Wide-Field Infrared Survey Explorer (WISE) mission alone.

    In tests, the machine-learning system agreed with human identifications of debris disks 97 percent of the time. The researchers also trained their system to rate debris disks according to their likelihood of containing detectable exoplanets. In a paper describing the new work in the journal Astronomy and Computing, the MIT researchers report that their system identified 367 previously unexamined celestial objects as particularly promising candidates for further study.

    The work represents an unusual approach to machine learning, which has been championed by one of the paper’s coauthors, Victor Pankratius, a principal research scientist at MIT’s Haystack Observatory. Typically, a machine-learning system will comb through a wealth of training data, looking for consistent correlations between features of the data and some label applied by a human analyst — in this case, stars circled by debris disks.

    But Pankratius argues that in the sciences, machine-learning systems would be more useful if they explicitly incorporated a little bit of scientific understanding, to help guide their searches for correlations or identify deviations from the norm that could be of scientific interest.

    “The main vision is to go beyond what A.I. is focusing on today,” Pankratius says. “Today, we’re collecting data, and we’re trying to find features in the data. You end up with billions and billions of features. So what are you doing with them? What you want to know as a scientist is not that the computer tells you that certain pixels are certain features. You want to know ‘Oh, this is a physically relevant thing, and here are the physics parameters of the thing.’”

    Classroom conception

    The new paper grew out of an MIT seminar that Pankratius co-taught with Sara Seager, the Class of 1941 Professor of Earth, Atmospheric, and Planetary Sciences, who is well-known for her exoplanet research. The seminar, Astroinformatics for Exoplanets, introduced students to data science techniques that could be useful for interpreting the flood of data generated by new astronomical instruments. After mastering the techniques, the students were asked to apply them to outstanding astronomical questions.

    For her final project, Tam Nguyen, a graduate student in aeronautics and astronautics, chose the problem of training a machine-learning system to identify debris disks, and the new paper is an outgrowth of that work. Nguyen is first author on the paper, and she’s joined by Seager, Pankratius, and Laura Eckman, an undergraduate majoring in electrical engineering and computer science.

    From the NASA crowdsourcing project, the researchers had the celestial coordinates of the light sources that human volunteers had identified as featuring debris disks. The disks are recognizable as ellipses of light with slightly brighter ellipses at their centers. The researchers also used the raw astronomical data generated by the WISE mission.

    To prepare the data for the machine-learning system, Nguyen carved it up into small chunks, then used standard signal-processing techniques to filter out artifacts caused by the imaging instruments or by ambient light. Next, she identified those chunks with light sources at their centers, and used existing image-segmentation algorithms to remove any additional sources of light. These types of procedures are typical in any computer-vision machine-learning project.
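
    A minimal sketch of that preprocessing pipeline, assuming nothing about the authors’ actual code (the function name, filter size and threshold below are illustrative choices, not taken from the paper):

    ```python
    import numpy as np
    from scipy import ndimage

    def preprocess_chunk(chunk: np.ndarray):
        """Illustrative version of the preprocessing steps described above."""
        # 1. Standard signal processing: a median filter suppresses hot
        #    pixels and similar instrument artifacts.
        cleaned = ndimage.median_filter(chunk, size=3)

        # 2. Keep only chunks with a light source at the centre: label
        #    the connected bright regions above the background level.
        threshold = cleaned.mean() + 3 * cleaned.std()
        labels, _ = ndimage.label(cleaned > threshold)
        centre_label = labels[chunk.shape[0] // 2, chunk.shape[1] // 2]
        if centre_label == 0:
            return None                # no central source: discard this chunk

        # 3. Zero out every other light source so the classifier sees
        #    only the candidate at the centre.
        cleaned[(labels != centre_label) & (labels != 0)] = 0.0
        return cleaned
    ```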

    Coded intuitions

    But Nguyen used basic principles of physics to prune the data further. For one thing, she looked at the variation in the intensity of the light emitted by the light sources across four different frequency bands. She also used standard metrics to evaluate the position, symmetry, and scale of the light sources, establishing thresholds for inclusion in her data set.
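
    A hedged sketch of what such physics-based pruning might look like in code; the specific cuts, thresholds and helper name are invented for illustration and are not taken from the paper:

    ```python
    import numpy as np

    def passes_physical_cuts(band_fluxes, pixel_xy, pixel_weights,
                             max_flux_ratio=50.0, max_offset=2.0):
        """Illustrative physics-based pruning; cuts and thresholds are invented.

        band_fluxes   -- source brightness in the four WISE bands (W1..W4)
        pixel_xy      -- (n, 2) coordinates of the source's lit-up pixels
        pixel_weights -- per-pixel intensities, used for the centroid
        """
        band_fluxes = np.asarray(band_fluxes, dtype=float)
        # 1. Intensity variation across the four frequency bands: reject
        #    sources with a wildly unphysical spectrum.
        if band_fluxes.max() / max(band_fluxes.min(), 1e-9) > max_flux_ratio:
            return False
        # 2. Position/symmetry proxy: the flux-weighted centroid should
        #    sit close to the geometric centre of the lit-up pixels.
        centroid = np.average(pixel_xy, axis=0, weights=pixel_weights)
        if np.linalg.norm(centroid - np.mean(pixel_xy, axis=0)) > max_offset:
            return False
        return True

    fluxes = [12.0, 9.5, 7.0, 4.0]    # toy four-band photometry
    xy = np.array([[10, 10], [11, 10], [10, 11], [11, 11]], dtype=float)
    weights = np.array([1.0, 0.9, 0.9, 0.8])
    print(passes_physical_cuts(fluxes, xy, weights))  # True for this tidy source
    ```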

    In addition to the tagged debris disks from NASA’s crowdsourcing project, the researchers also had a short list of stars that astronomers had identified as probably hosting exoplanets. From that information, their system also inferred characteristics of debris disks that were correlated with the presence of exoplanets, to select the 367 candidates for further study.

    “Given the scalability challenges with big data, leveraging crowdsourcing and citizen science to develop training data sets for machine-learning classifiers for astronomical observations and associated objects is an innovative way to address challenges not only in astronomy but also several different data-intensive science areas,” says Dan Crichton, who leads the Center for Data Science and Technology at NASA’s Jet Propulsion Laboratory. “The use of the computer-aided discovery pipeline described to automate the extraction, classification, and validation process is going to be helpful for systematizing how these capabilities can be brought together. The paper does a nice job of discussing the effectiveness of this approach as applied to debris disk candidates. The lessons learned are going to be important for generalizing the techniques to other astronomy and different discipline applications.”

    “The Disk Detective science team has been working on its own machine-learning project, and now that this paper is out, we’re going to have to get together and compare notes,” says Marc Kuchner, a senior astrophysicist at NASA’s Goddard Space Flight Center and leader of the crowdsourcing disk-detection project known as Disk Detective. “I’m really glad that Nguyen is looking into this because I really think that this kind of machine-human cooperation is going to be crucial for analyzing the big data sets of the future.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 1:23 pm on November 8, 2017 Permalink | Reply
    Tags: AI, CSAIL-MIT’s Computer Science and Artificial Intelligence Lab, Daniela Rus, More Evidence that Humans and Machines Are Better When They Team Up

    From MIT Technology Review: Women in STEM - Daniela Rus, “More Evidence that Humans and Machines Are Better When They Team Up”

    MIT Technology Review

    November 8, 2017
    Will Knight

    By worrying about job displacement, we might end up missing a huge opportunity for technological amplification.

    MIT computer scientist Daniela Rus. Image: Justin Saglio

    Instead of just fretting about how robots and AI will eliminate jobs, we should explore new ways for humans and machines to collaborate, says Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL).

    “I believe people and machines should not be competitors, they should be collaborators,” Rus said during her keynote at EmTech MIT 2017, an annual event hosted by MIT Technology Review.

    How technology will impact employment in coming years has become a huge question for economists, policy-makers, and technologists. And, as one of the world’s preeminent centers of robotics and artificial intelligence, CSAIL has a big stake in driving coming changes.

    There is some disagreement among experts about how significantly jobs will be affected by automation and AI, and about how this will be offset by the creation of new business opportunities. Last week, Rus and others at MIT organized an event called AI and the Future of Work, where some speakers gave more dire warnings about the likely upheaval ahead (see “Is AI About to Decimate White Collar Jobs?”).

    The potential for AI to augment human skills is often mentioned, but it has been researched relatively little. Rus talked about a study by researchers from Harvard University comparing the ability of expert doctors and AI software to diagnose cancer in patients. They found that the doctors performed significantly better than the software, but that doctors working together with the software were better still.

    Rus pointed to the potential for AI to augment human capabilities in law and in manufacturing, where smarter automated systems might enable the production of goods to be highly customized and more distributed.

    Robotics might end up augmenting human abilities in some surprising ways. For instance, Rus pointed to an MIT project that uses technology from self-driving cars to help people with visual impairments navigate. She also speculated that brain-computer interfaces, while still relatively crude today, might have a huge impact on future interactions with robots.

    Although Rus is bullish on the future of work, she said two economic phenomena do give her cause for concern. One is the decreasing quality of many jobs, something that is partly driven by automation; the other is the United States’ flat gross domestic product, which limits the emergence of new economic opportunities.

    But because AI is still so limited, she said she expects it mostly to eliminate the routine and boring elements of work. “There is still a lot to be done in this space,” Rus said. “I am wildly excited about offloading my routine tasks to machines so I can focus on things that are interesting.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 9:54 am on November 1, 2017 Permalink | Reply
    Tags: AI, Deep neural networks, Google Researchers Have a New Alternative to Traditional Neural Networks

    From MIT Technology Review: “Google Researchers Have a New Alternative to Traditional Neural Networks”

    MIT Technology Review

    November 1, 2017
    Jamie Condliffe

    Image credit: Jingyi Wang

    Say hello to the capsule network.

    AI has enjoyed huge growth in the past few years, and much of that success is owed to deep neural networks, which provide the smarts behind some of AI’s most impressive tricks, like image recognition. But there is growing concern that some of the fundamental tenets that have made those systems so successful may not be able to overcome the major problems facing AI, perhaps the biggest of which is the need for huge quantities of data from which to learn.

    Google’s Geoff Hinton seems to be among those who are concerned: Wired reports that he has now unveiled a new take on traditional neural networks that he calls capsule networks. In a pair of new papers, one published on the arXiv and the other on OpenReview, Hinton and a handful of colleagues explain how they work.

    Their approach uses small groups of neurons, collectively known as capsules, which are organized into layers to identify things in video or images. When several capsules in one layer agree on having detected something, they activate a capsule at a higher level, and so on, until the network is able to make a judgment about what it sees. Each of those capsules is designed to detect a specific feature in an image in such a way that it can recognize that feature in different scenarios, such as from varying angles.
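    For readers curious how that agreement mechanism can work in practice, here is a minimal NumPy sketch of routing-by-agreement between two capsule layers, loosely following the scheme in the arXiv paper. The dimensions, iteration count, and variable names are illustrative, and a real implementation would also learn the transformation matrices that produce the lower-level predictions.

```python
# Toy routing-by-agreement between two capsule layers; sizes are arbitrary.
import numpy as np

def squash(v, axis=-1):
    """Shrink a vector's length into (0, 1) while preserving its direction."""
    norm_sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * v / np.sqrt(norm_sq + 1e-9)

def route(u_hat, iterations=3):
    """u_hat: predictions from lower capsules, shape (n_lower, n_upper, dim)."""
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))                       # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over upper capsules
        s = np.einsum("ij,ijd->jd", c, u_hat)              # weighted sum of predictions
        v = squash(s)                                      # upper-capsule outputs
        b += np.einsum("ijd,jd->ij", u_hat, v)             # agreement raises the logits
    return v

# Usage: 8 lower-level capsules predicting 4 upper capsules of dimension 16.
u_hat = np.random.randn(8, 4, 16)
print(route(u_hat).shape)  # (4, 16)
```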

    Hinton claims that the approach, which has been in the making for decades, should enable his networks to require less data than regular neural nets in order to recognize objects in new situations. In the papers published so far, capsule networks have been shown to keep up with regular neural networks when it comes to identifying handwritten characters, and to make fewer errors when trying to recognize previously observed toys from different angles. In other words, he’s published the results because he’s gotten his capsules to work as well as, or slightly better than, regular ones (albeit more slowly, for now).

    Now, then, comes the interesting part. Will these systems provide a compelling alternative to traditional neural networks, or will they stall? Right now it’s impossible to tell, but we can expect the machine learning community to implement the work, and fast, in order to find out. Either way, those concerned about the limitations of current AI systems can be heartened by the fact that researchers are pushing the boundaries to build new deep learning alternatives.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     