Tagged: AI-Artificial Intelligence

  • richardmitnick 10:13 am on June 20, 2019 Permalink | Reply
    Tags: AI-Artificial Intelligence, Ted Chiang

    From Science Node: “Why does AI fascinate us?” 


    10 June, 2019
    Alisa Alering


    Ted Chiang talks about how our love for fictional AI interacts with the real-world use of artificial intelligence.

    Why do you think we are fascinated by AI?

    People have been interested in artificial beings for a very long time. Ever since we’ve had lifelike statues, we’ve imagined how they might behave if they were actually alive. More recently, our ideas of how robots might act are shaped by our perception of how good computers are at certain tasks. The earliest calculating machines did things like computing logarithm tables more accurately than people could. The fact that machines became capable of doing a task which we previously associated with very smart people made us think that the machines were, in some sense, like very smart people.

    How does our—let’s call it shared human mythology—of AI interact with the real forms of artificial intelligence we encounter in the world today?

    The fact that we use the term “artificial intelligence” creates associations in the public imagination which might not exist if the software industry used some other term. AI has, in science fiction, referred to a certain trope of androids and robots, so when the software industry uses the same term, it encourages us to personify software even more than we normally would.

    Is there a big difference between our fictional imaginary consumption of AI and what’s actually going on in current technology?

    Intelligent machines. ‘Maria’ was the first robot to be depicted on film, in Fritz Lang’s Metropolis (1927). Courtesy Jeremy Tarling. (CC BY-SA 2.0)

    I think there’s a huge difference. In our fictional imagination “artificial intelligence” refers to something that is, in many ways, like a person. It’s a very rigid person, but we still think of it as a person. But nothing that we have in the software industry right now is remotely like a person—not even close. It’s very easy for us to attribute human-like characteristics to software, but that’s more of a reflection of our cognitive biases. It doesn’t say anything about the properties that the software itself possesses.

    What’s happening now or in the near future with intelligent systems that really captures your interest?

    What I find most interesting is not typically described as AI, but rather with the phrase ‘artificial life.’ Some researchers are creating digital organisms with bodies and sense organs that allow them to move around and navigate their environment. Usually there’s some mechanism where they can give rise to slightly different versions of themselves, and thus evolve over time. This avenue of research is really interesting because it could eventually result in software entities which have a lot of the properties that we associate with living organisms. It’s still going to be a long way from anything that we consider intelligent, but it’s a very interesting avenue of research.

    Over time, these entities might come to have the intelligence of an insect. Even that would be pretty impressive, because even an insect is good at a lot of things which Watson (IBM’s AI supercomputer) can’t do at all. An insect can navigate its environment and look for food and avoid danger. A lot of the things that we call common sense are outgrowths of the fact that we have bodies and live in the physical world. If a digital organism could have some of that, that would be a way of laying the groundwork for an artificial intelligence to eventually have common sense.
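
    The mechanism Chiang describes — digital organisms that reproduce with slight variation and are filtered by their environment — can be pictured as a simple mutate-and-select loop. The sketch below is purely illustrative; the genome representation and the fitness function are invented for this example and do not correspond to any particular artificial-life platform.

    ```python
    import random

    def mutate(genome, rate=0.1):
        """Return a copy of the genome with small random changes."""
        return [g + random.gauss(0, 1) if random.random() < rate else g
                for g in genome]

    def fitness(genome):
        # Invented stand-in for "how well this organism copes with its
        # environment": closeness of the genome's sum to an arbitrary target.
        return -abs(sum(genome) - 10.0)

    # A population of very simple "organisms", each just a list of numbers.
    population = [[random.gauss(0, 1) for _ in range(5)] for _ in range(50)]

    for generation in range(200):
        # Each organism gives rise to slightly different versions of itself...
        offspring = [mutate(g) for g in population for _ in range(2)]
        # ...and the environment keeps the variants that do best.
        population = sorted(offspring, key=fitness, reverse=True)[:50]

    print("best fitness after 200 generations:", round(max(map(fitness, population)), 3))
    ```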

    How do we teach an artificial intelligence the things we consider common sense?

    Alan Turing once wrote that he didn’t know what would be the best way to create a thinking machine; it might involve teaching it abstract activities like chess, or it might involve giving it eyes and a body and teaching it the way you’d teach a child. He thought both would be good avenues to explore.

    Historically, we’ve only tried that first route, and that has led to this idea that common sense is hard to teach or that artificial intelligences lack common sense. I think if we had gone with the second route, we’d have a different view of things.

    If you want an AI to be really good at playing chess, we have got that problem licked. But if you want something that can navigate your living room without constantly bumping into a coffee table, that’s a completely different challenge. If you want to solve that one, you’re going to need a different approach than what we’ve used for solving the grandmaster-level chess-playing problem.

    My cat’s really good in the living room but not so good at chess.

    Exactly. Because your cat grew up with eyes and a physical body.

    Since you’re someone who (presumably) spends a lot of time thinking about the social and philosophical aspects of AI, what do you think the creators of artificial beings should be concerned about?

    I think it’s important for all of us to think about the greater context in which the work we do takes place. When people say, “I was just doing my job,” we tend not to consider that a good excuse when doing that job leads to bad moral outcomes.

    When you as a technologist are being asked how to solve a problem, it’s worth thinking about, “Why am I being asked to solve this problem? In whose interest is it to solve this problem?” That’s something we all need to be thinking about no matter what sort of work we do.

    Otherwise, if everyone simply keeps their head down and just focuses narrowly on the task at hand, then nothing changes.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 7:00 am on April 10, 2019 Permalink | Reply
    Tags: AI-Artificial Intelligence, Alán Aspuru-Guzik, Bo Wang, CIFAR-Canadian Institute for Advanced Research, Daniel Roy, David Fleet

    From University of Toronto: “U of T researchers named new CIFAR chairs in artificial intelligence” 


    From University of Toronto


    https://www.cifar.ca/

    U of T’s new CIFAR chairs in artificial intelligence (clockwise from top left): David Fleet, Bo Wang, Alán Aspuru-Guzik and Daniel Roy

    April 09, 2019
    Geoffrey Vendeville

    The new U of T chairs – Alán Aspuru-Guzik, David Fleet, Daniel Roy and Bo Wang – join another eight at U of T who were named among the inaugural group last fall and are all associated with the Vector Institute for Artificial Intelligence.

    The four new chairs are expected to play an important part in developing new technologies, from new, state-of-the-art materials to predictive software that promises to improve patient care.

    “Today’s announcement will help U of T increase the number of outstanding artificial intelligence researchers and skilled graduates it produces,” said Vivek Goel, U of T’s vice-president of research and innovation.

    “It will also help heighten U of T’s, and Canada’s, international profile in AI research and training.”

    CIFAR is at the heart of the $125-million pan-Canadian AI strategy, which is intended to attract and retain the best minds in the rapidly growing and competitive field. CIFAR supports AI researchers across the country based at three institutes: Toronto’s Vector Institute, Edmonton’s Alberta Machine Intelligence Institute (Amii) and the Quebec Artificial Intelligence Institute (Mila).

    Aspuru-Guzik, who came to U of T from Harvard University, works at the intersection of theoretical chemistry and computational physics. In his Matter Lab, he and his colleagues use AI to simulate and classify molecules for application in new materials including cleantech and optoelectronics.

    Traditionally, developing new materials required endless experimentation and calculation. “It’s like searching for a needle in a haystack, where the needle is the function of the molecule that you need,” Aspuru-Guzik said. He added that, in the past, experimenters used a scattershot approach but, with AI, they can become sharpshooters.
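
    One way to picture that shift from scattershot to sharpshooter is a surrogate model trained on molecules that have already been characterized, then used to rank a much larger pool of untested candidates so that expensive synthesis effort goes only to the most promising ones. The sketch below is a generic illustration with invented descriptors and targets, not the Matter Lab’s actual pipeline.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Invented data: each row is a vector of molecular descriptors, and y is a
    # measured property we care about (a stand-in for, say, optoelectronic
    # performance) for molecules that have already been characterized.
    X_measured = rng.normal(size=(200, 16))
    y_measured = X_measured @ rng.normal(size=16) + rng.normal(scale=0.1, size=200)

    # Train a surrogate model on the molecules already evaluated experimentally.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_measured, y_measured)

    # Score a much larger pool of untested candidates and keep the most promising,
    # so that costly synthesis and testing is spent only on top-ranked molecules.
    X_candidates = rng.normal(size=(100_000, 16))
    scores = model.predict(X_candidates)
    top = np.argsort(scores)[-10:][::-1]
    print("indices of the ten most promising candidates:", top)
    ```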

    “Typically to make a material, it takes $10 million and 10 years of effort,” Aspuru-Guzik said. “So what I’m aiming to do in my research is to bring it down to at least to $1 million and one year per material, and even $100,000 and one month.”

    Canada’s support for AI research is crucial to maintain its early advantage in a field where other countries are looking to be leaders, he added.

    “The AI race is on. This is a race where everybody is putting [forward] their best wheels and race cars. They are pushing their pedals to the limit.”

    As for Wang, he worked as a senior AI consultant for the U.S. biotech company Genentech after obtaining a master’s degree at U of T. He later returned to Toronto to join U of T’s Faculty of Medicine as an assistant professor.

    Wang is also the lead artificial intelligence scientist at the Techna Institute and Peter Munk Cardiac Centre at the University Health Network. He applies his research in machine learning, computational biology, and computer vision to refining patient care.

    Fleet, a professor in the department of computer and mathematical sciences at U of T Scarborough, said Canada is reaping the rewards of investments in AI going back to the 1980s. That includes supporting pioneers like Geoffrey Hinton, who recently won the A.M. Turing Award, as well as Hector Levesque and Ray Reiter, who made influential contributions to AI, knowledge representation and databases, and theorem-proving.

    “There was the core group that they built the program around in Toronto that was world renowned,” said Fleet, who completed his PhD in computer science at U of T in 1990 and whose research spans computer vision, machine learning, image processing and visual neuroscience.

    U of T Scarborough is also home to Roy, an assistant professor who combines expertise in computer science, statistics and probability theory. His interest in computers started as a child after his parents brought home a Tandy personal computer. He learned how computer games worked and built his own.

    Fast forward to today and Roy is working on the theoretical frontier of machine learning by helping researchers build more reliable and efficient systems.

    Research in AI is advancing so quickly that it’s impossible to predict where it will lead, Roy said. “It’s difficult for me to anticipate what surprising things will show up at the main conferences next year – let alone five years from now.

    “It repeatedly blows my mind.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded in 1827, the University of Toronto has evolved into Canada’s leading institution of learning, discovery and knowledge creation. We are proud to be one of the world’s top research-intensive universities, driven to invent and innovate.

    Our students have the opportunity to learn from and work with preeminent thought leaders through our multidisciplinary network of teaching and research faculty, alumni and partners.

    The ideas, innovations and actions of more than 560,000 graduates continue to have a positive impact on the world.

     
  • richardmitnick 1:20 pm on March 4, 2019 Permalink | Reply
    Tags: AI-Artificial Intelligence, Completely doing away with wind variability is next to impossible, Google claims that Machine Learning and AI would indeed make wind power more predictable and hence more useful, Google has announced in its official blog post that it has enhanced the feasibility of wind energy by using AI software created by its UK subsidiary DeepMind, Google is working to make the algorithm more refined so that any discrepancy that might occur could be nullified, Unpredictability in delivering power at set time frame continues to remain a daunting challenge before the sector

    From Geospatial World: “Google and DeepMind predict wind energy output using AI” 

    From Geospatial World

    03/04/2019
    Aditya Chaturvedi

    Image Courtesy: Unsplash

    Google has announced in its official blog post that it has enhanced the feasibility of wind energy by using AI software created by its UK subsidiary DeepMind.

    Renewable energy is the path to lower carbon emissions and greater sustainability, so it is imperative that we focus on extracting optimum energy output from renewable sources.

    Renewable technologies will be at the forefront of climate change mitigation and addressing global warming; however, their full potential is yet to be harnessed owing to a slew of obstacles. Wind energy has emerged as a crucial source of renewable energy in the past decade, as a decline in the cost of turbines has led to the gradual mainstreaming of wind power. However, unpredictability in delivering power within a set time frame remains a daunting challenge for the sector.

    The Google and DeepMind project aims to change this by overcoming a limitation that has hobbled wind energy adoption.

    With the help of DeepMind’s Machine Learning algorithms, Google has been able to predict the wind energy output of the farms that it uses for its Green Energy initiatives.

    “DeepMind and Google started applying machine learning algorithms to 700 megawatts of wind power capacity in the central United States. These wind farms—part of Google’s global fleet of renewable energy projects—collectively generate as much electricity as is needed by a medium-sized city”, the blog says.

    Google is optimistic that it can accurately predict and schedule energy output, which would give it a clear advantage over non-time-based deliveries.

    Image Courtesy: Google/ DeepMind

    Using a neural network trained on weather forecasts and historical turbine data, the DeepMind system has been configured to predict wind power output 36 hours in advance.

    Taking a cue from these predictions, the model recommends the best possible way to fulfill, and even exceed, delivery commitments 24 hours in advance. This matters because energy sources that can deliver a particular amount of power over a defined period of time are usually more valuable to the grid.
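
    As a rough illustration of the idea (not DeepMind’s actual model), a forecasting sketch might combine weather-forecast features with recent turbine output to predict generation 36 hours ahead; all of the data and feature names below are invented for the example.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(1)

    # Invented training set: each row combines 36-hour-ahead weather-forecast
    # features (wind speed, direction, pressure, ...) with recent hourly turbine
    # output; the target is the power the farm actually generated 36 hours later.
    n = 5000
    forecast_features = rng.normal(size=(n, 6))
    recent_output = rng.normal(size=(n, 12))                # last 12 hourly readings
    X = np.hstack([forecast_features, recent_output])
    y = 3.0 * forecast_features[:, 0] + rng.normal(scale=0.5, size=n)  # toy target, in MW

    model = GradientBoostingRegressor()
    model.fit(X[:4000], y[:4000])

    # Predictions 36 hours out can then back a delivery commitment made a day in
    # advance (for example, committing a conservative quantile of the forecast).
    pred = model.predict(X[4000:])
    print("mean absolute error on held-out hours (MW):",
          round(float(np.abs(pred - y[4000:]).mean()), 2))
    ```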

    Google is working to refine the algorithm further so that any discrepancies that occur can be corrected. To date, Google claims that its machine learning algorithms have boosted the value of the wind energy generated by 20%, ‘compared to the baseline scenario of no time-based commitments to the grid’, the blog says.

    Image Courtesy: Google

    Completely doing away with wind variability is next to impossible, but Google claims that Machine Learning and AI would indeed make wind power more predictable and hence more useful.

    This unique approach would surely open up new avenues and make wind farm data more reliable and precise. When the productivity of wind farms is greatly increased and their output can be predicted as well as calculated, wind power will be able to match conventional electricity sources.

    Google is hopeful that the power of Machine Learning and AI would boost the mass adoption of wind power and turn it into a popular alternative to traditional sources of electricity over the years.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    http://www.geospatialworld.net

    With an average of 55,000+ unique visitors per month, http://www.geospatialworld.net is easily the number one media portal in the geospatial domain, and is a reliable source of information for professionals in 150+ countries. The website, which integrates text, graphics and video elements, is an interactive medium for geospatial industry stakeholders to connect through several innovative features, including news, videos, guest blogs, case studies, articles, interviews, business listings and events.

    600,000+ annual unique visitors

     
  • richardmitnick 11:58 am on January 30, 2019 Permalink | Reply
    Tags: AI-Artificial Intelligence, Algorithm could help autonomous underwater vehicles explore risky but scientifically-rewarding environments, Engineers program marine robots to take calculated risks

    From MIT News: “Engineers program marine robots to take calculated risks” 


    From MIT News

    January 30, 2019
    Jennifer Chu

    MIT engineers have now developed an algorithm that lets autonomous underwater vehicles weigh the risks and potential rewards of exploring an unknown region.
    Image: stock image.

    Algorithm could help autonomous underwater vehicles explore risky but scientifically-rewarding environments.

    We know far less about the Earth’s oceans than we do about the surface of the moon or Mars. The sea floor is carved with expansive canyons, towering seamounts, deep trenches, and sheer cliffs, most of which are considered too dangerous or inaccessible for autonomous underwater vehicles (AUVs) to navigate.

    But what if the reward for traversing such places was worth the risk?

    MIT engineers have now developed an algorithm that lets AUVs weigh the risks and potential rewards of exploring an unknown region. For instance, if a vehicle tasked with identifying underwater oil seeps approached a steep, rocky trench, the algorithm could assess the reward level (the probability that an oil seep exists near this trench), and the risk level (the probability of colliding with an obstacle), if it were to take a path through the trench.

    “If we were very conservative with our expensive vehicle, saying its survivability was paramount above all, then we wouldn’t find anything of interest,” says Benjamin Ayton, a graduate student in MIT’s Department of Aeronautics and Astronautics. “But if we understand there’s a tradeoff between the reward of what you gather, and the risk or threat of going toward these dangerous geographies, we can take certain risks when it’s worthwhile.”

    Ayton says the new algorithm can compute tradeoffs of risk versus reward in real time, as a vehicle decides where to explore next. He and his colleagues in the lab of Brian Williams, professor of aeronautics and astronautics, are implementing this algorithm and others on AUVs, with the vision of deploying fleets of bold, intelligent robotic explorers for a number of missions, including looking for offshore oil deposits, investigating the impact of climate change on coral reefs, and exploring extreme environments analogous to Europa, an ice-covered moon of Jupiter that the team hopes vehicles will one day traverse.

    “If we went to Europa and had a very strong reason to believe that there might be a billion-dollar observation in a cave or crevasse, which would justify sending a spacecraft to Europa, then we would absolutely want to risk going in that cave,” Ayton says. “But algorithms that don’t consider risk are never going to find that potentially history-changing observation.”

    Ayton and Williams, along with Richard Camilli of the Woods Hole Oceanographic Institution, will present their new algorithm at the Association for the Advancement of Artificial Intelligence conference this week in Honolulu.

    A bold path

    The team’s new algorithm is the first to enable “risk-bounded adaptive sampling.” An adaptive sampling mission is designed, for instance, to automatically adapt an AUV’s path, based on new measurements that the vehicle takes as it explores a given region. Most adaptive sampling missions that consider risk typically do so by finding paths with a concrete, acceptable level of risk. For instance, AUVs may be programmed to only chart paths with a chance of collision that doesn’t exceed 5 percent.

    But the researchers found that accounting for risk alone could severely limit a mission’s potential rewards.

    “Before we go into a mission, we want to specify the risk we’re willing to take for a certain level of reward,” Ayton says. “For instance, if a path were to take us to more hydrothermal vents, we would be willing to take this amount of risk, but if we’re not going to see anything, we would be willing to take less risk.”

    The team’s algorithm takes in bathymetric data, or information about the ocean topography, including any surrounding obstacles, along with the vehicle’s dynamics and inertial measurements, to compute the level of risk for a certain proposed path. The algorithm also takes in all previous measurements that the AUV has taken, to compute the probability that such high-reward measurements may exist along the proposed path.

    If the risk-to-reward ratio meets a certain value, determined by scientists beforehand, then the AUV goes ahead with the proposed path, taking more measurements that feed back into the algorithm to help it evaluate the risk and reward of other paths as the vehicle moves forward.
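
    The decision rule described above can be pictured with a toy sketch: estimate each candidate path’s collision risk and expected reward, discard paths that exceed the risk budget chosen before the mission, and take the highest-reward survivor. The numbers and structure below are invented for illustration and are not the authors’ implementation.

    ```python
    # Minimal sketch of risk-bounded path selection (illustrative numbers only).
    # Each candidate path has an estimated probability of collision and an
    # expected scientific reward, e.g. the chance of finding an oil seep.
    candidate_paths = [
        {"name": "open water",   "risk": 0.01, "expected_reward": 0.2},
        {"name": "narrow chasm", "risk": 0.04, "expected_reward": 0.9},
        {"name": "rocky trench", "risk": 0.20, "expected_reward": 1.0},
    ]

    RISK_BUDGET = 0.05  # acceptable collision probability chosen before the mission

    def choose_path(paths, risk_budget):
        # Keep only paths whose estimated risk stays within the pre-set budget,
        # then pick the one with the highest expected reward.
        feasible = [p for p in paths if p["risk"] <= risk_budget]
        if not feasible:
            return None  # no acceptable path; stay put or re-plan
        return max(feasible, key=lambda p: p["expected_reward"])

    best = choose_path(candidate_paths, RISK_BUDGET)
    print("chosen path:", best["name"] if best else "none")
    ```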

    The researchers tested their algorithm in a simulation of an AUV mission east of Boston Harbor. They used bathymetric data collected from the region during a previous NOAA survey, and simulated an AUV exploring at a depth of 15 meters through regions at relatively high temperatures. They looked at how the algorithm planned out the vehicle’s route under three different scenarios of acceptable risk.

    In the scenario with the lowest acceptable risk, meaning that the vehicle should avoid any regions that would have a very high chance of collision, the algorithm mapped out a conservative path, keeping the vehicle in a safe region that also did not have any high rewards — in this case, high temperatures. For scenarios of higher acceptable risk, the algorithm charted bolder paths that took a vehicle through a narrow chasm, and ultimately to a high-reward region.

    The team also ran the algorithm through 10,000 numerical simulations, generating random environments in each simulation through which to plan a path, and found that the algorithm “trades off risk against reward intuitively, taking dangerous actions only when justified by the reward.”

    A risky slope

    Last December, Ayton, Williams, and others spent two weeks on a cruise off the coast of Costa Rica, deploying underwater gliders, on which they tested several algorithms, including this newest one. For the most part, the algorithm’s path planning agreed with those proposed by several onboard geologists who were looking for the best routes to find oil seeps.

    Ayton says there was a particular moment when the risk-bounded algorithm proved especially handy. An AUV was making its way up a precarious slump, or landslide, where the vehicle couldn’t take too many risks.

    “The algorithm found a method to get us up the slump quickly, while being the most worthwhile,” Ayton says. “It took us up a path that, while it didn’t help us discover oil seeps, it did help us refine our understanding of the environment.”

    “What was really interesting was to watch how the machine algorithms began to ‘learn’ after the findings of several dives, and began to choose sites that we geologists might not have chosen initially,” says Lori Summa, a geologist and guest investigator at the Woods Hole Oceanographic Institution, who took part in the cruise. “This part of the process is still evolving, but it was exciting to watch the algorithms begin to identify the new patterns from large amounts of data, and couple that information to an efficient, ‘safe’ search strategy.”

    In their long-term vision, the researchers hope to use such algorithms to help autonomous vehicles explore environments beyond Earth.

    “If we went to Europa and weren’t willing to take any risks in order to preserve a probe, then the probability of finding life would be very, very low,” Ayton says. “You have to risk a little to get more reward, which is generally true in life as well.”

    This research was supported, in part, by Exxon Mobil, as part of the MIT Energy Initiative, and by NASA.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


     
  • richardmitnick 2:49 pm on October 16, 2018 Permalink | Reply
    Tags: AI-Artificial Intelligence, Deep Skies Lab, Galaxy Zoo-Citizen Science, Gravitational lenses

    From Symmetry: “Studying the stars with machine learning” 

    From Symmetry

    10/16/18
    Evelyn Lamb

    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    To keep up with an impending astronomical increase in data about our universe, astrophysicists turn to machine learning.

    Kevin Schawinski had a problem.

    In 2007 he was an astrophysicist at Oxford University and hard at work reviewing seven years’ worth of photographs from the Sloan Digital Sky Survey—images of more than 900,000 galaxies. He spent his days looking at image after image, noting whether a galaxy looked spiral or elliptical, or logging which way it seemed to be spinning.

    Technological advancements had sped up scientists’ ability to collect information, but scientists were still processing information at the same rate. After working on the task full time and barely making a dent, Schawinski and colleague Chris Lintott decided there had to be a better way to do this.

    There was: a citizen science project called Galaxy Zoo. Schawinski and Lintott recruited volunteers from the public to help out by classifying images online. Showing the same images to multiple volunteers allowed them to check one another’s work. More than 100,000 people chipped in and condensed a task that would have taken years into just under six months.

    Citizen scientists continue to contribute to image-classification tasks. But technology also continues to advance.

    The Dark Energy Spectroscopic Instrument, scheduled to begin in 2019, will measure the velocities of about 30 million galaxies and quasars over five years.

    LBNL/DESI Dark Energy Spectroscopic Instrument for the Nicholas U. Mayall 4-meter telescope at Kitt Peak National Observatory near Tucson, Ariz, USA

    The Large Synoptic Survey Telescope, scheduled to begin in the early 2020s, will collect more than 30 terabytes of data each night—for a decade.

LSST Camera, built at SLAC

    LSST telescope, currently under construction on the El Peñón peak at Cerro Pachón, Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    “The volume of datasets [from those surveys] will be at least an order of magnitude larger,” says Camille Avestruz, a postdoctoral researcher at the University of Chicago.

    To keep up, astrophysicists like Schawinski and Avestruz have recruited a new class of non-scientist scientists: machines.

    Researchers are using artificial intelligence to help with a variety of tasks in astronomy and cosmology, from image analysis to telescope scheduling.

    Superhuman scheduling, computerized calibration

    Artificial intelligence is an umbrella term for ways in which computers can seem to reason, make decisions, learn, and perform other tasks that we associate with human intelligence. Machine learning is a subfield of artificial intelligence that uses statistical techniques and pattern recognition to train computers to make decisions, rather than programming more direct algorithms.

    In 2017, a research group from Stanford University used machine learning to study images of strong gravitational lensing, a phenomenon in which an accumulation of matter in space is dense enough that it bends light waves as they travel around it.

    Gravitational Lensing NASA/ESA

    Because many gravitational lenses can’t be accounted for by luminous matter alone, a better understanding of gravitational lenses can help astronomers gain insight into dark matter.

    In the past, scientists have conducted this research by comparing actual images of gravitational lenses with large numbers of computer simulations of mathematical lensing models, a process that can take weeks or even months for a single image. The Stanford team showed that machine learning algorithms can speed up this process by a factor of millions.
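
    In broad strokes, the approach replaces case-by-case comparison against simulations with a convolutional network that classifies image cutouts as lens candidates or not, typically trained on large sets of simulated lenses. The PyTorch sketch below is a generic illustration of such a classifier (with random tensors standing in for real cutouts), not the Stanford group’s actual architecture.

    ```python
    import torch
    import torch.nn as nn

    # A small convolutional classifier for lens / non-lens image cutouts.
    class LensClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
                nn.Linear(64, 2),  # two classes: lens, non-lens
            )

        def forward(self, x):
            return self.head(self.features(x))

    model = LensClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    images = torch.randn(64, 1, 64, 64)   # a batch of 64x64 single-band cutouts
    labels = torch.randint(0, 2, (64,))   # 1 = lens, 0 = non-lens

    for _ in range(5):                    # a few illustrative training steps
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

    print("training loss:", loss.item())
    ```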

    Greg Stewart, SLAC National Accelerator Laboratory

    Schawinski, who is now an astrophysicist at ETH Zürich, uses machine learning in his current work. His group has used tools called generative adversarial networks, or GANs, to recover clean versions of images that have been degraded by random noise. They recently published a paper [Astronomy and Astrophysics] about using AI to generate and test new hypotheses in astrophysics and other areas of research.

    Another application of machine learning in astrophysics involves solving logistical challenges such as scheduling. There are only so many hours in a night that a given high-powered telescope can be used, and it can only point in one direction at a time. “It costs millions of dollars to use a telescope for on the order of weeks,” says Brian Nord, a physicist at the University of Chicago and part of Fermilab’s Machine Intelligence Group, which is tasked with helping researchers in all areas of high-energy physics deploy AI in their work.

    Machine learning can help observatories schedule telescopes so they can collect data as efficiently as possible. Both Schawinski’s lab and Fermilab are using a technique called reinforcement learning to train algorithms to solve problems like this one. In reinforcement learning, an algorithm isn’t trained on “right” and “wrong” answers but through differing rewards that depend on its outputs. The algorithms must strike a balance between the safe, predictable payoffs of understood options and the potential for a big win with an unexpected solution.
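
    The reward-driven loop at the heart of reinforcement learning can be pictured with a toy “scheduling” bandit: each night the algorithm picks a target, receives a noisy payoff, and updates its value estimates, trading off exploitation of known-good choices against exploration. This is a minimal illustration under invented rewards, not the scheduling systems used by Schawinski’s lab or Fermilab.

    ```python
    import random

    # Toy "telescope scheduling" bandit: each action is a candidate target,
    # and the (hidden) reward is the scientific value of observing it tonight.
    true_value = {"target_A": 0.3, "target_B": 0.8, "target_C": 0.5}

    estimates = {t: 0.0 for t in true_value}
    counts = {t: 0 for t in true_value}
    epsilon = 0.1  # balance between exploiting known-good targets and exploring

    for night in range(1000):
        if random.random() < epsilon:
            choice = random.choice(list(true_value))        # explore
        else:
            choice = max(estimates, key=estimates.get)      # exploit
        reward = true_value[choice] + random.gauss(0, 0.1)  # noisy payoff
        counts[choice] += 1
        # Incremental average: update our estimate of this target's value.
        estimates[choice] += (reward - estimates[choice]) / counts[choice]

    print({t: round(v, 2) for t, v in estimates.items()})
    ```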

    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    A growing field

    When computer science graduate student Shubhendu Trivedi of the Toyota Technological Institute at the University of Chicago started teaching a graduate course on deep learning with one of his mentors, Risi Kondor, he was pleased with how many researchers from the physical sciences signed up for it. They didn’t know much about how to use AI in their research, and Trivedi realized there was an unmet need for machine learning experts to help scientists in different fields find ways of exploiting these new techniques.

    The conversations he had with researchers in his class evolved into collaborations, including participation in the Deep Skies Lab, an astronomy and artificial intelligence research group co-founded by Avestruz, Nord and astronomer Joshua Peek of the Space Telescope Science Institute. Earlier this month, they submitted their first peer-reviewed paper demonstrating the efficiency of an AI-based method to measure gravitational lensing in the Cosmic Microwave Background [CMB].

    Similar groups are popping up across the world, from Schawinski’s group in Switzerland to the Centre for Astrophysics and Supercomputing in Australia. And adoption of machine learning techniques in astronomy is increasing rapidly. In an arXiv search of astronomy papers, the terms “deep learning” and “machine learning” appear more in the titles of papers from the first seven months of 2018 than from all of 2017, which in turn had more than 2016.

    “Five years ago, [machine learning algorithms in astronomy] were esoteric tools that performed worse than humans in most circumstances,” Nord says. Today, more and more algorithms are consistently outperforming humans. “You’d be surprised at how much low-hanging fruit there is.”

    But there are obstacles to introducing machine learning into astrophysics research. One of the biggest is the fact that machine learning is a black box. “We don’t have a fundamental theory of how neural networks work and make sense of things,” Schawinski says. Scientists are understandably nervous about using tools without fully understanding how they work.

    Another related stumbling block is uncertainty. Machine learning often depends on inputs that all have some amount of noise or error, and the models themselves make assumptions that introduce uncertainty. Researchers using machine learning techniques in their work need to understand these uncertainties and communicate those accurately to each other and the broader public.

    The state of the art in machine learning is changing so rapidly that researchers are reluctant to make predictions about what will be coming even in the next five years. “I would be really excited if as soon as data comes off the telescopes, a machine could look at it and find unexpected patterns,” Nord says.

    No matter exactly the form future advances take, the data keeps coming faster and faster, and researchers are increasingly convinced that artificial intelligence is going to be necessary to help them keep up.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 6:53 pm on September 11, 2018 Permalink | Reply
    Tags: AI-Artificial Intelligence, The notorious repeating fast radio source FRB 121102

    From Breakthrough Listen via Science Alert: “Astronomers Have Detected an Astonishing 72 New Mystery Radio Bursts From Space” 

    From Breakthrough Listen Project

    via

Science Alert

    11 SEP 2018
    MICHELLE STARR

    A massive number of new signals have been discovered coming from the notorious repeating fast radio source FRB 121102 – and we can thank artificial intelligence for these findings.

    Researchers at the search for extraterrestrial intelligence (SETI) project Breakthrough Listen applied machine learning to comb through existing data, and found 72 fast radio bursts that had previously been missed.

    Fast radio bursts (FRBs) are among the most mysterious phenomena in the cosmos. They are extremely powerful, generating as much energy as hundreds of millions of Suns. But they are also extremely short, lasting just milliseconds; and most of them only occur once, without warning.

    This means they can’t be predicted; so it’s not like astronomers are able to plan observations. They are only picked up later in data from other radio observations of the sky.

    Except for one source. FRB 121102 is a special individual – because ever since its discovery in 2012, it has been caught bursting again and again, the only FRB source known to behave this way.

    Because we know FRB 121102 to be a repeating source of FRBs, this means we can try to catch it in the act. This is exactly what researchers at Breakthrough Listen did last year. On 26 August 2017, they pointed the Green Bank Telescope in West Virginia at its location for five hours.

    In the 400 terabytes of data from that observation, the researchers discovered 21 FRBs using standard computer algorithms, all from within the first hour. They concluded that the source goes through periods of frenzied activity and quiescence.

    But the powerful new algorithm used to reanalyse that August 26 data suggests that FRB 121102 is a lot more active and possibly complex than originally thought. Researchers trained what is known as a convolutional neural network to look for the signals, then set it loose on the data like a truffle pig.
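
    Conceptually, the reanalysis slides a detector across the archived dynamic spectrum and flags windows that score far above the background. In the sketch below the data are synthetic and a trivial energy statistic stands in for Breakthrough Listen’s trained convolutional network; it is meant only to illustrate the search pattern, not the team’s pipeline.

    ```python
    import numpy as np

    # Synthetic "archival" dynamic spectrum: frequency channels x time samples,
    # with a faint, short injected burst.
    rng = np.random.default_rng(2)
    n_freq, n_time = 256, 20_000
    spectrum = rng.normal(size=(n_freq, n_time))
    spectrum[:, 9_000:9_016] += 0.5            # the injected "burst"

    def detector_score(window):
        # Placeholder for model(window); higher means more burst-like.
        return window.mean()

    window_len, step = 64, 32
    scores = [(start, detector_score(spectrum[:, start:start + window_len]))
              for start in range(0, n_time - window_len, step)]

    # Flag windows that score well above the typical background level.
    values = np.array([s for _, s in scores])
    threshold = values.mean() + 5 * values.std()
    candidates = [start for start, s in scores if s > threshold]
    print("candidate burst windows start at samples:", candidates)
    ```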

    It returned triumphant with 72 previously undetected signals, bringing the total number that astronomers have observed from the object to around 300.

    “This work is only the beginning of using these powerful methods to find radio transients,” said astronomer Gerry Zhang of the University of California Berkeley, which runs Breakthrough Listen.

    “We hope our success may inspire other serious endeavours in applying machine learning to radio astronomy.”

    The new result has helped us learn a little more about FRB 121102, putting constraints on the periodicity of the bursts. It suggests that, the researchers said, there’s no pattern to the way we receive them – unless the pattern is shorter than 10 milliseconds.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    Breakthrough Listen is the largest ever scientific research program aimed at finding evidence of civilizations beyond Earth. The scope and power of the search are on an unprecedented scale:

    The program includes a survey of the 1,000,000 closest stars to Earth. It scans the center of our galaxy and the entire galactic plane. Beyond the Milky Way, it listens for messages from the 100 closest galaxies to ours.

    The instruments used are among the world’s most powerful. They are 50 times more sensitive than existing telescopes dedicated to the search for intelligence.

    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia

    UCSC Lick Automated Planet Finder telescope, Mount Hamilton, CA, USA



    GBO radio telescope, Green Bank, West Virginia, USA

    The radio surveys cover 10 times more of the sky than previous programs. They also cover at least 5 times more of the radio spectrum – and do it 100 times faster. They are sensitive enough to hear a common aircraft radar transmitting to us from any of the 1000 nearest stars.

    We are also carrying out the deepest and broadest ever search for optical laser transmissions. These spectroscopic searches are 1000 times more effective at finding laser signals than ordinary visible light surveys. They could detect a 100 watt laser (the energy of a normal household bulb) from 25 trillion miles away.

    Listen combines these instruments with innovative software and data analysis techniques.

    The initiative will span 10 years and commit a total of $100,000,000.

     