Tagged: AI-Artificial Intelligence

  • richardmitnick 1:24 pm on October 2, 2019
    Tags: AI-Artificial Intelligence, Argonne will deploy the Cerebras CS-1 to enhance scientific AI models for cancer, cosmology, brain imaging and materials science among others., Bigger and better telescopes and accelerators and of course supercomputers on which they could run larger multiscale simulations, The influx of massive data sets and the computing power to sift sort and analyze it., The size of the simulations we are running is so big the problems that we are trying to solve are getting bigger so that these AI methods can no longer be seen as a luxury but as must-have technology.

    From Argonne Leadership Computing Facility: “Artificial Intelligence: Transforming science, improving lives” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    September 30, 2019
    Mary Fitzpatrick
    John Spizzirri

    Commitment to developing artificial intelligence (AI) as a national research strategy in the United States may have unequivocally defined 2019 as the Year of AI — particularly at the federal level, more specifically throughout the U.S. Department of Energy (DOE) and its national laboratory complex.

    In February, the White House established the Executive Order on Maintaining American Leadership in Artificial Intelligence (American AI Initiative) to expand the nation’s leadership role in AI research. Its goals are to fuel economic growth, enhance national security and improve quality of life.

    The initiative injects substantial and much-needed research dollars into federal facilities across the United States, promoting technology advances and innovation and enhancing collaboration with nongovernment partners and allies abroad.

    In response, DOE has made AI — along with exascale supercomputing and quantum computing — a major element of its $5.5 billion scientific R&D budget and established the Artificial Intelligence and Technology Office, which will serve to coordinate AI work being done across the DOE.

    At DOE facilities like Argonne National Laboratory, researchers have already begun using AI to design better materials and processes, safeguard the nation’s power grid, accelerate treatments in brain trauma and cancer and develop next-generation microelectronics for applications in AI-enabled devices.

    Over the last two years, Argonne has made significant strides toward implementing its own AI initiative. Leveraging the Laboratory’s broad capabilities and world-class facilities, it has set out to explore and expand new AI techniques, encourage collaboration, automate traditional research methods and lab facilities, and drive discovery.

    In July, it hosted an AI for Science town hall, the first of four such events that also included Oak Ridge and Lawrence Berkeley national laboratories and DOE’s Office of Science.


    Engaging nearly 350 members of the AI community, the town hall served to stimulate conversation around expanding the development and use of AI, while addressing critical challenges by using the initiative framework called AI for Science.

    “AI for Science requires new research and infrastructure, and we have to move a lot of data around and keep track of thousands of models,” says Rick Stevens, Associate Laboratory Director for Argonne’s Computing, Environment and Life Sciences (CELS) Directorate and a professor of computer science at the University of Chicago.

    Rick Stevens, Associate Laboratory Director for Computing, Environment and Life Sciences, is helping to develop the patient-level CANDLE computing architecture, which is meant to help guide drug treatment choices for tumors based on a much wider assortment of data than is currently used.

    “How do we distribute this production capability to thousands of people? We need to have system software with different capabilities for AI than for simulation software to optimize workflows. And these are just a few of the issues we have to begin to consider.”

    The conversation has just begun and continues through Laboratory-wide talks and events, such as a recent AI for Science workshop aimed at growing interest in AI capabilities through technical hands-on sessions.

    Argonne also will host DOE’s Innovation XLab Artificial Intelligence Summit in Chicago, meant to showcase the assets and capabilities of the national laboratories and facilitate an exchange of information and ideas between industry, universities, investors and end-use customers with Lab innovators and experts.
    What exactly is AI?

    Ask any number of researchers to define AI and you’re bound to get — well, first, a long pause and perhaps a chuckle — a range of answers from the more conventional ​“utilizing computing to mimic the way we interpret data but at a scale not possible by human capability” to ​“a technology that augments the human brain.”

    Taken together, AI might well be viewed as a multi-component toolbox that enables computers to learn, recognize patterns, solve problems, explore complex datasets and adapt to changing conditions — much like humans, but one day, maybe better.

    While the definitions and the tools may vary, the goals remain the same: utilize or develop the most advanced AI technologies to more effectively address the most pressing issues in science, medicine and technology, and accelerate discovery in those areas.

    At Argonne, AI has become a critical tool for modeling and prediction across almost all areas where the Laboratory has significant domain expertise: chemistry, materials, photon science, environmental and manufacturing sciences, biomedicine, genomics and cosmology.

    A key component of Argonne’s AI toolbox is a technique called machine learning and its derivatives, such as deep learning. The latter is built on neural networks comprising many layers of artificial neurons that learn internal representations of data, mimicking human information-gathering-processing systems like the brain.

    “Deep learning is the use of multi-layered neural networks to do machine learning, a program that gets smarter or more accurate as it gets more data to learn from. It’s very successful at learning to solve problems,” says Stevens.
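
    A minimal sketch of Stevens’ definition may help. The snippet below, assuming scikit-learn and an invented synthetic dataset, trains the same two-hidden-layer network on progressively larger training sets; its test accuracy climbs as the data grows, which is the “gets more accurate as it gets more data” behavior he describes.

    ```python
    # Illustrative only: a small multi-layered neural network whose accuracy
    # improves as it sees more data. The dataset is synthetic.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for n in (200, 2000, 15000):  # progressively larger training sets
        net = MLPClassifier(hidden_layer_sizes=(64, 64),  # two hidden layers
                            max_iter=500, random_state=0)
        net.fit(X_train[:n], y_train[:n])
        print(n, "samples -> test accuracy:", round(net.score(X_test, y_test), 3))
    ```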

    A staunch supporter of AI, particularly deep learning, Stevens is principal investigator on a multi-institutional effort that is developing the deep neural network application CANDLE (CANcer Distributed Learning Environment), which integrates deep learning with novel data, modeling and simulation techniques to accelerate cancer research.

    Coupled with the power of Argonne’s forthcoming exascale computer Aurora — which has the capacity to deliver a billion billion calculations per second — the CANDLE environment will enable a more personalized and effective approach to cancer treatment.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer

    And that is just a small sample of AI’s potential in science. Currently, all across Argonne, researchers are involved in more than 60 AI-related investigations, many of them driven by machine learning.

    Argonne Distinguished Fellow Valerie Taylor’s work looks at how applications execute on computers and large-scale, high-performance computing systems. Using machine learning, she and her colleagues model an execution’s behavior and then use that model to provide feedback on how to best modify the application for better performance.

    “Better performance may be shorter execution time or, using generated metrics such as energy, it may be reducing the average power,” says Taylor, director of Argonne’s Mathematics and Computer Science (MCS) division. ​“We use statistical analysis to develop the models and identify hints on how to modify the application.”
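
    A hedged sketch of that workflow: fit a statistical model of an execution metric against application parameters, then read the fitted coefficients as hints about what to modify. The parameter names and data below are invented for illustration and are not from Taylor’s actual studies.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    # Invented per-run parameters: [threads, tile_size, messages_per_step]
    runs = rng.uniform([1, 16, 10], [64, 512, 1000], size=(200, 3))
    # Synthetic "measured" execution time for each run, with noise
    time = 50 / runs[:, 0] + 0.01 * runs[:, 1] + 0.05 * runs[:, 2] \
           + rng.normal(0, 1, 200)

    model = LinearRegression().fit(runs, time)
    for name, coef in zip(["threads", "tile_size", "messages_per_step"], model.coef_):
        print(f"{name}: {coef:+.4f}")  # sign and size hint at what to tune
    ```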

    Materials scientists are exploring the use of machine learning to optimize models of complex material properties in the discovery and design of new materials that could benefit energy storage, electronics, renewable energy resources and additive manufacturing, to name just a few areas.

    And still more projects address complex transportation and vehicle efficiency issues by enhancing engine design, minimizing road congestion, increasing energy efficiency and improving safety.

    Beyond the deep

    Beyond deep learning, there are many sub-ranges of AI that people have been working on for years, notes Stevens. ​“And while machine learning now dominates, something else might emerge as a strength.”

    Natural language processing, for example, is commercially recognizable as voice-activated technologies — think Siri — and on-the-fly language translators. Exceeding those capabilities is its ability to review, analyze and summarize information about a given topic from journal articles, reports and other publications, and extract and coalesce select information from massive and disparate datasets.

    Immersive visualization can place us into 3D worlds of our own making, interject objects or data into our current reality or improve upon human pattern recognition. Argonne researchers have found application for virtual and augmented reality in the 3D visualization of complicated data sets and the detection of flaws or instabilities in mechanical systems.

    And of course, there is robotics — a program started at Argonne in the late 1940s and rebooted in 1999 — that is just beginning to take advantage of Argonne’s expanding AI toolkit, whether to conduct research in a specific domain or improve upon its more utilitarian use in decommissioning nuclear power plants.

    Until recently, according to Stevens, AI has been a loose collection of methods using very different underlying mechanisms, and the people using them weren’t necessarily communicating their progress or potentials with one another.

    But with a federal initiative in hand and a Laboratory-wide vision, that is beginning to change.

    Among those trying to find new ways to collaborate and combine these different AI methods is Marius Stan, a computational scientist in Argonne’s Applied Materials division (AMD) and a senior fellow at both the University of Chicago’s Consortium for Advanced Science and Engineering and the Northwestern-Argonne Institute for Science and Engineering.

    Stan leads a research area called Intelligent Materials Design that focuses on combining different elements of AI to discover and design new materials and to optimize and control complex synthesis and manufacturing processes.

    Work on the latter has created a collaboration between Stan and colleagues in the Applied Materials and Energy Systems divisions, and the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility.

    Merging machine learning and computer vision with the Flame Spray Pyrolysis technology at Argonne’s Materials Engineering Research Facility, the team has developed intelligent AI software that can optimize the manufacturing process in real time.

    “Our idea was to use the AI to better understand and control in real time — first in a virtual, experimental setup, then in reality — a complex synthesis process,” says Stan.

    Automating the process makes it safer and much faster than human-led approaches. But even more intriguing is the possibility that the AI might discover materials with better properties than the researchers themselves would have found.

    What drove us to AI?

    Whether or not they concur on a definition, most researchers will agree that the impetus for the escalation of AI in scientific research was the influx of massive data sets and the computing power to sift, sort and analyze it.

    Not only was the push coming from big corporations brimming with user data, but the tools that drive science were getting more expansive — bigger and better telescopes and accelerators and of course supercomputers, on which they could run larger, multiscale simulations.

    “The size of the simulations we are running is so big, the problems that we are trying to solve are getting bigger, so that these AI methods can no longer be seen as a luxury, but as must-have technology,” notes Prasanna Balaprakash, a computer scientist in MCS and ALCF.

    Data and compute size also drove the convergence of more traditional techniques, such as simulation and data analysis, with machine and deep learning. Where analysis of data generated by simulation would eventually lead to changes in an underlying model, that data is now being fed back into machine learning models and used to guide more precise simulations.

    “More or less anybody who is doing large-scale computation is adopting an approach that puts machine learning in the middle of this complex computing process and AI will continue to integrate with simulation in new ways,” says Stevens.

    “And where the majority of users are in theory-modeling-simulation, they will be integrated with experimentalists on data-intense efforts. So the population of people who will be part of this initiative will be more diverse.”

    But while AI is leading to faster time-to-solution and more precise results, the number of data points, parameters and iterations required to get to those results can still prove monumental.

    Focused on the automated design and development of scalable algorithms, Balaprakash and his Argonne colleagues are developing new types of AI algorithms and methods to more efficiently solve large-scale problems that deal with different ranges of data. These additions are intended to make existing systems scale better on supercomputers, like those housed at the ALCF, a necessity in light of exascale computing.

    “We are developing an automated machine learning system for a wide range of scientific applications, from analyzing cancer drug data to climate modeling,” says Balaprakash. ​“One way to speed up a simulation is to replace the computationally expensive part with an AI-based predictive model that can make the simulation faster.”
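
    The surrogate idea Balaprakash mentions can be sketched in a few lines. Here `expensive_step` is a stand-in for a costly simulation component (not any real Argonne code), and a regression model trained on a few hundred true evaluations answers new queries cheaply.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def expensive_step(x):  # stand-in: imagine minutes of compute per call
        return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(500, 2))   # a few hundred true evaluations
    y = expensive_step(X)

    surrogate = RandomForestRegressor(n_estimators=200, random_state=1).fit(X, y)

    X_new = rng.uniform(-1, 1, size=(100000, 2))
    y_fast = surrogate.predict(X_new)       # cheap answers inside the simulation loop
    print("surrogate evaluated", len(y_fast), "points")
    ```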

    Industry support

    The AI techniques that are expected to drive discovery are only as good as the tech that drives them, making collaboration between industry and the national labs essential.

    “Industry is investing a tremendous amount in building up AI tools,” says Taylor. ​“Their efforts shouldn’t be duplicated, but they should be leveraged. Also, industry comes in with a different perspective, so by working together, the solutions become more robust.”

    Argonne has long had relationships with computing manufacturers to deliver a succession of ever-more powerful machines to handle the exponential growth in data size and simulation scale. Its most recent partnership is that with semiconductor chip manufacturer Intel and supercomputer manufacturer Cray to develop the exascale machine Aurora.

    But the Laboratory is also collaborating with a host of other industrial partners in the development or provision of everything from chip design to deep learning-enabled video cameras.

    One of these, Cerebras, is working with Argonne to test a first-of-its-kind AI accelerator that provides a 100–500 times improvement over existing AI accelerators. As its first U.S. customer, Argonne will deploy the Cerebras CS-1 to enhance scientific AI models for cancer, cosmology, brain imaging and materials science, among others.

    The National Science Foundation-funded Array of Things, a partnership between Argonne, the University of Chicago and the City of Chicago, actively seeks commercial vendors to supply technologies for its edge computing network of programmable, multi-sensor devices.

    But Argonne and the other national labs are not the only ones to benefit from these collaborations. Companies understand the value in working with such organizations, recognizing that the AI tools developed by the labs, combined with the kinds of large-scale problems they seek to solve, offer industry unique benefits in terms of business transformation and economic growth, explains Balaprakash.

    “Companies are interested in working with us because of the type of scientific applications that we have for machine learning,” he adds. “What we have is so diverse, it makes them think a lot harder about how to architect a chip or design software for these types of workloads and science applications. It’s a win-win for both of us.”

    AI’s future, our future

    “There is one area where I don’t see AI surpassing humans any time soon, and that is hypotheses formulation,” says Stan, ​“because that requires creativity. Humans propose interesting projects and for that you need to be creative, make correlations, propose something out of the ordinary. It’s still human territory but machines may soon take the lead.

    “It may happen,” he says, and adds that he’s working on it.

    In the meantime, Argonne researchers continue to push the boundaries of existing AI methods and forge new components for the AI toolbox. Deep learning techniques like neuromorphic algorithms, which exhibit the adaptive nature of insects in an equally small computational space, can be used at the “edge”, where there are few computing resources, as in cell phones or urban sensors.

    An optimizing neural network called a neural architecture search, where one neural network system improves another, is helping to automate deep-learning-based predictive model development in several scientific and engineering domains, such as cancer drug discovery and weather forecasting using supercomputers.
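
    In its simplest form, an architecture search is just an outer loop that proposes network shapes and an inner training run that scores them. The sketch below, assuming scikit-learn and its bundled digits dataset, does this by random search; real NAS systems use a second network or reinforcement learning to steer the proposals.

    ```python
    import random
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

    random.seed(0)
    best_score, best_layers = 0.0, None
    for _ in range(10):  # ten candidate architectures
        layers = tuple(random.choice([16, 32, 64, 128])
                       for _ in range(random.randint(1, 3)))
        net = MLPClassifier(hidden_layer_sizes=layers, max_iter=400,
                            random_state=0).fit(X_tr, y_tr)
        score = net.score(X_val, y_val)
        if score > best_score:
            best_score, best_layers = score, layers
    print("best architecture:", best_layers, "validation accuracy:", best_score)
    ```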

    Just as big data and better computational tools drove the convergence of simulation, data analysis and visualization, the introduction of the exascale computer Aurora into the Argonne complex of leadership-class tools and experts will only serve to accelerate the evolution of AI and witness its full assimilation into traditional techniques.

    The tools may change, the definitions may change, but AI is here to stay as an integral part of the scientific method and our lives.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 8:15 am on October 2, 2019
    Tags: "How AI could change science", AI-Artificial Intelligence, , , , , , , Kavli Institute for Cosmological Physics,   

    From University of Chicago: “How AI could change science” 

    U Chicago bloc

    From University of Chicago

    Oct 1, 2019
    Louise Lerner
    Rob Mitchum

    At the University of Chicago, researchers are using artificial intelligence’s ability to analyze massive amounts of data in applications from scanning for supernovae to finding new drugs. shutterstock.com

    Researchers at the University of Chicago seek to shape an emerging field.

    AI technology is increasingly used to open up new horizons for scientists and researchers. At the University of Chicago, researchers are using it for everything from scanning the skies for supernovae to finding new drugs from millions of potential combinations and developing a deeper understanding of the complex phenomena underlying the Earth’s climate.

    Today’s AI commonly works by starting from massive data sets, from which it figures out its own strategies to solve a problem or make a prediction—rather than rely on humans explicitly programming it how to reach a conclusion. The results are an array of innovative applications.

    “Academia has a vital role to play in the development of AI and its applications. While the tech industry is often focused on short-term returns, realizing the full potential of AI to improve our world requires long-term vision,” said Rebecca Willett, professor of statistics and computer science at the University of Chicago and a leading expert on AI foundations and applications in science. “Basic research at universities and national laboratories can establish the fundamentals of artificial intelligence and machine learning approaches, explore how to apply these technologies to solve societal challenges, and use AI to boost scientific discovery across fields.”

    Prof. Rebecca Willett gives an introduction to her research on AI and data science foundations. Photo by Clay Kerr

    Willett is one of the featured speakers at the InnovationXLab Artificial Intelligence Summit hosted by UChicago-affiliated Argonne National Laboratory, which will soon be home to the most powerful computer in the world—and it’s being designed with an eye toward AI-style computing. The Oct. 2-3 summit showcases the U.S. Department of Energy lab, bringing together industry, universities, and investors with lab innovators and experts.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer

    The workshop comes as researchers around UChicago and the labs are leading new explorations into AI.

    For example, say that Andrew Ferguson, an associate professor at the Pritzker School of Molecular Engineering, wants to look for a new vaccine or flexible electronic materials. New materials essentially are just different combinations of chemicals and molecules, but there are literally billions of such combinations. How do scientists pick which ones to make and test in the labs? AI could quickly narrow down the list.

    “There are many areas where the Edisonian approach—that is, having an army of assistants make and test hundreds of different options for the lightbulb—just isn’t practical,” Ferguson said.
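
    A toy version of that screening loop, under heavy assumptions (invented descriptors, a stand-in property), looks like this: train a property predictor on a small set of measured compounds, then rank a huge untested pool and shortlist only the top few for the lab.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    measured_X = rng.random((300, 10))        # descriptors of 300 tested materials
    measured_y = measured_X @ rng.random(10)  # stand-in for a measured property

    model = GradientBoostingRegressor().fit(measured_X, measured_y)

    candidates = rng.random((100_000, 10))    # the untested combinatorial pool
    predicted = model.predict(candidates)
    shortlist = np.argsort(predicted)[-10:]   # only these go to the lab
    print("candidates worth synthesizing:", shortlist)
    ```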

    Then there’s the question of what happens if AI takes a turn at being the scientist. Some are wondering whether AI models could propose new experiments that might never have occurred to their human counterparts.

    “For example, when someone programmed the rules for the game of Go into an AI, it invented strategies never seen in thousands of years of humans playing the game,” said Brian Nord, an associate scientist in the Kavli Institute for Cosmological Physics and UChicago-affiliated Fermi National Accelerator Laboratory.

    “Maybe sometimes it will have more interesting ideas than we have.”

    Ferguson agreed: “If we write down the laws of physics and input those, what can AI tell us about the universe?”

    Scenes from the 2016 games of Go, an ancient Chinese game far more complex than chess, between Google’s AI “AlphaGo” and world champion Go player Lee Sedol. The match ended with the AI up 4-1. Image courtesy of Bob van den Hoek.

    But ensuring those applications are accurate, equitable, and effective requires more basic computer science research into the fundamentals of AI. UChicago scientists are exploring ways to reduce bias in model predictions, to use advanced tools even when data is scarce, and to develop “explainable AI” systems that will produce more actionable insights and raise trust among users of those models.

    “Most AIs right now just spit out an answer without any context. But a doctor, for example, is not going to accept a cancer diagnosis unless they can see why and how the AI got there,” Ferguson said.

    With the right calibration, however, researchers see a world of uses for AI. To name just a few: Willett, in collaboration with scientists from Argonne and the Department of Geophysical Sciences, is using machine learning to study clouds and their effect on weather and climate. Chicago Booth economist Sendhil Mullainathan is studying ways in which machine learning technology could change the way we approach social problems, such as policies to alleviate poverty, while neurobiologist David Freedman, a professor in the University’s Division of Biological Sciences, is using machine learning to understand how brains interpret sights and sounds and make decisions.

    Below are looks into three projects at the University showcasing the breadth of AI applications happening now.

    The depths of the universe to the structures of atoms

    We’re getting better and better at building telescopes to scan the sky and accelerators to smash particles at ever-higher energies. What comes along with that, however, is more and more data. For example, the Large Hadron Collider in Europe generates one petabyte of data per second; for perspective, in less than five minutes, that would fill up the world’s most powerful supercomputer.
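
    The arithmetic behind that claim is easy to check. Taking 250 petabytes as an assumed storage figure for a leadership-class system (an illustrative number, not one from the article):

    ```python
    rate_pb_per_s = 1.0   # LHC: roughly one petabyte per second (per the article)
    storage_pb = 250.0    # assumed capacity of a top supercomputer's file system
    seconds = storage_pb / rate_pb_per_s
    print(f"filled in {seconds:.0f} s, i.e. {seconds / 60:.1f} minutes")  # ~4.2 min
    ```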

    Images: the CERN Large Hadron Collider and its four major experiment collaborations: ATLAS, ALICE, CMS and LHCb.

    That’s way too much data to store. “You need to quickly pick out the interesting events to keep, and dump the rest,” Nord said.
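
    Conceptually, such a trigger is a classifier applied to a data stream, with only high-scoring events retained. A schematic sketch with synthetic detector features (nothing here resembles real LHC trigger code):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_lab = rng.normal(size=(5000, 8))                     # labeled training events
    y_lab = (X_lab[:, 0] + X_lab[:, 1] > 1.5).astype(int)  # 1 = "interesting"
    clf = LogisticRegression().fit(X_lab, y_lab)

    stream = rng.normal(size=(1_000_000, 8))               # incoming event stream
    keep = clf.predict_proba(stream)[:, 1] > 0.99          # aggressive threshold
    print(f"kept {keep.sum()} of {len(stream)} events, dumped the rest")
    ```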

    But see “Breaking Data out of the Silos” from UC Santa Barbara.

    Similarly, each night hundreds of telescopes scan the sky. Existing computer programs are pretty good at picking interesting things out of them, but there’s room to improve. (After LIGO detected the gravitational waves from two neutron stars crashing together in 2017, telescopes around the world had rooms full of people frantically looking through sky photos to find the point of light the collision created.)

    MIT/Caltech Advanced LIGO


    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Years ago, Nord was sitting and scanning telescope images to look for gravitational lensing, an effect in which large objects distort light as it passes.

    Gravitational Lensing NASA/ESA

    “We were spending all this time doing this by hand, and I thought, surely there has to be a better way,” he said. In fact, the capabilities of AI were just turning a corner; Nord began writing programs to search for lensing with neural networks. Others had the same idea; the technique is now emerging as a standard approach to find gravitational lensing.
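
    The kind of network used for this task is a small convolutional classifier over image cutouts. Below is a hedged PyTorch sketch assuming 64x64 single-band cutouts and a binary lens/not-lens label; it is not Nord’s actual code.

    ```python
    import torch
    import torch.nn as nn

    class LensFinder(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
                nn.Linear(64, 1),                  # logit: lens vs. not
            )

        def forward(self, x):                      # x: (batch, 1, 64, 64) cutouts
            return self.head(self.features(x))

    model = LensFinder()
    logits = model(torch.randn(8, 1, 64, 64))      # 8 fake survey cutouts
    print(logits.shape)                            # torch.Size([8, 1])
    ```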

    This year Nord is partnering with computer scientist Yuxin Chen to explore what they call a “self-driving telescope”: a framework that could optimize when and where to point telescopes to gather the most interesting data.

    “I view this collaboration between AI and science, in general, to be in a very early phase of development,” Chen said. “The outcome of the research project will not only have transformative effects in advancing the basic science, but it will also allow us to use the science involved in the physical processes to inform AI development.”
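
    One minimal way to frame a “self-driving telescope” is as a multi-armed bandit: candidate sky fields are the arms, each observation yields a noisy measure of scientific payoff, and the scheduler balances exploring new fields against exploiting promising ones. An entirely schematic epsilon-greedy sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_payoff = rng.random(20)   # unknown "interestingness" of 20 sky fields
    counts = np.zeros(20)
    estimates = np.zeros(20)

    for night in range(200):
        if rng.random() < 0.1:                # explore occasionally
            field = int(rng.integers(20))
        else:                                 # otherwise exploit the best estimate
            field = int(np.argmax(estimates))
        reward = true_payoff[field] + rng.normal(0, 0.1)
        counts[field] += 1
        estimates[field] += (reward - estimates[field]) / counts[field]

    print("telescope settles on field", int(np.argmax(counts)))
    ```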

    Disentangling style and content for art and science

    In recent years, popular apps have sprung up that can transform photographs into different artistic forms—from generic modes such as charcoal sketches or watercolors to the specific styles of Dali, Monet and other masters. These “style transfer” apps use tools from the cutting edge of computer vision—primarily the neural networks that prove adept at image classification for applications such as image search and facial recognition.

    But beyond the novelty of turning your selfie into a Picasso, these tools kick-start a deeper conversation around the nature of human perception. From a young age, humans are capable of separating the content of an image from its style; that is, recognizing that photos of an actual bear, a stuffed teddy bear, or a bear made out of LEGOs all depict the same animal. What’s simple for humans can stump today’s computer vision systems, but Assoc. Profs. Jason Salavon and Greg Shakhnarovich think the “magic trick” of style transfer could help them catch up.

    This triptych of images demonstrates how neural networks can transform images with different artistic forms.

    “The fact that we can look at pictures that artists create and still understand what’s in them, even though they sometimes look very different from reality, seems to be closely related to the holy grail of machine perception: what makes the content of the image understandable to people,” said Shakhnarovich, an associate professor at the Toyota Technological Institute at Chicago.

    Salavon and Shakhnarovich are collaborating on new style transfer approaches that separate, capture and manipulate content and style, unlocking new potential for art and science. These new models could transform a headshot into a much more distorted style, such as the distinctive caricatures of The Simpsons, or teach self-driving cars to better understand road signs in different weather conditions.
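
    One widely used recipe for that separation, from Gatys et al.’s classic neural style transfer (which these apps build on; not necessarily the researchers’ own method), represents “style” as Gram matrices of CNN feature maps, which keep channel correlations but discard image layout, while “content” stays in the feature maps themselves. A sketch on random features; in practice the features come from a pretrained network such as VGG.

    ```python
    import numpy as np

    def gram_matrix(features):
        # features: (channels, height, width) activations from one CNN layer
        c, h, w = features.shape
        flat = features.reshape(c, h * w)
        return flat @ flat.T / (h * w)   # channel correlations = "style"

    layer_features = np.random.rand(64, 32, 32)  # stand-in for real activations
    style = gram_matrix(layer_features)
    print(style.shape)                           # (64, 64); spatial layout is gone
    ```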

    “We’re in a global arms race for making cool things happen with these technologies. From what would be called practical space to cultural space, there’s a lot of action,” said Salavon, an associate professor in the Department of Visual Arts at the University of Chicago and an artist who makes “semi-autonomous art”. “But ultimately, the idea is to get to some computational understanding of the ‘essence’ of images. That’s the rich philosophical question.”

    Researchers hope to use AI to decode nature’s rules for protein design, in order to create synthetic proteins with a range of applications. Image courtesy of Emw / CC BY-SA 3.0

    Learning nature’s rules for protein design

    Nature is an unparalleled engineer. Millions of years of evolution have created molecular machines capable of countless functions and survival in challenging environments, like deep sea vents. Scientists have long sought to harness these design skills and decode nature’s blueprints to build custom proteins of their own for applications in medicine, energy production, environmental clean-up and more. But only recently have the computational and biochemical technologies needed to create that pipeline become possible.

    Ferguson and Prof. Rama Ranganathan are bringing these pieces together in an ambitious project funded by a Center for Data and Computing seed grant. Combining recent advancements in machine learning and synthetic biology, they will build an iterative pipeline to learn nature’s rules for protein design, then remix them to create synthetic proteins with elevated or even new functions and properties.

    “It’s not just rebuilding what nature built, we can push it beyond what nature has ever shown us before,” said Ranganathan. “This proposal is basically the starting point for building a whole framework of data-driven molecular engineering.”

    “The way we think of this project is we’re trying to mimic millions of years of evolution in the lab, using computation and experiments instead of natural selection,” Ferguson said.
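
    That compressed-evolution loop can be sketched as a simple genetic algorithm in which a learned model stands in for natural selection. Everything below is schematic: `learned_fitness` is a placeholder for a model trained on real protein data, not an actual property predictor.

    ```python
    import random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    random.seed(0)

    def learned_fitness(seq):        # placeholder for a trained scoring model
        return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

    def mutate(seq):                 # point mutation at a random position
        i = random.randrange(len(seq))
        return seq[:i] + random.choice(AMINO_ACIDS) + seq[i + 1:]

    population = ["".join(random.choice(AMINO_ACIDS) for _ in range(30))
                  for _ in range(50)]
    for generation in range(100):    # "millions of years", compressed
        population.sort(key=learned_fitness, reverse=True)
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(40)]

    best = max(population, key=learned_fitness)
    print("best designed sequence:", best)
    ```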

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Chicago Campus

    An intellectual destination

    One of the world’s premier academic and research institutions, the University of Chicago has driven new ways of thinking since our 1890 founding. Today, UChicago is an intellectual destination that draws inspired scholars to our Hyde Park and international campuses, keeping UChicago at the nexus of ideas that challenge and change the world.

    The University of Chicago is an urban research university that has driven new ways of thinking since 1890. Our commitment to free and open inquiry draws inspired scholars to our global campuses, where ideas are born that challenge and change the world.

    We empower individuals to challenge conventional thinking in pursuit of original ideas. Students in the College develop critical, analytic, and writing skills in our rigorous, interdisciplinary core curriculum. Through graduate programs, students test their ideas with UChicago scholars, and become the next generation of leaders in academia, industry, nonprofits, and government.

    UChicago research has led to such breakthroughs as discovering the link between cancer and genetics, establishing revolutionary theories of economics, and developing tools to produce reliably excellent urban schooling. We generate new insights for the benefit of present and future generations with our national and affiliated laboratories: Argonne National Laboratory, Fermi National Accelerator Laboratory, and the Marine Biological Laboratory in Woods Hole, Massachusetts.

    The University of Chicago is enriched by the city we call home. In partnership with our neighbors, we invest in Chicago’s mid-South Side across such areas as health, education, economic growth, and the arts. Together with our medical center, we are the largest private employer on the South Side.

    In all we do, we are driven to dig deeper, push further, and ask bigger questions—and to leverage our knowledge to enrich all human life. Our diverse and creative students and alumni drive innovation, lead international conversations, and make masterpieces. Alumni and faculty, lecturers and postdocs go on to become Nobel laureates, CEOs, university presidents, attorneys general, literary giants, and astronauts.

     
  • richardmitnick 9:33 am on August 15, 2019
    Tags: "Deepfakes: danger in the digital age", AI-Artificial Intelligence, , Infocalypse- A term used to label the age of cybercriminals digital misinformation clickbait and data misuse.,   

    From CSIROscope: “Deepfakes: danger in the digital age” 

    CSIRO bloc

    From CSIROscope

    15 August 2019
    Alison Donnellan

    As we dive deeper into the digital age, fake news, online deceit and the widespread use of social media are having a profound impact on every element of society, from swaying elections to manipulating scientifically proven facts.

    Deepfaking is the act of using artificial intelligence and machine learning technology to produce or alter video, image or audio content. It uses sequences from the original to create a version of events that never occurred.

    So, what’s the deal with deepfakes?

    Once a topic only discussed in computer research labs, deepfakes were catapulted into mainstream media in 2017. This was after various online communities began swapping faces of high-profile personalities with actors in pornographic films.

    “You need a piece of machine learning to digest all of these video sequences. The machine eventually learns who the person is, how they are represented, how they move and evolve in the video,” says Dr Richard Nock, machine learning expert with our Data61 team.

    “So if you ask the machine to make a new sequence of this person, the machine is going to be able to automatically generate a new one.”

    “The piece of technology is almost always the same, which is where the name ‘deepfake’ comes from,” says Dr Nock. “It’s usually deep learning, a subset of machine learning, used to ask the machine to forge a new reality.”
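
    The architecture behind many face-swap deepfakes is a pair of autoencoders with a shared encoder. A hedged PyTorch sketch of just the structure (dimensions invented, training loop omitted):

    ```python
    import torch
    import torch.nn as nn

    LATENT = 256
    encoder = nn.Sequential(              # shared: learns faces in general
        nn.Flatten(), nn.Linear(64 * 64 * 3, LATENT), nn.ReLU())

    def make_decoder():                   # one decoder per person
        return nn.Sequential(nn.Linear(LATENT, 64 * 64 * 3), nn.Sigmoid())

    decoder_a, decoder_b = make_decoder(), make_decoder()

    # Training (omitted): reconstruct person A via encoder + decoder_a and
    # person B via encoder + decoder_b. The swap: A's face, B's decoder.
    face_of_a = torch.rand(1, 3, 64, 64)
    fake = decoder_b(encoder(face_of_a))  # B's appearance, A's pose/expression
    print(fake.shape)                     # torch.Size([1, 12288])
    ```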

    Let’s go… deeper

    As a result, deepfakes have been described as one of the contributing factors of the “Infocalypse”, a term used to label the age of cybercriminals, digital misinformation, clickbait and data misuse. As the technology behind the AI-generated videos improves, it is becoming increasingly difficult for audiences to distinguish fact from fiction.

    Creating a convincing deepfake is an unlikely feat for the general computer user. But an individual with advanced knowledge of machine learning, the specific software needed to digitally alter a piece of content, and access to the victim’s publicly available social media profile for photographic, video and audio content could do so.

    Now that face-morphing apps built on automated AI and machine learning are becoming more advanced, deepfake creation could become attainable to the general population in the future.

    One example of this is Snapchat’s introduction of the gender swap filter. A free download is all it takes for a Snapchat user to appear as someone else: the application’s gender swap filter completely alters the user’s appearance.

    There have been numerous instances of catfishing (fabricating an online identity to trick others into exploitative emotional or romantic relationships) via online dating apps using the technology. Some people use the experience as a social experiment, others as a ploy to extract sensitive information.

    To deepfake or not to deepfake

    Politicians, celebrities and those in the public spotlight are the most obvious victims of deepfakes. But the habit of posting videos and selfies to public internet platforms puts everyone at risk.

    The creation of explicit images is one example of how deepfakes are being used to harass individuals online. One AI-powered app creates images of what women might look like, according to the algorithm, unclothed.

    According to Dr Nock, an alternative effect of election deepfakery could be an online exodus: a segment of the population placing its trust only in the opinions of a closed circle of friends, whether physical or an online forum such as Reddit.

    “Once you’ve passed that breaking point and no longer trust an information source, most people would start retracting themselves. Refraining themselves from accessing public media content because it cannot be trusted anymore. And eventually relying on their friends, which can be limiting if people are more exposed to opinions rather than the facts.”


    The Obama deepfake was a viral hit, drawing over six million views of a video seemingly showing the US president speaking. The video brought the existence of deepfake technology to light, alongside a warning about the trust users place in online content.

    Mitigating the threat of digital deceit

    There are three ways to counter deepfakes, according to Dr Nock:

    Invent a mechanism of authenticity, whether that be a stamp such as blockchain or branding, to confirm that the information is from a trusted source and that the video depicts something that actually happened.
    Train machine learning to detect deepfakes created by other machines.
    Ensure these mechanisms are widely adopted by different information sources; without broad adoption, they cannot succeed.

    “Blockchain could work – if carefully crafted – but a watermark component would probably not,” explains Dr Nock. “Changing the format of an original document would eventually alter the watermark, while the document would obviously stay original. This would not happen with blockchain.”
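
    A bare-bones sketch of such an authenticity mechanism: register a cryptographic fingerprint of the original footage in an append-only ledger and verify copies against it. Note the caveat Dr Nock raises: a raw byte hash, like a watermark, breaks on re-encoding, so a production system would need perceptual fingerprints on top of the ledger.

    ```python
    import hashlib

    ledger = {}                                   # stand-in for a blockchain

    def register(name, data: bytes):
        ledger[name] = hashlib.sha256(data).hexdigest()

    def verify(name, data: bytes) -> bool:
        return ledger.get(name) == hashlib.sha256(data).hexdigest()

    original = b"...raw video bytes..."
    register("speech.mp4", original)
    print(verify("speech.mp4", original))         # True: exact copy
    print(verify("speech.mp4", original + b"x"))  # False: content was altered
    ```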

    Machine learning is already detecting deepfakes. Researchers from UC Berkeley and the University of Southern California are using this method to distinguish unique head and face movements. These subtle personal quirks are currently not modeled by deepfake algorithms, and the technique returns a 92 per cent level of accuracy.
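
    Schematically, that detection approach summarizes each video as a vector of movement statistics and trains a classifier to separate real footage from fakes. The ten features below are synthetic stand-ins for the head-pose and facial-action-unit features the researchers actually use.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    real = rng.normal(0.0, 1.0, size=(300, 10))  # per-video movement features
    fake = rng.normal(0.6, 1.0, size=(300, 10))  # fakes miss the personal quirks
    X = np.vstack([real, fake])
    y = np.array([0] * 300 + [1] * 300)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC().fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```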

    While this research is comforting, bad actors will inevitably continue to reinvent and adapt AI-generated fakes.

    Machine learning is a powerful technology. And one that’s becoming more sophisticated over time. Deepfakes aside, machine learning is also bringing enormous positive benefits to areas like privacy, healthcare, transport and even self-driving cars.

    Our Data61 team acts as a network and partner with government, industry and universities, to advance the technologies of AI in many areas of society and industry, such as adversarial machine learning, cybersecurity and data protection, and rich data-driven insights.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    SKA/ASKAP radio telescope at the Murchison Radio-astronomy Observatory (MRO) in the Mid West region of Western Australia

    So what can we expect these new radio projects to discover? We have no idea, but history tells us that they are almost certain to deliver some major surprises.

    Making these new discoveries may not be so simple. Gone are the days when astronomers could just notice something odd as they browse their tables and graphs.

    Nowadays, astronomers are more likely to be distilling their answers from carefully-posed queries to databases containing petabytes of data. Human brains are just not up to the job of making unexpected discoveries in these circumstances, and instead we will need to develop “learning machines” to help us discover the unexpected.

    With the right tools and careful insight, who knows what we might find.

    CSIRO campus

    CSIRO, the Commonwealth Scientific and Industrial Research Organisation, is Australia’s national science agency and one of the largest and most diverse research agencies in the world.

     
  • richardmitnick 10:13 am on June 20, 2019
    Tags: AI-Artificial Intelligence, Ted Chiang

    From Science Node: “Why does AI fascinate us?” 

    Science Node bloc

    10 June, 2019
    Alisa Alering


    Ted Chiang talks about how our love for fictional AI interacts with the real-world use of artificial intelligence.

    Why do you think we are fascinated by AI?

    People have been interested in artificial beings for a very long time. Ever since we’ve had lifelike statues, we’ve imagined how they might behave if they were actually alive. More recently, our ideas of how robots might act are shaped by our perception of how good computers are at certain tasks. The earliest calculating machines did things like computing logarithm tables more accurately than people could. The fact that machines became capable of doing a task which we previously associated with very smart people made us think that the machines were, in some sense, like very smart people.

    How does our—let’s call it shared human mythology—of AI interact with the real forms of artificial intelligence we encounter in the world today?

    The fact that we use the term “artificial intelligence” creates associations in the public imagination which might not exist if the software industry used some other term. AI has, in science fiction, referred to a certain trope of androids and robots, so when the software industry uses the same term, it encourages us to personify software even more than we normally would.

    Is there a big difference between our fictional imaginary consumption of AI and what’s actually going on in current technology?

    Intelligent machines. ‘Maria’ was the first robot to be depicted on film, in Fritz Lang’s Metropolis (1927). Courtesy Jeremy Tarling. (CC BY-SA 2.0)

    I think there’s a huge difference. In our fictional imagination “artificial intelligence” refers to something that is, in many ways, like a person. It’s a very rigid person, but we still think of it as a person. But nothing that we have in the software industry right now is remotely like a person—not even close. It’s very easy for us to attribute human-like characteristics to software, but that’s more of a reflection of our cognitive biases. It doesn’t say anything about the properties that the software itself possesses.

    What’s happening now or in the near future with intelligent systems that really captures your interest?

    What I find most interesting is not typically described as AI, but with the phrase ‘artificial life.’ Some researchers are creating digital organisms with bodies and sense organs that allow them to move around and navigate their environment. Usually there’s some mechanism where they can give rise to slightly different versions of themselves, and thus evolve over time. This avenue of research is really interesting because it could eventually result in software entities which have a lot of the properties that we associate with living organisms. It’s still going to be a long way from anything that we consider intelligent, but it’s a very interesting avenue of research.

    Over time, these entities might come to have the intelligence of an insect. Even that would be pretty impressive, because even an insect is good at a lot of things which Watson (IBM’s AI supercomputer) can’t do at all. An insect can navigate its environment and look for food and avoid danger. A lot of the things that we call common sense are outgrowths of the fact that we have bodies and live in the physical world. If a digital organism could have some of that, that would be a way of laying the groundwork for an artificial intelligence to eventually have common sense.
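
    A tiny “artificial life” experiment of the kind Chiang describes can fit in a few lines: digital organisms carry mutable genomes, and selection favors those that navigate toward food. Everything here is schematic.

    ```python
    import random

    random.seed(0)
    MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    FOOD = (5, 5)

    def fitness(genome):                  # genome = fixed sequence of moves
        x = y = 0
        for gene in genome:
            dx, dy = MOVES[gene]
            x, y = x + dx, y + dy
        return -(abs(x - FOOD[0]) + abs(y - FOOD[1]))  # closer to food is better

    population = [[random.randrange(4) for _ in range(12)] for _ in range(30)]
    for generation in range(60):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        population = [p[:] for p in parents]
        while len(population) < 30:       # offspring with point mutations
            child = random.choice(parents)[:]
            child[random.randrange(12)] = random.randrange(4)
            population.append(child)

    print("best distance to food:", -fitness(max(population, key=fitness)))
    ```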

    How do we teach an artificial intelligence the things we consider common sense?

    Alan Turing once wrote that he didn’t know what would be the best way to create a thinking machine; it might involve teaching it abstract activities like chess, or it might involve giving it eyes and a body and teaching it the way you’d teach a child. He thought both would be good avenues to explore.

    Historically, we’ve only tried that first route, and that has led to this idea that common sense is hard to teach or that artificial intelligences lack common sense. I think if we had gone with the second route, we’d have a different view of things.

    If you want an AI to be really good at playing chess, we have got that problem licked. But if you want something that can navigate your living room without constantly bumping into a coffee table, that’s a completely different challenge. If you want to solve that one, you’re going to need a different approach than what we’ve used for solving the grandmaster-level chess-playing problem.

    My cat’s really good in the living room but not so good at chess.

    Exactly. Because your cat grew up with eyes and a physical body.

    Since you’re someone who (presumably) spends a lot of time thinking about the social and philosophical aspects of AI, what do you think the creators of artificial beings should be concerned about?

    I think it’s important for all of us to think about the greater context in which the work we do takes place. When people say, “I was just doing my job,” we tend not to consider that a good excuse when doing that job leads to bad moral outcomes.

    When you as a technologist are being asked how to solve a problem, it’s worth thinking about, “Why am I being asked to solve this problem? In whose interest is it to solve this problem?” That’s something we all need to be thinking about no matter what sort of work we do.

    Otherwise, if everyone simply keeps their head down and just focuses narrowly on the task at hand, then nothing changes.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 7:00 am on April 10, 2019
    Tags: AI-Artificial Intelligence, Alán Aspuru-Guzik, Bo Wang, CIFAR-Canadian Institute for Advanced Research, Daniel Roy, David Fleet

    From University of Toronto: “U of T researchers named new CIFAR chairs in artificial intelligence” 

    U Toronto Bloc

    From University of Toronto


    https://www.cifar.ca/

    U of T’s new CIFAR chairs in artificial intelligence (clockwise from top left): David Fleet, Bo Wang, Alán Aspuru-Guzik and Daniel Roy

    April 09, 2019
    Geoffrey Vendeville

    The new U of T chairs – Alán Aspuru-Guzik, David Fleet, Daniel Roy and Bo Wang – join another eight at U of T who were named among the inaugural group last fall and are all associated with the Vector Institute for Artificial Intelligence.

    The four new chairs are expected to play an important part in developing new technologies, from new, state-of-the-art materials to predictive software that promises to improve patient care.

    “Today’s announcement will help U of T increase the number of outstanding artificial intelligence researchers and skilled graduates it produces,” said Vivek Goel, U of T’s vice-president of research and innovation.

    “It will also help heighten U of T’s, and Canada’s, international profile in AI research and training.”

    CIFAR is at the heart of the $125-million pan-Canadian AI strategy, which is intended to attract and retain the best minds in the rapidly growing and competitive field. CIFAR supports AI researchers across the country based at three institutes: Toronto’s Vector Institute, Edmonton’s Alberta Machine Intelligence Institute (Amii) and the Quebec Artificial Intelligence Institute (Mila).

    Aspuru-Guzik, who came to U of T from Harvard University, works at the intersection of theoretical chemistry and computational physics. In his Matter Lab, he and his colleagues use AI to simulate and classify molecules for application in new materials including cleantech and optoelectronics.

    Traditionally, developing new materials required endless experimentation and calculation. “It’s like searching for a needle in a haystack, where the needle is the function of the molecule that you need,” Aspuru-Guzik said. He added that, in the past, experimenters used a scattershot approach but, with AI, they can become sharpshooters.

    “Typically to make a material, it takes $10 million and 10 years of effort,” Aspuru-Guzik said. “So what I’m aiming to do in my research is to bring it down to at least to $1 million and one year per material, and even $100,000 and one month.”

    Canada’s support for AI research is crucial to maintain its early advantage in a field where other countries are looking to be leaders, he added.

    “The AI race is on. This is a race where everybody is putting [forward] their best wheels and race cars. They are pushing their pedals to the limit.”

    As for Wang, he worked as a senior AI consultant for the U.S. biotech company Genentech after obtaining a master’s degree at U of T. He later returned to Toronto to join U of T’s Faculty of Medicine as an assistant professor.

    Wang is also the lead artificial intelligence scientist at the Techna Institute and Peter Munk Cardiac Centre at the University Health Network. He applies his research in machine learning, computational biology, and computer vision to refining patient care.

    Fleet, a professor in the department of computer and mathematical sciences at U of T Scarborough, said Canada is reaping the rewards of investments in AI going back to the 1980s. That includes supporting pioneers like Geoffrey Hinton, who recently won the A.M. Turing Award, as well as Hector Levesque and Ray Reiter, who made influential contributions to AI, knowledge representation, databases and theorem-proving.

    “There was the core group that they built the program around in Toronto that was world renowned,” said Fleet, who completed his PhD in computer science at U of T in 1990 and whose research spans computer vision, machine learning, image processing and visual neuroscience.

    U of T Scarborough is also home to Roy, an assistant professor who combines expertise in computer science, statistics and probability theory. His interest in computers started as a child after his parents brought home a Tandy personal computer. He learned how computer games worked and built his own.

    Fast forward to today and Roy is working on the theoretical frontier of machine learning by helping researchers build more reliable and efficient systems.

    Research in AI is advancing so quickly that it’s impossible to predict where it will lead, Roy said. “It’s difficult for me to anticipate what surprising things will show up at the main conferences next year – let alone five years from now.

    “It repeatedly blows my mind.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded in 1827, the University of Toronto has evolved into Canada’s leading institution of learning, discovery and knowledge creation. We are proud to be one of the world’s top research-intensive universities, driven to invent and innovate.

    Our students have the opportunity to learn from and work with preeminent thought leaders through our multidisciplinary network of teaching and research faculty, alumni and partners.

    The ideas, innovations and actions of more than 560,000 graduates continue to have a positive impact on the world.

     
  • richardmitnick 1:20 pm on March 4, 2019
    Tags: AI-Artificial Intelligence, Completely doing away with wind variability is next to impossible, Google claims that Machine Learning and AI would indeed make wind power more predictable and hence more useful, Google has announced in its official blog post that it has enhanced the feasibility of wind energy by using AI software created by its UK subsidiary DeepMind, Google is working to make the algorithm more refined so that any discrepancy that might occur could be nullified, Unpredictability in delivering power at set time frame continues to remain a daunting challenge before the sector

    From Geospatial World: “Google and DeepMind predict wind energy output using AI” 

    From Geospatial World

    03/04/2019
    Aditya Chaturvedi

    Image Courtesy: Unsplash

    Google has announced in its official blog post that it has enhanced the feasibility of wind energy by using AI software created by its UK subsidiary DeepMind.

    Renewable energy is the way towards lowering carbon emissions and sustainability, so it is imperative that we focus on yielding optimum energy outputs from renewable sources.

    Renewable technologies will be at the forefront of climate change mitigation and addressing global warming; however, their complete potential is yet to be harnessed owing to a slew of obstructions. Wind energy has emerged as a crucial source of renewable energy in the past decade, as declining turbine costs have gradually brought wind power into the mainstream. However, unpredictability in delivering power within a set time frame remains a daunting challenge for the sector.

    The Google and DeepMind project aims to overcome this limitation, which has hobbled wind energy adoption.

    With the help of DeepMind’s Machine Learning algorithms, Google has been able to predict the wind energy output of the farms that it uses for its Green Energy initiatives.

    “DeepMind and Google started applying machine learning algorithms to 700 megawatts of wind power capacity in the central United States. These wind farms—part of Google’s global fleet of renewable energy projects—collectively generate as much electricity as is needed by a medium-sized city”, the blog says.

    Google is optimistic that it can accurately predict and schedule energy output, which would give it a clear advantage over non-time-based deliveries.

    Image Courtesy: Google/DeepMind

    Using a neural network trained on weather forecasts and historical turbine data, the DeepMind system has been configured to predict wind power output 36 hours in advance.

    Building on these predictions, the model recommends how best to fulfill, and even exceed, delivery commitments to the grid 24 hours in advance. This matters because energy sources that can deliver a set amount of power over a defined period of time are usually more valuable to the grid.
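
    The blog post doesn't describe DeepMind's model in detail, but the basic setup, a supervised model trained to map forecast weather and recent turbine history onto expected farm output a day or more ahead, can be sketched in a few lines of Python. Everything below (feature names, units, synthetic data) is an illustrative assumption, not Google's actual pipeline.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    # Illustrative sketch only: predict wind farm output ~36 hours ahead from
    # forecast weather features and recent turbine history. The features and
    # the synthetic data are hypothetical, not DeepMind's actual inputs.
    rng = np.random.default_rng(0)
    n = 5000
    wind_speed = rng.uniform(0, 25, n)       # m/s, from a weather forecast
    wind_dir = rng.uniform(0, 360, n)        # degrees, from a weather forecast
    recent_output = rng.uniform(0, 700, n)   # MW, farm's trailing 24 h average

    # Toy ground truth: output follows a rough cubic power curve plus noise.
    power = np.clip(700 * (wind_speed / 12) ** 3, 0, 700) + rng.normal(0, 20, n)

    X = np.column_stack([wind_speed, wind_dir, recent_output])
    X_train, X_test, y_train, y_test = train_test_split(X, power, random_state=0)
    model = GradientBoostingRegressor().fit(X_train, y_train)
    print("R^2 on held-out data:", model.score(X_test, y_test))

    In the real system the forecast is only half the story: the predicted output is then committed to the grid in advance, and it is that time-based commitment that creates the extra value.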

    Google is working to refine the algorithm so that any discrepancies that occur can be corrected. To date, Google claims, machine learning algorithms have boosted the value of its wind energy by 20%, ‘compared to the baseline scenario of no time-based commitments to the grid’, the blog says.

    Image Courtesy: Google

    Completely doing away with wind variability is next to impossible, but Google claims that Machine Learning and AI would indeed make wind power more predictable and hence more useful.

    This approach could open up new avenues by making wind farm data more reliable and precise. If the productivity of wind farms is greatly increased and their output can be predicted as well as calculated, wind power will be better positioned to compete with conventional electricity sources.

    Google is hopeful that the power of Machine Learning and AI would boost the mass adoption of wind power and turn it into a popular alternative to traditional sources of electricity over the years.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    http://www.geospatialworld.net

    With an average of 55,000+ unique visitors per month, http://www.geospatialworld.net is easily the number one media portal in the geospatial domain, and a reliable source of information for professionals in 150+ countries. The website, which integrates text, graphics and video elements, is an interactive medium for geospatial industry stakeholders to connect through several innovative features, including news, videos, guest blogs, case studies, articles, interviews, business listings and events.

    600,000+ annual unique visitors

     
  • richardmitnick 11:58 am on January 30, 2019 Permalink | Reply
    Tags: AI-Artificial Intelligence, Algorithm could help autonomous underwater vehicles explore risky but scientifically-rewarding environments, , , , Engineers program marine robots to take calculated risks, ,   

    From MIT News: “Engineers program marine robots to take calculated risks” 


    From MIT News

    January 30, 2019
    Jennifer Chu

    MIT engineers have now developed an algorithm that lets autonomous underwater vehicles weigh the risks and potential rewards of exploring an unknown region.
    Image: stock image.

    Algorithm could help autonomous underwater vehicles explore risky but scientifically rewarding environments.

    We know far less about the Earth’s oceans than we do about the surface of the moon or Mars. The sea floor is carved with expansive canyons, towering seamounts, deep trenches, and sheer cliffs, most of which are considered too dangerous or inaccessible for autonomous underwater vehicles (AUVs) to navigate.

    But what if the reward for traversing such places was worth the risk?

    MIT engineers have now developed an algorithm that lets AUVs weigh the risks and potential rewards of exploring an unknown region. For instance, if a vehicle tasked with identifying underwater oil seeps approached a steep, rocky trench, the algorithm could assess the reward level (the probability that an oil seep exists near this trench), and the risk level (the probability of colliding with an obstacle), if it were to take a path through the trench.

    “If we were very conservative with our expensive vehicle, saying its survivability was paramount above all, then we wouldn’t find anything of interest,” says Benjamin Ayton, a graduate student in MIT’s Department of Aeronautics and Astronautics. “But if we understand there’s a tradeoff between the reward of what you gather, and the risk or threat of going toward these dangerous geographies, we can take certain risks when it’s worthwhile.”

    Ayton says the new algorithm can compute tradeoffs of risk versus reward in real time, as a vehicle decides where to explore next. He and his colleagues in the lab of Brian Williams, professor of aeronautics and astronautics, are implementing this algorithm and others on AUVs, with the vision of deploying fleets of bold, intelligent robotic explorers for a number of missions, including looking for offshore oil deposits, investigating the impact of climate change on coral reefs, and exploring extreme environments analogous to Europa, an ice-covered moon of Jupiter that the team hopes vehicles will one day traverse.

    “If we went to Europa and had a very strong reason to believe that there might be a billion-dollar observation in a cave or crevasse, which would justify sending a spacecraft to Europa, then we would absolutely want to risk going in that cave,” Ayton says. “But algorithms that don’t consider risk are never going to find that potentially history-changing observation.”

    Ayton and Williams, along with Richard Camilli of the Woods Hole Oceanographic Institution, will present their new algorithm at the Association for the Advancement of Artificial Intelligence conference this week in Honolulu.

    A bold path

    The team’s new algorithm is the first to enable “risk-bounded adaptive sampling.” An adaptive sampling mission is designed, for instance, to automatically adapt an AUV’s path based on new measurements that the vehicle takes as it explores a given region. Adaptive sampling missions that consider risk typically do so by finding paths with a concrete, acceptable level of risk. For instance, AUVs may be programmed to only chart paths with a chance of collision that doesn’t exceed 5 percent.

    But the researchers found that accounting for risk alone could severely limit a mission’s potential rewards.

    “Before we go into a mission, we want to specify the risk we’re willing to take for a certain level of reward,” Ayton says. “For instance, if a path were to take us to more hydrothermal vents, we would be willing to take this amount of risk, but if we’re not going to see anything, we would be willing to take less risk.”

    The team’s algorithm takes in bathymetric data, or information about the ocean topography, including any surrounding obstacles, along with the vehicle’s dynamics and inertial measurements, to compute the level of risk for a certain proposed path. The algorithm also takes in all previous measurements that the AUV has taken, to compute the probability that such high-reward measurements may exist along the proposed path.

    If the risk-to-reward ratio meets a certain value, determined by scientists beforehand, then the AUV goes ahead with the proposed path, taking more measurements that feed back into the algorithm to help it evaluate the risk and reward of other paths as the vehicle moves forward.
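
    The article doesn’t reproduce the paper’s formulation, but the decision rule it describes, accepting the highest-reward path whose estimated risk stays within a bound set by scientists beforehand, can be sketched as follows. The candidate paths, risks and rewards are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Path:
        name: str
        risk: float      # estimated probability of collision along the path
        reward: float    # expected scientific reward of the measurements

    def choose_path(paths, risk_bound):
        """Return the highest-reward path whose risk stays within the bound."""
        feasible = [p for p in paths if p.risk <= risk_bound]
        if not feasible:
            return None  # no acceptable path: hold position and replan
        return max(feasible, key=lambda p: p.reward)

    candidates = [
        Path("safe plateau", risk=0.01, reward=0.2),
        Path("narrow chasm", risk=0.08, reward=0.9),
        Path("rocky trench", risk=0.30, reward=1.0),
    ]

    for bound in (0.02, 0.10):
        best = choose_path(candidates, bound)
        print(f"risk bound {bound:.2f} -> {best.name}")

    Raising the risk bound is what moves the planner from the conservative route to the bolder, higher-reward one, mirroring the three scenarios in the simulation described below.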

    The researchers tested their algorithm in a simulation of an AUV mission east of Boston Harbor. They used bathymetric data collected from the region during a previous NOAA survey, and simulated an AUV exploring at a depth of 15 meters through regions at relatively high temperatures. They looked at how the algorithm planned out the vehicle’s route under three different scenarios of acceptable risk.

    In the scenario with the lowest acceptable risk, meaning that the vehicle should avoid any regions that would have a very high chance of collision, the algorithm mapped out a conservative path, keeping the vehicle in a safe region that also did not have any high rewards — in this case, high temperatures. For scenarios of higher acceptable risk, the algorithm charted bolder paths that took a vehicle through a narrow chasm, and ultimately to a high-reward region.

    The team also ran the algorithm through 10,000 numerical simulations, generating random environments in each simulation through which to plan a path, and found that the algorithm “trades off risk against reward intuitively, taking dangerous actions only when justified by the reward.”

    A risky slope

    Last December, Ayton, Williams, and others spent two weeks on a cruise off the coast of Costa Rica, deploying underwater gliders, on which they tested several algorithms, including this newest one. For the most part, the algorithm’s path planning agreed with those proposed by several onboard geologists who were looking for the best routes to find oil seeps.

    Ayton says there was a particular moment when the risk-bounded algorithm proved especially handy. An AUV was making its way up a precarious slump, or landslide, where the vehicle couldn’t take too many risks.

    “The algorithm found a method to get us up the slump quickly, while being the most worthwhile,” Ayton says. “It took us up a path that, while it didn’t help us discover oil seeps, it did help us refine our understanding of the environment.”

    “What was really interesting was to watch how the machine algorithms began to ‘learn’ after the findings of several dives, and began to choose sites that we geologists might not have chosen initially,” says Lori Summa, a geologist and guest investigator at the Woods Hole Oceanographic Institution, who took part in the cruise. “This part of the process is still evolving, but it was exciting to watch the algorithms begin to identify the new patterns from large amounts of data, and couple that information to an efficient, ‘safe’ search strategy.”

    In their long-term vision, the researchers hope to use such algorithms to help autonomous vehicles explore environments beyond Earth.

    “If we went to Europa and weren’t willing to take any risks in order to preserve a probe, then the probability of finding life would be very, very low,” Ayton says. “You have to risk a little to get more reward, which is generally true in life as well.”

    This research was supported, in part, by Exxon Mobil, as part of the MIT Energy Initiative, and by NASA.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


     
  • richardmitnick 2:49 pm on October 16, 2018 Permalink | Reply
    Tags: AI-Artificial Intelligence, , , , , Deep Skies Lab, Galaxy Zoo-Citizen Science, , , , ,   

    From Symmetry: “Studying the stars with machine learning” 

    Symmetry Mag
    From Symmetry

    10/16/18
    Evelyn Lamb

    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    To keep up with an impending astronomical increase in data about our universe, astrophysicists turn to machine learning.

    Kevin Schawinski had a problem.

    In 2007 he was an astrophysicist at Oxford University and hard at work reviewing seven years’ worth of photographs from the Sloan Digital Sky Survey—images of more than 900,000 galaxies. He spent his days looking at image after image, noting whether a galaxy looked spiral or elliptical, or logging which way it seemed to be spinning.

    Technological advancements had sped up scientists’ ability to collect information, but scientists were still processing information at the same rate. After working on the task full time and barely making a dent, Schawinski and colleague Chris Lintott decided there had to be a better way to do this.

    There was: a citizen science project called Galaxy Zoo. Schawinski and Lintott recruited volunteers from the public to help out by classifying images online. Showing the same images to multiple volunteers allowed them to check one another’s work. More than 100,000 people chipped in and condensed a task that would have taken years into just under six months.

    Citizen scientists continue to contribute to image-classification tasks. But technology also continues to advance.

    The Dark Energy Spectroscopic Instrument, scheduled to begin observations in 2019, will measure the velocities of about 30 million galaxies and quasars over five years.

    LBNL/DESI Dark Energy Spectroscopic Instrument for the Nicholas U. Mayall 4-meter telescope at Kitt Peak National Observatory near Tucson, Ariz, USA

    The Large Synoptic Survey Telescope, scheduled to begin operations in the early 2020s, will collect more than 30 terabytes of data each night—for a decade.

    LSST Camera, built at SLAC

    LSST telescope, currently under construction on the El Peñón peak at Cerro Pachón, Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    “The volume of datasets [from those surveys] will be at least an order of magnitude larger,” says Camille Avestruz, a postdoctoral researcher at the University of Chicago.

    To keep up, astrophysicists like Schawinski and Avestruz have recruited a new class of non-scientist scientists: machines.

    Researchers are using artificial intelligence to help with a variety of tasks in astronomy and cosmology, from image analysis to telescope scheduling.

    Superhuman scheduling, computerized calibration

    Artificial intelligence is an umbrella term for ways in which computers can seem to reason, make decisions, learn, and perform other tasks that we associate with human intelligence. Machine learning is a subfield of artificial intelligence that uses statistical techniques and pattern recognition to train computers to make decisions, rather than programming more direct algorithms.

    In 2017, a research group from Stanford University used machine learning to study images of strong gravitational lensing, a phenomenon in which an accumulation of matter in space is dense enough that it bends light waves as they travel around it.

    Gravitational Lensing NASA/ESA

    Because many gravitational lenses can’t be accounted for by luminous matter alone, a better understanding of gravitational lenses can help astronomers gain insight into dark matter.

    In the past, scientists have conducted this research by comparing actual images of gravitational lenses with large numbers of computer simulations of mathematical lensing models, a process that can take weeks or even months for a single image. The Stanford team showed that machine learning algorithms can speed up this process by a factor of millions.
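
    The Stanford group’s architecture isn’t given here, but the gist of the speedup, replacing per-image comparison against banks of simulations with a neural network that regresses lens-model parameters directly from an image, can be sketched minimally. Layer sizes, the parameter count and the data below are placeholders.

    import torch
    import torch.nn as nn

    # Illustrative sketch: a small CNN that regresses lensing-model parameters
    # (e.g., Einstein radius and two ellipticity components) directly from an
    # image, rather than fitting each image against many simulations.
    n_params = 3
    net = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 8 * 8, n_params),
    )

    # In practice the training pairs come from simulated lenses with known
    # parameters; random tensors stand in for them here.
    images = torch.randn(64, 1, 32, 32)
    params = torch.randn(64, n_params)

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(20):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(images), params)
        loss.backward()
        opt.step()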

    Greg Stewart, SLAC National Accelerator Laboratory

    Schawinski, who is now an astrophysicist at ETH Zürich, uses machine learning in his current work. His group has used tools called generative adversarial networks, or GANs, to recover clean versions of images that have been degraded by random noise. They recently published a paper [Astronomy and Astrophysics] about using AI to generate and test new hypotheses in astrophysics and other areas of research.
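
    As a rough illustration of the GAN-denoising idea (not the ETH group’s actual architecture), a convolutional generator that maps degraded images to cleaned ones can be trained against a discriminator that tries to tell the cleaned outputs from true clean images, with a reconstruction term anchoring the output to the known clean original. All sizes and data below are placeholders.

    import torch
    import torch.nn as nn

    generator = nn.Sequential(                 # degraded image -> cleaned image
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    discriminator = nn.Sequential(             # image -> "real clean?" logit
        nn.Flatten(),
        nn.Linear(32 * 32, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )

    bce = nn.BCEWithLogitsLoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

    for step in range(100):
        clean = torch.rand(8, 1, 32, 32)               # stand-in clean images
        noisy = clean + 0.3 * torch.randn_like(clean)  # artificially degraded

        # Discriminator update: real clean images vs. generator outputs.
        d_opt.zero_grad()
        real = discriminator(clean)
        fake = discriminator(generator(noisy).detach())
        d_loss = bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))
        d_loss.backward()
        d_opt.step()

        # Generator update: fool the discriminator while staying close to
        # the known clean original.
        g_opt.zero_grad()
        denoised = generator(noisy)
        logits = discriminator(denoised)
        g_loss = bce(logits, torch.ones_like(logits)) \
            + nn.functional.mse_loss(denoised, clean)
        g_loss.backward()
        g_opt.step()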

    Another application of machine learning in astrophysics involves solving logistical challenges such as scheduling. There are only so many hours in a night that a given high-powered telescope can be used, and it can only point in one direction at a time. “It costs millions of dollars to use a telescope for on the order of weeks,” says Brian Nord, a physicist at the University of Chicago and part of Fermilab’s Machine Intelligence Group, which is tasked with helping researchers in all areas of high-energy physics deploy AI in their work.

    Machine learning can help observatories schedule telescopes so they can collect data as efficiently as possible. Both Schawinski’s lab and Fermilab are using a technique called reinforcement learning to train algorithms to solve problems like this one. In reinforcement learning, an algorithm isn’t trained on “right” and “wrong” answers but through differing rewards that depend on its outputs. The algorithms must strike a balance between the safe, predictable payoffs of understood options and the potential for a big win with an unexpected solution.
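
    Neither lab’s scheduler is described in detail, but the trade-off in that last sentence is the classic multi-armed bandit setting from reinforcement learning. A minimal epsilon-greedy sketch, with invented targets and payoffs, shows the mechanic.

    import random

    # Epsilon-greedy illustration of exploration vs. exploitation. The
    # "targets" and their hidden mean payoffs are invented for demonstration.
    random.seed(1)
    targets = {"A": 0.3, "B": 0.5, "C": 0.8}    # hidden mean reward per target
    estimates = {t: 0.0 for t in targets}
    counts = {t: 0 for t in targets}
    epsilon = 0.1                               # fraction of nights spent exploring

    for night in range(1000):
        if random.random() < epsilon:
            choice = random.choice(list(targets))       # explore a random target
        else:
            choice = max(estimates, key=estimates.get)  # exploit the best estimate
        reward = random.gauss(targets[choice], 0.1)     # noisy payoff of the night
        counts[choice] += 1
        # Incremental running mean of the observed rewards for this target.
        estimates[choice] += (reward - estimates[choice]) / counts[choice]

    print({t: round(v, 2) for t, v in estimates.items()}, counts)

    The agent quickly settles on the high-payoff target while still occasionally sampling the others, the same balance between predictable payoffs and potential big wins that the paragraph describes.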

    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    A growing field

    When computer science graduate student Shubhendu Trivedi of the Toyota Technological Institute at University of Chicago started teaching a graduate course on deep learning with one of his mentors, Risi Kondor, he was pleased with how many researchers from the physical sciences signed up for it. They didn’t know much about how to use AI in their research, and Trivedi realized there was an unmet need for machine learning experts to help scientists in different fields find ways of exploiting these new techniques.

    The conversations he had with researchers in his class evolved into collaborations, including participation in the Deep Skies Lab, an astronomy and artificial intelligence research group co-founded by Avestruz, Nord and astronomer Joshua Peek of the Space Telescope Science Institute. Earlier this month, they submitted their first peer-reviewed paper demonstrating the efficiency of an AI-based method to measure gravitational lensing in the Cosmic Microwave Background [CMB].

    Similar groups are popping up across the world, from Schawinski’s group in Switzerland to the Centre for Astrophysics and Supercomputing in Australia. And adoption of machine learning techniques in astronomy is increasing rapidly. In an arXiv search of astronomy papers, the terms “deep learning” and “machine learning” appear more in the titles of papers from the first seven months of 2018 than from all of 2017, which in turn had more than 2016.

    “Five years ago, [machine learning algorithms in astronomy] were esoteric tools that performed worse than humans in most circumstances,” Nord says. Today, more and more algorithms are consistently outperforming humans. “You’d be surprised at how much low-hanging fruit there is.”

    But there are obstacles to introducing machine learning into astrophysics research. One of the biggest is the fact that machine learning is a black box. “We don’t have a fundamental theory of how neural networks work and make sense of things,” Schawinski says. Scientists are understandably nervous about using tools without fully understanding how they work.

    Another related stumbling block is uncertainty. Machine learning often depends on inputs that all have some amount of noise or error, and the models themselves make assumptions that introduce uncertainty. Researchers using machine learning techniques in their work need to understand these uncertainties and communicate those accurately to each other and the broader public.

    The state of the art in machine learning is changing so rapidly that researchers are reluctant to make predictions about what will be coming even in the next five years. “I would be really excited if as soon as data comes off the telescopes, a machine could look at it and find unexpected patterns,” Nord says.

    No matter what form future advances take, the data keeps coming faster and faster, and researchers are increasingly convinced that artificial intelligence will be necessary to help them keep up.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 6:53 pm on September 11, 2018 Permalink | Reply
    Tags: AI-Artificial Intelligence, , , , , , , , , The notorious repeating fast radio source FRB 121102,   

    From Breakthrough Listen via Science Alert: “Astronomers Have Detected an Astonishing 72 New Mystery Radio Bursts From Space”

    From Breakthrough Listen Project

    via

    ScienceAlert

    11 SEP 2018
    MICHELLE STARR

    A massive number of new signals have been discovered coming from the notorious repeating fast radio source FRB 121102 – and we can thank artificial intelligence for these findings.

    Researchers at the search for extraterrestrial intelligence (SETI) project Breakthrough Listen applied machine learning to comb through existing data, and found 72 fast radio bursts that had previously been missed.

    Fast radio bursts (FRBs) are among the most mysterious phenomena in the cosmos. They are extremely powerful, generating as much energy as hundreds of millions of Suns. But they are also extremely short, lasting just milliseconds; and most of them only occur once, without warning.

    This means they can’t be predicted, so astronomers can’t plan observations around them; bursts are typically picked up only later, in data from other radio observations of the sky.

    Except for one source. FRB 121102 is a special individual – because ever since its discovery in 2012, it has been caught bursting again and again, the only FRB source known to behave this way.

    Because we know FRB 121102 to be a repeating source of FRBs, this means we can try to catch it in the act. This is exactly what researchers at Breakthrough Listen did last year. On 26 August 2017, they pointed the Green Bank Telescope in West Virginia at its location for five hours.

    In the 400 terabytes of data from that observation, the researchers discovered 21 FRBs using standard computer algorithms, all from within the first hour. They concluded that the source goes through periods of frenzied activity and quiescence.

    But the powerful new algorithm used to reanalyse that August 26 data suggests that FRB 121102 is a lot more active and possibly complex than originally thought. Researchers trained what is known as a convolutional neural network to look for the signals, then set it loose on the data like a truffle pig.

    It returned triumphant with 72 previously undetected signals, bringing the total number that astronomers have observed from the object to around 300.
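
    Breakthrough Listen’s network itself isn’t reproduced in the article, but the core idea, a convolutional classifier that scores frequency-versus-time spectrogram chunks for the presence of a dispersed burst, can be sketched minimally. The input size, training data and labels below are invented.

    import torch
    import torch.nn as nn

    # Illustrative sketch: classify 64x64 spectrogram chunks (frequency x time)
    # as burst vs. noise. Real training would use chunks with injected
    # synthetic bursts as positives; random tensors stand in here.
    model = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 16 * 16, 1),    # single logit: burst present or not
    )

    chunks = torch.randn(32, 1, 64, 64)
    labels = torch.randint(0, 2, (32, 1)).float()

    loss_fn = nn.BCEWithLogitsLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(10):
        opt.zero_grad()
        loss = loss_fn(model(chunks), labels)
        loss.backward()
        opt.step()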

    “This work is only the beginning of using these powerful methods to find radio transients,” said astronomer Gerry Zhang of the University of California Berkeley, which runs Breakthrough Listen.

    “We hope our success may inspire other serious endeavours in applying machine learning to radio astronomy.”

    The new result has helped us learn a little more about FRB 121102, putting constraints on the periodicity of the bursts. It suggests, the researchers said, that there’s no pattern to the way we receive them – unless the pattern is shorter than 10 milliseconds.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    Breakthrough Listen is the largest ever scientific research program aimed at finding evidence of civilizations beyond Earth. The scope and power of the search are on an unprecedented scale:

    The program includes a survey of the 1,000,000 closest stars to Earth. It scans the center of our galaxy and the entire galactic plane. Beyond the Milky Way, it listens for messages from the 100 closest galaxies to ours.

    The instruments used are among the world’s most powerful. They are 50 times more sensitive than existing telescopes dedicated to the search for intelligence.

    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia

    UCSC Lick Automated Planet Finder telescope, Mount Hamilton, CA, USA



    GBO radio telescope, Green Bank, West Virginia, USA

    The radio surveys cover 10 times more of the sky than previous programs. They also cover at least 5 times more of the radio spectrum – and do it 100 times faster. They are sensitive enough to hear a common aircraft radar transmitting to us from any of the 1000 nearest stars.

    We are also carrying out the deepest and broadest ever search for optical laser transmissions. These spectroscopic searches are 1000 times more effective at finding laser signals than ordinary visible light surveys. They could detect a 100 watt laser (the energy of a normal household bulb) from 25 trillion miles away.

    Listen combines these instruments with innovative software and data analysis techniques.

    The initiative will span 10 years and commit a total of $100,000,000.

     