Tagged: Machine learning

  • richardmitnick 2:49 pm on October 16, 2018 Permalink | Reply
    Tags: Deep Skies Lab, Galaxy Zoo-Citizen Science, Gravitational lenses, Machine learning

    From Symmetry: “Studying the stars with machine learning” 

    Symmetry Mag
    From Symmetry

    10/16/18
    Evelyn Lamb

    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    To keep up with an impending astronomical increase in data about our universe, astrophysicists turn to machine learning.

    Kevin Schawinski had a problem.

    In 2007 he was an astrophysicist at Oxford University and hard at work reviewing seven years’ worth of photographs from the Sloan Digital Sky Survey—images of more than 900,000 galaxies. He spent his days looking at image after image, noting whether a galaxy looked spiral or elliptical, or logging which way it seemed to be spinning.

    Technological advancements had sped up scientists’ ability to collect information, but scientists were still processing information at the same rate. After working on the task full time and barely making a dent, Schawinski and colleague Chris Lintott decided there had to be a better way to do this.

    There was: a citizen science project called Galaxy Zoo. Schawinski and Lintott recruited volunteers from the public to help out by classifying images online. Showing the same images to multiple volunteers allowed them to check one another’s work. More than 100,000 people chipped in and condensed a task that would have taken years into just under six months.

    Citizen scientists continue to contribute to image-classification tasks. But technology also continues to advance.

    The Dark Energy Spectroscopic Instrument, scheduled to begin observations in 2019, will measure the velocities of about 30 million galaxies and quasars over five years.

    LBNL/DESI Dark Energy Spectroscopic Instrument for the Nicholas U. Mayall 4-meter telescope at Kitt Peak National Observatory near Tucson, Arizona, USA

    The Large Synoptic Survey Telescope, scheduled to begin operations in the early 2020s, will collect more than 30 terabytes of data each night—for a decade.

    LSST


    LSST Camera, built at SLAC



    LSST telescope, currently under construction on the El Peñón peak of Cerro Pachón, a 2,682-meter-high mountain in the Coquimbo Region of northern Chile, alongside the existing Gemini South and Southern Astrophysical Research telescopes.

    “The volume of datasets [from those surveys] will be at least an order of magnitude larger,” says Camille Avestruz, a postdoctoral researcher at the University of Chicago.

    To keep up, astrophysicists like Schawinski and Avestruz have recruited a new class of non-scientist scientists: machines.

    Researchers are using artificial intelligence to help with a variety of tasks in astronomy and cosmology, from image analysis to telescope scheduling.

    Superhuman scheduling, computerized calibration

    Artificial intelligence is an umbrella term for ways in which computers can seem to reason, make decisions, learn, and perform other tasks that we associate with human intelligence. Machine learning is a subfield of artificial intelligence that uses statistical techniques and pattern recognition to train computers to make decisions, rather than programming more direct algorithms.
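
    As a rough illustration of that distinction (not drawn from the article), the sketch below contrasts a hand-written rule with a classifier that learns its decision rule from labeled examples; the feature names and every number in it are invented.

        # Illustrative only: a hand-coded rule versus a trained classifier.
        # The two "features" and all numbers here are made up.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def hand_coded_rule(features):
            # A direct algorithm: a human picks the feature and the threshold.
            color, concentration = features
            return "elliptical" if concentration > 0.5 else "spiral"

        # Machine learning instead infers the decision rule from labeled examples.
        X = np.array([[0.9, 0.8], [0.2, 0.3], [0.7, 0.9], [0.1, 0.2]])  # toy feature vectors
        y = np.array([1, 0, 1, 0])                                      # 1 = elliptical, 0 = spiral
        model = LogisticRegression().fit(X, y)
        print(model.predict([[0.3, 0.4]]))  # decision learned from data, not hand-designed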

    In 2017, a research group from Stanford University used machine learning to study images of strong gravitational lensing, a phenomenon in which an accumulation of matter in space is dense enough that it bends light waves as they travel around it.

    Gravitational Lensing NASA/ESA

    Because many gravitational lenses can’t be accounted for by luminous matter alone, a better understanding of gravitational lenses can help astronomers gain insight into dark matter.

    In the past, scientists have conducted this research by comparing actual images of gravitational lenses with large numbers of computer simulations of mathematical lensing models, a process that can take weeks or even months for a single image. The Stanford team showed that machine learning algorithms can speed up this process by a factor of millions.
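
    The Stanford analysis used deep neural networks trained largely on simulated lenses; the sketch below is not that pipeline, just a minimal convolutional classifier of the general kind one might train to separate lens from non-lens image cutouts. The image size and layer widths are arbitrary assumptions.

        # Minimal sketch (not the Stanford code): a small CNN that labels an image
        # as "lens" or "not lens". Input size, channels and layer widths are invented.
        import torch
        import torch.nn as nn

        class LensClassifier(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 16 * 16, 2)  # assumes 64x64 input images

            def forward(self, x):
                x = self.features(x)
                return self.classifier(x.flatten(1))

        model = LensClassifier()
        fake_batch = torch.randn(8, 1, 64, 64)   # stand-in for simulated survey cutouts
        logits = model(fake_batch)               # scores for "not lens" vs "lens"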

    Greg Stewart, SLAC National Accelerator Laboratory

    Schawinski, who is now an astrophysicist at ETH Zürich, uses machine learning in his current work. His group has used tools called generative adversarial networks, or GANs, to recover clean versions of images that have been degraded by random noise. They recently published a paper [Astronomy and Astrophysics] about using AI to generate and test new hypotheses in astrophysics and other areas of research.
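
    The published model is more involved; as a loose sketch of the adversarial idea only, a generator can be trained to map a degraded image toward a clean one while a discriminator learns to tell its output from real clean images. Everything below, including the layer sizes, is an invented stand-in rather than the group's code.

        # Generic GAN-style sketch (not the ETH Zurich model). Architectures and
        # sizes are invented for illustration.
        import torch
        import torch.nn as nn

        generator = nn.Sequential(               # noisy image in, "cleaned" image out
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
        discriminator = nn.Sequential(           # image in, real-vs-generated score out
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 32 * 32, 1),
        )

        noisy = torch.rand(4, 1, 64, 64)         # degraded images
        clean = torch.rand(4, 1, 64, 64)         # corresponding clean images
        bce = nn.BCEWithLogitsLoss()

        fake = generator(noisy)
        d_loss = bce(discriminator(clean), torch.ones(4, 1)) + \
                 bce(discriminator(fake.detach()), torch.zeros(4, 1))   # discriminator objective
        g_loss = bce(discriminator(fake), torch.ones(4, 1))             # generator tries to fool it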

    Another application of machine learning in astrophysics involves solving logistical challenges such as scheduling. There are only so many hours in a night that a given high-powered telescope can be used, and it can only point in one direction at a time. “It costs millions of dollars to use a telescope for on the order of weeks,” says Brian Nord, a physicist at the University of Chicago and part of Fermilab’s Machine Intelligence Group, which is tasked with helping researchers in all areas of high-energy physics deploy AI in their work.

    Machine learning can help observatories schedule telescopes so they can collect data as efficiently as possible. Both Schawinski’s lab and Fermilab are using a technique called reinforcement learning to train algorithms to solve problems like this one. In reinforcement learning, an algorithm isn’t trained on “right” and “wrong” answers but through differing rewards that depend on its outputs. The algorithms must strike a balance between the safe, predictable payoffs of understood options and the potential for a big win with an unexpected solution.
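
    A toy way to see that trade-off is an epsilon-greedy scheduler that usually picks the pointing with the best estimated payoff but occasionally tries something else. The target names and rewards below are invented, and real telescope schedulers are far more sophisticated.

        # Toy epsilon-greedy sketch of the explore/exploit trade-off described above.
        # "Rewards" (data quality per observation) are simulated; nothing here
        # reflects a real scheduler.
        import random

        targets = ["field_A", "field_B", "field_C"]            # hypothetical pointings
        true_reward = {"field_A": 0.6, "field_B": 0.8, "field_C": 0.4}
        estimates = {t: 0.0 for t in targets}
        counts = {t: 0 for t in targets}
        epsilon = 0.1                                           # fraction of time spent exploring

        for night in range(1000):
            if random.random() < epsilon:
                choice = random.choice(targets)                 # explore an unexpected option
            else:
                choice = max(targets, key=lambda t: estimates[t])   # exploit best-known option
            reward = true_reward[choice] + random.gauss(0, 0.1)     # noisy payoff
            counts[choice] += 1
            estimates[choice] += (reward - estimates[choice]) / counts[choice]  # running mean

        print(max(targets, key=lambda t: estimates[t]))         # scheduler's preferred target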

    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    A growing field

    When computer science graduate student Shubhendu Trivedi of the Toyota Technological Institute at University of Chicago started teaching a graduate course on deep learning with one of his mentors, Risi Kondor, he was pleased with how many researchers from the physical sciences signed up for it. They didn’t know much about how to use AI in their research, and Trivedi realized there was an unmet need for machine learning experts to help scientists in different fields find ways of exploiting these new techniques.

    The conversations he had with researchers in his class evolved into collaborations, including participation in the Deep Skies Lab, an astronomy and artificial intelligence research group co-founded by Avestruz, Nord and astronomer Joshua Peek of the Space Telescope Science Institute. Earlier this month, they submitted their first peer-reviewed paper demonstrating the efficiency of an AI-based method to measure gravitational lensing in the Cosmic Microwave Background [CMB].

    Similar groups are popping up across the world, from Schawinski’s group in Switzerland to the Centre for Astrophysics and Supercomputing in Australia. And adoption of machine learning techniques in astronomy is increasing rapidly. In an arXiv search of astronomy papers, the terms “deep learning” and “machine learning” appear more in the titles of papers from the first seven months of 2018 than from all of 2017, which in turn had more than 2016.
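
    Counts like these can be reproduced, roughly, with the public arXiv API; the sketch below assumes the export.arxiv.org query endpoint and its title/date search syntax, and the exact query strings are untested guesses rather than the method behind the figures quoted here.

        # Rough sketch of counting astro-ph titles that mention "machine learning",
        # using the public arXiv API (http://export.arxiv.org/api/query). The query
        # syntax and date ranges shown are assumptions based on the API documentation.
        import re
        import urllib.parse
        import urllib.request

        def count_titles(phrase, start, end):
            query = f'ti:"{phrase}" AND cat:astro-ph* AND submittedDate:[{start} TO {end}]'
            url = ("http://export.arxiv.org/api/query?" +
                   urllib.parse.urlencode({"search_query": query, "max_results": 0}))
            with urllib.request.urlopen(url) as response:
                feed = response.read().decode()
            # The Atom feed reports the number of matches in <opensearch:totalResults>.
            match = re.search(r"totalResults[^>]*>(\d+)<", feed)
            return int(match.group(1)) if match else 0

        print(count_titles("machine learning", "201701010000", "201801010000"))  # all of 2017
        print(count_titles("machine learning", "201801010000", "201808010000"))  # Jan-Jul 2018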

    “Five years ago, [machine learning algorithms in astronomy] were esoteric tools that performed worse than humans in most circumstances,” Nord says. Today, more and more algorithms are consistently outperforming humans. “You’d be surprised at how much low-hanging fruit there is.”

    But there are obstacles to introducing machine learning into astrophysics research. One of the biggest is the fact that machine learning is a black box. “We don’t have a fundamental theory of how neural networks work and make sense of things,” Schawinski says. Scientists are understandably nervous about using tools without fully understanding how they work.

    Another related stumbling block is uncertainty. Machine learning often depends on inputs that all have some amount of noise or error, and the models themselves make assumptions that introduce uncertainty. Researchers using machine learning techniques in their work need to understand these uncertainties and communicate those accurately to each other and the broader public.

    The state of the art in machine learning is changing so rapidly that researchers are reluctant to make predictions about what will be coming even in the next five years. “I would be really excited if as soon as data comes off the telescopes, a machine could look at it and find unexpected patterns,” Nord says.

    No matter exactly the form future advances take, the data keeps coming faster and faster, and researchers are increasingly convinced that artificial intelligence is going to be necessary to help them keep up.

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 11:29 am on September 21, 2018 Permalink | Reply
    Tags: Andrew Peterson, Brown awarded $3.5M to speed up atomic-scale computer simulations, Computational power is growing rapidly which lets us perform larger and more realistic simulations, Different simulations often have the same sets of calculations underlying them- so finding what can be re-used saves a lot of time and money, Machine learning

    From Brown University: “Brown awarded $3.5M to speed up atomic-scale computer simulations” 

    Brown University
    From Brown University

    September 20, 2018
    Kevin Stacey
    kevin_stacey@brown.edu
    401-863-3766

    Andrew Peterson. No photo credit.

    With a new grant from the U.S. Department of Energy, a Brown University-led research team will use machine learning to speed up atom-level simulations of chemical reactions and the properties of materials.

    “Simulations provide insights into materials and chemical processes that we can’t readily get from experiments,” said Andrew Peterson, an associate professor in Brown’s School of Engineering who will lead the work.

    “Computational power is growing rapidly, which lets us perform larger and more realistic simulations. But as the size of the simulations grows, the time involved in running them can grow exponentially. This paradox means that even with the growth in computational power, our field still cannot perform truly large-scale simulations. Our goal is to speed those simulations up dramatically — ideally by orders of magnitude — using machine learning.”

    The grant provides $3.5 million for the work over four years. Peterson will work with two Brown colleagues — Franklin Goldsmith, assistant professor of engineering, and Brenda Rubenstein, assistant professor of chemistry — as well as researchers from Carnegie Mellon, Georgia Tech and MIT.

    The idea behind the work is that different simulations often have the same sets of calculations underlying them. Peterson and his colleagues aim to use machine learning to find those underlying similarities and fast-forward through them.

    “What we’re doing is taking the results of calculations from prior simulations and using them to predict the outcome of calculations that haven’t been done yet,” Peterson said. “If we can eliminate the need to do similar calculations over and over again, we can speed things up dramatically, potentially by orders of magnitude.”
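
    Reusing prior calculations this way is essentially surrogate modeling: fit a fast statistical model to results already in hand, then query it instead of rerunning the expensive calculation. The sketch below does this with a generic kernel ridge regressor on made-up data; it is not AMP (mentioned below) or the team's actual workflow.

        # Generic surrogate-model sketch: fit a regressor on the results of
        # calculations that have already been run, then predict ones not yet done.
        # The "descriptors" and energies are random stand-ins for real simulation data.
        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(0)
        X_done = rng.random((200, 10))        # descriptors of configurations already simulated
        y_done = rng.random(200)              # e.g., computed energies from those simulations

        surrogate = KernelRidge(kernel="rbf", alpha=1e-3).fit(X_done, y_done)

        X_new = rng.random((5, 10))           # configurations not yet simulated
        predicted = surrogate.predict(X_new)  # cheap estimates instead of a full calculation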

    The team will focus their work initially on simulations of electrocatalysis — the kinds of chemical reactions that are important in devices like fuel cells and batteries. These are complex, often multi-step reactions that are fertile ground for simulation-driven research, Peterson says.

    Atomic-scale simulations have demonstrated usefulness in Peterson’s own work in the design of new catalysts. In a recent example, Peterson worked with Brown chemist Shouheng Sun on a gold nanoparticle catalyst that can perform a reaction necessary for converting carbon dioxide into useful forms of carbon. Peterson’s simulations showed it was the sharp edges of the oddly shaped catalyst that were particularly active for the desired reaction.

    “That led us to change the geometry of the catalyst to a nanowire — something that’s basically all edges — to maximize its reactivity,” Peterson said. “We might have eventually tried a nanowire by trial and error, but because of the computational insights we were able to get there much more quickly.”

    The researchers will use a software package that Peterson’s research group developed previously as a starting point. The software, called AMP (Atomistic Machine-learning Package), is open-source and already widely used in the simulation community, Peterson says.

    The Department of Energy grant will bring atomic-scale simulations — and the insights they produce — to bear on ever larger and more complex simulations. And while the work under the grant will focus on electrocatalysis, the tools the team develops should be widely applicable to other types of material and chemical simulations.

    Peterson is hopeful that the investment that the federal government is making in machine learning will be repaid by making better use of valuable computing resources.

    “Modern supercomputers cost millions of dollars to build, and simulation time on them is precious,” Peterson said. “If we’re able to free up time on those machines for additional simulations to be run, that translates into vastly increased return-on-investment for those machines. It’s real money.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Brown

    Brown U Robinson Hall
    Located in historic Providence, Rhode Island and founded in 1764, Brown University is the seventh-oldest college in the United States. Brown is an independent, coeducational Ivy League institution comprising undergraduate and graduate programs, plus the Alpert Medical School, School of Public Health, School of Engineering, and the School of Professional Studies.

    With its talented and motivated student body and accomplished faculty, Brown is a leading research university that maintains a particular commitment to exceptional undergraduate instruction.

    Brown’s vibrant, diverse community consists of 6,000 undergraduates, 2,000 graduate students, 400 medical school students, more than 5,000 summer, visiting and online students, and nearly 700 faculty members. Brown students come from all 50 states and more than 100 countries.

    Undergraduates pursue bachelor’s degrees in more than 70 concentrations, ranging from Egyptology to cognitive neuroscience. Anything’s possible at Brown—the university’s commitment to undergraduate freedom means students must take responsibility as architects of their courses of study.

     
  • richardmitnick 2:04 pm on September 12, 2018 Permalink | Reply
    Tags: Machine learning

    From Fermi National Accelerator Lab: “MicroBooNE demonstrates use of convolutional neural networks on liquid-argon TPC data for first time” 

    FNAL II photo

    FNAL Art Image
    FNAL Art Image by Angela Gonzales

    Fermi National Accelerator Lab is an enduring source of strength for the US contribution to scientific research worldwide.

    September 12, 2018
    Victor Genty, Kazuhiro Terao and Taritree Wongjirad

    It is hard these days not to encounter examples of machine learning out in the world. Chances are, if your phone unlocks using facial recognition or if you’re using voice commands to control your phone, you are likely using machine learning algorithms — in particular deep neural networks.

    What makes these algorithms so powerful is that they learn relationships between high-level concepts we wish to find in an image (faces) or sound wave (words) with sets of low-level patterns (lines, shapes, colors, textures, individual sounds), which represent them in the data. Furthermore, these low-level patterns and relationships do not have to be conceived of or hand-designed by humans, but instead are learned directly from examples of the data. Not having to come up with new patterns to find for each new problem is why deep neural networks have been able to advance the state of the art for so many different types of problems: from analyzing video for self-driving cars to assisting robots in learning how to manipulate objects.

    Here at Fermilab, there has been a lot of effort in having these deep neural networks help us analyze the data from our particle detectors so that we can more quickly and effectively use it to look for new physics. These applications are a continuation of the high-energy physics community’s long history in adopting and furthering the use of machine learning algorithms.

    Recently, the MicroBooNE neutrino experiment published a paper describing how they used convolutional neural networks — a particular type of deep neural network — to sort individual pixels coming from images made by a particular type of detector known as a liquid-argon time projection chamber (LArTPC). The experiment designed a convolutional neural network called U-ResNet to distinguish between two types of pixels: those that were part of a track-like particle trajectory and those that were part of a shower-like particle trajectory.

    This plot shows a comparison of U-ResNet performance on data and simulation, where the true pixel labels are provided by a physicist. The sample used is 100 events that contain a charged-current neutrino interaction candidate with neutral pions produced at the event vertex. The horizontal axis shows the fraction of pixels where the prediction by U-ResNet differed from the labels for each event. The error bars indicate only a statistical uncertainty.

    Track-like trajectories, made by particles such as a muon or proton, consist of a line with small curvature. Shower-like trajectories, produced by particles such as an electron or photon, are more complex topological features with many branching trajectories. This distinction is important because separating these types of topologies is known to be difficult for traditional algorithms. Not only that, shower-like shapes are produced when electrons and photons interact in the detector, and these two particles are often an important signal or background in physics analyses.
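
    U-ResNet itself is described in the paper linked below; purely as an illustration of what per-pixel classification means, here is a tiny fully convolutional network that emits a class score for every pixel. The shapes and layers are arbitrary and far simpler than the real architecture.

        # Tiny fully convolutional sketch of per-pixel classification (background,
        # track, shower). Illustrative only; this is not U-ResNet.
        import torch
        import torch.nn as nn

        segmenter = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 1),              # 3 output channels: background, track, shower
        )

        image = torch.randn(1, 1, 256, 256)   # stand-in for a LArTPC wire-plane image
        scores = segmenter(image)             # shape (1, 3, 256, 256): class scores per pixel
        labels = scores.argmax(dim=1)         # predicted label for every pixel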

    MicroBooNE researchers demonstrated that these networks not only performed well but also worked in a similar fashion when presented with simulated data and real data. The latter is the first time this has been demonstrated for data from LArTPCs.

    Showing that networks behave the same on simulated and real data is critical, because these networks are typically trained on simulated data. Recall that these networks learn by looking at many examples. In industry, gathering large “training” data sets is an arduous and expensive task. However, particle physicists have a secret weapon — they can create as much simulated data as they want, since all experiments produce a highly detailed model of their detectors and data acquisition systems in order to produce as faithful a representation of the data as possible.

    However, these models are never perfect. And so a big question was, “Is the simulated data close enough to the real data to properly train these neural networks?”

    The way MicroBooNE answered this question is by performing a Turing test that compares the performance of the network to that of a physicist. They demonstrated that the accuracy of the human was similar to the machine when labeling simulated data, for which an absolute accuracy can be defined. They then compared the labels for real data. Here the disagreement between labels was low, and similar between machine and human (See the top figure. See the figure below for an example of how a human and computer labeled the same data event.)

    In addition, a number of qualitative studies looked at the correlation between manipulations of the image and the label provided by the network. They showed that the correlations follow human-like intuitions. For example, as a line segment gets shorter, the network becomes less confident if the segment is due to a track or a shower. This suggests that the low-level correlations being used are the same physically motivated correlations a physicist would use if engineering an algorithm by hand.

    This example image shows a charged-current neutrino interaction with decay gamma rays from a neutral pion (left). The label image (middle) is shown with the output of U-ResNet (right), where track and shower pixels are shown in yellow and cyan, respectively.

    Demonstrating this simulated-versus-real data milestone is important because convolutional neural networks are valuable to current and future neutrino experiments that will use LArTPCs. This track-shower labeling is currently being employed in upcoming MicroBooNE analyses. Furthermore, for the upcoming Deep Underground Neutrino Experiment (DUNE), convolutional neural networks are showing much promise toward having the performance necessary to achieve DUNE’s physics goals, such as the measurement of CP violation, a possible explanation for the asymmetry between matter and antimatter in the present universe.

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA


    FNAL DUNE Argon tank at SURF


    Surf-Dune/LBNF Caverns at Sanford



    SURF building in Lead SD USA

    The more demonstrations there are that these algorithms work on real LArTPC data, the more confidence the community can have that convolutional neural networks will help us learn about the properties of the neutrino and the fundamental laws of nature once DUNE begins to take data.

    Science paper:
    A Deep Neural Network for Pixel-Level Electromagnetic Particle Identification in the MicroBooNE Liquid Argon Time Projection Chamber
    https://arxiv.org/abs/1808.07269

    Victor Genty, Kazuhiro Terao and Taritree Wongjirad are three of the scientists who analyzed this result. Victor Genty is a graduate student at Columbia University. Kazuhiro Terao is a physicist at SLAC National Accelerator Laboratory. Taritree Wongjirad is an assistant professor at Tufts University.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    FNAL Icon

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.


    FNAL/MINERvA

    FNAL DAMIC

    FNAL Muon g-2 studio

    FNAL Short-Baseline Near Detector under construction

    FNAL Mu2e solenoid

    Dark Energy Camera [DECam], built at FNAL

    FNAL DUNE Argon tank at SURF

    FNAL/MicrobooNE

    FNAL Don Lincoln

    FNAL/MINOS

    FNAL Cryomodule Testing Facility

    FNAL Minos Far Detector

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA

    FNAL/NOvA experiment map

    FNAL NOvA Near Detector

    FNAL ICARUS

    FNAL Holometer

     
  • richardmitnick 6:53 pm on September 11, 2018 Permalink | Reply
    Tags: Machine learning, The notorious repeating fast radio source FRB 121102

    From Breakthrough Listen via Science Alert: “Astronomers Have Detected an Astonishing 72 New Mystery Radio Bursts From Space”

    From Breakthrough Listen Project

    via

    Science Alert

    11 SEP 2018
    MICHELLE STARR

    A massive number of new signals have been discovered coming from the notorious repeating fast radio source FRB 121102 – and we can thank artificial intelligence for these findings.

    Researchers at the search for extraterrestrial intelligence (SETI) project Breakthrough Listen applied machine learning to comb through existing data, and found 72 fast radio bursts that had previously been missed.

    Fast radio bursts (FRBs) are among the most mysterious phenomena in the cosmos. They are extremely powerful, generating as much energy as hundreds of millions of Suns. But they are also extremely short, lasting just milliseconds; and most of them only occur once, without warning.

    This means they can’t be predicted; so it’s not like astronomers are able to plan observations. They are only picked up later in data from other radio observations of the sky.

    Except for one source. FRB 121102 is a special individual – because ever since its discovery in 2012, it has been caught bursting again and again, the only FRB source known to behave this way.

    Because we know FRB 121102 to be a repeating source of FRBs, this means we can try to catch it in the act. This is exactly what researchers at Breakthrough Listen did last year. On 26 August 2017, they pointed the Green Bank Telescope in West Virginia at its location for five hours.

    In the 400 terabytes of data from that observation, the researchers discovered 21 FRBs using standard computer algorithms, all from within the first hour. They concluded that the source goes through periods of frenzied activity and quiescence.

    But the powerful new algorithm used to reanalyse that August 26 data suggests that FRB 121102 is a lot more active and possibly complex than originally thought. Researchers trained what is known as a convolutional neural network to look for the signals, then set it loose on the data like a truffle pig.
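
    The classifier operated on chunks of the frequency-versus-time data; as a rough sketch of the general approach, and not the Breakthrough Listen network, a small CNN that scores a spectrogram chunk as containing a burst or not could look like the following, with every shape and layer size an assumption.

        # Rough sketch only: a small CNN that scores frequency-vs-time chunks of
        # radio data for the presence of a burst. Input shape and layers are invented.
        import torch
        import torch.nn as nn

        detector = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * 32 * 32, 2),        # assumes 128x128 spectrogram chunks
        )

        chunk = torch.randn(1, 1, 128, 128)    # stand-in frequency/time chunk
        score = detector(chunk)                # logits for "no burst" vs "burst"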

    It returned triumphant with 72 previously undetected signals, bringing the total number that astronomers have observed from the object to around 300.

    “This work is only the beginning of using these powerful methods to find radio transients,” said astronomer Gerry Zhang of the University of California Berkeley, which runs Breakthrough Listen.

    “We hope our success may inspire other serious endeavours in applying machine learning to radio astronomy.”

    The new result has helped us learn a little more about FRB 121102, putting constraints on the periodicity of the bursts. It suggests that, the researchers said, there’s no pattern to the way we receive them – unless the pattern is shorter than 10 milliseconds.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Listen

    Breakthrough Listen is the largest ever scientific research program aimed at finding evidence of civilizations beyond Earth. The scope and power of the search are on an unprecedented scale:

    The program includes a survey of the 1,000,000 closest stars to Earth. It scans the center of our galaxy and the entire galactic plane. Beyond the Milky Way, it listens for messages from the 100 closest galaxies to ours.

    The instruments used are among the world’s most powerful. They are 50 times more sensitive than existing telescopes dedicated to the search for intelligence.

    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia

    UCSC Lick Automated Planet Finder telescope, Mount Hamilton, CA, USA



    GBO radio telescope, Green Bank, West Virginia, USA

    The radio surveys cover 10 times more of the sky than previous programs. They also cover at least 5 times more of the radio spectrum – and do it 100 times faster. They are sensitive enough to hear a common aircraft radar transmitting to us from any of the 1000 nearest stars.

    We are also carrying out the deepest and broadest ever search for optical laser transmissions. These spectroscopic searches are 1000 times more effective at finding laser signals than ordinary visible light surveys. They could detect a 100 watt laser (the energy of a normal household bulb) from 25 trillion miles away.

    Listen combines these instruments with innovative software and data analysis techniques.

    The initiative will span 10 years and commit a total of $100,000,000.

     
  • richardmitnick 10:29 am on September 7, 2018 Permalink | Reply
    Tags: AIM-Adaptable Interpretable Machine Learning, Black-box models, Machine learning

    From MIT News: “Taking machine thinking out of the black box” 

    MIT News

    From MIT News

    September 5, 2018
    Anne McGovern | Lincoln Laboratory

    Members of a team developing Adaptable Interpretable Machine Learning at Lincoln Laboratory are: (l-r) Melva James, Stephanie Carnell, Jonathan Su, and Neela Kaushik. Photo: Glen Cooper.

    The Adaptable Interpretable Machine Learning project is redesigning machine learning models so humans can understand what computers are thinking.

    Software applications provide people with many kinds of automated decisions, such as identifying what an individual’s credit risk is, informing a recruiter of which job candidate to hire, or determining whether someone is a threat to the public. In recent years, news headlines have warned of a future in which machines operate in the background of society, deciding the course of human lives while using untrustworthy logic.

    Part of this fear is derived from the obscure way in which many machine learning models operate. Known as black-box models, they are defined as systems in which the journey from input to output is next to impossible for even their developers to comprehend.

    “As machine learning becomes ubiquitous and is used for applications with more serious consequences, there’s a need for people to understand how it’s making predictions so they’ll trust it when it’s doing more than serving up an advertisement,” says Jonathan Su, a member of the technical staff in MIT Lincoln Laboratory’s Informatics and Decision Support Group.

    Currently, researchers either use post hoc techniques or an interpretable model such as a decision tree to explain how a black-box model reaches its conclusion. With post hoc techniques, researchers observe an algorithm’s inputs and outputs and then try to construct an approximate explanation for what happened inside the black box. The issue with this method is that researchers can only guess at the inner workings, and the explanations can often be wrong. Decision trees, which map choices and their potential consequences in a tree-like construction, work nicely for categorical data whose features are meaningful, but these trees are not interpretable in important domains, such as computer vision and other complex data problems.
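
    As a toy illustration of the post hoc approach (not the Lincoln Laboratory work), one can fit a shallow decision tree to imitate a black-box model's outputs and then read the tree as an approximate explanation; the data and models below are random stand-ins.

        # Toy post hoc explanation: train a black-box model, then fit a shallow
        # decision tree to imitate its predictions and read the tree as an
        # approximate explanation. All data here are random stand-ins.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(0)
        X = rng.random((500, 4))
        y = (X[:, 0] + X[:, 1] > 1).astype(int)

        black_box = RandomForestClassifier().fit(X, y)
        surrogate = DecisionTreeClassifier(max_depth=2).fit(X, black_box.predict(X))

        print(export_text(surrogate, feature_names=["f0", "f1", "f2", "f3"]))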

    Su leads a team at the laboratory that is collaborating with Professor Cynthia Rudin at Duke University, along with Duke students Chaofan Chen, Oscar Li, and Alina Barnett, to research methods for replacing black-box models with prediction methods that are more transparent. Their project, called Adaptable Interpretable Machine Learning (AIM), focuses on two approaches: interpretable neural networks as well as adaptable and interpretable Bayesian rule lists (BRLs).

    A neural network is a computing system composed of many interconnected processing elements. These networks are typically used for image analysis and object recognition. For instance, an algorithm can be taught to recognize whether a photograph includes a dog by first being shown photos of dogs. Researchers say the problem with these neural networks is that their functions are nonlinear and recursive, as well as complicated and confusing to humans, and the end result is that it is difficult to pinpoint what exactly the network has defined as “dogness” within the photos and what led it to that conclusion.

    To address this problem, the team is developing what it calls “prototype neural networks.” These are different from traditional neural networks in that they naturally encode explanations for each of their predictions by creating prototypes, which are particularly representative parts of an input image. These networks make their predictions based on the similarity of parts of the input image to each prototype.

    As an example, if a network is tasked with identifying whether an image is a dog, cat, or horse, it would compare parts of the image to prototypes of important parts of each animal and use this information to make a prediction. A paper on this work, “This looks like that: deep learning for interpretable image recognition,” was recently featured in an episode of the “Data Science at Home” podcast. A previous paper, “Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions,” used entire images as prototypes, rather than parts.

    The other area the research team is investigating is BRLs, which are less-complicated, one-sided decision trees that are suitable for tabular data and often as accurate as other models. BRLs are made of a sequence of conditional statements that naturally form an interpretable model. For example, if blood pressure is high, then risk of heart disease is high. Su and colleagues are using properties of BRLs to enable users to indicate which features are important for a prediction. They are also developing interactive BRLs, which can be adapted immediately when new data arrive rather than recalibrated from scratch on an ever-growing dataset.
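
    Structurally, a rule list reads like a short chain of if/else statements; the toy function below mimics that form. The conditions and thresholds are invented, and a real Bayesian rule list is learned from data with probabilities attached to each rule.

        # Toy illustration of a rule list's structure. These conditions and numbers
        # are invented; real BRLs are learned from data.
        def heart_disease_risk(blood_pressure, age, smoker):
            if blood_pressure > 140:
                return "high risk"          # rule 1
            elif age > 60 and smoker:
                return "high risk"          # rule 2
            elif age > 60:
                return "medium risk"        # rule 3
            else:
                return "low risk"           # default rule

        print(heart_disease_risk(blood_pressure=150, age=45, smoker=False))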

    Stephanie Carnell, a graduate student from the University of Florida and a summer intern in the Informatics and Decision Support Group, is applying the interactive BRLs from the AIM program to a project to help medical students become better at interviewing and diagnosing patients. Currently, medical students practice these skills by interviewing virtual patients and receiving a score on how much important diagnostic information they were able to uncover. But the score does not include an explanation of what, precisely, in the interview the students did to achieve their score. The AIM project hopes to change this.

    “I can imagine that most medical students are pretty frustrated to receive a prediction regarding success without some concrete reason why,” Carnell says. “The rule lists generated by AIM should be an ideal method for giving the students data-driven, understandable feedback.”

    The AIM program is part of ongoing research at the laboratory in human-systems engineering — or the practice of designing systems that are more compatible with how people think and function, such as understandable, rather than obscure, algorithms.

    “The laboratory has the opportunity to be a global leader in bringing humans and technology together,” says Hayley Reynolds, assistant leader of the Informatics and Decision Support Group. “We’re on the cusp of huge advancements.”

    Melva James is another technical staff member in the Informatics and Decision Support Group involved in the AIM project. “We at the laboratory have developed Python implementations of both BRL and interactive BRLs,” she says. “[We] are concurrently testing the output of the BRL and interactive BRL implementations on different operating systems and hardware platforms to establish portability and reproducibility. We are also identifying additional practical applications of these algorithms.”

    Su explains: “We’re hoping to build a new strategic capability for the laboratory — machine learning algorithms that people trust because they understand them.”

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 11:31 am on September 4, 2018 Permalink | Reply
    Tags: Aftershocks can often be as horrifying as the main event, Machine learning, This New AI Tool Could Solve a Deadly Earthquake Problem We Currently Can't Fix

    From Harvard University via Science Alert: “This New AI Tool Could Solve a Deadly Earthquake Problem We Currently Can’t Fix” 

    Harvard University
    From Harvard University

    via

    Science Alert

    4 SEP 2018
    DAVID NIELD

    (mehmetakgu/iStock)

    It could literally save lives.

    The aftershocks of a devastating earthquake can often be as horrifying as the main event. Now scientists have developed a system for predicting where such post-quake tremors could take place, and they’ve used an ingenious application of artificial intelligence (AI) to make this happen.

    Knowing more about what’s coming next can be a matter of life or death for communities reeling from a large quake. The aftershocks can often cause further injuries and fatalities, damage buildings, and complicate rescue efforts.

    A team led by researchers from Harvard University has trained AI to crunch huge amounts of sensor data and apply deep learning to make more accurate predictions.

    The researchers behind the new system say it’s not ready to be deployed yet, but is already more reliable at pinpointing aftershocks than current prediction models.

    In the years ahead, it could become a vital part of the prediction systems used by seismologists.

    “There are three things you want to know about earthquakes – you want to know when they are going to occur, how big they’re going to be and where they’re going to be,” says one of the team, Brendan Meade from Harvard University in Massachusetts.

    “Prior to this work we had empirical laws for when they would occur and how big they were going to be, and now we’re working the third leg, where they might occur.”

    The idea to use deep learning to tackle this came to Meade when he was on a sabbatical at Google – a company where AI is being deployed in many different areas of computing and science.

    Machine learning is just one facet of AI, and is exactly what it sounds like: machines learning from sets of data, so they can cope with new problems that they haven’t been specifically programmed to tackle.

    Deep learning is a more advanced type of machine learning, applying what are called neural networks to try and mimic the thinking processes of the brain.

    In simple terms it means the AI can see more possible results at once, and weigh up a more complex map of factors and considerations, sort of like neurons in a brain would.

    It’s perfect for earthquakes, with so many variables to consider – from the strength of the shock to the position of the tectonic plates to the type of ground involved. Deep learning could potentially tease out patterns that human analysts could never spot.

    To put this to use with aftershocks, Meade and his colleagues tapped into a database of over 131,000 pairs of earthquake and aftershock readings, taken from 199 previous earthquakes.

    Having let the AI engine chew through those, they then got it to predict the activity of more than 30,000 similar pairs, suggesting the likelihood of aftershocks hitting locations based on a grid of 5 square kilometre (1.9 square mile) units.

    The results were ahead of the Coulomb failure stress change model currently in use. If 1 represents perfect accuracy, and 0.5 represents flipping a coin, the Coulomb model scored 0.583, and the new AI system managed 0.849.
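
    That 0.5-to-1 scale is how the area under the ROC curve is usually read; a minimal way to compute such a score for a forecast, using made-up grid-cell labels and predicted probabilities, is sketched below.

        # Minimal sketch of the accuracy measure described above (area under the
        # ROC curve, where 0.5 is coin-flipping and 1.0 is perfect). The grid-cell
        # labels and predicted probabilities here are random stand-ins.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        had_aftershock = rng.integers(0, 2, size=1000)   # 1 if a grid cell saw an aftershock
        predicted_prob = rng.random(1000)                # model's forecast per grid cell

        print(roc_auc_score(had_aftershock, predicted_prob))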

    “I’m very excited for the potential for machine learning going forward with these kind of problems – it’s a very important problem to go after,” says one of the researchers, Phoebe DeVries from Harvard University.

    “Aftershock forecasting in particular is a challenge that’s well-suited to machine learning because there are so many physical phenomena that could influence aftershock behaviour and machine learning is extremely good at teasing out those relationships.”

    A key ingredient, the researchers say, was the addition of the von Mises yield criterion into the AI’s algorithms – a calculation that can predict when materials will break under stress. Previously used in fields like metallurgy, the calculation hasn’t been extensively used in modelling earthquakes before now.
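
    For reference, the von Mises criterion predicts yielding when an equivalent stress, computed from the principal stresses, exceeds a material's yield strength; a small helper for that equivalent stress might look like this, with arbitrary example values.

        # Von Mises equivalent stress from three principal stresses; yielding is
        # predicted when this exceeds the material's yield strength. Example values
        # are arbitrary.
        import math

        def von_mises_stress(s1, s2, s3):
            return math.sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))

        print(von_mises_stress(120.0, 40.0, -10.0))  # MPa, illustrative only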

    There’s still a way to go here – the researchers point out their current AI models are only designed to deal with one type of aftershock trigger, and simple fault lines: it’s not yet a system that can be applied to any kind of quake around the world.

    What’s more, it’s too slow right now to predict the deadly aftershocks that can happen a day or two after the first earthquake.

    However, the good news is that neural networks are designed to continually get better over time, which means with more data and more learning cycles, the system should steadily improve.

    “I think we’ve really just scratched the surface of what could be done with aftershock forecasting… and that’s really exciting,” says DeVries.

    The research has been published in Nature.

    See the full article here .

    Earthquake Alert

    Earthquake Network is a research project which aims at developing and maintaining a crowdsourced smartphone-based earthquake warning system at a global level. Smartphones made available by the population are used to detect the earthquake waves using the on-board accelerometers. When an earthquake is detected, an earthquake warning is issued in order to alert the population not yet reached by the damaging waves of the earthquake.

    The project started on January 1, 2013 with the release of the homonymous Android application Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

    Smartphone network spatial distribution (green and red dots) on December 4, 2015

    Meet The Quake-Catcher Network

    QCN bloc

    Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide better understanding of earthquakes, give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at Caltech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) By mounting them to the floor, they measure more reliable shaking than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC is a leader in the field(s) of Distributed Computing, Grid Computing and Citizen Cyberscience. BOINC is more properly the Berkeley Open Infrastructure for Network Computing, developed at UC Berkeley.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. The Quake-Catcher Network (QCN) links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

    Below, the QCN Quake Catcher Network map
    QCN Quake Catcher Network map

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

    The U. S. Geological Survey (USGS) along with a coalition of State and university partners is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated and a warning is provided to local populations. The method can provide warning before the S-wave arrives, bringing the strong shaking that usually causes most of the damage.

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February of 2016 the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California, joined by Oregon and Washington in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers, and allows for automatic fail-over if connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

    Authorities

    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
    rdegroot@usgs.gov
    626-583-7225

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Harvard University campus
    Harvard is the oldest institution of higher education in the United States, established in 1636 by vote of the Great and General Court of the Massachusetts Bay Colony. It was named after the College’s first benefactor, the young minister John Harvard of Charlestown, who upon his death in 1638 left his library and half his estate to the institution. A statue of John Harvard stands today in front of University Hall in Harvard Yard, and is perhaps the University’s best known landmark.

    Harvard University has 12 degree-granting Schools in addition to the Radcliffe Institute for Advanced Study. The University has grown from nine students with a single master to an enrollment of more than 20,000 degree candidates including undergraduate, graduate, and professional students. There are more than 360,000 living alumni in the U.S. and over 190 other countries.

     
  • richardmitnick 5:05 pm on August 21, 2018 Permalink | Reply
    Tags: KPI-driven decision-making, Machine learning, MIT Sloan School   

    From MIT Sloan: “Improving Strategic Execution With Machine Learning” 

    MIT News

    From MIT Sloan

    8.21.18
    Michael Schrage
    David Kiron

    MIT SMR’s 2018 Strategic Measurement study reveals how organizations using machine learning to enhance KPI-driven decision-making are pulling ahead of their competitors.

    Machine learning (ML) is changing how leaders use metrics to drive business performance, customer experience, and growth. A small but growing group of companies is investing in ML to augment strategic decision-making with key performance indicators (KPIs). Our research, based on a global survey and more than a dozen interviews with executives and academics, suggests that ML is literally, and figuratively, redefining how businesses create and measure value.

    KPIs traditionally have had a retrospective, reporting bias, but by surfacing hidden variables that anticipate “key performance,” machine learning is making KPIs more predictive and prescriptive. With more forward-looking KPIs, progressive leaders can treat strategic measures as high-octane data fuel for training machine-learning algorithms to optimize business processes. Our survey and interviews suggest that this flip ― transforming KPIs from analytic outputs to data inputs ― is at an early, albeit promising, stage.

    Those companies that are already taking action on machine learning ― investing in ML and actively using it to engage customers ― differ radically from companies that are not yet investing in ML. They are far more likely to:

    These differences all depend on treating data as a valuable corporate asset. We see a strong correlation between companies that embrace ML and data-driven decision-making.

    Augmenting Execution With Machine Learning

    Nearly three quarters of survey respondents believe their organization’s current functional KPIs would be better achieved with greater investment in automation and machine-learning technologies. Our interviews with senior executives identified a variety of innovative ML practices. Without exception, the companies with the most intriguing and ambitious ML initiatives were the ones with the most serious commitment ― cultural and organizational ― to managing data as a valuable corporate asset.

    About the Research

    This report explores some of the key findings from the authors’ 2018 research study of KPIs and machine learning in today’s corporate landscape. The research, which involved a survey of 4,700 executives and managers and interviews with more than a dozen corporate leaders and academics, has far-reaching implications for modern businesses. We focused our analysis on 3,225 executive-level respondents; more than half were marketing executives.

    The marketing function is often an early adopter of machine learning in the enterprise. Applications in advertising, customer segmentation, and customer intelligence have become common. Even among marketers, however, slightly less than half of surveyed companies have incentives or internal functional KPIs to use more automation and ML technologies. (See Figure 1.) It is highly unlikely this finding reflects ML saturation in the enterprise. Most of the executives we interviewed for our study are focused more on ML’s potential than its actual development or deployment.

    1

    Kelly Watkins, vice president of global marketing at Slack, is exploring machine-learning solutions. For Slack, an essential KPI is determining which businesses using the company’s free workplace collaboration app are good candidates for converting to paid subscriptions for premium features. “This is an effort that the marketing organization, product organization, and the sales organization are working on together,” Watkins says. “Can we train lead-scoring algorithms to really get a sense of, based on a variety of criteria, what’s the best place for sales reps to start among the options that they have for outreach?”
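
    For illustration only, here is a minimal sketch of the kind of lead-scoring model Watkins describes, assuming hypothetical usage features (team size, daily active users, messages per day) and a labeled history of which free teams later upgraded; a simple logistic regression stands in for whatever Slack actually builds.

```python
# Hypothetical lead-scoring sketch: rank free teams by predicted
# probability of upgrading to a paid plan. Feature names and data are
# illustrative, not Slack's actual criteria.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake usage history: [team size, daily active users, messages per day]
X = rng.normal(loc=[50, 30, 400], scale=[20, 15, 150], size=(1000, 3))
# Fake labels: 1 = team eventually upgraded, 0 = stayed on the free plan
y = (X[:, 1] + 0.01 * X[:, 2] + rng.normal(0, 5, 1000) > 40).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score current free teams and hand sales reps the most promising ones first
scores = model.predict_proba(X_test)[:, 1]
print("highest-scoring leads:", np.argsort(scores)[::-1][:10])
```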

    Watkins also envisions implementing machine learning to handle routine tasks currently performed by Slack employees. She says her intention is to “enable folks in my organization to use their minds to solve strategic problems and to be more consistently looking for insights in the data that can shift the strategy and shift execution, up-leveling their daily mode of operating.” In short, Watkins sees an ML future transforming both efficiency and strategy.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

    MIT Sloan Management Review leads the discourse among academic researchers, business executives and other influential thought leaders about advances in management practice, particularly those shaped by technology, that are transforming how people lead and innovate. MIT SMR disseminates new management research and innovative ideas so that thoughtful executives can capitalize on the opportunities generated by rapid organizational, technological and societal change.

    We distribute our content on the web, in print and on mobile and portable platforms, as well as via licensees and libraries around the world.

     
  • richardmitnick 3:20 pm on August 1, 2018 Permalink | Reply
    Tags: , , Machine learning, , , , ,   

    From Symmetry: “Machine learning proliferates in particle physics” 

    Symmetry Mag
    From Symmetry

    08/01/18
    Manuel Gnida

    1

    A new review in Nature chronicles the many ways machine learning is popping up in particle physics research.

    Experiments at the Large Hadron Collider produce about a million gigabytes of data every second.

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    Even after reduction and compression, the data amassed in just one hour at the LHC is similar to the data volume Facebook collects in an entire year.

    Luckily, particle physicists don’t have to deal with all of that data all by themselves. They partner with a form of artificial intelligence that learns how to do complex analyses on its own, called machine learning.

    “Compared to a traditional computer algorithm that we design to do a specific analysis, we design a machine learning algorithm to figure out for itself how to do various analyses, potentially saving us countless man-hours of design and analysis work,” says College of William & Mary physicist Alexander Radovic, who works on the NOvA neutrino experiment.

    FNAL NOvA detector in northern Minnesota


    FNAL/NOvA experiment map

    Radovic and a group of researchers summarize current applications and future prospects of machine learning in particle physics in a paper published today in Nature.

    Sifting through big data

    To handle the gigantic data volumes produced in modern experiments like the ones at the LHC, researchers apply what they call “triggers”—dedicated hardware and software that decide in real time which data to keep for analysis and which data to toss out.

    In LHCb, an experiment that could shed light on why there is so much more matter than antimatter in the universe, machine learning algorithms make at least 70 percent of these decisions, says LHCb scientist Mike Williams from the Massachusetts Institute of Technology, one of the authors of the Nature summary.
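
    In spirit, an ML-based trigger is a fast classifier applied to each event, keeping only those whose score clears a threshold. Below is a toy sketch with invented event features; it makes no attempt to model the hard real-time constraints actual LHCb triggers run under.

```python
# Toy ML trigger: keep an event only if a classifier's "interesting"
# score clears a threshold. Features, labels, and cuts are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Simulated training events: [momentum, vertex displacement, track multiplicity]
X_train = rng.normal(size=(5000, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 1.0).astype(int)  # 1 = "signal-like"

clf = GradientBoostingClassifier().fit(X_train, y_train)

THRESHOLD = 0.9  # set high: data a trigger discards is gone forever

def keep_event(event_features):
    """Return True if this event should be saved for offline analysis."""
    score = clf.predict_proba(event_features.reshape(1, -1))[0, 1]
    return score >= THRESHOLD

print("keep event:", keep_event(rng.normal(size=3)))
```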

    CERN LHCb chamber, LHC


    CERN/LHCb detector

    “Machine learning plays a role in almost all data aspects of the experiment, from triggers to the analysis of the remaining data,” he says.

    Machine learning has proven extremely successful in the area of analysis. The gigantic ATLAS and CMS detectors at the LHC, which enabled the discovery of the Higgs boson, each have millions of sensing elements whose signals need to be put together to obtain meaningful results.

    CERN ATLAS

    CERN/CMS Detector

    “These signals make up a complex data space,” says Michael Kagan of the US Department of Energy’s SLAC National Accelerator Laboratory, who works on ATLAS and was also an author on the Nature review. “We need to understand the relationship between them to come up with conclusions—for example, that a certain particle track in the detector was produced by an electron, a photon or something else.”

    Neutrino experiments also benefit from machine learning. NOvA [above], which is managed by Fermi National Accelerator Laboratory, studies how neutrinos change from one type to another as they travel through the Earth. These neutrino oscillations could potentially reveal the existence of a new neutrino type that some theories predict to be a particle of dark matter. NOvA’s detectors are watching out for charged particles produced when neutrinos hit the detector material, and machine learning algorithms identify them.

    From machine learning to deep learning

    Recent developments in machine learning, often called “deep learning,” promise to take applications in particle physics even further. Deep learning typically refers to the use of neural networks: computer algorithms with an architecture inspired by the dense network of neurons in the human brain.

    These neural nets learn on their own how to perform certain analysis tasks during a training period in which they are shown sample data, such as simulations, and are told how well they performed.
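
    A minimal sketch of that training period, assuming synthetic labeled “simulation” and a small scikit-learn network standing in for the much larger models the experiments actually train:

```python
# Train a small neural network on labeled "simulation", then check how
# well it performs on events it has not seen. All data here is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# Synthetic "simulated" events with known labels (0 = background, 1 = signal)
X = rng.normal(size=(4000, 10))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training period: the network is shown sample data and told how well it performed
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print("accuracy on unseen events:", accuracy_score(y_test, net.predict(X_test)))
```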

    Until recently, the success of neural nets was limited because training them used to be very hard, says co-author Kazuhiro Terao, a SLAC researcher working on the MicroBooNE neutrino experiment, which studies neutrino oscillations as part of Fermilab’s short-baseline neutrino program and will become a component of the future Deep Underground Neutrino Experiment at the Long-Baseline Neutrino Facility.

    FNAL/MicroBooNE

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA

    “These difficulties limited us to neural networks that were only a couple of layers deep,” he says. “Thanks to advances in algorithms and computing hardware, we now know much better how to build and train more capable networks hundreds or thousands of layers deep.”

    Many of the advances in deep learning are driven by tech giants’ commercial applications and the data explosion they have generated over the past two decades. “NOvA, for example, uses a neural network inspired by the architecture of GoogLeNet,” Radovic says. “It improved the experiment in ways that otherwise could have only been achieved by collecting 30 percent more data.”

    A fertile ground for innovation

    Machine learning algorithms become more sophisticated and fine-tuned day by day, opening up unprecedented opportunities to solve particle physics problems.

    Many of the new tasks they could be used for are related to computer vision, Kagan says. “It’s similar to facial recognition, except that in particle physics, image features are more abstract and complex than ears and noses.”

    Some experiments like NOvA and MicroBooNE produce data that can easily be translated into actual images, and AI can be readily used to identify features in them. In LHC experiments, on the other hand, images first need to be reconstructed from a murky pool of data generated by millions of sensor elements.

    “But even if the data don’t look like images, we can still use computer vision methods if we’re able to process the data in the right way,” Radovic says.

    One area where this approach could be very useful is the analysis of particle jets produced in large numbers at the LHC. Jets are narrow sprays of particles whose individual tracks are extremely challenging to separate. Computer vision technology could help identify features in jets.

    Another emerging application of deep learning is the simulation of particle physics data: predicting, for example, what happens in particle collisions at the LHC so the results can be compared to the actual data. Simulations like these are typically slow and require immense computing power. AI, on the other hand, could do simulations much faster, potentially complementing the traditional approach.

    “Just a few years ago, nobody would have thought that deep neural networks can be trained to ‘hallucinate’ data from random noise,” Kagan says. “Although this is very early work, it shows a lot of promise and may help with the data challenges of the future.”
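
    The “hallucinating data from random noise” Kagan mentions is the generative-adversarial idea: a generator network maps noise to candidate samples while a discriminator tries to tell them from real ones, and the two are trained against each other. A heavily stripped-down sketch on one-dimensional toy data, assuming PyTorch; real fast-simulation work uses far larger, physics-aware networks.

```python
# Toy GAN: learn to produce samples that mimic a 1-D "energy deposit"
# distribution, starting from pure random noise. Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_data(n):
    """Stand-in for real (or fully simulated) detector data: N(3, 0.5)."""
    return torch.randn(n, 1) * 0.5 + 3.0

# Generator: noise vector -> fake sample. Discriminator: sample -> P(real).
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: separate real samples from generated ones
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print("generated samples:", G(torch.randn(5, 8)).squeeze().tolist())
```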

    Benefiting from healthy skepticism

    Despite all obvious advances, machine learning enthusiasts frequently face skepticism from their collaboration partners, in part because machine learning algorithms mostly work like “black boxes” that provide very little information about how they reached a certain conclusion.

    “Skepticism is very healthy,” Williams says. “If you use machine learning for triggers that discard data, like we do in LHCb, then you want to be extremely cautious and set the bar very high.”

    Therefore, establishing machine learning in particle physics requires constant efforts to better understand the inner workings of the algorithms and to do cross-checks with real data whenever possible.

    “We should always try to understand what a computer algorithm does and always evaluate its outcome,” Terao says. “This is true for every algorithm, not only machine learning. So, being skeptical shouldn’t stop progress.”

    Rapid progress has some researchers dreaming of what could become possible in the near future. “Today we’re using machine learning mostly to find features in our data that can help us answer some of our questions,” Terao says. “Ten years from now, machine learning algorithms may be able to ask their own questions independently and recognize when they find new physics.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 12:18 pm on July 17, 2018 Permalink | Reply
    Tags: , , , Machine learning, , , ,   

    From Symmetry: “Rise of the machines” 

    Symmetry Mag
    From Symmetry

    07/17/18
    Sarah Charley

    Machine learning will become an even more important tool when scientists upgrade to the High-Luminosity Large Hadron Collider.

    1
    Artwork by Sandbox Studio, Chicago

    When do a few scattered dots become a line? And when does that line become a particle track? For decades, physicists have been asking these kinds of questions. Today, so are their machines.

    Machine learning is the process by which the task of pattern recognition is outsourced to a computer algorithm. Humans are naturally very good at finding and processing patterns. That’s why you can instantly recognize a song from your favorite band, even if you’ve never heard it before.

    Machine learning takes this very human process and puts computing power behind it. Whereas a human might be able to recognize a band based on a variety of attributes such as the vocal tenor of the lead singer, a computer can process other subtle features a human might miss. The music-streaming platform Pandora categorizes every piece of music in terms of 450 different auditory qualities.

    “Machines can handle a lot more information than our brains can,” says Eduardo Rodrigues, a physicist at the University of Cincinnati. “It’s why they can find patterns that are sometimes invisible to us.”

    Machine learning started to become commonplace in computing during the 1980s, and LHC physicists have been using it routinely to help to manage and process raw data since 2012. Now, with upgrades to what is already the world’s most powerful particle accelerator looming on the horizon, physicists are implementing new applications of machine learning to help them with the imminent data deluge.

    “The high-luminosity upgrade to the LHC is going to increase our amount of data by a factor of 100 relative to that used to discover the Higgs,” says Peter Elmer, a physicist at Princeton University. “This will help us search for rare particles and new physics, but if we’re not prepared, we risk being completely swamped with data.”

    Only a small fraction of the LHC’s collisions are interesting to scientists. For instance, Higgs bosons are born in only about one out of every 2 billion proton-proton collisions. Machine learning is helping scientists sort through the noise and isolate what’s truly important.

    “It’s like mining for rare gems,” Rodrigues says. “Keeping all the sand and pebbles would be ridiculous, so we use algorithms to help us single out the things that look interesting. With machine learning, we can purify the sample even further and more efficiently.”

    LHC physicists use a kind of machine learning called supervised learning. The principle behind supervised learning is nothing new; in fact, it’s how most of us learn how to read and write. Physicists start by training their machine-learning algorithms with data from collisions that are already well-understood. They tell them, “This is what a Higgs looks like. This is what a particle with a bottom quark looks like.”

    After giving an algorithm all of the information they already know about hundreds of examples, physicists then pull back and task the computer with identifying the particles in collisions without labels. They monitor how well the algorithm performs and give corrections along the way. Eventually, the computer needs only minimal guidance and can become even better than humans at analyzing the data.
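
    That label, predict, and correct loop can be sketched in a few lines, assuming synthetic “collision” features and a stand-in function playing the role of the physicist who supplies corrections; the real pipelines are, of course, vastly more elaborate.

```python
# Toy supervised-learning loop: train on labeled collisions, classify
# unlabeled ones, and fold human corrections back into the training set.
# Features, labels, and the "physicist" function are all invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

def physicist_label(x):
    """Stand-in for the correction a physicist would supply."""
    return int(x[0] + x[1] > 0)

# Start from a small set of well-understood, hand-labeled collisions
X_labeled = rng.normal(size=(200, 5))
y_labeled = np.array([physicist_label(x) for x in X_labeled])

X_unlabeled = rng.normal(size=(2000, 5))

for round_number in range(3):
    clf = RandomForestClassifier(random_state=0).fit(X_labeled, y_labeled)
    proba = clf.predict_proba(X_unlabeled)[:, 1]

    # "Monitor and correct": review the least confident predictions
    uncertain = np.argsort(np.abs(proba - 0.5))[:50]
    corrections = np.array([physicist_label(x) for x in X_unlabeled[uncertain]])

    X_labeled = np.vstack([X_labeled, X_unlabeled[uncertain]])
    y_labeled = np.concatenate([y_labeled, corrections])
    X_unlabeled = np.delete(X_unlabeled, uncertain, axis=0)

print("labeled examples after corrections:", len(y_labeled))
```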

    “This is saving the LHCb experiment a huge amount of time,” Rodrigues says. “In the past, we needed months to make sense of our raw detector data. With machine learning, we can now process and label events within the first few hours after we record them.”

    Not only is machine learning helping physicists understand their real data, but it will soon help them create simulations to test their predictions from theory as well.

    Using conventional algorithms rather than machine learning, scientists have already created virtual versions of their detectors with all the known laws of physics pre-programmed.

    “The virtual experiment follows the known laws of physics to a T,” Elmer says. “We simulate proton-proton collisions and then predict how the byproducts will interact with every part of our detector.”

    If scientists find a consistent discrepancy between the virtual data generated by their simulations and the real data recorded by their detectors, it could mean that the particles in the real world are playing by a different set of rules than the ones physicists already know.

    A weakness of scientists’ current simulations is that they’re too slow. They use a series of algorithms to precisely calculate how a particle will interact with every detector part it bumps into while moving through the many layers of a particle detector.

    Even though it takes only a few minutes to simulate a collision this way, scientists need to simulate trillions of collisions to cover the possible outcomes of the 600 million collisions per second they will record with the HL-LHC.

    “We don’t have the time or resources for that,” Elmer says.

    With machine learning, on the other hand, they can generalize. Instead of calculating every single particle interaction with matter along the way, they can estimate its overall behavior based on its typical paths through the detector.

    “It’s a matter of balancing quality with quantity,” Elmer says. “We’ll still use the very precise calculations for some studies. But for others, we don’t need such high-resolution simulations for the physics we want to do.”
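
    One way to read that trade-off in code: run the slow, precise simulation a limited number of times, fit a fast surrogate model to its output, and reuse the surrogate wherever high resolution isn’t needed. A hedged sketch with a made-up detector-response function:

```python
# Surrogate for a slow, precise simulation: learn the mapping from
# particle properties to detector response, then reuse it cheaply.
# The "slow" simulation here is a made-up formula, not a real detector model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)

def slow_precise_simulation(energy, angle):
    """Stand-in for an expensive step-by-step detector simulation."""
    return 0.9 * energy * np.cos(angle) ** 2 + rng.normal(0, 0.01)

# Run the expensive simulation a limited number of times
energies = rng.uniform(1, 100, 2000)
angles = rng.uniform(0, np.pi / 4, 2000)
responses = np.array([slow_precise_simulation(e, a) for e, a in zip(energies, angles)])

surrogate = GradientBoostingRegressor().fit(np.column_stack([energies, angles]), responses)

# The surrogate now estimates the overall behavior almost for free
print("fast estimate:", surrogate.predict([[50.0, 0.2]])[0])
print("slow reference:", slow_precise_simulation(50.0, 0.2))
```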

    Machine learning is helping scientists process more data faster. With the planned upgrades to the LHC, it could play an even larger role in the future. But it is not a silver bullet, Elmer says.

    “We still want to understand why and how all of our analyses work so that we can be completely confident in the results they produce,” he says. “We’ll always need a balance between shiny new technologies and our more traditional analysis techniques.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 10:59 am on June 19, 2018 Permalink | Reply
    Tags: , , , Machine learning, , Searching Science Data   

    From Lawrence Berkeley National Lab: “Berkeley Lab Researchers Use Machine Learning to Search Science Data” 

    Berkeley Logo

    From Lawrence Berkeley National Lab

    1
    A screenshot of image-based results in the Science Search interface. In this case, the user performed an image search for nanoparticles. (Credit: Gonzalo Rodrigo/Berkeley Lab)

    As scientific datasets increase in both size and complexity, the ability to label, filter and search this deluge of information has become a laborious, time-consuming and sometimes impossible task without the help of automated tools.

    With this in mind, a team of researchers from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley is developing innovative machine learning tools to pull contextual information from scientific datasets and automatically generate metadata tags for each file. Scientists can then search these files via a web-based search engine for scientific data, called Science Search, that the Berkeley team is building.

    As a proof-of-concept, the team is working with staff at Berkeley Lab’s Molecular Foundry to demonstrate the concepts of Science Search on the images captured by the facility’s instruments. A beta version of the platform has been made available to Foundry researchers.

    LBNL Molecular Foundry – No image credits found

    “A tool like Science Search has the potential to revolutionize our research,” says Colin Ophus, a Molecular Foundry research scientist within the National Center for Electron Microscopy (NCEM) and Science Search Collaborator. “We are a taxpayer-funded National User Facility, and we would like to make all of the data widely available, rather than the small number of images chosen for publication. However, today, most of the data that is collected here only really gets looked at by a handful of people—the data producers, including the PI (principal investigator), their postdocs or graduate students—because there is currently no easy way to sift through and share the data. By making this raw data easily searchable and shareable, via the Internet, Science Search could open this reservoir of ‘dark data’ to all scientists and maximize our facility’s scientific impact.”

    The Challenges of Searching Science Data

    2
    This screen capture of the Science Search interface shows how users can easily validate metadata tags that have been generated via machine learning, or add information that hasn’t already been captured. (Credit: Gonzalo Rodrigo/Berkeley Lab)

    Today, search engines are ubiquitously used to find information on the Internet, but searching science data presents a different set of challenges. For example, Google’s algorithm relies on more than 200 clues to achieve an effective search. These clues can come in the form of keywords on a webpage, metadata in images, or audience feedback from billions of people when they click on the information they are looking for. In contrast, scientific data comes in many forms that are radically different from an average web page, requires context that is specific to the science, and often lacks the metadata needed for effective searches.

    At National User Facilities like the Molecular Foundry, researchers from all over the world apply for time and then travel to Berkeley to use extremely specialized instruments free of charge. Ophus notes that the current cameras on microscopes at the Foundry can collect up to a terabyte of data in under 10 minutes. Users then need to manually sift through this data to find quality images with “good resolution” and save that information on a secure shared file system, like Dropbox, or on an external hard drive that they eventually take home with them to analyze.

    Oftentimes, the researchers that come to the Molecular Foundry only have a couple of days to collect their data. Because it is very tedious and time consuming to manually add notes to terabytes of scientific data and there is no standard for doing it, most researchers just type shorthand descriptions in the filename. This might make sense to the person saving the file, but often doesn’t make much sense to anyone else.

    “The lack of real metadata labels eventually causes problems when the scientist tries to find the data later or attempts to share it with others,” says Lavanya Ramakrishnan, a staff scientist in Berkeley Lab’s Computational Research Division (CRD) and co-principal investigator of the Science Search project. “But with machine-learning techniques, we can have computers help with what is laborious for the users, including adding tags to the data. Then we can use those tags to effectively search the data.”

    3
    In addition to images, Science Search can also be used to look for proposals and papers. This is a screenshot of the paper search results. (Credit: Gonzalo Rodrigo/Berkeley Lab). [No hot links.]

    To address the metadata issue, the Berkeley Lab team uses machine-learning techniques to mine the “science ecosystem”—including instrument timestamps, facility user logs, scientific proposals, publications and file system structures—for contextual information. The information from these sources, such as the timestamp of the experiment, notes about the resolution and filter used, and the user’s request for time, all provides critical context. The Berkeley Lab team has put together an innovative software stack that uses machine-learning techniques, including natural language processing, to pull contextual keywords about the scientific experiment and automatically create metadata tags for the data.

    For the proof-of-concept, Ophus shared with the Science Search team data recently collected by the facility staff on the Molecular Foundry’s TEAM 1 electron microscope at NCEM.

    LBNL National Center for Electron Microscopy (NCEM)

    He also volunteered to label a few thousand images to give the machine-learning tools some labels from which to start learning. While this is a good start, Science Search co-principal investigator Gunther Weber notes that most successful machine-learning applications typically require significantly more data and feedback to deliver better results. For example, in the case of search engines like Google, Weber notes that training datasets are created and machine-learning techniques are validated when billions of people around the world verify their identity by clicking on all the images with street signs or storefronts after typing in their passwords, or on Facebook when they’re tagging their friends in an image.

    “In the case of science data only a handful of domain experts can create training sets and validate machine-learning techniques, so one of the big ongoing problems we face is an extremely small number of training sets,” says Weber, who is also a staff scientist in Berkeley Lab’s CRD.

    To overcome this challenge, the Berkeley Lab researchers used transfer learning to limit the degrees of freedom, or parameter counts, on their convolutional neural networks (CNNs). Transfer learning is a machine learning method in which a model developed for a task is reused as the starting point for a model on a second task, which allows the user to get more accurate results from a smaller training set. In the case of the TEAM I microscope, the data produced contains information about which operation mode the instrument was in at the time of collection. With that information, Weber was able to train the neural network on that classification so it could generate that mode of operation label automatically. He then froze that convolutional layer of the network, which meant he’d only have to retrain the densely connected layers. This approach effectively reduces the number of parameters on the CNN, allowing the team to get some meaningful results from their limited training data.
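
    A compact sketch of that recipe, assuming a generic ImageNet-pretrained network (not Weber’s actual model) and a recent version of PyTorch/torchvision: reuse the convolutional layers, freeze their parameters, and retrain only the densely connected head on the small labeled set.

```python
# Transfer learning with a tiny labeled set: freeze the convolutional
# layers of a pretrained network and retrain only the dense head.
# The pretrained model and the "operation mode" classes are placeholders,
# not the actual network used for the TEAM I data.
import torch
import torch.nn as nn
from torchvision import models

num_modes = 2  # hypothetical number of instrument operation modes

# Generic ImageNet-pretrained CNN (requires a recent torchvision)
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze all pretrained parameters so the small dataset cannot overfit them
for param in model.parameters():
    param.requires_grad = False

# Replace the densely connected head; only these weights get trained
model.fc = nn.Linear(model.fc.in_features, num_modes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a fake batch of labeled images
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_modes, (8,))

loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print("loss on fake batch:", loss.item())
```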

    Machine Learning to Mine the Scientific Ecosystem

    In addition to generating metadata tags through training datasets, the Berkeley Lab team also developed tools that use machine-learning techniques for mining the science ecosystem for data context. For example, the data ingest module can look at a multitude of information sources from the scientific ecosystem—including instrument timestamps, user logs, proposals and publications—and identify commonalities. Tools developed at Berkeley Lab that use natural language-processing methods can then identify and rank words that give context to the data and facilitate meaningful results for users later on. The user will see something similar to the results page of an Internet search, where content with the most text matching the user’s search words will appear higher on the page. The system also learns from user queries and the search results they click on.
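
    A minimal sketch of the keyword-ranking idea, assuming TF-IDF as the specific technique (the article doesn’t say which natural-language-processing methods the Berkeley Lab tools use) and invented snippets of proposals and user logs:

```python
# Rank candidate context keywords by how distinctive they are across
# documents from the "science ecosystem". Document snippets are invented.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "TEAM 1 microscope session, high resolution imaging of gold nanoparticles",
    "proposal: in situ electron microscopy of battery cathode degradation",
    "user log: calibration run, low magnification, sample holder alignment",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(documents).toarray()
terms = vectorizer.get_feature_names_out()

# For each document, surface the highest-weighted terms as candidate metadata tags
for doc, weights in zip(documents, tfidf):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(top_terms, "<-", doc[:40])
```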

    Because scientific instruments are generating an ever-growing body of data, all aspects of the Berkeley team’s science search engine needed to be scalable to keep pace with the rate and scale of the data volumes being produced. The team achieved this by setting up their system in a Spin instance on the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC).

    NERSC

    NERSC Cray Cori II supercomputer at NERSC at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    LBL NERSC Cray XC30 Edison supercomputer


    The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

    NERSC PDSF


    PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

    Spin is a Docker-based edge-services technology developed at NERSC that can access the facility’s high performance computing systems and storage on the back end.

    “One of the reasons it is possible for us to build a tool like Science Search is our access to resources at NERSC,” says Gonzalo Rodrigo, a Berkeley Lab postdoctoral researcher who is working on the natural language processing and infrastructure challenges in Science Search. “We have to store, analyze and retrieve really large datasets, and it is useful to have access to a supercomputing facility to do the heavy lifting for these tasks. NERSC’s Spin is a great platform to run our search engine that is a user-facing application that requires access to large datasets and analytical data that can only be stored on large supercomputing storage systems.”

    An Interface for Validating and Searching Data

    When the Berkeley Lab team developed the interface for users to interact with their system, they knew that it would have to accomplish a couple of objectives, including effective search and allowing human input to the machine learning models. Because the system relies on domain experts to help generate the training data and validate the machine-learning model output, the interface needed to facilitate that.

    “The tagging interface that we developed displays the original data and metadata available, as well as any machine-generated tags we have so far. Expert users then can browse the data and create new tags and review any machine-generated tags for accuracy,” says Matt Henderson, who is a Computer Systems Engineer in CRD and leads the user interface development effort.

    To facilitate an effective search for users based on available information, the team’s search interface provides a query mechanism for available files, proposals and papers that the Berkeley-developed machine-learning tools have parsed and extracted tags from. Each listed search result item represents a summary of that data, with a more detailed secondary view available, including information on tags that matched this item. The team is currently exploring how to best incorporate user feedback to improve the models and tags.

    “Having the ability to explore datasets is important for scientific breakthroughs, and this is the first time that anything like Science Search has been attempted,” says Ramakrishnan. “Our ultimate vision is to build the foundation that will eventually support a ‘Google’ for scientific data, where researchers can even search distributed datasets. Our current work provides the foundation needed to get to that ambitious vision.”

    “Berkeley Lab is really an ideal place to build a tool like Science Search because we have a number of user facilities, like the Molecular Foundry, that have decades worth of data that would provide even more value to the scientific community if the data could be searched and shared,” adds Katie Antypas, who is the principal investigator of Science Search and head of NERSC’s Data Department. “Plus we have great access to machine-learning expertise in the Berkeley Lab Computing Sciences Area as well as HPC resources at NERSC in order to build these capabilities.”

    In addition to Antypas, Ramakrishnan and Weber, UC Berkeley Computer Science Professor Joseph Hellerstein is also a principal investigator.

    This work was supported by the DOE Office of Advanced Scientific Computing Research (ASCR). Both the Molecular Foundry and NERSC are DOE Office of Science User Facilities located at Berkeley Lab.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     