Tagged: Machine learning

  • richardmitnick 2:21 pm on April 16, 2019 Permalink | Reply
    Tags: Machine learning, Natural Sciences, The Brendan Iribe Center for Computer Science and Engineering, UMIACS-University of Maryland Institute for Advanced Computer Studies, University of Maryland CMNS

    From University of Maryland CMNS: “University of Maryland Launches Center for Machine Learning” 


    From University of Maryland


    CMNS

    April 16, 2019

    Abby Robinson
    301-405-5845
    abbyr@umd.edu

    The University of Maryland recently launched a multidisciplinary center that uses powerful computing tools to address challenges in big data, computer vision, health care, financial transactions and more.

    The University of Maryland Center for Machine Learning will unify and enhance numerous activities in machine learning already underway on the Maryland campus.

    University of Maryland computer science faculty member Thomas Goldstein (on left, with visiting graduate student) is a member of the new Center for Machine Learning. Goldstein’s research focuses on large-scale optimization and distributed algorithms for big data. Photo: John T. Consoli.

    Machine learning uses algorithms and statistical models so that computer systems can effectively perform a task without explicit instructions, relying instead on patterns and inference. At UMD, for example, computer vision experts are “training” computers to identify and match key facial characteristics by having machines analyze millions of images publicly available on social media.
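
    To make the idea of learning from patterns rather than explicit instructions concrete, here is a minimal, generic sketch in Python. It uses scikit-learn's small bundled digits dataset as a stand-in (the facial-image data described above is not packaged here), and the model and parameters are illustrative, not UMD's:

    ```python
    # Minimal sketch of "learning from examples" rather than explicit instructions.
    # Uses scikit-learn's small bundled digits dataset as a stand-in for the image
    # data described in the article.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    X, y = load_digits(return_X_y=True)          # 8x8 images flattened to 64 features
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # No hand-written rules for "what a 3 looks like": the model infers patterns
    # from labeled examples alone.
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)

    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
    ```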

    Researchers at UMD are exploring other applications such as groundbreaking work in cancer genomics; powerful algorithms to improve the selection process for organ transplants; and an innovative system that can quickly find, translate and summarize information from almost any language in the world.

    “We wanted to capitalize on the significant strengths we already have in machine learning, provide additional support, and embrace fresh opportunities arising from new facilities and partnerships,” said Mihai Pop, professor of computer science and director of the University of Maryland Institute for Advanced Computer Studies (UMIACS).

    The center officially launched with a workshop last month featuring talks and panel discussions from machine learning experts in auditory systems, biology and medicine, business, chemistry, natural language processing, and security.

    Initial funding for the center comes from the College of Computer, Mathematical, and Natural Sciences (CMNS) and UMIACS, which will provide technical and administrative support.

    An inaugural partner of the center, financial and technology leader Capital One, provided additional support, including endowing three faculty positions in machine learning and computer science. Those positions received matching funding from the state’s Maryland E-Nnovation Initiative.

    Capital One has also provided funding for research projects that align with the organization’s need to stay on the cutting edge in areas like fraud detection and enhancing the customer experience with more personalized, real-time features.

    “We are proud to be a part of the launch of the University of Maryland Center for Machine Learning, and are thrilled to extend our partnership with the university in this field,” said Dave Castillo, the company’s managing vice president at the Center for Machine Learning and Emerging Technology. “At Capital One, we believe forward-leaning technologies like machine learning can provide our customers greater protection, security, confidence and control of their finances. We look forward to advancing breakthrough work with the University of Maryland in years to come.”

    University of Maryland computer science faculty members David Jacobs (left) and Furong Huang (right) are part of the new Center for Machine Learning. Jacobs is an expert in computer vision and is the center’s interim director; Huang is conducting research in neural networks. Photo: John T. Consoli.

    David Jacobs, a professor of computer science with an appointment in UMIACS, will serve as interim director of the new center.

    To jumpstart the center’s activities, Jacobs has recruited a core group of faculty members in computer science and UMIACS: John Dickerson, Soheil Feizi, Thomas Goldstein, Furong Huang and Aravind Srinivasan.

    Faculty members from mathematics, chemistry, biology, physics, linguistics, and data science are also heavily involved in machine learning applications, and Jacobs said he expects many of them to be active in the center through direct or affiliate appointments.

    “We want the center to be a focal point across the campus where faculty, students, and visiting scholars can come to learn about the latest technologies and theoretical applications based in machine learning,” he said.

    Key to the center’s success will be a robust computational infrastructure that is needed to perform complex computations involving massive amounts of data.

    This is where UMIACS plays an important role, Jacobs said, with the institute’s technical staff already supporting multiple machine learning activities in computer vision and computational linguistics.

    Plans call for CMNS, UMIACS and other organizations to invest substantially in new computing resources for the machine learning center, Jacobs added.

    The Brendan Iribe Center for Computer Science and Engineering. Photo: John T. Consoli.

    The center will be located in the Brendan Iribe Center for Computer Science and Engineering, a new state-of-the-art facility at the entrance to campus that will be officially dedicated later this month. In addition to the very latest in computing resources, the Brendan Iribe Center promotes collaboration and connectivity through its open design and multiple meeting areas.

    The Brendan Iribe Center is directly adjacent to the university’s Discovery District, where researchers working in Capital One’s Tech Incubator and other tech startups can interact with UMD faculty members and students on topics related to machine learning.

    Amitabh Varshney, professor of computer science and dean of CMNS, said the center will be a valuable resource for the state of Maryland and the region—both for students seeking the latest knowledge and skills and for companies wanting professional development training for their employees.

    “We have new educational activities planned by the college that include professional master’s programs in machine learning and data science and analytics,” Varshney said. “We want to leverage our location near numerous federal agencies and private corporations that are interested in expanding their workforce capabilities in these areas.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    About CMNS

    The thirst for new knowledge is a fundamental and defining characteristic of humankind. It is also at the heart of scientific endeavor and discovery. As we seek to understand our world, across a host of complexly interconnected phenomena and over scales of time and distance that were virtually inaccessible to us a generation ago, our discoveries shape that world. At the forefront of many of these discoveries is the College of Computer, Mathematical, and Natural Sciences (CMNS).

    CMNS is home to 12 major research institutes and centers and to 10 academic departments: astronomy, atmospheric and oceanic science, biology, cell biology and molecular genetics, chemistry and biochemistry, computer science, entomology, geology, mathematics, and physics.

    Our Faculty

    Our faculty are at the cutting edge over the full range of these disciplines. Our physicists fill in major gaps in our fundamental understanding of matter, participating in the recent Higgs boson discovery, and demonstrating the first-ever teleportation of information between atoms. Our astronomers probe the origin of the universe with one of the world’s premier radio observatories, and have just discovered water on the moon. Our computer scientists are developing the principles for guaranteed security and privacy in information systems.

    Our Research

    Driven by the pursuit of excellence, the University of Maryland has enjoyed a remarkable rise in accomplishment and reputation over the past two decades. By any measure, Maryland is now one of the nation’s preeminent public research universities and on a path to become one of the world’s best. To fulfill this promise, we must capitalize on our momentum, fully exploit our competitive advantages, and pursue ambitious goals with great discipline and entrepreneurial spirit. This promise is within reach. This strategic plan is our working agenda.

    The plan is comprehensive, bold, and action oriented. It sets forth a vision of the University as an institution unmatched in its capacity to attract talent, address the most important issues of our time, and produce the leaders of tomorrow. The plan will guide the investment of our human and material resources as we strengthen our undergraduate and graduate programs and expand research, outreach and partnerships, become a truly international center, and enhance our surrounding community.

    Our success will benefit Maryland in the near and long term, strengthen the State’s competitive capacity in a challenging and changing environment and enrich the economic, social and cultural life of the region. We will be a catalyst for progress, the State’s most valuable asset, and an indispensable contributor to the nation’s well-being. Achieving the goals of Transforming Maryland requires broad-based and sustained support from our extended community. We ask our stakeholders to join with us to make the University an institution of world-class quality with world-wide reach and unparalleled impact as it serves the people and the state of Maryland.

    Our researchers are also at the cusp of the new biology for the 21st century, with bioscience emerging as a key area in almost all CMNS disciplines. Entomologists are learning how climate change affects the behavior of insects, and earth science faculty are coupling physical and biosphere data to predict that change. Geochemists are discovering how our planet evolved to support life, and biologists and entomologists are discovering how evolutionary processes have operated in living organisms. Our biologists have learned how human generated sound affects aquatic organisms, and cell biologists and computer scientists use advanced genomics to study disease and host-pathogen interactions. Our mathematicians are modeling the spread of AIDS, while our astronomers are searching for habitable exoplanets.

    Our Education

    CMNS is also a national resource for educating and training the next generation of leaders. Many of our major programs are ranked among the top 10 of public research universities in the nation. CMNS offers every student a high-quality, innovative and cross-disciplinary educational experience that is also affordable. Strongly committed to making science and mathematics studies available to all, CMNS actively encourages and supports the recruitment and retention of women and minorities.

    Our Students

    Our students have the unique opportunity to work closely with first-class faculty in state-of-the-art labs both on and off campus, conducting real-world, high-impact research on some of the most exciting problems of modern science. 87% of our undergraduates conduct research and/or hold internships while earning their bachelor’s degree. CMNS degrees command respect around the world, and open doors to a wide variety of rewarding career options. Many students continue on to graduate school; others find challenging positions in high-tech industry or federal laboratories, and some join professions such as medicine, teaching, and law.

     
  • richardmitnick 1:20 pm on March 4, 2019 Permalink | Reply
    Tags: Completely doing away with wind variability is next to impossible, Google claims that Machine Learning and AI would indeed make wind power more predictable and hence more useful, Google has announced in its official blog post that it has enhanced the feasibility of wind energy by using AI software created by its UK subsidiary DeepMind, Google is working to make the algorithm more refined so that any discrepancy that might occur could be nullified, Machine learning, Unpredictability in delivering power at set time frame continues to remain a daunting challenge before the sector

    From Geospatial World: “Google and DeepMind predict wind energy output using AI” 

    From Geospatial World

    03/04/2019
    Aditya Chaturvedi

    Image Courtesy: Unsplash

    Google has announced in its official blog post that it has enhanced the feasibility of wind energy by using AI software created by its UK subsidiary DeepMind.

    Renewable energy is the way towards lowering carbon emissions and sustainability, so it is imperative that we focus on yielding optimum energy outputs from renewable energy.

    Renewable technologies will be at the forefront of climate change mitigation and addressing global warming; however, their complete potential is yet to be harnessed owing to a slew of obstructions. Wind energy has emerged as a crucial source of renewable energy in the past decade due to a decline in the cost of turbines that has led to the gradual mainstreaming of wind power. However, unpredictability in delivering power within a set time frame remains a daunting challenge for the sector.

    The Google and DeepMind project aims to change this by overcoming the unpredictability that has hobbled wind energy adoption.

    With the help of DeepMind’s Machine Learning algorithms, Google has been able to predict the wind energy output of the farms that it uses for its Green Energy initiatives.

    “DeepMind and Google started applying machine learning algorithms to 700 megawatts of wind power capacity in the central United States. These wind farms—part of Google’s global fleet of renewable energy projects—collectively generate as much electricity as is needed by a medium-sized city”, the blog says.

    Google is optimistic that it can accurately predict and schedule energy output, which would give it an advantage over non-time-based deliveries.

    Image Courtesy: Google/DeepMind

    Using a neural network trained on weather forecasts and historical turbine data, the DeepMind system has been configured to predict wind power output 36 hours in advance.

    Based on these predictions, the model recommends how best to fulfill, and even exceed, delivery commitments 24 hours in advance. This matters because energy sources that can deliver a set amount of power over a defined period of time are generally more valuable to the grid.
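
    Neither Google nor DeepMind has released the model itself, but the general shape of the task, learning a mapping from forecast features to delivered power, can be sketched. The snippet below is a hypothetical illustration with synthetic data and a simple regressor standing in for the neural network described above:

    ```python
    # Illustrative sketch (not DeepMind's model): predict wind farm output from
    # weather-forecast features available well ahead of delivery time.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    # Hypothetical forecast features known 36 hours ahead of delivery.
    wind_speed = rng.uniform(0, 25, n)          # m/s
    wind_dir = rng.uniform(0, 360, n)           # degrees
    pressure = rng.normal(1013, 10, n)          # hPa
    X = np.column_stack([wind_speed, wind_dir, pressure])

    # Synthetic "true" output: a smooth function of wind speed plus noise,
    # standing in for historical turbine data.
    power = np.clip(wind_speed, 3, 15) ** 3 * 0.1 + rng.normal(0, 5, n)

    X_train, X_test, y_train, y_test = train_test_split(X, power, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
    print("R^2 on held-out forecasts:", round(model.score(X_test, y_test), 3))
    ```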

    Google is working to refine the algorithm so that any discrepancies that might occur can be corrected. To date, Google claims that machine learning algorithms have boosted the value of the wind energy generated by 20 percent, ‘compared to the baseline scenario of no time-based commitments to the grid’, the blog says.

    Image Courtesy: Google

    Completely doing away with wind variability is next to impossible, but Google claims that Machine Learning and AI would indeed make wind power more predictable and hence more useful.

    This approach would open up new avenues and make wind farm data more reliable and precise. When the productivity of wind farms is greatly increased and their output can be predicted as well as calculated, wind will be able to compete with conventional electricity sources.

    Google is hopeful that the power of Machine Learning and AI would boost the mass adoption of wind power and turn it into a popular alternative to traditional sources of electricity over the years.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    http://www.geospatialworld.net

    With an average of 55,000+ unique visitors per month, http://www.geospatialworld.net is easily the number one media portal in the geospatial domain and a reliable source of information for professionals in 150+ countries. The website, which integrates text, graphics and video elements, is an interactive medium for geospatial industry stakeholders to connect through several innovative features, including news, videos, guest blogs, case studies, articles, interviews, business listings and events.

    600,000+ annual unique visitors

     
  • richardmitnick 2:49 pm on October 16, 2018 Permalink | Reply
    Tags: Deep Skies Lab, Galaxy Zoo-Citizen Science, Gravitational lenses, Machine learning

    From Symmetry: “Studying the stars with machine learning” 

    From Symmetry

    10/16/18
    Evelyn Lamb

    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    To keep up with an impending astronomical increase in data about our universe, astrophysicists turn to machine learning.

    Kevin Schawinski had a problem.

    In 2007 he was an astrophysicist at Oxford University and hard at work reviewing seven years’ worth of photographs from the Sloan Digital Sky Survey—images of more than 900,000 galaxies. He spent his days looking at image after image, noting whether a galaxy looked spiral or elliptical, or logging which way it seemed to be spinning.

    Technological advancements had sped up scientists’ ability to collect information, but scientists were still processing information at the same rate. After working on the task full time and barely making a dent, Schawinski and colleague Chris Lintott decided there had to be a better way to do this.

    There was: a citizen science project called Galaxy Zoo. Schawinski and Lintott recruited volunteers from the public to help out by classifying images online. Showing the same images to multiple volunteers allowed them to check one another’s work. More than 100,000 people chipped in and condensed a task that would have taken years into just under six months.

    Citizen scientists continue to contribute to image-classification tasks. But technology also continues to advance.

    The Dark Energy Spectroscopic Instrument, scheduled to begin observations in 2019, will measure the velocities of about 30 million galaxies and quasars over five years.

    LBNL/DESI Dark Energy Spectroscopic Instrument for the Nicholas U. Mayall 4-meter telescope at Kitt Peak National Observatory near Tucson, Ariz, USA

    The Large Synoptic Survey Telescope, scheduled to begin operations in the early 2020s, will collect more than 30 terabytes of data each night—for a decade.

    LSST Camera, built at SLAC

    LSST telescope, currently under construction on the El Peñón peak at Cerro Pachón, Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    “The volume of datasets [from those surveys] will be at least an order of magnitude larger,” says Camille Avestruz, a postdoctoral researcher at the University of Chicago.

    To keep up, astrophysicists like Schawinski and Avestruz have recruited a new class of non-scientist scientists: machines.

    Researchers are using artificial intelligence to help with a variety of tasks in astronomy and cosmology, from image analysis to telescope scheduling.

    Superhuman scheduling, computerized calibration

    Artificial intelligence is an umbrella term for ways in which computers can seem to reason, make decisions, learn, and perform other tasks that we associate with human intelligence. Machine learning is a subfield of artificial intelligence that uses statistical techniques and pattern recognition to train computers to make decisions, rather than programming more direct algorithms.

    In 2017, a research group from Stanford University used machine learning to study images of strong gravitational lensing, a phenomenon in which an accumulation of matter in space is dense enough that it bends light waves as they travel around it.

    Gravitational Lensing NASA/ESA

    Because many gravitational lenses can’t be accounted for by luminous matter alone, a better understanding of gravitational lenses can help astronomers gain insight into dark matter.

    In the past, scientists have conducted this research by comparing actual images of gravitational lenses with large numbers of computer simulations of mathematical lensing models, a process that can take weeks or even months for a single image. The Stanford team showed that machine learning algorithms can speed up this process by a factor of millions.

    Greg Stewart, SLAC National Accelerator Laboratory

    Schawinski, who is now an astrophysicist at ETH Zürich, uses machine learning in his current work. His group has used tools called generative adversarial networks, or GANs, to recover clean versions of images that have been degraded by random noise. They recently published a paper [Astronomy and Astrophysics] about using AI to generate and test new hypotheses in astrophysics and other areas of research.

    Another application of machine learning in astrophysics involves solving logistical challenges such as scheduling. There are only so many hours in a night that a given high-powered telescope can be used, and it can only point in one direction at a time. “It costs millions of dollars to use a telescope for on the order of weeks,” says Brian Nord, a physicist at the University of Chicago and part of Fermilab’s Machine Intelligence Group, which is tasked with helping researchers in all areas of high-energy physics deploy AI in their work.

    Machine learning can help observatories schedule telescopes so they can collect data as efficiently as possible. Both Schawinski’s lab and Fermilab are using a technique called reinforcement learning to train algorithms to solve problems like this one. In reinforcement learning, an algorithm isn’t trained on “right” and “wrong” answers but through differing rewards that depend on its outputs. The algorithms must strike a balance between the safe, predictable payoffs of understood options and the potential for a big win with an unexpected solution.
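
    A minimal illustration of that explore/exploit balance is the classic multi-armed bandit. The sketch below is generic (it is not the scheduling software used by Schawinski's lab or Fermilab), but it shows how an agent learns from rewards rather than from labeled answers:

    ```python
    # Toy illustration of the explore/exploit trade-off in reinforcement learning:
    # an epsilon-greedy agent choosing among observation "slots" with unknown payoffs.
    import random

    true_payoffs = [0.2, 0.5, 0.8]      # hidden average reward of each option
    estimates = [0.0, 0.0, 0.0]
    counts = [0, 0, 0]
    epsilon = 0.1                        # fraction of the time we explore at random

    for step in range(10_000):
        if random.random() < epsilon:
            choice = random.randrange(len(true_payoffs))                      # explore
        else:
            choice = max(range(len(estimates)), key=estimates.__getitem__)    # exploit
        reward = 1.0 if random.random() < true_payoffs[choice] else 0.0
        counts[choice] += 1
        # Incremental average: the agent learns from rewards, not labeled answers.
        estimates[choice] += (reward - estimates[choice]) / counts[choice]

    print("estimated payoffs:", [round(e, 2) for e in estimates])
    ```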

    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    A growing field

    When computer science graduate student Shubhendu Trivedi of the Toyota Technological Institute at University of Chicago started teaching a graduate course on deep learning with one of his mentors, Risi Kondor, he was pleased with how many researchers from the physical sciences signed up for it. They didn’t know much about how to use AI in their research, and Trivedi realized there was an unmet need for machine learning experts to help scientists in different fields find ways of exploiting these new techniques.

    The conversations he had with researchers in his class evolved into collaborations, including participation in the Deep Skies Lab, an astronomy and artificial intelligence research group co-founded by Avestruz, Nord and astronomer Joshua Peek of the Space Telescope Science Institute. Earlier this month, they submitted their first peer-reviewed paper demonstrating the efficiency of an AI-based method to measure gravitational lensing in the Cosmic Microwave Background [CMB].

    Similar groups are popping up across the world, from Schawinski’s group in Switzerland to the Centre for Astrophysics and Supercomputing in Australia. And adoption of machine learning techniques in astronomy is increasing rapidly. In an arXiv search of astronomy papers, the terms “deep learning” and “machine learning” appear more in the titles of papers from the first seven months of 2018 than from all of 2017, which in turn had more than 2016.

    “Five years ago, [machine learning algorithms in astronomy] were esoteric tools that performed worse than humans in most circumstances,” Nord says. Today, more and more algorithms are consistently outperforming humans. “You’d be surprised at how much low-hanging fruit there is.”

    But there are obstacles to introducing machine learning into astrophysics research. One of the biggest is the fact that machine learning is a black box. “We don’t have a fundamental theory of how neural networks work and make sense of things,” Schawinski says. Scientists are understandably nervous about using tools without fully understanding how they work.

    Another related stumbling block is uncertainty. Machine learning often depends on inputs that all have some amount of noise or error, and the models themselves make assumptions that introduce uncertainty. Researchers using machine learning techniques in their work need to understand these uncertainties and communicate those accurately to each other and the broader public.

    The state of the art in machine learning is changing so rapidly that researchers are reluctant to make predictions about what will be coming even in the next five years. “I would be really excited if as soon as data comes off the telescopes, a machine could look at it and find unexpected patterns,” Nord says.

    No matter exactly the form future advances take, the data keeps coming faster and faster, and researchers are increasingly convinced that artificial intelligence is going to be necessary to help them keep up.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 11:29 am on September 21, 2018 Permalink | Reply
    Tags: Andrew Peterson, Brown awarded $3.5M to speed up atomic-scale computer simulations, Computational power is growing rapidly which lets us perform larger and more realistic simulations, Different simulations often have the same sets of calculations underlying them - so finding what can be re-used saves a lot of time and money, Machine learning

    From Brown University: “Brown awarded $3.5M to speed up atomic-scale computer simulations” 

    Brown University
    From Brown University

    September 20, 2018
    Kevin Stacey
    kevin_stacey@brown.edu
    401-863-3766

    Andrew Peterson. No photo credit.

    With a new grant from the U.S. Department of Energy, a Brown University-led research team will use machine learning to speed up atom-level simulations of chemical reactions and the properties of materials.

    “Simulations provide insights into materials and chemical processes that we can’t readily get from experiments,” said Andrew Peterson, an associate professor in Brown’s School of Engineering who will lead the work.

    “Computational power is growing rapidly, which lets us perform larger and more realistic simulations. But as the size of the simulations grows, the time involved in running them can grow exponentially. This paradox means that even with the growth in computational power, our field still cannot perform truly large-scale simulations. Our goal is to speed those simulations up dramatically — ideally by orders of magnitude — using machine learning.”

    The grant provides $3.5 million for the work over four years. Peterson will work with two Brown colleagues — Franklin Goldsmith, assistant professor of engineering, and Brenda Rubenstein, assistant professor of chemistry — as well as researchers from Carnegie Mellon, Georgia Tech and MIT.

    The idea behind the work is that different simulations often have the same sets of calculations underlying them. Peterson and his colleagues aim to use machine learning to find those underlying similarities and fast-forward through them.

    “What we’re doing is taking the results of calculations from prior simulations and using them to predict the outcome of calculations that haven’t been done yet,” Peterson said. “If we can eliminate the need to do similar calculations over and over again, we can speed things up dramatically, potentially by orders of magnitude.”
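
    As a rough illustration of that idea (not the team's actual code, and not the AMP package mentioned below), a surrogate model can be fit to the results of prior expensive calculations and queried in place of repeating them, with its uncertainty estimate flagging where a real calculation is still needed:

    ```python
    # Generic sketch of a machine-learned surrogate: fit a model to the results of
    # prior expensive calculations and use it to predict new ones.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_calculation(x):
        """Stand-in for a costly atomistic energy evaluation."""
        return np.sin(3 * x) + 0.5 * x ** 2

    # Results already computed in earlier simulations.
    x_known = np.linspace(-2, 2, 15).reshape(-1, 1)
    y_known = expensive_calculation(x_known).ravel()

    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(x_known, y_known)

    # New configurations: predict instead of recomputing, with an uncertainty
    # estimate that tells us when the real calculation is still needed.
    x_new = np.array([[0.37], [1.62]])
    pred, std = surrogate.predict(x_new, return_std=True)
    for x, p, s in zip(x_new.ravel(), pred, std):
        print(f"x={x:.2f}  predicted energy={p:.3f}  uncertainty={s:.3f}")
    ```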

    The team will focus their work initially on simulations of electrocatalysis — the kinds of chemical reactions that are important in devices like fuel cells and batteries. These are complex, often multi-step reactions that are fertile ground for simulation-driven research, Peterson says.

    Atomic-scale simulations have already proven useful in Peterson’s own work on the design of new catalysts. In a recent example, Peterson worked with Brown chemist Shouheng Sun on a gold nanoparticle catalyst that can perform a reaction necessary for converting carbon dioxide into useful forms of carbon. Peterson’s simulations showed it was the sharp edges of the oddly shaped catalyst that were particularly active for the desired reaction.

    “That led us to change the geometry of the catalyst to a nanowire — something that’s basically all edges — to maximize its reactivity,” Peterson said. “We might have eventually tried a nanowire by trial and error, but because of the computational insights we were able to get there much more quickly.”

    The researchers will use a software package that Peterson’s research group developed previously as a starting point. The software, called AMP (Atomistic Machine-learning Package), is open-source and already widely used in the simulation community, Peterson says.

    The Department of Energy grant will bring atomic-scale simulations — and the insights they produce — to bear on ever larger and more complex problems. And while the work under the grant will focus on electrocatalysis, the tools the team develops should be widely applicable to other types of material and chemical simulations.

    Peterson is hopeful that the investment that the federal government is making in machine learning will be repaid by making better use of valuable computing resources.

    “Modern supercomputers cost millions of dollars to build, and simulation time on them is precious,” Peterson said. “If we’re able to free up time on those machines for additional simulations to be run, that translates into vastly increased return-on-investment for those machines. It’s real money.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Brown

    Brown U Robinson Hall
    Located in historic Providence, Rhode Island and founded in 1764, Brown University is the seventh-oldest college in the United States. Brown is an independent, coeducational Ivy League institution comprising undergraduate and graduate programs, plus the Alpert Medical School, School of Public Health, School of Engineering, and the School of Professional Studies.

    With its talented and motivated student body and accomplished faculty, Brown is a leading research university that maintains a particular commitment to exceptional undergraduate instruction.

    Brown’s vibrant, diverse community consists of 6,000 undergraduates, 2,000 graduate students, 400 medical school students, more than 5,000 summer, visiting and online students, and nearly 700 faculty members. Brown students come from all 50 states and more than 100 countries.

    Undergraduates pursue bachelor’s degrees in more than 70 concentrations, ranging from Egyptology to cognitive neuroscience. Anything’s possible at Brown—the university’s commitment to undergraduate freedom means students must take responsibility as architects of their courses of study.

     
  • richardmitnick 2:04 pm on September 12, 2018 Permalink | Reply
    Tags: Machine learning

    From Fermi National Accelerator Lab: “MicroBooNE demonstrates use of convolutional neural networks on liquid-argon TPC data for first time” 

    FNAL Art Image by Angela Gonzales

    Fermi National Accelerator Lab is an enduring source of strength for the US contribution to scientific research worldwide.

    September 12, 2018
    Victor Genty, Kazuhiro Terao and Taritree Wongjirad

    It is hard these days not to encounter examples of machine learning out in the world. Chances are, if your phone unlocks using facial recognition or if you’re using voice commands to control your phone, you are likely using machine learning algorithms — in particular deep neural networks.

    What makes these algorithms so powerful is that they learn relationships between high-level concepts we wish to find in an image (faces) or sound wave (words) with sets of low-level patterns (lines, shapes, colors, textures, individual sounds), which represent them in the data. Furthermore, these low-level patterns and relationships do not have to be conceived of or hand-designed by humans, but instead are learned directly from examples of the data. Not having to come up with new patterns to find for each new problem is why deep neural networks have been able to advance the state of the art for so many different types of problems: from analyzing video for self-driving cars to assisting robots in learning how to manipulate objects.

    Here at Fermilab, there has been a lot of effort in having these deep neural networks help us analyze the data from our particle detectors so that we can more quickly and effectively use it to look for new physics. These applications are a continuation of the high-energy physics community’s long history in adopting and furthering the use of machine learning algorithms.

    Recently, the MicroBooNE neutrino experiment published a paper describing how they used convolutional neural networks — a particular type of deep neural network — to sort individual pixels coming from images made by a particular type of detector known as a liquid-argon time projection chamber (LArTPC). The experiment designed a convolutional neural network called U-ResNet to distinguish between two types of pixels: those that were part of a track-like particle trajectory and those that were part of a shower-like particle trajectory.
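
    MicroBooNE's U-ResNet architecture is specified in the paper linked at the end of this post; the toy PyTorch sketch below only illustrates the general idea of pixel-wise track/shower labeling with a small encoder-decoder network. Layer sizes, shapes and data are arbitrary placeholders:

    ```python
    # Toy sketch of pixel-level track/shower labeling with a small encoder-decoder
    # network in PyTorch. This is NOT MicroBooNE's U-ResNet; sizes are arbitrary.
    import torch
    import torch.nn as nn

    class TinySegmenter(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                               # downsample
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=2, mode="nearest"),   # back to input size
                nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, n_classes, 1),                   # per-pixel class scores
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = TinySegmenter()
    images = torch.randn(4, 1, 64, 64)              # fake detector images
    labels = torch.randint(0, 2, (4, 64, 64))       # fake per-pixel track/shower labels

    logits = model(images)                          # shape: (4, 2, 64, 64)
    loss = nn.CrossEntropyLoss()(logits, labels)
    loss.backward()
    print("pixel-wise loss:", float(loss))
    ```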

    This plot shows a comparison of U-ResNet performance on data and simulation, where the true pixel labels are provided by a physicist. The sample used is 100 events that contain a charged-current neutrino interaction candidate with neutral pions produced at the event vertex. The horizontal axis shows the fraction of pixels where the prediction by U-ResNet differed from the labels for each event. The error bars indicate only a statistical uncertainty.

    Track-like trajectories, made by particles such as a muon or proton, consist of a line with small curvature. Shower-like trajectories, produced by particles such as an electron or photon, are more complex topological features with many branching trajectories. This distinction is important because separating these type of topologies is known to be difficult for traditional algorithms. Not only that, shower-like shapes are produced when electrons and photons interact in the detector, and these two particles are often an important signal or background in physics analyses.

    MicroBooNE researchers demonstrated that these networks not only performed well but also worked in a similar fashion when presented with simulated data and real data. The latter is the first time this has been demonstrated for data from LArTPCs.

    Showing that networks behave the same on simulated and real data is critical, because these networks are typically trained on simulated data. Recall that these networks learn by looking at many examples. In industry, gathering large “training” data sets is an arduous and expensive task. However, particle physicists have a secret weapon — they can create as much simulated data as they want, since all experiments produce a highly detailed model of their detectors and data acquisition systems in order to produce as faithful a representation of the data as possible.

    However, these models are never perfect. And so a big question was, “Is the simulated data close enough to the real data to properly train these neural networks?” The way MicroBooNE answered this question is by performing a Turing test that compares the performance of the network to that of a physicist. They demonstrated that the accuracy of the human was similar to the machine when labeling simulated data, for which an absolute accuracy can be defined. They then compared the labels for real data. Here the disagreement between labels was low, and similar between machine and human (See the top figure. See the figure below for an example of how a human and computer labeled the same data event.) In addition, a number of qualitative studies looked at the correlation between manipulations of the image and the label provided by the network. They showed that the correlations follow human-like intuitions. For example, as a line segment gets shorter, the network becomes less confident if the segment is due to a track or a shower. This suggests that the low-level correlations being used are the same physically motivated correlations a physicist would use if engineering an algorithm by hand.
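
    The per-event metric plotted in the first figure, the fraction of pixels where the network's label differs from the physicist's, is straightforward to compute. The snippet below uses random stand-in label arrays rather than real events:

    ```python
    # Computing the per-event metric from the figure: the fraction of pixels where
    # the network's label disagrees with the physicist's label. The two label
    # arrays below are random stand-ins for a real event.
    import numpy as np

    rng = np.random.default_rng(1)
    human_labels = rng.integers(0, 2, size=(64, 64))      # 0 = track, 1 = shower
    network_labels = rng.integers(0, 2, size=(64, 64))

    disagreement = np.mean(human_labels != network_labels)
    print(f"fraction of pixels with differing labels: {disagreement:.3f}")
    ```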

    This example image shows a charged-current neutrino interaction with decay gamma rays from a neutral pion (left). The label image (middle) is shown with the output of U-ResNet (right), where track and shower pixels are shown in yellow and cyan, respectively.

    Demonstrating this simulated-versus-real data milestone is important because convolutional neural networks are valuable to current and future neutrino experiments that will use LArTPCs. This track-shower labeling is currently being employed in upcoming MicroBooNE analyses. Furthermore, for the upcoming Deep Underground Neutrino Experiment (DUNE), convolutional neural networks are showing much promise toward having the performance necessary to achieve DUNE’s physics goals, such as the measurement of CP violation, a possible explanation of the asymmetry in the presence of matter and antimatter in the current universe.

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA

    The more demonstrations there are that these algorithms work on real LArTPC data, the more confidence the community can have that convolutional neural networks will help us learn about the properties of the neutrino and the fundamental laws of nature once DUNE begins to take data.

    Science paper:
    A Deep Neural Network for Pixel-Level Electromagnetic Particle Identification in the MicroBooNE Liquid Argon Time Projection Chamber
    https://arxiv.org/abs/1808.07269

    Victor Genty, Kazuhiro Terao and Taritree Wongjirad are three of the scientists who analyzed this result. Victor Genty is a graduate student at Columbia University. Kazuhiro Terao is a physicist at SLAC National Accelerator Laboratory. Taritree Wongjirad is an assistant professor at Tufts University.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.



     
  • richardmitnick 6:53 pm on September 11, 2018 Permalink | Reply
    Tags: Machine learning, The notorious repeating fast radio source FRB 121102

    From Breakthrough Listen via Science Alert: “Astronomers Have Detected an Astonishing 72 New Mystery Radio Bursts From Space “ 

    From Breakthrough Listen Project

    via

    Science Alert

    11 SEP 2018
    MICHELLE STARR

    A massive number of new signals have been discovered coming from the notorious repeating fast radio source FRB 121102 – and we can thank artificial intelligence for these findings.

    Researchers at the search for extraterrestrial intelligence (SETI) project Breakthrough Listen applied machine learning to comb through existing data, and found 72 fast radio bursts that had previously been missed.

    Fast radio bursts (FRBs) are among the most mysterious phenomena in the cosmos. They are extremely powerful, generating as much energy as hundreds of millions of Suns. But they are also extremely short, lasting just milliseconds; and most of them only occur once, without warning.

    This means they can’t be predicted; so it’s not like astronomers are able to plan observations. They are only picked up later in data from other radio observations of the sky.

    Except for one source. FRB 121102 is a special individual – because ever since its discovery in 2012, it has been caught bursting again and again, the only FRB source known to behave this way.

    Because we know FRB 121102 to be a repeating source of FRBs, this means we can try to catch it in the act. This is exactly what researchers at Breakthrough Listen did last year. On 26 August 2017, they pointed the Green Bank Telescope in West Virginia at its location for five hours.

    In the 400 terabytes of data from that observation, the researchers discovered 21 FRBs using standard computer algorithms, all from within the first hour. They concluded that the source goes through periods of frenzied activity and quiescence.

    But the powerful new algorithm used to reanalyse that August 26 data suggests that FRB 121102 is a lot more active and possibly complex than originally thought. Researchers trained what is known as a convolutional neural network to look for the signals, then set it loose on the data like a truffle pig.
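
    The details of Breakthrough Listen's detection network are in the researchers' own publications; the sketch below is only a generic illustration of the setup, a small 2D convolutional network classifying frequency-time spectrogram patches as burst or noise, with synthetic data and arbitrary layer sizes:

    ```python
    # Toy sketch (not Breakthrough Listen's actual network): a small 2D CNN that
    # classifies frequency-time spectrogram patches as "burst" vs "noise".
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 16 * 16, 2),          # two classes: burst / noise
    )

    patches = torch.randn(8, 1, 64, 64)      # synthetic spectrogram patches
    labels = torch.randint(0, 2, (8,))       # synthetic burst/noise labels

    loss = nn.CrossEntropyLoss()(model(patches), labels)
    loss.backward()
    print("training loss on one synthetic batch:", float(loss))
    ```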

    It returned triumphant with 72 previously undetected signals, bringing the total number that astronomers have observed from the object to around 300.

    “This work is only the beginning of using these powerful methods to find radio transients,” said astronomer Gerry Zhang of the University of California Berkeley, which runs Breakthrough Listen.

    “We hope our success may inspire other serious endeavours in applying machine learning to radio astronomy.”

    The new result has helped us learn a little more about FRB 121102, putting constraints on the periodicity of the bursts. It suggests that, the researchers said, there’s no pattern to the way we receive them – unless the pattern is shorter than 10 milliseconds.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Breakthrough Listen

    Breakthrough Listen is the largest ever scientific research program aimed at finding evidence of civilizations beyond Earth. The scope and power of the search are on an unprecedented scale:

    The program includes a survey of the 1,000,000 closest stars to Earth. It scans the center of our galaxy and the entire galactic plane. Beyond the Milky Way, it listens for messages from the 100 closest galaxies to ours.

    The instruments used are among the world’s most powerful. They are 50 times more sensitive than existing telescopes dedicated to the search for intelligence.

    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia

    UCSC Lick Automated Planet Finder telescope, Mount Hamilton, CA, USA



    GBO radio telescope, Green Bank, West Virginia, USA

    The radio surveys cover 10 times more of the sky than previous programs. They also cover at least 5 times more of the radio spectrum – and do it 100 times faster. They are sensitive enough to hear a common aircraft radar transmitting to us from any of the 1000 nearest stars.

    We are also carrying out the deepest and broadest ever search for optical laser transmissions. These spectroscopic searches are 1000 times more effective at finding laser signals than ordinary visible light surveys. They could detect a 100 watt laser (the energy of a normal household bulb) from 25 trillion miles away.

    Listen combines these instruments with innovative software and data analysis techniques.

    The initiative will span 10 years and commit a total of $100,000,000.

     
  • richardmitnick 10:29 am on September 7, 2018 Permalink | Reply
    Tags: AIM-Adaptable Interpretable Machine Learning, Black-box models, Machine learning

    From MIT News: “Taking machine thinking out of the black box” 

    MIT News

    From MIT News

    September 5, 2018
    Anne McGovern | Lincoln Laboratory

    Members of a team developing Adaptable Interpretable Machine Learning at Lincoln Laboratory are: (l-r) Melva James, Stephanie Carnell, Jonathan Su, and Neela Kaushik. Photo: Glen Cooper.

    The Adaptable Interpretable Machine Learning project is redesigning machine learning models so that humans can understand what computers are thinking.

    Software applications provide people with many kinds of automated decisions, such as identifying what an individual’s credit risk is, informing a recruiter of which job candidate to hire, or determining whether someone is a threat to the public. In recent years, news headlines have warned of a future in which machines operate in the background of society, deciding the course of human lives while using untrustworthy logic.

    Part of this fear is derived from the obscure way in which many machine learning models operate. Known as black-box models, they are defined as systems in which the journey from input to output is next to impossible for even their developers to comprehend.

    “As machine learning becomes ubiquitous and is used for applications with more serious consequences, there’s a need for people to understand how it’s making predictions so they’ll trust it when it’s doing more than serving up an advertisement,” says Jonathan Su, a member of the technical staff in MIT Lincoln Laboratory’s Informatics and Decision Support Group.

    Currently, researchers either use post hoc techniques or an interpretable model such as a decision tree to explain how a black-box model reaches its conclusion. With post hoc techniques, researchers observe an algorithm’s inputs and outputs and then try to construct an approximate explanation for what happened inside the black box. The issue with this method is that researchers can only guess at the inner workings, and the explanations can often be wrong. Decision trees, which map choices and their potential consequences in a tree-like construction, work nicely for categorical data whose features are meaningful, but these trees are not interpretable in important domains, such as computer vision and other complex data problems.

    Su leads a team at the laboratory that is collaborating with Professor Cynthia Rudin at Duke University, along with Duke students Chaofan Chen, Oscar Li, and Alina Barnett, to research methods for replacing black-box models with prediction methods that are more transparent. Their project, called Adaptable Interpretable Machine Learning (AIM), focuses on two approaches: interpretable neural networks as well as adaptable and interpretable Bayesian rule lists (BRLs).

    A neural network is a computing system composed of many interconnected processing elements. These networks are typically used for image analysis and object recognition. For instance, an algorithm can be taught to recognize whether a photograph includes a dog by first being shown photos of dogs. Researchers say the problem with these neural networks is that their functions are nonlinear and recursive, as well as complicated and confusing to humans, and the end result is that it is difficult to pinpoint what exactly the network has defined as “dogness” within the photos and what led it to that conclusion.

    To address this problem, the team is developing what it calls “prototype neural networks.” These are different from traditional neural networks in that they naturally encode explanations for each of their predictions by creating prototypes, which are particularly representative parts of an input image. These networks make their predictions based on the similarity of parts of the input image to each prototype.

    As an example, if a network is tasked with identifying whether an image is a dog, cat, or horse, it would compare parts of the image to prototypes of important parts of each animal and use this information to make a prediction. A paper on this work: “This looks like that: deep learning for interpretable image recognition,” was recently featured in an episode of the “Data Science at Home” podcast. A previous paper, “Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions,” used entire images as prototypes, rather than parts.
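
    The prototype idea can be sketched in a few lines: embed patches of the input, compare them to learned per-class prototype vectors, and let each class's score come from its best-matching prototypes. The snippet below illustrates only that scoring step, not the architecture from the papers above:

    ```python
    # Minimal sketch of the prototype idea: compare embedded patches of an input to
    # learned per-class prototype vectors and score classes by their best matches.
    import torch

    n_classes, protos_per_class, dim = 3, 2, 8       # e.g. dog / cat / horse
    prototypes = torch.randn(n_classes, protos_per_class, dim)   # learned in training

    patch_embeddings = torch.randn(49, dim)          # e.g. a 7x7 grid of patch features

    # Similarity of every patch to every prototype (negative squared distance).
    dists = torch.cdist(patch_embeddings, prototypes.reshape(-1, dim)) ** 2
    similarities = (-dists).reshape(49, n_classes, protos_per_class)

    # Each prototype's evidence is its best match anywhere in the image;
    # a class's score sums the evidence of its prototypes.
    class_scores = similarities.max(dim=0).values.sum(dim=1)
    print("class scores:", class_scores)
    ```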

    The other area the research team is investigating is BRLs, which are less-complicated, one-sided decision trees that are suitable for tabular data and often as accurate as other models. BRLs are made of a sequence of conditional statements that naturally form an interpretable model. For example, if blood pressure is high, then risk of heart disease is high. Su and colleagues are using properties of BRLs to enable users to indicate which features are important for a prediction. They are also developing interactive BRLs, which can be adapted immediately when new data arrive rather than recalibrated from scratch on an ever-growing dataset.
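
    A rule list is an ordered sequence of if/then statements evaluated top to bottom, with the triggering rule doubling as the explanation. The toy below mirrors the blood-pressure example; the conditions and thresholds are made up, not clinically meaningful, and this is not the laboratory's BRL implementation:

    ```python
    # Toy rule list in the spirit of a BRL: ordered if/then rules checked in turn,
    # with a default at the end. Conditions and thresholds are illustrative only.
    def heart_disease_risk(patient):
        if patient["systolic_bp"] >= 160:
            return ("high", "rule 1: systolic blood pressure >= 160")
        if patient["smoker"] and patient["age"] > 50:
            return ("high", "rule 2: smoker and age > 50")
        if patient["cholesterol"] >= 240:
            return ("moderate", "rule 3: cholesterol >= 240")
        return ("low", "default rule")

    risk, reason = heart_disease_risk(
        {"systolic_bp": 150, "smoker": True, "age": 62, "cholesterol": 200}
    )
    print(risk, "-", reason)   # the triggering rule is itself the explanation
    ```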

    Stephanie Carnell, a graduate student from the University of Florida and a summer intern in the Informatics and Decision Support Group, is applying the interactive BRLs from the AIM program to a project to help medical students become better at interviewing and diagnosing patients. Currently, medical students practice these skills by interviewing virtual patients and receiving a score on how much important diagnostic information they were able to uncover. But the score does not include an explanation of what, precisely, in the interview the students did to achieve their score. The AIM project hopes to change this.

    “I can imagine that most medical students are pretty frustrated to receive a prediction regarding success without some concrete reason why,” Carnell says. “The rule lists generated by AIM should be an ideal method for giving the students data-driven, understandable feedback.”

    The AIM program is part of ongoing research at the laboratory in human-systems engineering — or the practice of designing systems that are more compatible with how people think and function, such as understandable, rather than obscure, algorithms.

    “The laboratory has the opportunity to be a global leader in bringing humans and technology together,” says Hayley Reynolds, assistant leader of the Informatics and Decision Support Group. “We’re on the cusp of huge advancements.”

    Melva James is another technical staff member in the Informatics and Decision Support Group involved in the AIM project. “We at the laboratory have developed Python implementations of both BRL and interactive BRLs,” she says. “[We] are concurrently testing the output of the BRL and interactive BRL implementations on different operating systems and hardware platforms to establish portability and reproducibility. We are also identifying additional practical applications of these algorithms.”

    Su explains: “We’re hoping to build a new strategic capability for the laboratory — machine learning algorithms that people trust because they understand them.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.


     
  • richardmitnick 11:31 am on September 4, 2018 Permalink | Reply
    Tags: Aftershocks can often be as horrifying as the main event, Machine learning, This New AI Tool Could Solve a Deadly Earthquake Problem We Currently Can't Fix

    From Harvard University via Science Alert: “This New AI Tool Could Solve a Deadly Earthquake Problem We Currently Can’t Fix” 

    Harvard University
    From Harvard University

    via

    Science Alert

    4 SEP 2018
    DAVID NIELD

    (mehmetakgu/iStock)

    It could literally save lives.

    The aftershocks of a devastating earthquake can often be as horrifying as the main event. Now scientists have developed a system for predicting where such post-quake tremors could take place, and they’ve used an ingenious application of artificial intelligence (AI) to make this happen.

    Knowing more about what’s coming next can be a matter of life or death for communities reeling from a large quake. The aftershocks can often cause further injuries and fatalities, damage buildings, and complicate rescue efforts.

    A team led by researchers from Harvard University has trained AI to crunch huge amounts of sensor data and apply deep learning to make more accurate predictions.

    The researchers behind the new system say it’s not ready to be deployed yet, but is already more reliable at pinpointing aftershocks than current prediction models.

    In the years ahead, it could become a vital part of the prediction systems used by seismologists.

    “There are three things you want to know about earthquakes – you want to know when they are going to occur, how big they’re going to be and where they’re going to be,” says one of the team, Brendan Meade from Harvard University in Massachusetts.

    “Prior to this work we had empirical laws for when they would occur and how big they were going to be, and now we’re working the third leg, where they might occur.”

    The idea to use deep learning to tackle this came to Meade when he was on a sabbatical at Google – a company where AI is being deployed in many different areas of computing and science.

    Machine learning is just one facet of AI, and is exactly what it sounds like: machines learning from sets of data, so they can cope with new problems that they haven’t been specifically programmed to tackle.

    Deep learning is a more advanced type of machine learning, applying what are called neural networks to try and mimic the thinking processes of the brain.

    In simple terms it means the AI can see more possible results at once, and weigh up a more complex map of factors and considerations, sort-of like neurons in a brain would.

    It’s perfect for earthquakes, with so many variables to consider – from the strength of the shock to the position of the tectonic plates to the type of ground involved. Deep learning could potentially tease out patterns that human analysts could never spot.

    To put this to use with aftershocks, Meade and his colleagues tapped into a database of over 131,000 pairs of earthquake and aftershock readings, taken from 199 previous earthquakes.

    Having let the AI engine chew through those, they then got it to predict the activity of more than 30,000 similar pairs, suggesting the likelihood of aftershocks hitting locations based on a grid of 5 square kilometre (1.9 square mile) units.

    The results were ahead of the Coulomb failure stress change model currently in use. If 1 represents perfect accuracy and 0.5 represents flipping a coin, the Coulomb model scored 0.583, and the new AI system managed 0.849.
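
    A score that runs from 0.5 for coin-flipping to 1 for perfection behaves like the area under an ROC curve. The snippet below shows how such a score is computed from per-grid-cell predictions, using random stand-in data rather than the study's:

    ```python
    # Computing an ROC-AUC-style score for per-grid-cell aftershock predictions.
    # The labels and predictions here are random stand-ins, not the study's data.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    had_aftershock = rng.integers(0, 2, size=30_000)          # 1 if the cell slipped
    # A model's predicted likelihood per 5 km x 5 km cell (here: noisy truth).
    predicted = 0.6 * had_aftershock + 0.4 * rng.random(30_000)

    print("AUC:", round(roc_auc_score(had_aftershock, predicted), 3))
    ```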

    “I’m very excited for the potential for machine learning going forward with these kind of problems – it’s a very important problem to go after,” says one of the researchers, Phoebe DeVries from Harvard University.

    “Aftershock forecasting in particular is a challenge that’s well-suited to machine learning because there are so many physical phenomena that could influence aftershock behaviour and machine learning is extremely good at teasing out those relationships.”

    A key ingredient, the researchers say, was the addition of the von Mises yield criterion into the AI’s algorithms – a calculation that can predict when materials will break under stress. Previously used in fields like metallurgy, the calculation hasn’t been extensively used in modelling earthquakes before now.
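    For readers unfamiliar with it, the von Mises criterion collapses a full stress tensor into a single "equivalent stress" and predicts yielding when that value exceeds the material's yield strength. Here is a minimal sketch of the calculation; the stress values and threshold are invented for illustration.

```python
# Sketch of the von Mises yield criterion: compute an equivalent stress from a
# symmetric 3x3 stress tensor and compare it with a (made-up) yield strength.
import numpy as np

def von_mises_stress(sigma):
    """Equivalent (von Mises) stress of a symmetric 3x3 stress tensor."""
    s11, s22, s33 = sigma[0, 0], sigma[1, 1], sigma[2, 2]
    s12, s23, s31 = sigma[0, 1], sigma[1, 2], sigma[2, 0]
    return np.sqrt(0.5 * ((s11 - s22) ** 2 + (s22 - s33) ** 2 + (s33 - s11) ** 2)
                   + 3.0 * (s12 ** 2 + s23 ** 2 + s31 ** 2))

stress = np.array([[40.0, 10.0, 0.0],
                   [10.0, 25.0, 5.0],
                   [0.0, 5.0, 15.0]])   # MPa, illustrative values
yield_strength = 60.0                   # MPa, illustrative value

sv = von_mises_stress(stress)
print(f"von Mises stress: {sv:.1f} MPa; exceeds yield strength: {sv > yield_strength}")
```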

    There’s still a way to go here – the researchers point out their current AI models are only designed to deal with one type of aftershock trigger, and simple fault lines: it’s not yet a system that can be applied to any kind of quake around the world.

    What’s more, it’s too slow right now to predict the deadly aftershocks that can happen a day or two after the first earthquake.

    However, the good news is that neural networks are designed to continually get better over time, which means with more data and more learning cycles, the system should steadily improve.

    “I think we’ve really just scratched the surface of what could be done with aftershock forecasting… and that’s really exciting,” says DeVries.

    The research has been published in Nature.

    See the full article here .

    Earthquake Alert

    1

    Earthquake Alert

    Earthquake Network project

    The Earthquake Network is a research project which aims at developing and maintaining a crowdsourced, smartphone-based earthquake warning system at a global level. Smartphones made available by the population are used to detect earthquake waves using their on-board accelerometers. When an earthquake is detected, a warning is issued to alert the population not yet reached by the damaging waves.

    The project started on January 1, 2013 with the release of the Android application of the same name, Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

    3
    Smartphone network spatial distribution (green and red dots) on December 4, 2015

    Meet The Quake-Catcher Network

    QCN bloc

    Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide better understanding of earthquakes, give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at Caltech, the QCN project is moving to the University of Southern California Department of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals and determine which ones represent earthquakes and which represent cultural noise (like doors slamming or trucks driving by).
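    The article doesn’t spell out the detection logic, but a common way to flag “strong new motion” in a continuous accelerometer stream is a short-term/long-term average (STA/LTA) ratio. The sketch below illustrates that general idea on made-up data; it is not QCN’s actual software.

```python
# Sketch of an STA/LTA trigger: compare recent signal amplitude (short-term
# average) against background noise (long-term average) and flag big jumps.
import numpy as np

def sta_lta_triggers(accel, sta_len=50, lta_len=1000, threshold=4.0):
    """Return sample indices where the STA/LTA ratio exceeds the threshold."""
    amp = np.abs(accel)
    hits = []
    for i in range(lta_len, len(amp)):
        sta = amp[i - sta_len:i].mean()    # short-term average: recent shaking
        lta = amp[i - lta_len:i].mean()    # long-term average: background noise
        if lta > 0 and sta / lta > threshold:
            hits.append(i)
    return hits

# Hypothetical record: quiet background with a burst of shaking near the end.
rng = np.random.default_rng(1)
signal = rng.normal(0, 0.01, 5000)
signal[4000:4200] += rng.normal(0, 0.2, 200)
print("first trigger at sample:", sta_lta_triggers(signal)[0])
```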

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) By mounting them to the floor, they measure more reliable shaking than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC, more properly the Berkeley Open Infrastructure for Network Computing, was developed at UC Berkeley and is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. QCN links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

    Below, the QCN Quake Catcher Network map
    QCN Quake Catcher Network map

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

    The U. S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated and a warning is provided to local populations. The method can provide warning before the S-wave arrives, bringing the strong shaking that usually causes most of the damage.

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.
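    A back-of-the-envelope calculation shows where those warning times come from: the P wave used for detection travels faster than the damaging S wave, so lead time grows with distance from the epicenter. The sketch below uses typical crustal wave speeds and an assumed processing latency; it is illustrative, not ShakeAlert’s algorithm.

```python
# Rough warning-time estimate: S-wave travel time minus P-wave travel time,
# minus an assumed detection/processing latency. Negative values mean no warning.
VP_KM_S = 6.0   # approximate crustal P-wave speed
VS_KM_S = 3.5   # approximate crustal S-wave speed

def warning_time_s(distance_km, processing_latency_s=5.0):
    """Seconds between the alert and the arrival of strong S-wave shaking."""
    return distance_km / VS_KM_S - distance_km / VP_KM_S - processing_latency_s

for d in (20, 60, 120, 200):
    print(f"{d:4d} km from the epicenter: ~{warning_time_s(d):5.1f} s of warning")
```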

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February of 2016 the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California; Oregon and Washington joined in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers and allows for automatic fail-over if a connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

    Authorities

    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
    rdegroot@usgs.gov
    626-583-7225

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Harvard University campus
    Harvard is the oldest institution of higher education in the United States, established in 1636 by vote of the Great and General Court of the Massachusetts Bay Colony. It was named after the College’s first benefactor, the young minister John Harvard of Charlestown, who upon his death in 1638 left his library and half his estate to the institution. A statue of John Harvard stands today in front of University Hall in Harvard Yard, and is perhaps the University’s best known landmark.

    Harvard University has 12 degree-granting Schools in addition to the Radcliffe Institute for Advanced Study. The University has grown from nine students with a single master to an enrollment of more than 20,000 degree candidates including undergraduate, graduate, and professional students. There are more than 360,000 living alumni in the U.S. and over 190 other countries.

     
  • richardmitnick 5:05 pm on August 21, 2018 Permalink | Reply
    Tags: KPI-driven decision-making, Machine learning, MIT Sloan School   

    From MIT Sloan: “Improving Strategic Execution With Machine Learning” 

    MIT News

    From MIT Sloan

    8.21.18
    Michael Schrage
    David Kiron

    MIT SMR’s 2018 Strategic Measurement study reveals how organizations using machine learning to enhance KPI-driven decision-making are pulling ahead of their competitors.

    Machine learning (ML) is changing how leaders use metrics to drive business performance, customer experience, and growth. A small but growing group of companies is investing in ML to augment strategic decision-making with key performance indicators (KPIs). Our research,1 based on a global survey and more than a dozen interviews with executives and academics, suggests that ML is literally, and figuratively, redefining how businesses create and measure value.

    KPIs traditionally have had a retrospective, reporting bias, but by surfacing hidden variables that anticipate “key performance,” machine learning is making KPIs more predictive and prescriptive. With more forward-looking KPIs, progressive leaders can treat strategic measures as high-octane data fuel for training machine-learning algorithms to optimize business processes. Our survey and interviews suggest that this flip ― transforming KPIs from analytic outputs to data inputs ― is at an early, albeit promising, stage.
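    As a concrete illustration of that flip, the sketch below treats a synthetic monthly KPI history not as a report but as training data for a simple model that forecasts the next period’s value; the data and model choice are assumptions for demonstration only.

```python
# Sketch: turn a retrospective KPI series into lagged features and train a
# simple regressor, making the KPI forward-looking. Entirely synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
kpi = 100 + np.cumsum(rng.normal(0.5, 2.0, 60))   # hypothetical monthly KPI

lags = 3
X = np.array([kpi[i:i + lags] for i in range(len(kpi) - lags)])  # past 3 months
y = kpi[lags:]                                                    # next month

model = LinearRegression().fit(X[:-12], y[:-12])   # hold out the latest year
forecast = model.predict(kpi[-lags:].reshape(1, -1))[0]
print("forecast for the next period:", round(forecast, 1))
```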

    Those companies that are already taking action on machine learning ― investing in ML and actively using it to engage customers ― differ radically from companies that are not yet investing in ML. They are far more likely to:

    These differences all depend on treating data as a valuable corporate asset. We see a strong correlation between companies that embrace ML and data-driven decision-making.

    Augmenting Execution With Machine Learning

    Nearly three quarters of survey respondents believe their organization’s current functional KPIs would be better achieved with greater investment in automation and machine-learning technologies. Our interviews with senior executives identified a variety of innovative ML practices. Without exception, the companies with the most intriguing and ambitious ML initiatives were the ones with the most serious commitment ― cultural and organizational ― to managing data as a valuable corporate asset.

    About the Research

    This report explores some of the key findings from the authors’ 2018 research study of KPIs and machine learning in today’s corporate landscape. The research, which involved a survey of 4,700 executives and managers and interviews with more than a dozen corporate leaders and academics, has far-reaching implications for modern businesses. We focused our analysis on 3,225 executive-level respondents; more than half were marketing executives.

    The marketing function is often an early adopter of machine learning in the enterprise. Applications in advertising, customer segmentation, and customer intelligence have become common.2 Even among marketers, however, slightly less than half of surveyed companies have incentives or internal functional KPIs to use more automation and ML technologies. (See Figure 1.) It is highly unlikely this finding reflects ML saturation in the enterprise. Most of the executives we interviewed for our study are focused more on ML’s potential than its actual development or deployment.

    1

    Kelly Watkins, vice president of global marketing at Slack, is exploring machine-learning solutions. For Slack, an essential KPI is determining which businesses using the company’s free workplace collaboration app are good candidates for converting to paid subscriptions for premium features. “This is an effort that the marketing organization, product organization, and the sales organization are working on together,” Watkins says. “Can we train lead-scoring algorithms to really get a sense of, based on a variety of criteria, what’s the best place for sales reps to start among the options that they have for outreach?”
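    A lead-scoring model of the kind Watkins describes can be sketched as a classifier that ranks free workspaces by their predicted probability of converting to a paid plan. The features, data, and model below are illustrative assumptions, not Slack’s actual pipeline.

```python
# Sketch of lead scoring: fit a classifier on past conversions, then rank
# current free workspaces by predicted conversion probability. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Hypothetical usage features: weekly active users, messages/day, integrations.
X = rng.random((500, 3)) * np.array([200.0, 5000.0, 10.0])
# Hypothetical labels: did the workspace convert to a paid subscription?
y = (X[:, 0] / 200 + X[:, 2] / 10 + rng.normal(0, 0.3, 500) > 1.0).astype(int)

scorer = LogisticRegression(max_iter=1000).fit(X, y)
scores = scorer.predict_proba(X)[:, 1]        # predicted conversion probability
best_leads = np.argsort(scores)[::-1][:5]     # where sales reps start outreach
print("top workspaces to contact:", best_leads)
```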

    Watkins also envisions implementing machine learning to handle routine tasks currently performed by Slack employees. She says her intention is to “enable folks in my organization to use their minds to solve strategic problems and to be more consistently looking for insights in the data that can shift the strategy and shift execution, up-leveling their daily mode of operating.” In short, Watkins sees an ML future transforming both efficiency and strategy.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

    MIT Sloan Management Review leads the discourse among academic researchers, business executives and other influential thought leaders about advances in management practice, particularly those shaped by technology, that are transforming how people lead and innovate. MIT SMR disseminates new management research and innovative ideas so that thoughtful executives can capitalize on the opportunities generated by rapid organizational, technological and societal change.

    We distribute our content on the web, in print and on mobile and portable platforms, as well as via licensees and libraries around the world.

     
  • richardmitnick 3:20 pm on August 1, 2018 Permalink | Reply
    Tags: , , Machine learning, , , , ,   

    From Symmetry: “Machine learning proliferates in particle physics” 

    Symmetry Mag
    From Symmetry

    08/01/18
    Manuel Gnida

    1

    A new review in Nature chronicles the many ways machine learning is popping up in particle physics research.

    Experiments at the Large Hadron Collider produce about a million gigabytes of data every second.

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    Even after reduction and compression, the data amassed in just one hour at the LHC is similar to the data volume Facebook collects in an entire year.

    Luckily, particle physicists don’t have to deal with all of that data all by themselves. They partner with a form of artificial intelligence that learns how to do complex analyses on its own, called machine learning.

    “Compared to a traditional computer algorithm that we design to do a specific analysis, we design a machine learning algorithm to figure out for itself how to do various analyses, potentially saving us countless man-hours of design and analysis work,” says College of William & Mary physicist Alexander Radovic, who works on the NOvA neutrino experiment.

    FNAL NOvA detector in northern Minnesota


    FNAL/NOvA experiment map

    Radovic and a group of researchers summarize current applications and future prospects of machine learning in particle physics in a paper published today in Nature.

    Sifting through big data

    To handle the gigantic data volumes produced in modern experiments like the ones at the LHC, researchers apply what they call “triggers”—dedicated hardware and software that decide in real time which data to keep for analysis and which data to toss out.
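    In software terms, a trigger is a fast keep-or-discard decision made for each collision event. The sketch below shows the shape of such a decision; the features and thresholds are placeholders, not any experiment’s real trigger menu.

```python
# Sketch of a software trigger: keep an event only if it looks interesting
# enough to be worth the storage. Feature names and cuts are illustrative.
def keep_event(event, energy_threshold_gev=50.0):
    """Keep events with a high-energy deposit or at least two muon tracks."""
    return (max(event["cluster_energies_gev"], default=0.0) > energy_threshold_gev
            or event["n_muon_tracks"] >= 2)

events = [
    {"cluster_energies_gev": [12.0, 8.5], "n_muon_tracks": 0},   # discarded
    {"cluster_energies_gev": [73.4, 21.0], "n_muon_tracks": 1},  # kept
    {"cluster_energies_gev": [5.1], "n_muon_tracks": 2},         # kept
]
print([keep_event(e) for e in events])
```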

    In LHCb, an experiment that could shed light on why there is so much more matter than antimatter in the universe, machine learning algorithms make at least 70 percent of these decisions, says LHCb scientist Mike Williams from the Massachusetts Institute of Technology, one of the authors of the Nature summary.

    CERN LHCb chamber, LHC


    CERN/LHCb detector

    “Machine learning plays a role in almost all data aspects of the experiment, from triggers to the analysis of the remaining data,” he says.

    Machine learning has proven extremely successful in the area of analysis. The gigantic ATLAS and CMS detectors at the LHC, which enabled the discovery of the Higgs boson, each have millions of sensing elements whose signals need to be put together to obtain meaningful results.

    CERN ATLAS

    CERN/CMS Detector

    “These signals make up a complex data space,” says Michael Kagan of the US Department of Energy’s SLAC National Accelerator Laboratory, who works on ATLAS and was also an author on the Nature review. “We need to understand the relationship between them to come up with conclusions—for example, that a certain particle track in the detector was produced by an electron, a photon or something else.”

    Neutrino experiments also benefit from machine learning. NOvA [above], which is managed by Fermi National Accelerator Laboratory, studies how neutrinos change from one type to another as they travel through the Earth. These neutrino oscillations could potentially reveal the existence of a new neutrino type that some theories predict to be a particle of dark matter. NOvA’s detectors watch for charged particles produced when neutrinos hit the detector material, and machine learning algorithms identify them.

    From machine learning to deep learning

    Recent developments in machine learning, often called “deep learning,” promise to take applications in particle physics even further. Deep learning typically refers to the use of neural networks: computer algorithms with an architecture inspired by the dense network of neurons in the human brain.

    These neural nets learn on their own how to perform certain analysis tasks during a training period in which they are shown sample data, such as simulations, and are told how well they performed.
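    That training period can be sketched in a few lines: show the network sample data, compute a loss that says how well it performed, and nudge the weights to do better. The tiny one-layer “network” below, written in plain NumPy on invented data, is purely illustrative.

```python
# Sketch of a training loop: predict, measure the error (loss), follow the
# gradient to adjust the weights, repeat. A one-layer logistic model stands in
# for a deep network here.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(256, 8))                      # simulated detector features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(float)    # simulated truth labels

w = np.zeros(8)
for epoch in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w))               # current predictions
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad = X.T @ (p - y) / len(y)                  # "how well did it perform?"
    w -= 0.5 * grad                                # nudge the weights
print(f"final training loss: {loss:.3f}")
```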

    Until recently, the success of neural nets was limited because training them used to be very hard, says co-author Kazuhiro Terao, a SLAC researcher working on the MicroBooNE neutrino experiment, which studies neutrino oscillations as part of Fermilab’s short-baseline neutrino program and will become a component of the future Deep Underground Neutrino Experiment at the Long-Baseline Neutrino Facility.

    FNAL/MicroBooNE

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA

    “These difficulties limited us to neural networks that were only a couple of layers deep,” he says. “Thanks to advances in algorithms and computing hardware, we now know much better how to build and train more capable networks hundreds or thousands of layers deep.”

    Many of the advances in deep learning are driven by tech giants’ commercial applications and the data explosion they have generated over the past two decades. “NOvA, for example, uses a neural network inspired by the architecture of GoogLeNet,” Radovic says. “It improved the experiment in ways that otherwise could have only been achieved by collecting 30 percent more data.”

    A fertile ground for innovation

    Machine learning algorithms become more sophisticated and fine-tuned day by day, opening up unprecedented opportunities to solve particle physics problems.

    Many of the new tasks they could be used for are related to computer vision, Kagan says. “It’s similar to facial recognition, except that in particle physics, image features are more abstract and complex than ears and noses.”

    Some experiments like NOvA and MicroBooNE produce data that can easily be translated into actual images, and AI can be readily used to identify features in them. In LHC experiments, on the other hand, images first need to be reconstructed from a murky pool of data generated by millions of sensor elements.

    “But even if the data don’t look like images, we can still use computer vision methods if we’re able to process the data in the right way,” Radovic says.

    One area where this approach could be very useful is the analysis of particle jets produced in large numbers at the LHC. Jets are narrow sprays of particles whose individual tracks are extremely challenging to separate. Computer vision technology could help identify features in jets.

    Another emerging application of deep learning is the simulation of particle physics data that predict, for example, what happens in particle collisions at the LHC and can be compared to the actual data. Simulations like these are typically slow and require immense computing power. AI, on the other hand, could do simulations much faster, potentially complementing the traditional approach.

    “Just a few years ago, nobody would have thought that deep neural networks can be trained to ‘hallucinate’ data from random noise,” Kagan says. “Although this is very early work, it shows a lot of promise and may help with the data challenges of the future.”
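    The technique Kagan is alluding to is the generative-adversarial setup: a generator network learns to map random noise to samples that a discriminator network cannot tell apart from real data. The toy example below, with a one-dimensional Gaussian standing in for a physics observable, is an illustration of the idea rather than any experiment’s code.

```python
# Sketch of a GAN: the discriminator learns to separate real from generated
# samples, while the generator learns to fool it. Toy 1-D data, PyTorch.
import torch
from torch import nn

def real_data(n):
    return torch.randn(n, 1) * 0.5 + 2.0   # stand-in for a "real" observable

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))               # noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # real vs fake
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: real samples labeled 1, generated samples labeled 0.
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator step: try to make the discriminator call generated samples real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print("mean of generated samples:", round(G(torch.randn(1000, 8)).mean().item(), 2))
```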

    Benefiting from healthy skepticism

    Despite all obvious advances, machine learning enthusiasts frequently face skepticism from their collaboration partners, in part because machine learning algorithms mostly work like “black boxes” that provide very little information about how they reached a certain conclusion.

    “Skepticism is very healthy,” Williams says. “If you use machine learning for triggers that discard data, like we do in LHCb, then you want to be extremely cautious and set the bar very high.”

    Therefore, establishing machine learning in particle physics requires constant efforts to better understand the inner workings of the algorithms and to do cross-checks with real data whenever possible.

    “We should always try to understand what a computer algorithm does and always evaluate its outcome,” Terao says. “This is true for every algorithm, not only machine learning. So, being skeptical shouldn’t stop progress.”

    Rapid progress has some researchers dreaming of what could become possible in the near future. “Today we’re using machine learning mostly to find features in our data that can help us answer some of our questions,” Terao says. “Ten years from now, machine learning algorithms may be able to ask their own questions independently and recognize when they find new physics.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     