Tagged: Machine learning

  • richardmitnick 9:15 am on October 19, 2019 Permalink | Reply
    Tags: "Hybrid Device among First to Meld Quantum and Conventional Computing", Among the first demonstrations of quantum hardware teaming up with conventional computing power., Bayesian optimization, , Machine learning,   

    From Joint Quantum Institute: “Hybrid Device among First to Meld Quantum and Conventional Computing” 

    JQI bloc

    From Joint Quantum Institute

    October 18, 2019

    Research Contact
    Daiwei Zhu
    daiwei@umd.edu

    Norbert Linke
    linke@umd.edu

    Christopher Monroe
    monroe@umd.edu

    Media Contact
    Chris Cesare
    ccesare@umd.edu

    Close-up photo of an ion trap. Credit: S. Debnath and E. Edwards/JQI

    Researchers at the University of Maryland (UMD) have trained a small hybrid quantum computer to reproduce the features in a particular set of images.

    The result, which was published Oct. 18, 2019 in the journal Science Advances, is among the first demonstrations of quantum hardware teaming up with conventional computing power—in this case to do generative modeling, a machine learning task in which a computer learns to mimic the structure of a given dataset and generate examples that capture the essential character of the data.

    “We combined one of the highest performance quantum computers with one of the most powerful AI programs—over the internet—to form a unique kind of hybrid machine,” says Joint Quantum Institute (JQI) Fellow Norbert Linke, an assistant professor of physics at UMD and a co-author of the new paper.

    The researchers used four trapped atomic ions for the quantum half of their hybrid computer, with each ion representing a quantum bit, or qubit—the basic unit of information in a quantum computer. To manipulate the qubits, researchers punch commands into an ordinary computer, which interprets them and orchestrates a sequence of laser pulses that zap the qubits.

    The UMD quantum computer is fully programmable, with connections between every pair of qubits. “We can implement any quantum function by executing a standard set of gates between the qubits,” says JQI and Joint Center for Quantum Information and Computer Science (QuICS) Fellow Christopher Monroe, a physics professor at UMD who was also a co-author of the new paper. “We just needed to optimize the parameters of each gate to train our machine learning algorithm. This is how quantum optimization works.”

    Monroe, Linke and their colleagues trained their computer to produce an output that matched the “bars-and-stripes” set, a collection of images with blocks of color arranged vertically or horizontally to look like bars or stripes—a standard dataset in generative modeling because of its simplicity.
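
    To make the target concrete: with four qubits the images are 2×2 pixels, and a pattern is valid when its rows or its columns are uniform in color. The short Python sketch below generates that set; the encoding of pixels to qubits is an assumption made purely for illustration, not taken from the paper.

    ```python
    # Illustrative generator for the 2x2 bars-and-stripes set. The mapping of
    # pixels to the four trapped-ion qubits is assumed, not taken from the paper.
    import itertools

    import numpy as np

    def bars_and_stripes(n=2):
        """Return all n x n bars-and-stripes images as flat bit tuples."""
        patterns = set()
        for bits in itertools.product([0, 1], repeat=n):
            row = np.array(bits)
            patterns.add(tuple(np.tile(row, (n, 1)).flatten()))    # uniform columns: bars
            patterns.add(tuple(np.tile(row, (n, 1)).T.flatten()))  # uniform rows: stripes
        return sorted(patterns)

    for p in bars_and_stripes(2):
        print(np.array(p).reshape(2, 2))
    # Six valid 2x2 patterns: all-dark, all-bright, two bars, two stripes.
    ```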

    “Machine learning is generally categorized into two types,” says Daiwei Zhu, the lead author of the paper and a graduate student in physics at JQI. “One enables you to tell whether something is a cat or dog, and the other lets you generate an image of a cat or dog. We’re performing a scaled-back version of the latter task.”

    Turning the hybrid system into a properly trained generative model meant finding the laser sequence that would turn a simple input state into an output capable of capturing the patterns in the bars-and-stripes set—something that qubits could do more efficiently than regular bits. “In essence, the power of this lies in the nature of quantum superposition,” says Zhu, referring to the ability of qubits to store multiple states—in this case, the entire set of bars-and-stripes images with four pixels—simultaneously.

    Through a series of iterative steps, the researchers attempted to nudge the output of their hybrid computer closer and closer to the quantum bars-and-stripes state. They began by preparing the input qubits, subjecting them to a random sequence of laser pulses and measuring the resulting output. Those measurement results were then fed to a conventional, or “classical,” computer, which crunched the numbers and suggested adjustments to the laser pulses to make the output look more like the bars-and-stripes state.
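
    In outline, that loop can be sketched as below. Everything here is a classical toy: the "circuit" is simulated with NumPy instead of being run on trapped ions, the bitstring encoding of the bars-and-stripes states is assumed, and a crude hill-climbing rule stands in for the real optimizers discussed next. Only the measure, score, adjust structure mirrors the description above.

    ```python
    # Classical toy of the hybrid quantum-classical training loop; nothing here
    # touches real hardware, and all parameter choices are invented.
    import numpy as np

    rng = np.random.default_rng(0)

    def run_circuit(theta, shots=500):
        """Stand-in for an ion-trap run: pulse parameters -> outcome frequencies."""
        logits = np.sin(np.outer(theta, np.arange(16))).sum(axis=0)
        probs = np.exp(logits) / np.exp(logits).sum()
        return rng.multinomial(shots, probs) / shots

    # Target: equal weight on the six 2x2 bars-and-stripes bitstrings
    # (this index encoding is an assumption made for the sketch).
    BAS = [0b0000, 0b0011, 0b0101, 0b1010, 0b1100, 0b1111]
    target = np.zeros(16)
    target[BAS] = 1 / len(BAS)

    def cost(theta):
        """Divergence between measured and target output distributions."""
        p = run_circuit(np.asarray(theta)) + 1e-9
        return float(np.sum(target * np.log((target + 1e-9) / p)))

    theta = rng.uniform(0, np.pi, 4)        # initial laser-pulse parameters
    for step in range(300):                 # measure -> score -> adjust
        candidate = theta + rng.normal(0, 0.1, 4)
        if cost(candidate) < cost(theta):   # keep proposals that lower the cost
            theta = candidate
    print("final cost:", round(cost(theta), 3))
    ```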

    By adjusting the laser parameters and repeating the procedure, the team could test whether the output eventually converged on the desired quantum state. They found that in some cases it did, and in some cases it didn’t.

    The researchers studied the convergence using two different patterns of connectivity between qubits. In one, each qubit was able to interact with all the others, a situation that the team called all-to-all connectivity. In a second, a central qubit interacted with the other three, none of which interacted directly with one another. They called this star connectivity. (This was an artificial constraint, as the four ions are naturally able to interact in the all-to-all fashion. But it could be relevant to experiments with a larger number of ions.)

    The all-to-all interactions produced states closer to bars-and-stripes after training short sequences of pulses. But the experimenters had another setting to play with: they also studied the performance of two different number-crunching methods used on the conventional half of the hybrid computer.

    One method, called particle swarm optimization, worked well when all-to-all interactions were available, but it failed to converge on the bars-and-stripes output for star connectivity. A second method, suggested by three researchers at Mind Foundry Limited, an AI company based in Oxford, UK, proved much more successful across the board.

    The second method, called Bayesian optimization, was made available over the internet, which enabled the researchers to train sequences of laser pulses that could produce the bars-and-stripes state for both all-to-all and star connectivity. Not only that, but it significantly reduced the number of steps in the iterative training process, effectively cutting in half the time it took to converge on the correct output.
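
    For a feel of what that classical piece does, here is a minimal Bayesian-optimization sketch using the open-source scikit-optimize package rather than Mind Foundry's tooling; the objective function is a made-up stand-in for a single run of the experiment.

    ```python
    # Minimal Bayesian optimization with scikit-optimize (pip install scikit-optimize).
    # A Gaussian-process surrogate model picks each next trial, so far fewer
    # "experiments" are needed than with random or swarm search.
    import numpy as np
    from skopt import gp_minimize

    def hardware_cost(theta):
        """Hypothetical objective standing in for one run on the ion-trap machine."""
        return float(np.sum((np.sin(theta) - 0.5) ** 2))

    result = gp_minimize(
        hardware_cost,
        dimensions=[(0.0, float(np.pi))] * 4,  # bounds for four pulse parameters
        n_calls=30,                            # each call would be one experiment
        random_state=0,
    )
    print("best parameters:", np.round(result.x, 2), "cost:", round(result.fun, 4))
    ```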

    “What our experiment shows is that a quantum-classical hybrid machine, while in principle more powerful than either of the components individually, still needs the right classical piece to work,” says Linke. “Using these schemes to solve problems in chemistry or logistics will require both a boost in quantum computer performance and tailored classical optimization strategies.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    JQI supported by Gordon and Betty Moore Foundation

    We are on the verge of a new technological revolution as the strange and unique properties of quantum physics become relevant and exploitable in the context of information science and technology.

    The Joint Quantum Institute (JQI) is pursuing that goal through the work of leading quantum scientists from the Department of Physics of the University of Maryland (UMD), the National Institute of Standards and Technology (NIST) and the Laboratory for Physical Sciences (LPS). Each institution brings to JQI major experimental and theoretical research programs that are dedicated to the goals of controlling and exploiting quantum systems.

     
  • richardmitnick 1:24 pm on October 2, 2019 Permalink | Reply
    Tags: Argonne will deploy the Cerebras CS-1 to enhance scientific AI models for cancer; cosmology; brain imaging and materials science among others., Bigger and better telescopes and accelerators and of course supercomputers on which they could run larger multiscale simulations, Machine learning, The influx of massive data sets and the computing power to sift sort and analyze it., The size of the simulations we are running is so big the problems that we are trying to solve are getting bigger so that these AI methods can no longer be seen as a luxury but as must-have technology.

    From Argonne Leadership Computing Facility: “Artificial Intelligence: Transforming science, improving lives” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    September 30, 2019
    Mary Fitzpatrick
    John Spizzirri

    Commitment to developing artificial intelligence (AI) as a national research strategy may well have defined 2019 as the Year of AI in the United States — particularly at the federal level, and specifically throughout the U.S. Department of Energy (DOE) and its national laboratory complex.

    In February, the White House issued the Executive Order on Maintaining American Leadership in Artificial Intelligence (the American AI Initiative) to expand the nation’s leadership role in AI research. Its goals are to fuel economic growth, enhance national security and improve quality of life.

    The initiative injects substantial and much-needed research dollars into federal facilities across the United States, promoting technology advances and innovation and enhancing collaboration with nongovernment partners and allies abroad.

    In response, DOE has made AI — along with exascale supercomputing and quantum computing — a major element of its $5.5 billion scientific R&D budget and established the Artificial Intelligence and Technology Office, which will serve to coordinate AI work being done across the DOE.

    At DOE facilities like Argonne National Laboratory, researchers have already begun using AI to design better materials and processes, safeguard the nation’s power grid, accelerate treatments in brain trauma and cancer and develop next-generation microelectronics for applications in AI-enabled devices.

    Over the last two years, Argonne has made significant strides toward implementing its own AI initiative. Leveraging the Laboratory’s broad capabilities and world-class facilities, it has set out to explore and expand new AI techniques, encourage collaboration, automate traditional research methods and laboratory facilities, and drive discovery.

    In July, it hosted an AI for Science town hall, the first of four such events that also included Oak Ridge and Lawrence Berkeley national laboratories and DOE’s Office of Science.


    Engaging nearly 350 members of the AI community, the town hall served to stimulate conversation around expanding the development and use of AI, while addressing critical challenges by using the initiative framework called AI for Science.

    “AI for Science requires new research and infrastructure, and we have to move a lot of data around and keep track of thousands of models,” says Rick Stevens, Associate Laboratory Director for Argonne’s Computing, Environment and Life Sciences (CELS) Directorate and a professor of computer science at the University of Chicago.

    Rick Stevens, Associate Laboratory Director for Computing, Environment and Life Sciences, is helping to develop the CANDLE computer architecture on the patient level, which is meant to help guide drug treatment choices for tumors based on a much wider assortment of data than currently used.

    “How do we distribute this production capability to thousands of people? We need to have system software with different capabilities for AI than for simulation software to optimize workflows. And these are just a few of the issues we have to begin to consider.”

    The conversation has just begun and continues through Laboratory-wide talks and events, such as a recent AI for Science workshop aimed at growing interest in AI capabilities through technical hands-on sessions.

    Argonne also will host DOE’s Innovation XLab Artificial Intelligence Summit in Chicago, meant to showcase the assets and capabilities of the national laboratories and facilitate an exchange of information and ideas between industry, universities, investors and end-use customers with Lab innovators and experts.
    What exactly is AI?

    Ask any number of researchers to define AI and you’re bound to get — well, first, a long pause and perhaps a chuckle — a range of answers from the more conventional “utilizing computing to mimic the way we interpret data but at a scale not possible by human capability” to “a technology that augments the human brain.”

    Taken together, AI might well be viewed as a multi-component toolbox that enables computers to learn, recognize patterns, solve problems, explore complex datasets and adapt to changing conditions — much like humans, but one day, maybe better.

    While the definitions and the tools may vary, the goals remain the same: utilize or develop the most advanced AI technologies to more effectively address the most pressing issues in science, medicine and technology, and accelerate discovery in those areas.

    At Argonne, AI has become a critical tool for modeling and prediction across almost all areas where the Laboratory has significant domain expertise: chemistry, materials, photon science, environmental and manufacturing sciences, biomedicine, genomics and cosmology.

    A key component of Argonne’s AI toolbox is a technique called machine learning and its derivatives, such as deep learning. The latter is built on neural networks comprising many layers of artificial neurons that learn internal representations of data, mimicking human information-gathering-processing systems like the brain.

    “Deep learning is the use of multi-layered neural networks to do machine learning, a program that gets smarter or more accurate as it gets more data to learn from. It’s very successful at learning to solve problems,” says Stevens.

    A staunch supporter of AI, particularly deep learning, Stevens is principal investigator on a multi-institutional effort developing the deep neural network application CANDLE (CANcer Distributed Learning Environment), which integrates deep learning with novel data, modeling and simulation techniques to accelerate cancer research.

    Coupled with the power of Argonne’s forthcoming exascale computer Aurora — which has the capacity to deliver a billion billion calculations per second — the CANDLE environment will enable a more personalized and effective approach to cancer treatment.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer

    And that is just a small sample of AI’s potential in science. Currently, all across Argonne, researchers are involved in more than 60 AI-related investigations, many of them driven by machine learning.

    Argonne Distinguished Fellow Valerie Taylor’s work looks at how applications execute on computers and large-scale, high-performance computing systems. Using machine learning, she and her colleagues model an execution’s behavior and then use that model to provide feedback on how to best modify the application for better performance.

    “Better performance may be shorter execution time or, using generated metrics such as energy, it may be reducing the average power,” says Taylor, director of Argonne’s Mathematics and Computer Science (MCS) division. “We use statistical analysis to develop the models and identify hints on how to modify the application.”
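
    As a rough illustration of that workflow, the sketch below fits a simple statistical model of a performance metric and reads the fitted weights as tuning hints; the features and their effects are entirely invented.

    ```python
    # Toy performance model: runtime as a linear function of invented features.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    # Hypothetical per-run features: problem size, thread count, cache-miss rate.
    X = rng.uniform(0, 1, size=(200, 3))
    runtime = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 200)

    model = LinearRegression().fit(X, runtime)
    # Large positive weights hurt runtime; negative ones help: hints for tuning.
    print("feature weights:", model.coef_.round(2))
    ```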

    Materials scientists are exploring the use of machine learning to optimize models of complex material properties in the discovery and design of new materials that could benefit energy storage, electronics, renewable energy resources and additive manufacturing, to name just a few areas.

    And still more projects address complex transportation and vehicle efficiency issues by enhancing engine design, minimizing road congestion, increasing energy efficiency and improving safety.

    Beyond the deep

    Beyond deep learning, there are many sub-ranges of AI that people have been working on for years, notes Stevens. “And while machine learning now dominates, something else might emerge as a strength.”

    Natural language processing, for example, is commercially recognizable as voice-activated technologies — think Siri — and on-the-fly language translators. Exceeding those capabilities is its ability to review, analyze and summarize information about a given topic from journal articles, reports and other publications, and extract and coalesce select information from massive and disparate datasets.

    Immersive visualization can place us into 3D worlds of our own making, interject objects or data into our current reality or improve upon human pattern recognition. Argonne researchers have found application for virtual and augmented reality in the 3D visualization of complicated data sets and the detection of flaws or instabilities in mechanical systems.

    And of course, there is robotics — a program started at Argonne in the late 1940s and rebooted in 1999 — that is just beginning to take advantage of Argonne’s expanding AI toolkit, whether to conduct research in a specific domain or improve upon its more utilitarian use in decommissioning nuclear power plants.

    Until recently, according to Stevens, AI has been a loose collection of methods using very different underlying mechanisms, and the people using them weren’t necessarily communicating their progress or potentials with one another.

    But with a federal initiative in hand and a Laboratory-wide vision, that is beginning to change.

    Among those trying to find new ways to collaborate and combine these different AI methods is Marius Stan, a computational scientist in Argonne’s Applied Materials division (AMD) and a senior fellow at both the University of Chicago’s Consortium for Advanced Science and Engineering and the Northwestern-Argonne Institute for Science and Engineering.

    Stan leads a research area called Intelligent Materials Design that focuses on combining different elements of AI to discover and design new materials and to optimize and control complex synthesis and manufacturing processes.

    Work on the latter has created a collaboration between Stan and colleagues in the Applied Materials and Energy Systems divisions, and the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility.

    Merging machine learning and computer vision with the Flame Spray Pyrolysis technology at Argonne’s Materials Engineering Research Facility, the team has developed AI-based “intelligent software” that can optimize the manufacturing process in real time.

    “Our idea was to use the AI to better understand and control in real time — first in a virtual, experimental setup, then in reality — a complex synthesis process,” says Stan.

    Automating the process makes synthesis safer and much faster than human-led operation. But even more intriguing is the potential that the AI process might uncover materials with better properties than the researchers did.

    What drove us to AI?

    Whether or not they concur on a definition, most researchers will agree that the impetus for the escalation of AI in scientific research was the influx of massive data sets and the computing power to sift, sort and analyze it.

    Not only was the push coming from big corporations brimming with user data, but the tools that drive science were getting more expansive — bigger and better telescopes and accelerators and of course supercomputers, on which they could run larger, multiscale simulations.

    “The size of the simulations we are running is so big, the problems that we are trying to solve are getting bigger, so that these AI methods can no longer be seen as a luxury, but as must-have technology,” notes Prasanna Balaprakash, a computer scientist in MCS and ALCF.

    Data and compute size also drove the convergence of more traditional techniques, such as simulation and data analysis, with machine and deep learning. Where analysis of data generated by simulation would eventually lead to changes in an underlying model, that data is now being fed back into machine learning models and used to guide more precise simulations.

    “More or less anybody who is doing large-scale computation is adopting an approach that puts machine learning in the middle of this complex computing process and AI will continue to integrate with simulation in new ways,” says Stevens.

    “And where the majority of users are in theory-modeling-simulation, they will be integrated with experimentalists on data-intense efforts. So the population of people who will be part of this initiative will be more diverse.”

    But while AI is leading to faster time-to-solution and more precise results, the number of data points, parameters and iterations required to get to those results can still prove monumental.

    Focused on the automated design and development of scalable algorithms, Balaprakash and his Argonne colleagues are developing new types of AI algorithms and methods to more efficiently solve large-scale problems that deal with different ranges of data. These additions are intended to make existing systems scale better on supercomputers, like those housed at the ALCF, a necessity in light of exascale computing.

    “We are developing an automated machine learning system for a wide range of scientific applications, from analyzing cancer drug data to climate modeling,” says Balaprakash. “One way to speed up a simulation is to replace the computationally expensive part with an AI-based predictive model that can make the simulation faster.”
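
    A minimal sketch of that surrogate idea, with a trivial analytic function standing in for the expensive part of a simulation:

    ```python
    # Train a cheap regressor on a few "expensive" runs, then call it instead.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def expensive_simulation(x):
        """Stand-in for a costly physics kernel."""
        return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

    rng = np.random.default_rng(1)
    X_train = rng.uniform(-1, 1, size=(500, 2))   # a few hundred real runs
    y_train = expensive_simulation(X_train)

    surrogate = RandomForestRegressor(n_estimators=100).fit(X_train, y_train)

    X_new = rng.uniform(-1, 1, size=(5, 2))
    print("surrogate :", surrogate.predict(X_new).round(3))
    print("simulator :", expensive_simulation(X_new).round(3))
    ```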

    Industry support

    The AI techniques that are expected to drive discovery are only as good as the tech that drives them, making collaboration between industry and the national labs essential.

    “Industry is investing a tremendous amount in building up AI tools,” says Taylor. “Their efforts shouldn’t be duplicated, but they should be leveraged. Also, industry comes in with a different perspective, so by working together, the solutions become more robust.”

    Argonne has long had relationships with computing manufacturers to deliver a succession of ever-more powerful machines to handle the exponential growth in data size and simulation scale. Its most recent partnership is that with semiconductor chip manufacturer Intel and supercomputer manufacturer Cray to develop the exascale machine Aurora.

    But the Laboratory is also collaborating with a host of other industrial partners in the development or provision of everything from chip design to deep learning-enabled video cameras.

    One of these, Cerebras, is working with Argonne to test a first-of-its-kind AI accelerator that provides a 100–500 times improvement over existing AI accelerators. As its first U.S. customer, Argonne will deploy the Cerebras CS-1 to enhance scientific AI models for cancer, cosmology, brain imaging and materials science, among others.

    The National Science Foundation-funded Array of Things, a partnership between Argonne, the University of Chicago and the City of Chicago, actively seeks commercial vendors to supply technologies for its edge computing network of programmable, multi-sensor devices.

    But Argonne and the other national labs are not the only ones to benefit from these collaborations. Companies understand the value in working with such organizations, recognizing that the AI tools developed by the labs, combined with the kinds of large-scale problems they seek to solve, offer industry unique benefits in terms of business transformation and economic growth, explains Balaprakash.

    “Companies are interested in working with us because of the type of scientific applications that we have for machine learning,” he adds. “What we have is so diverse, it makes them think a lot harder about how to architect a chip or design software for these types of workloads and science applications. It’s a win-win for both of us.”

    AI’s future, our future

    “There is one area where I don’t see AI surpassing humans any time soon, and that is hypothesis formulation,” says Stan, “because that requires creativity. Humans propose interesting projects, and for that you need to be creative, make correlations, propose something out of the ordinary. It’s still human territory, but machines may soon take the lead.”

    “It may happen,” he says, and adds that he’s working on it.

    In the meantime, Argonne researchers continue to push the boundaries of existing AI methods and forge new components for the AI toolbox. Deep learning techniques such as neuromorphic algorithms, which exhibit the adaptive nature of insects in an equally small computational space, can be used at the “edge” — where computing resources are scarce, as in cell phones or urban sensors.

    An optimization technique called neural architecture search, in which one neural network system improves another, is helping to automate the development of deep-learning-based predictive models in several scientific and engineering domains, such as cancer drug discovery and weather forecasting on supercomputers.
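
    The skeleton of that idea can be shown with plain random search over small multilayer perceptrons; real neural architecture search replaces the random proposals with a learned search strategy, but the propose, train, validate loop is the same.

    ```python
    # Toy architecture search: random proposals, trained and scored on held-out data.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (400, 3))
    y = np.sin(X).sum(axis=1)
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

    best_arch, best_score = None, -np.inf
    for _ in range(10):  # outer loop: propose an architecture
        arch = tuple(int(w) for w in rng.integers(4, 64, size=rng.integers(1, 4)))
        net = MLPRegressor(hidden_layer_sizes=arch, max_iter=2000, random_state=0)
        score = net.fit(X_tr, y_tr).score(X_va, y_va)  # train, then validate
        if score > best_score:
            best_arch, best_score = arch, score
    print("best architecture:", best_arch, "R^2:", round(best_score, 3))
    ```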

    Just as big data and better computational tools drove the convergence of simulation, data analysis and visualization, the introduction of the exascale computer Aurora into the Argonne complex of leadership-class tools and experts will only serve to accelerate the evolution of AI and witness its full assimilation into traditional techniques.

    The tools may change, the definitions may change, but AI is here to stay as an integral part of the scientific method and our lives.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 11:13 am on September 23, 2019 Permalink | Reply
    Tags: "Computing and the search for new planets", , Machine learning, ,   

    From MIT News: “Computing and the search for new planets” 

    MIT News

    From MIT News

    September 23, 2019
    Brittany Flaherty

    Worlds orbiting stars other than our sun are “exoplanets,” and they come in many sizes, from gas giants larger than Jupiter to small, rocky planets. This illustration of a “super-Earth” represents the type of planet that the TESS mission aims to find outside our solar system.
    Image: M. Kornmesser/ESO

    A set of flight camera electronics on one of the TESS cameras, developed by the MIT Kavli Institute for Astrophysics and Space Research, transmits exoplanet data from the camera to a computer aboard the spacecraft that processes it before transmitting it back to scientists on Earth. Photo: Kavli Institute

    Liang Yu, PhD ’19, a recent physics graduate, built upon an existing code to write the machine learning tool that the TESS team is now using to identify exoplanets. Photo: Debbie Meinbresse

    Computing and the search for new planets.
    MIT planetary scientists partner with computer scientists to find exoplanets.

    When MIT launched the MIT Stephen A. Schwarzman College of Computing this fall, one of the goals was to drive further innovation in computing across all of MIT’s schools. Researchers are already expanding beyond traditional applications of computer science and using these techniques to advance a range of scientific fields, from cancer medicine to anthropology to design — and to the discovery of new planets.

    Computation has already proven useful for the Transiting Exoplanet Survey Satellite (TESS), a NASA-funded mission led by MIT.

    NASA/MIT TESS replaced Kepler in search for exoplanets

    Launched from Cape Canaveral in April 2018, TESS is a satellite that takes images of the sky as it orbits the Earth. These images can help researchers find planets orbiting stars beyond our sun, called exoplanets. This work, which is now halfway complete, will reveal more about the other planets within what NASA calls our “solar neighborhood.”

    “TESS just completed the first of its two-year prime mission, surveying the southern night sky,” says Sara Seager, an astrophysicist and planetary scientist at MIT and deputy director of science for TESS. “TESS found over 1,000 planet candidates and about 20 confirmed planets, some in multiple-planet systems.”

    While TESS has enabled some impressive discoveries so far, finding these exoplanets is no simple task. TESS is collecting images of more than 200,000 distant stars, saving an image of these stars every two minutes, as well as saving an image of a large swath of sky every 30 minutes. Seager says that every two weeks, which is how long it takes the satellite to orbit the Earth, TESS sends about 350 gigabytes of data (once uncompressed) to Earth. While Seager says this is not as much data as people might expect (a 2019 MacBook Pro has up to 512 gigabytes of storage), analyzing the data involves taking many complex factors into consideration.

    Seager, who says she has long been interested in how computation can be used as a tool for science, began discussing the project with Victor Pankratius, a former principal research scientist in MIT’s Kavli Institute for Astrophysics and Space Research, who is now the director and head of global software engineering at Bosch Sensortec. A trained computer scientist, Pankratius says that after arriving at MIT in 2013, he started thinking about scientific fields that produce big data, but that have not yet fully benefited from computing techniques. After speaking with astronomers like Seager, he learned more about the data their instruments collect and became interested in applying computer-aided discovery techniques to the search for exoplanets.

    “The universe is a big place,” Pankratius says. “So I think leveraging what we have on the computer science side is a great thing.”

    The basic idea underlying TESS’ mission is that like our own solar system, in which the Earth and other planets revolve around a central star (the sun), there are other planets beyond our solar system revolving around different stars. The images TESS collects produce light curves — data that show how the brightness of the star changes over time. Researchers are analyzing these light curves to find drops in brightness, which could indicate that a planet is passing in front of the star and temporarily blocking some of its light.

    Planet transit. NASA/Ames

    “Every time a planet orbits, you would see this brightness go down,” Pankratius says. “It’s almost like a heartbeat.”
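
    A toy NumPy version of that heartbeat, with the cadence, period and depth all invented for illustration: inject a periodic dip into a synthetic light curve, fold the series on the period, and the dip stands out against the out-of-transit baseline.

    ```python
    # Synthetic light curve with an injected, box-shaped transit.
    import numpy as np

    t = np.arange(0, 27.0, 2 / 1440)             # ~27 days at 2-minute cadence
    rng = np.random.default_rng(0)
    flux = 1 + 0.0005 * rng.standard_normal(t.size)
    period, depth, duration = 3.7, 0.01, 0.1     # toy transit parameters (days)
    flux[(t % period) < duration] -= depth       # brightness drops each orbit

    phase = t % period                           # fold on the (known) period
    in_transit = phase < duration
    print("out-of-transit flux:", round(float(flux[~in_transit].mean()), 4))
    print("in-transit flux:   ", round(float(flux[in_transit].mean()), 4))
    ```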

    The trouble is that not every dip in brightness is necessarily caused by a passing planet. Seager says machine learning currently comes into play during the “triage” phase of their TESS data analysis, helping them distinguish between potential planets and other things that could cause dips in brightness, like variable stars, which naturally vary in their brightness, or instrument noise.

    Analysis on planets that pass through triage is still done by scientists who have learned how to “read” light curves. But the team is now using thousands of light curves that have been classified by eye to teach neural networks how to identify exoplanet transits. Computation is helping them narrow down which light curves they should examine in more detail. Liang Yu PhD ’19, a recent physics graduate, built upon an existing code to write the machine learning tool that the team is now using.

    While helpful for homing in on the most relevant data, Seager says machine learning cannot yet be used to simply find exoplanets. “We still have a lot of work to do,” she says.

    Pankratius agrees. “What we want to do is basically create computer-aided discovery systems that do this for all [stars] all the time,” he says. “You want to just press a button and say, show me everything. But right now it’s still people with some automation vetting all of these light curves.”

    Seager and Pankratius also co-taught a course that focused on various aspects of computation and artificial intelligence (AI) development in planetary science. Seager says inspiration for the course arose from a growing interest from students to learn about AI and its applications to cutting-edge data science.

    In 2018, the course allowed students to use actual data collected by TESS to explore machine learning applications for this data. Modeled after another course Seager and Pankratius had taught, the class let students choose a scientific problem and learn the computational skills needed to solve it. In this case, students learned about AI techniques and their applications to TESS. Seager says students had a great response to the unique class.

    “As a student, you could actually make a discovery,” Pankratius says. “You can build a machine learning algorithm, run it on this data, and who knows, maybe you will find something new.”

    Much of the data TESS collects is also readily available as part of a larger citizen science project. Pankratius says anyone with the right tools could start making discoveries of their own. Thanks to cloud connectivity, this is even possible on a cell phone.

    “If you get bored on your bus ride home, why not search for planets?” he says.

    Pankratius says this type of collaborative work allows experts in each domain to share their knowledge and learn from each other, rather than each trying to get caught up in the other’s field.

    “Over time, science has become more specialized, so we need ways to integrate the specialists better,” Pankratius says. The college of computing could help forge more such collaborations, he adds. Pankratius also says it could attract researchers who work at the intersection of these disciplines, who can bridge gaps in understanding between experts.

    This type of work integrating computer science is already becoming increasingly common across scientific fields, Seager notes. “Machine learning is ‘in vogue’ right now,” she says.

    Pankratius says that is in part because there is more evidence that leveraging computer science techniques is an effective way to address various types of problems and growing data sets.

    “We now have demonstrations in different areas that the computer-aided discovery approach doesn’t just work,” Pankratius says. “It actually leads to new discoveries.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 9:03 am on September 19, 2019 Permalink | Reply
    Tags: "Harnessing artificial intelligence for climate science", , , , , , Machine learning   

    From European Space Agency: “Harnessing artificial intelligence for climate science” 

    ESA Space For Europe Banner

    From European Space Agency

    18 September 2019

    Over 700 Earth observation satellites are orbiting our planet, transmitting hundreds of terabytes of data to downlink stations every day. Processing and extracting useful information is a huge data challenge, with volumes rising quasi-exponentially.

    Neural networks help map ocean colour.

    And it’s not just a problem of the data deluge: our climate system, and environmental processes more widely, work in complex and non-linear ways. Artificial intelligence and, in particular, machine learning is helping to meet these challenges, as the need for accurate knowledge about global climate change becomes more urgent.

    ESA’s Climate Change Initiative provides the systematic information needed by the UN Framework Convention on Climate Change. By funding teams of scientists to create world-class accurate, long-term, datasets that characterise Earth’s changing climate system, the initiative is providing a whole-globe view.

    Derived from satellites, these datasets cover 21 ‘essential climate variables’, from greenhouse gas concentrations to sea levels and the changing state of our polar ice sheets. Spanning four decades, these empirical records underpin the global climate models that help predict future change.

    A book from 1984 – Künstliche Intelligenz by E.D. Schmitter – bears testimony to Carsten Brockmann’s long interest in artificial intelligence. Today he is applying this knowledge at an ever-increasing pace to his other interest, climate change.

    “What was theoretical back then is now becoming best practice,” says Dr Brockmann who believes artificial intelligence has the power to address pressing challenges facing climate researchers.

    Artificial intelligence algorithms – computer systems that learn and act in response to their environment – can improve detection rates in Earth observation. For example, it is common to use the ‘random forests’ algorithm, which uses a training dataset to learn to detect different land-cover types or areas burnt by wildfires. In machine learning, computer algorithms are trained, in the statistical sense, to split, sort and transform data to improve dataset classification, prediction, or pattern discovery.
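
    A minimal sketch of that training step with scikit-learn, using made-up band values and labels in place of real satellite imagery:

    ```python
    # Random forest learning land-cover classes from labelled pixels (all fake data).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    # Hypothetical features: reflectance in four spectral bands per pixel.
    bands = rng.uniform(0, 1, size=(1000, 4))
    # Hypothetical labels: 0 = water, 1 = forest, 2 = burnt area.
    labels = (bands[:, 0] * 3).astype(int).clip(0, 2)

    clf = RandomForestClassifier(n_estimators=200).fit(bands[:800], labels[:800])
    print("held-out accuracy:", clf.score(bands[800:], labels[800:]))
    ```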

    Dr Brockmann says, “Connections between different variables in a dataset are caused by the underlying physics or chemistry, but if you tried to invert the mathematics, often too much is unknown, and so unsolvable.

    “For humans it’s often hard to find connections or make predictions from these complex and nonlinear climate data.”

    Artificial intelligence helps by building up connections automatically. Exposing the data to artificial intelligence methods enables the algorithms to ‘play’ with data and find statistical connections. These ‘convolutional neural network’ algorithms have the potential to resolve climate science problems that vary in space and time.

    For example, in the Climate Change Initiative’s Aerosol project, scientists need to determine changes in reflected sunlight owing to the presence of dust, smoke and pollution in the atmosphere, a quantity called aerosol optical depth.

    Thomas Popp, who is science leader for the project, thinks there could be further benefits by using artificial intelligence to retrieve additional aerosol parameters, such as their composition or absorption from several sensors at once. “I want to combine several different satellite instruments and do one retrieval. This would mean gathering aerosol measurements across visible, thermal and the ultraviolet spectral range, from sensors with different viewing angles.”

    He says solving this as a big data problem could make these data automatically fit together and be consistent.

    “Explainable artificial intelligence is another evolving area that could help unveil the physics or chemistry behind the data”, says Dr Brockmann, who is in the Climate Change Initiative’s Ocean Colour science team.

    “In artificial intelligence, computer algorithms learn to deal with an input dataset to generate an output, but we don’t understand the hidden layers and connections in neural networks: the so-called black box.

    “We can’t see what’s inside this black box, and even if we could, it wouldn’t tell us anything. In explainable artificial intelligence, techniques are being developed to shine a light into this black box to understand the physical connections.”

    Dr Brockmann and Popp joined leading climate and artificial intelligence experts to explore how to fully exploit Earth observation data during ESA’s ɸ-week, which was held last week. Things have come a long way since Dr Brockmann bought his little book and he commented, “It was a very exciting week!”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The European Space Agency (ESA), established in 1975, is an intergovernmental organization dedicated to the exploration of space, currently with 19 member states. Headquartered in Paris, ESA has a staff of more than 2,000. ESA’s space flight program includes human spaceflight, mainly through the participation in the International Space Station program, the launch and operations of unmanned exploration missions to other planets and the Moon, Earth observation, science, telecommunication as well as maintaining a major spaceport, the Guiana Space Centre at Kourou, French Guiana, and designing launch vehicles. ESA science missions are based at ESTEC in Noordwijk, Netherlands, Earth Observation missions at ESRIN in Frascati, Italy, ESA Mission Control (ESOC) is in Darmstadt, Germany, the European Astronaut Centre (EAC) that trains astronauts for future missions is situated in Cologne, Germany, and the European Space Astronomy Centre is located in Villanueva de la Cañada, Spain.

    ESA50 Logo large

     
  • richardmitnick 12:48 pm on September 18, 2019 Permalink | Reply
    Tags: A new approach to the problems of dark matter and dark energy, Deep artificial neural networks, Facial recognition for cosmology, Machine learning

    From ETH Zürich: “Artificial intelligence probes dark matter in the universe” 

    ETH Zurich bloc

    From ETH Zürich

    18.09.2019
    Oliver Morsch

    A team of physicists and computer scientists at ETH Zürich has developed a new approach to the problem of dark matter and dark energy in the universe. Using machine learning tools, they programmed computers to teach themselves how to extract the relevant information from maps of the universe.

    Excerpt from a typical computer-generated dark matter map used by the researchers to train the neural network. (Source: ETH Zürich)

    Understanding how our universe came to be what it is today, and what its final destiny will be, is one of the biggest challenges in science. The awe-inspiring display of countless stars on a clear night gives us some idea of the magnitude of the problem, and yet that is only part of the story. The deeper riddle lies in what we cannot see, at least not directly: dark matter and dark energy. With dark matter pulling the universe together and dark energy causing it to expand faster, cosmologists need to know exactly how much of those two is out there in order to refine their models.

    At ETH Zürich, scientists from the Department of Physics and the Department of Computer Science have now joined forces to improve on standard methods for estimating the dark matter content of the universe through artificial intelligence. They used cutting-edge machine learning algorithms for cosmological data analysis that have a lot in common with those used for facial recognition by Facebook and other social media. Their results have recently been published in the scientific journal Physical Review D.

    Facial recognition for cosmology

    While there are no faces to be recognized in pictures taken of the night sky, cosmologists still look for something rather similar, as Tomasz Kacprzak, a researcher in the group of Alexandre Refregier at the Institute of Particle Physics and Astrophysics, explains: “Facebook uses its algorithms to find eyes, mouths or ears in images; we use ours to look for the tell-tale signs of dark matter and dark energy.” As dark matter cannot be seen directly in telescope images, physicists rely on the fact that all matter – including the dark variety – slightly bends the path of light rays arriving at the Earth from distant galaxies. This effect, known as “weak gravitational lensing”, distorts the images of those galaxies very subtly, much like far-away objects appear blurred on a hot day as light passes through layers of air at different temperatures.

    Weak gravitational lensing NASA/ESA Hubble

    Cosmologists can use that distortion to work backwards and create mass maps of the sky showing where dark matter is located. Next, they compare those dark matter maps to theoretical predictions in order to find which cosmological model most closely matches the data. Traditionally, this is done using human-designed statistics such as so-called correlation functions that describe how different parts of the maps are related to each other. Such statistics, however, are limited as to how well they can find complex patterns in the matter maps.
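
    For a flavor of such a human-designed statistic, the sketch below estimates the two-point correlation of a stand-in mass map from its power spectrum; real lensing analyses use far more careful estimators, but the compression of a whole map into a handful of numbers is the point.

    ```python
    # Two-point correlation of a (random, stand-in) mass map via Wiener-Khinchin.
    import numpy as np

    rng = np.random.default_rng(0)
    kappa = rng.standard_normal((128, 128))      # stand-in convergence map
    kappa -= kappa.mean()

    power = np.abs(np.fft.fft2(kappa)) ** 2      # 2D power spectrum
    xi = np.fft.ifft2(power).real / kappa.size   # autocorrelation of the map
    print("correlation at lags 0, 1, 2 pixels:",
          [round(float(v), 3) for v in (xi[0, 0], xi[0, 1], xi[0, 2])])
    ```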

    Neural networks teach themselves

    “In our recent work, we have used a completely new methodology”, says Alexandre Refregier. “Instead of inventing the appropriate statistical analysis ourselves, we let computers do the job.” This is where Aurelien Lucchi and his colleagues from the Data Analytics Lab at the Department of Computer Science come in. Together with Janis Fluri, a PhD student in Refregier’s group and lead author of the study, they used machine learning algorithms called deep artificial neural networks and taught them to extract the largest possible amount of information from the dark matter maps.

    Once the neural network has been trained, it can be used to extract cosmological parameters from actual images of the night sky. (Visualisations: ETH Zürich)

    In a first step, the scientists trained the neural networks by feeding them computer-generated data that simulates the universe. That way, they knew what the correct answer for a given cosmological parameter – for instance, the ratio between the total amount of dark matter and dark energy – should be for each simulated dark matter map. By repeatedly analysing the dark matter maps, the neural network taught itself to look for the right kind of features in them and to extract more and more of the desired information. In the Facebook analogy, it got better at distinguishing random oval shapes from eyes or mouths.
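
    A deliberately tiny PyTorch sketch of that setup: a convolutional network is trained to regress one cosmological parameter from simulated maps. The map size, architecture and training budget are all invented here; the published network is far larger and trained on realistic simulations.

    ```python
    # Minimal CNN regressing one parameter from fake "simulated" maps.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(16, 1),          # one output, e.g. a density ratio
    )

    maps = torch.randn(64, 1, 32, 32)            # stand-in simulated mass maps
    params = torch.rand(64, 1)                   # the known "correct answers"

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(100):                     # repeated analysis of the maps
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(maps), params)
        loss.backward()
        opt.step()
    print("final training loss:", round(float(loss), 4))
    ```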

    More accurate than human-made analysis

    The results of that training were encouraging: the neural networks came up with values that were 30% more accurate than those obtained by traditional methods based on human-made statistical analysis. For cosmologists, that is a huge improvement as reaching the same accuracy by increasing the number of telescope images would require twice as much observation time – which is expensive.

    Finally, the scientists used their fully trained neural network to analyse actual dark matter maps from the KiDS-450 dataset. “This is the first time such machine learning tools have been used in this context,” says Fluri, “and we found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications.”

    As a next step, he and his colleagues are planning to apply their method to bigger image sets such as the Dark Energy Survey.

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M Blanco 4-meter Telescope at Cerro Tololo, Chile, which houses DECam at an altitude of 7,200 feet

    Timeline of the Inflationary Universe WMAP

    The Dark Energy Survey (DES) is an international, collaborative effort to map hundreds of millions of galaxies, detect thousands of supernovae, and find patterns of cosmic structure that will reveal the nature of the mysterious dark energy that is accelerating the expansion of our Universe. DES began searching the Southern skies on August 31, 2013.

    According to Einstein’s theory of General Relativity, gravity should lead to a slowing of the cosmic expansion. Yet, in 1998, two teams of astronomers studying distant supernovae made the remarkable discovery that the expansion of the universe is speeding up. To explain cosmic acceleration, cosmologists are faced with two possibilities: either 70% of the universe exists in an exotic form, now called dark energy, that exhibits a gravitational force opposite to the attractive gravity of ordinary matter, or General Relativity must be replaced by a new theory of gravity on cosmic scales.

    DES is designed to probe the origin of the accelerating universe and help uncover the nature of dark energy by measuring the 14-billion-year history of cosmic expansion with high precision. More than 400 scientists from over 25 institutions in the United States, Spain, the United Kingdom, Brazil, Germany, Switzerland, and Australia are working on the project. The collaboration built and is using an extremely sensitive 570-Megapixel digital camera, DECam, mounted on the Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory, high in the Chilean Andes, to carry out the project.

    Over six years (2013-2019), the DES collaboration used 758 nights of observation to carry out a deep, wide-area survey recording information from 300 million galaxies billions of light-years from Earth. The survey imaged 5,000 square degrees of the southern sky in five optical filters to obtain detailed information about each galaxy. A fraction of the survey time was also used to observe smaller patches of sky roughly once a week to discover and study thousands of supernovae and other astrophysical transients.

    In addition, more cosmological parameters and refinements, such as details about the nature of dark energy, will be fed to the neural networks.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ETH Zurich campus
    ETH Zürich is one of the leading international universities for technology and the natural sciences. It is well known for its excellent education, ground-breaking fundamental research and for implementing its results directly into practice.

    Founded in 1855, ETH Zürich today has more than 18,500 students from over 110 countries, including 4,000 doctoral students. To researchers, it offers an inspiring working environment, to students, a comprehensive education.

    Twenty-one Nobel Laureates have studied, taught or conducted research at ETH Zürich, underlining the excellent reputation of the university.

     
  • richardmitnick 11:37 am on August 15, 2019 Permalink | Reply
    Tags: , Azure ML, , , Every proton collision at the Large Hadron Collider is different but only a few are special. The special collisions generate particles in unusual patterns — possible manifestations of new rule-break, Fermilab is the lead U.S. laboratory for the CMS experiment., , , Machine learning, , , , The challenge: more data more computing power   

    From Fermi National Accelerator Lab: “A glimpse into the future: accelerated computing for accelerated particles” 

    FNAL Art Image
    FNAL Art Image by Angela Gonzales

    From Fermi National Accelerator Lab, an enduring source of strength for the US contribution to scientific research worldwide.

    August 15, 2019
    Leah Hesla

    Every proton collision at the Large Hadron Collider is different, but only a few are special. The special collisions generate particles in unusual patterns — possible manifestations of new, rule-breaking physics — or help fill in our incomplete picture of the universe.

    Finding these collisions is harder than the proverbial search for the needle in the haystack. But game-changing help is on the way. Fermilab scientists and other collaborators successfully tested a prototype machine-learning technology that speeds up processing by 30 to 175 times compared to traditional methods.

    Confronting 40 million collisions every second, scientists at the LHC use powerful, nimble computers to pluck the gems — whether it’s a Higgs particle or hints of dark matter — from the vast static of ordinary collisions.

    Rifling through simulated LHC collision data, the machine learning technology successfully learned to identify a particular postcollision pattern — a particular spray of particles flying through a detector — as it flipped through an astonishing 600 images per second. Traditional methods process less than one image per second.

    The technology could even be offered as a service on external computers. Using this offloading model would allow researchers to analyze more data more quickly and leave more LHC computing space available to do other work.

    It is a promising glimpse into how machine learning services are supporting a field in which already enormous amounts of data are only going to get bigger.

    1
    Particles emerging from proton collisions at CERN’s Large Hadron Collider travel through this stories-high, many-layered instrument, the CMS detector. In 2026, the LHC will produce 20 times the data it does currently, and CMS is undergoing upgrades to read and process the coming data deluge. Photo: Maximilien Brice, CERN

    The challenge: more data, more computing power

    Researchers are currently upgrading the LHC to smash protons at five times its current rate.

    By 2026, the 17-mile circular underground machine at the European laboratory CERN will produce 20 times more data than it does now.

    LHC

    CERN map


    CERN LHC Maximilien Brice and Julien Marius Ordan


    CERN LHC particles

    THE FOUR MAJOR PROJECT COLLABORATIONS

    ATLAS

    CERN ATLAS Image Claudia Marcelloni CERN/ATLAS

    ALICE

    CERN/ALICE Detector

    CMS
    CERN CMS New

    LHCb
    CERN LHCb New II

    CMS is one of the particle detectors at the Large Hadron Collider, and CMS collaborators are in the midst of some upgrades of their own, enabling the intricate, stories-high instrument to take more sophisticated pictures of the LHC’s particle collisions. Fermilab is the lead U.S. laboratory for the CMS experiment.

    If LHC scientists wanted to save all the raw collision data they’d collect in a year from the High-Luminosity LHC, they’d have to find a way to store about 1 exabyte (roughly a million terabyte-sized personal external hard drives), of which only a sliver may unveil new phenomena. LHC computers are programmed to select this tiny fraction, making split-second decisions about which data is valuable enough to be sent downstream for further study.

    Currently, the LHC’s computing system keeps roughly one in every 100,000 particle events. But current storage protocols won’t be able to keep up with the future data flood, which will accumulate over decades of data taking. And the higher-resolution pictures captured by the upgraded CMS detector won’t make the job any easier. It all translates into a need for more than 10 times the computing resources that the LHC has now.

    The recent prototype test shows that, thanks to advances in machine learning and computing hardware, researchers should be able to winnow the data emerging from the upcoming High-Luminosity LHC when it comes online.

    “The hope here is that you can do very sophisticated things with machine learning and also do them faster,” said Nhan Tran, a Fermilab scientist on the CMS experiment and one of the leads on the recent test. “This is important, since our data will get more and more complex with upgraded detectors and busier collision environments.”

    2
    Particle physicists are exploring the use of computers with machine learning capabilities for processing images of particle collisions at CMS, teaching them to rapidly identify various collision patterns. Image: Eamonn Maguire/Antarctic Design

    Machine learning to the rescue: the inference difference

    Machine learning in particle physics isn’t new. Physicists use machine learning for every stage of data processing in a collider experiment.

    But with machine learning technology that can chew through LHC data up to 175 times faster than traditional methods, particle physicists are ascending a game-changing step on the collision-computation course.

    The rapid rates are thanks to cleverly engineered hardware in the platform, Microsoft’s Azure ML, which speeds up a process called inference.

    To understand inference, consider an algorithm that’s been trained to recognize the image of a motorcycle: The object has two wheels and two handles that are attached to a larger metal body. The algorithm is smart enough to know that a wheelbarrow, which has similar attributes, is not a motorcycle. As the system scans new images of other two-wheeled, two-handled objects, it predicts — or infers — which are motorcycles. And as the algorithm’s prediction errors are corrected, it becomes pretty deft at identifying them. A billion scans later, it’s on its inference game.

    Most machine learning platforms are built to understand how to classify images, but not physics-specific images. Physicists have to teach them the physics part, such as recognizing tracks created by the Higgs boson or searching for hints of dark matter.

    Researchers at Fermilab, CERN, MIT, the University of Washington and other collaborators trained Azure ML to identify pictures of top quarks — a short-lived elementary particle that is about 180 times heavier than a proton — from simulated CMS data. Specifically, Azure was to look for images of top quark jets, clouds of particles pulled out of the vacuum by a single top quark zinging away from the collision.
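    As a rough illustration of the task (not the collaboration’s actual Azure ML pipeline), a binary jet-image classifier can be sketched in a few lines of Python. The file names, image size and network here are assumptions for the example, and the final lines show the kind of images-per-second throughput measurement quoted above.

        import time
        import numpy as np
        import tensorflow as tf

        # Hypothetical simulated data: jet images labelled 1 for a top quark
        # jet and 0 for background.
        images = np.load("simulated_jet_images.npy")  # shape (N, 64, 64, 1)
        labels = np.load("jet_labels.npy")            # shape (N,)

        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # P(top quark jet)
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.fit(images, labels, epochs=5, validation_split=0.1)

        # Measure batched inference throughput in images per second.
        start = time.time()
        model.predict(images[:1000], batch_size=256)
        print(f"{1000 / (time.time() - start):.0f} images/second")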

    “We sent it the images, training it on physics data,” said Fermilab scientist Burt Holzman, a lead on the project. “And it exhibited state-of-the-art performance. It was very fast. That means we can pipeline a large number of these things. In general, these techniques are pretty good.”

    One of the techniques behind inference acceleration is to combine traditional and specialized processors, a marriage known as a heterogeneous computing architecture.

    Different platforms use different architectures. The traditional processors are CPUs (central processing units). The best known specialized processors are GPUs (graphics processing units) and FPGAs (field programmable gate arrays). Azure ML combines CPUs and FPGAs.

    “The reason that these processes need to be accelerated is that these are big computations. You’re talking about 25 billion operations,” Tran said. “Fitting that onto an FPGA, mapping that on, and doing it in a reasonable amount of time is a real achievement.”

    And it’s starting to be offered as a service, too. The test was the first time anyone has demonstrated how this kind of heterogeneous, as-a-service architecture can be used for fundamental physics.

    5
    Data from particle physics experiments are stored on computing farms like this one, the Grid Computing Center at Fermilab. Outside organizations offer their computing farms as a service to particle physics experiments, making more space available on the experiments’ servers. Photo: Reidar Hahn

    At your service

    In the computing world, using something “as a service” has a specific meaning. An outside organization provides resources — machine learning or hardware — as a service, and users — scientists — draw on those resources when needed. It’s similar to how your video streaming company provides hours of binge-watching TV as a service. You don’t need to own your own DVDs and DVD player. You use their library and interface instead.

    Data from the Large Hadron Collider is typically stored and processed on computer servers at CERN and partner institutions such as Fermilab. With machine learning offered up as easily as any other web service might be, intensive computations can be carried out anywhere the service is offered — including off site. This bolsters the labs’ capabilities with additional computing power and resources while sparing them from having to furnish their own servers.

    “The idea of doing accelerated computing has been around decades, but the traditional model was to buy a computer cluster with GPUs and install it locally at the lab,” Holzman said. “The idea of offloading the work to a farm off site with specialized hardware, providing machine learning as a service — that worked as advertised.”

    The Azure ML farm is in Virginia. It takes only 100 milliseconds for computers at Fermilab near Chicago, Illinois, to send an image of a particle event to the Azure cloud, process it, and return it. That’s a 2,500-kilometer, data-dense trip in the blink of an eye.
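    In code, the offloading model is little more than a timed web request. The endpoint URL and payload format below are hypothetical stand-ins (real services such as Azure ML define their own APIs), but they illustrate the round trip being described here.

        import time
        import numpy as np
        import requests

        # Stand-in for a pre-processed detector image.
        event_image = np.random.rand(64, 64).tolist()

        start = time.time()
        response = requests.post(
            "https://inference-farm.example.com/score",  # hypothetical endpoint
            json={"image": event_image},
            timeout=5,
        )
        latency_ms = (time.time() - start) * 1000
        print(f"prediction: {response.json()}  round trip: {latency_ms:.0f} ms")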

    “The plumbing that goes with all of that is another achievement,” Tran said. “The concept of abstracting that data as a thing you just send somewhere else, and it just comes back, was the most pleasantly surprising thing about this project. We don’t have to replace everything in our own computing center with a whole bunch of new stuff. We keep all of it, send the hard computations off and get it to come back later.”

    Scientists look forward to scaling the technology to tackle other big-data challenges at the LHC. They also plan to test other platforms, such as Amazon AWS, Google Cloud and IBM Cloud, as they explore what else can be accomplished through machine learning, which has seen rapid evolution over the past few years.

    “The models that were state-of-the-art for 2015 are standard today,” Tran said.

    As a tool, machine learning continues to give particle physics new ways of glimpsing the universe. It’s also impressive in its own right.

    “That we can take something that’s trained to discriminate between pictures of animals and people, do some modest amount of computation, and have it tell me the difference between a top quark jet and background?” Holzman said. “That’s something that blows my mind.”

    This work is supported by the DOE.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    FNAL Icon

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.

     
  • richardmitnick 9:33 am on August 15, 2019 Permalink | Reply
    Tags: "Deepfakes: danger in the digital age", , , Infocalypse- A term used to label the age of cybercriminals digital misinformation clickbait and data misuse., Machine learning   

    From CSIROscope: “Deepfakes: danger in the digital age” 

    CSIRO bloc

    From CSIROscope

    15 August 2019
    Alison Donnellan

    As we dive deeper into the digital age, fake news, online deceit and the widespread use of social media are having a profound impact on every element of society, from swaying elections to manipulating scientifically established facts.

    Deepfaking is the act of using artificial intelligence and machine learning technology to produce or alter video, image or audio content. It works from footage of the original subject to create a version of something that never occurred.

    So, what’s the deal with deepfakes?

    Once a topic discussed only in computer research labs, deepfakes were catapulted into mainstream media in 2017, after various online communities began swapping the faces of high-profile personalities onto actors in pornographic films.

    “You need a piece of machine learning to digest all of these video sequences. The machine eventually learns who the person is, how they are represented, how they move and evolve in the video,” says Dr Richard Nock, machine learning expert with our Data61 team.

    “So if you ask the machine to make a new sequence of this person, the machine is going to be able to automatically generate a new one.”

    “The piece of technology is almost always the same, which is where the name ‘deepfake’ comes from,” says Dr Nock. “It’s usually deep learning, a subset of machine learning, used to ask the machine to forge a new reality.”

    Let’s go… deeper

    As a result, deepfakes have been described as one of the contributing factors of the Infocalypse, a term used to label the age of cybercriminals, digital misinformation, clickbait and data misuse. As the technology behind AI-generated videos improves, it is becoming increasingly difficult for audiences to distinguish fact from fiction.

    Creating a convincing deepfake is an unlikely feat for the general computer user. But an individual with advanced knowledge of machine learning, the specific software needed to digitally alter content, and access to the victim’s publicly available photos, videos and audio on social media could do so.

    However, face-morphing apps built on automated AI and machine learning are becoming more advanced, so deepfake creation may eventually be within reach of the general population.

    One example is Snapchat’s gender swap filter: a free download is all it takes for a user to appear as someone else, with the filter completely altering the user’s appearance.

    There have been numerous instances of catfishing (fabricating an online identity to trick others into exploitative emotional or romantic relationships) via online dating apps using the technology. Some people use the experience as a social experiment, others as a ploy to extract sensitive information.

    To deepfake or not to deepfake

    Politicians, celebrities and those in the public spotlight are the most obvious victims of deepfakes. But the habit of posting a stream of videos and selfies to public internet platforms puts everyone at risk.

    The creation of explicit images is one example of how deepfakes are being used to harass individuals online. One AI-powered app is creating images of what women might look like, according to the algorithm, unclothed.

    According to Dr Nock, an alternative effect of election deepfakery could be an online exodus: a segment of the population placing its trust only in the opinions of a closed circle of friends, whether physical or an online forum such as Reddit.

    “Once you’ve passed that breaking point and no longer trust an information source, most people would start retracting themselves. Refraining themselves from accessing public media content because it cannot be trusted anymore. And eventually relying on their friends, which can be limiting if people are more exposed to opinions rather than the facts.”


    The Obama deepfake was a viral hit, with over six million views of a video that appeared to show the former US president speaking. It brought to light the existence of deepfake technology, alongside a warning about the trust users place in online content.

    Mitigating the threat of digital deceit

    There are three ways to prevent deepfakes, according to Dr Nock:

    1. Invent a mechanism of authenticity, whether a physical stamp such as blockchain or branding, to confirm that the information is from a trusted source and that the video depicts something that actually happened.
    2. Train machine learning to detect deepfakes created by other machines.
    3. Ensure these mechanisms are widely adopted by different information sources; without broad adoption they cannot succeed.

    “Blockchain could work – if carefully crafted – but a watermark component would probably not,” explains Dr Nock. “Changing the format of an original document would eventually alter the watermark, while the document would obviously stay original. This would not happen with blockchain.”

    Machine learning is already detecting deepfakes. Researchers from UC Berkeley and the University of Southern California are using it to recognise an individual’s unique head and face movements. These subtle personal quirks are currently not modeled by deepfake algorithms, and the technique returns a 92 per cent level of accuracy.
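    A toy version of this idea, assuming head-and-face movement features have already been extracted from each video elsewhere (the file names and model here are illustrative, not the researchers’ actual system):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        # Hypothetical per-video features describing head and face motion.
        features = np.load("headpose_features.npy")  # shape (n_videos, n_features)
        labels = np.load("real_or_fake.npy")         # 1 = genuine, 0 = deepfake

        X_train, X_test, y_train, y_test = train_test_split(
            features, labels, test_size=0.2, random_state=0)
        clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
        print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))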

    While this research is comforting, bad actors will inevitably continue to reinvent and adapt AI-generated fakes.

    Machine learning is a powerful technology. And one that’s becoming more sophisticated over time. Deepfakes aside, machine learning is also bringing enormous positive benefits to areas like privacy, healthcare, transport and even self-driving cars.

    Our Data61 team acts as a network, partnering with government, industry and universities to advance AI technologies in many areas of society and industry, such as adversarial machine learning, cybersecurity and data protection, and rich data-driven insights.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    SKA/ASKAP radio telescope at the Murchison Radio-astronomy Observatory (MRO) in Mid West region of Western Australia

    So what can we expect these new radio projects to discover? We have no idea, but history tells us that they are almost certain to deliver some major surprises.

    Making these new discoveries may not be so simple. Gone are the days when astronomers could just notice something odd as they browse their tables and graphs.

    Nowadays, astronomers are more likely to be distilling their answers from carefully-posed queries to databases containing petabytes of data. Human brains are just not up to the job of making unexpected discoveries in these circumstances, and instead we will need to develop “learning machines” to help us discover the unexpected.

    With the right tools and careful insight, who knows what we might find.

    CSIRO campus

    CSIRO, the Commonwealth Scientific and Industrial Research Organisation, is Australia’s national science agency and one of the largest and most diverse research agencies in the world.

     
  • richardmitnick 8:17 am on July 26, 2019 Permalink | Reply
    Tags: , , , Machine learning,   

    From National Geographic: “How artificial intelligence can tackle climate change” 

    National Geographic

    From National Geographic

    July 18, 2019
    Jackie Snow

    1
    Steam and smoke rise from the cooling towers and chimneys of a power plant. Artificial intelligence is being used to make the case that plants burning carbon-based fuels aren’t profitable. natgeo.com

    The biggest challenge on the planet might benefit from machine learning to help with solutions. Here are just a few.

    Climate change is the biggest challenge facing the planet. It will need every solution possible, including technology like artificial intelligence (AI).

    Seeing a chance to help the cause, some of the biggest names in AI and machine learning—a discipline within the field—recently published a paper called Tackling Climate Change with Machine Learning. The paper, which was discussed at a workshop during a major AI conference in June, was a “call to arms” to bring researchers together, said David Rolnick, a University of Pennsylvania postdoctoral fellow and one of the authors.

    “It’s surprising how many problems machine learning can meaningfully contribute to,” says Rolnick, who also helped organize the June workshop.

    The paper offers up 13 areas where machine learning can be deployed, including energy production, CO2 removal, education, solar geoengineering, and finance. Within these fields, the possibilities include more energy-efficient buildings, creating new low-carbon materials, better monitoring of deforestation, and greener transportation. However, despite the potential, Rolnick points out that these are early days and AI can’t solve everything.

    “AI is not a silver bullet,” he says.

    And though it might not be a perfect solution, it is bringing new insights into the problem. Here are three ways machine learning can help combat climate change.

    Better climate predictions

    This push builds on the work already done by climate informatics, a discipline created in 2011 that sits at the intersection of data science and climate science. Climate informatics covers a range of topics: improving the prediction of extreme events such as hurricanes; paleoclimatology, such as reconstructing past climate conditions using data collected from ice cores; climate downscaling, which uses large-scale models to predict weather on a hyper-local level; and the socio-economic impacts of weather and climate.

    AI can also unlock new insights from the massive amounts of complex climate simulations generated by the field of climate modeling, which has come a long way since the first system was created at Princeton in the 1960s. Dozens of models have since come into existence, all representing the atmosphere, oceans, land and cryosphere (ice). But even with agreement on basic scientific assumptions, Claire Monteleoni, a computer science professor at the University of Colorado, Boulder and a co-founder of climate informatics, points out that while the models generally agree in the short term, differences emerge when it comes to long-term forecasts.

    “There’s a lot of uncertainty,” Monteleoni said. “They don’t even agree on how precipitation will change in the future.”

    One project Monteleoni worked on uses machine learning algorithms to combine the predictions of the approximately 30 climate models used by the Intergovernmental Panel on Climate Change. Better predictions can help officials make informed climate policy, allow governments to prepare for change, and potentially uncover areas that could reverse some effects of climate change.
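    One simple way to combine many models, sketched below, is an online weighted average that downweights models whose recent forecasts missed the observations (a multiplicative-weights scheme; this illustrates the general idea and is not necessarily the algorithm Monteleoni’s project used):

        import numpy as np

        def combine_models(predictions, observations, eta=0.5):
            """predictions: (timesteps, n_models); observations: (timesteps,)."""
            n_models = predictions.shape[1]
            weights = np.ones(n_models) / n_models
            combined = []
            for preds, obs in zip(predictions, observations):
                combined.append(weights @ preds)   # weighted ensemble forecast
                losses = (preds - obs) ** 2        # penalise each model's error
                weights *= np.exp(-eta * losses)   # downweight poor performers
                weights /= weights.sum()
            return np.array(combined)

    Over time the ensemble leans on whichever models have been tracking reality best, rather than treating all 30 equally.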

    Showing the effects of extreme weather

    Some homeowners have already experienced the effects of a changing environment. For others, it might seem less tangible. To make it more realistic for more people, researchers from Montreal Institute for Learning Algorithms (MILA), Microsoft, and ConscientAI Labs used GANs, a type of AI, to simulate what homes are likely to look like after being damaged by rising sea levels and more intense storms.

    “Our goal is not to convince people climate change is real, it’s to get people who do believe it is real to do more about that,” said Victor Schmidt, a co-author of the paper and Ph.D. candidate at MILA.

    So far, MILA researchers have met with Montreal city officials and NGOs eager to use the tool. Future plans include releasing an app to show individuals what their neighborhoods and homes might look like in the future with different climate change outcomes. But the app will need more data, and Schmidt said they eventually want to let people upload photos of floods and forest fires to improve the algorithm.

    “We want to empower these communities to help,” he said.

    Measuring where carbon is coming from

    Carbon Tracker is an independent financial think-tank working toward the UN goal of preventing new coal plants from being built by 2020. By monitoring coal plant emissions with satellite imagery, Carbon Tracker can use the data it gathers to convince the finance industry that carbon plants aren’t profitable.

    A grant from Google is expanding the nonprofit’s satellite imagery efforts to include gas-powered plants’ emissions and get a better sense of where air pollution is coming from. While there are continuous monitoring systems near power plants that can measure CO2 emissions more directly, they do not have global reach.

    “This can be used worldwide in places that aren’t monitoring,” said Durand D’souza, a data scientist at Carbon Tracker. “And we don’t have to ask permission.”

    AI can automate the analysis of images of power plants to get regular updates on emissions. It also introduces new ways to measure a plant’s impact, by crunching data on nearby infrastructure and electricity use. That’s handy for gas-powered plants, which don’t have the easy-to-measure plumes that coal-powered plants have.
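    As a back-of-the-envelope sketch of how image-level detections might become an emissions estimate (all numbers below are placeholder assumptions, not Carbon Tracker’s actual methodology):

        # Per-satellite-pass output of an image classifier: 1 = plant running.
        detections = [1, 1, 0, 1, 0, 0, 1, 1]

        capacity_mw = 600        # assumed nameplate capacity of the plant
        tco2_per_mwh = 1.0       # rough emission factor for coal generation

        capacity_factor = sum(detections) / len(detections)
        annual_mwh = capacity_mw * 8760 * capacity_factor  # 8,760 hours/year
        print(f"estimated emissions: {annual_mwh * tco2_per_mwh:,.0f} tCO2/year")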

    Carbon Tracker will now crunch emissions for 4,000 to 5,000 power plants, getting much more information than is currently available, and make it public. In the future, if a carbon tax passes, Carbon Tracker’s remote sensing could help put a price on emissions and pinpoint those responsible for them.

    “Machine learning is going to help a lot in this field,” D’souza said.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The National Geographic Society has been inspiring people to care about the planet since 1888. It is one of the largest nonprofit scientific and educational institutions in the world. Its interests include geography, archaeology and natural science, and the promotion of environmental and historical conservation.

     
  • richardmitnick 2:14 pm on July 3, 2019 Permalink | Reply
    Tags: "With Little Training, An algorithm called Word2vec, , Machine learning, Machine-Learning Algorithms Can Uncover Hidden Scientific Knowledge", The project was motivated by the difficulty making sense of the overwhelming amount of published studies, The team collected the 3.3 million abstracts from papers published in more than 1000 journals between 1922 and 2018.   

    From Lawrence Berkeley National Lab: “With Little Training, Machine-Learning Algorithms Can Uncover Hidden Scientific Knowledge” 

    Berkeley Logo

    From Lawrence Berkeley National Lab

    July 3, 2019
    Julie Chao
    jhchao@lbl.gov
    (510) 486-6491

    Berkeley Lab study finds that text mining of scientific literature can lead to new discoveries.

    1
    (From left) Berkeley Lab researchers Vahe Tshitoyan, Anubhav Jain, Leigh Weston, and John Dagdelen used machine learning to analyze 3.3 million abstracts from materials science papers. (Credit: Marilyn Chung/Berkeley Lab)

    Sure, computers can be used to play grandmaster-level chess, but can they make scientific discoveries? Researchers at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory have shown that an algorithm with no training in materials science can scan the text of millions of papers and uncover new scientific knowledge.

    A team led by Anubhav Jain, a scientist in Berkeley Lab’s Energy Storage & Distributed Resources Division, collected 3.3 million abstracts of published materials science papers and fed them into an algorithm called Word2vec. By analyzing relationships between words, the algorithm was able to predict discoveries of new thermoelectric materials years in advance and suggest as-yet unknown materials as candidates for thermoelectric materials.

    2
    Berkeley Lab researchers found that text mining of materials science abstracts could turn up novel thermoelectric materials. (Credit: Berkeley Lab)

    “Without telling it anything about materials science, it learned concepts like the periodic table and the crystal structure of metals,” said Jain. “That hinted at the potential of the technique. But probably the most interesting thing we figured out is, you can use this algorithm to address gaps in materials research, things that people should study but haven’t studied so far.”

    The findings were published July 3 in the journal Nature. The lead author of the study, “Unsupervised Word Embeddings Capture Latent Knowledge from Materials Science Literature,” is Vahe Tshitoyan, a Berkeley Lab postdoctoral fellow now working at Google. Along with Jain, Berkeley Lab scientists Kristin Persson and Gerbrand Ceder helped lead the study.

    “The paper establishes that text mining of scientific literature can uncover hidden knowledge, and that pure text-based extraction can establish basic scientific knowledge,” said Ceder, who also has an appointment at UC Berkeley’s Department of Materials Science and Engineering.

    Tshitoyan said the project was motivated by the difficulty of making sense of the overwhelming amount of published studies. “In every research field there’s 100 years of past research literature, and every week dozens more studies come out,” he said. “A researcher can access only a fraction of that. We thought, can machine learning do something to make use of all this collective knowledge in an unsupervised manner – without needing guidance from human researchers?”

    ‘King – queen + man = ?’

    The team collected the 3.3 million abstracts from papers published in more than 1,000 journals between 1922 and 2018. Word2vec took each of the approximately 500,000 distinct words in those abstracts and turned each into a 200-dimensional vector, or an array of 200 numbers.

    “What’s important is not each number, but using the numbers to see how words are related to one another,” said Jain, who leads a group working on discovery and design of new materials for energy applications using a mix of theory, computation, and data mining. “For example you can subtract vectors using standard vector math. Other researchers have shown that if you train the algorithm on nonscientific text sources and take the vector that results from ‘king minus queen,’ you get the same result as ‘man minus woman.’ It figures out the relationship without you telling it anything.”

    Similarly, when trained on materials science text, the algorithm was able to learn the meaning of scientific terms and concepts such as the crystal structure of metals based simply on the positions of the words in the abstracts and their co-occurrence with other words. For example, just as it could solve the equation “king – queen + man,” it could figure out that for the equation “ferromagnetic – NiFe + IrMn” the answer would be “antiferromagnetic.”
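    With gensim’s Word2vec interface, the analogy arithmetic looks like this (the model path is a hypothetical placeholder, and the materials-science queries assume a model trained on the abstracts rather than on everyday text):

        from gensim.models import Word2Vec

        model = Word2Vec.load("abstracts_word2vec.model")  # hypothetical path

        # king - queen ≈ man - woman, rearranged as king - queen + woman ≈ man
        print(model.wv.most_similar(positive=["king", "woman"],
                                    negative=["queen"], topn=1))

        # ferromagnetic - NiFe + IrMn ≈ antiferromagnetic
        print(model.wv.most_similar(positive=["ferromagnetic", "IrMn"],
                                    negative=["NiFe"], topn=1))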

    Word2vec was even able to learn the relationships between elements on the periodic table when the vector for each chemical element was projected onto two dimensions.

    3
    Mendeleev’s periodic table is on the right. Word2vec’s representation of the elements, projected onto two dimensions, is on the left. (Credit: Berkeley Lab)

    Predicting discoveries years in advance

    So if Word2vec is so smart, could it predict novel thermoelectric materials? A good thermoelectric material can efficiently convert heat to electricity and is made of materials that are safe, abundant and easy to produce.

    The Berkeley Lab team took the top thermoelectric candidates suggested by the algorithm, which ranked each compound by the similarity of its word vector to that of the word “thermoelectric.” Then they ran calculations to verify the algorithm’s predictions.
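    The ranking step amounts to a cosine-similarity query against the word “thermoelectric”. A minimal sketch, with a hand-picked candidate list standing in for the full screen over compound names (model path again hypothetical):

        from gensim.models import Word2Vec

        model = Word2Vec.load("abstracts_word2vec.model")  # hypothetical path

        # Illustrative candidates; the real screen ran over every compound
        # name in the model's vocabulary.
        candidates = ["Bi2Te3", "PbTe", "SnSe"]
        ranked = sorted(candidates,
                        key=lambda w: model.wv.similarity(w, "thermoelectric"),
                        reverse=True)
        for compound in ranked:
            print(compound, model.wv.similarity(compound, "thermoelectric"))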

    Of the top 10 predictions, they found all had computed power factors slightly higher than the average of known thermoelectrics; the top three candidates had power factors at above the 95th percentile of known thermoelectrics.

    Next, they tested whether the algorithm could perform experiments “in the past” by giving it abstracts only up to, say, the year 2000. Again, a significant number of the top predictions turned up in later studies – four times more than if materials had just been chosen at random. For example, three of the top five predictions trained using data up to the year 2008 have since been discovered, and the remaining two contain rare or toxic elements.

    The results were surprising. “I honestly didn’t expect the algorithm to be so predictive of future results,” Jain said. “I had thought maybe the algorithm could be descriptive of what people had done before but not come up with these different connections. I was pretty surprised when I saw not only the predictions but also the reasoning behind the predictions, things like the half-Heusler structure, which is a really hot crystal structure for thermoelectrics these days.”

    He added: “This study shows that if this algorithm were in place earlier, some materials could have conceivably been discovered years in advance.” Along with the study the researchers are releasing the top 50 thermoelectric materials predicted by the algorithm. They’ll also be releasing the word embeddings needed for people to make their own applications if they want to search on, say, a better topological insulator material.

    Up next, Jain said the team is working on a smarter, more powerful search engine, allowing researchers to search abstracts in a more useful way.

    The study was funded by Toyota Research Institute. Other study co-authors are Berkeley Lab researchers John Dagdelen, Leigh Weston, Alexander Dunn, and Ziqin Rong, and UC Berkeley researcher Olga Kononova.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Bringing Science Solutions to the World

    In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

    Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

    Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

    A U.S. Department of Energy National Laboratory Operated by the University of California.

    University of California Seal

    DOE Seal

     