Tagged: AI

  • richardmitnick 9:42 am on December 9, 2019
    Tags: "Stanford-led snapshot of artificial intelligence reveals challenges", AI, , , , Stanford’s ongoing 100-year study on artificial intelligence known as the AI100.   

    From Stanford University: “Stanford-led snapshot of artificial intelligence reveals challenges” 


    November 26, 2019
    Lara Streiff

    A periodic review of the artificial intelligence industry revealed the potential pitfalls of outsourcing our problems for technology to solve rather than addressing the causes, and of allowing outdated predictive modeling to go unchecked.

    A Stanford-led artificial intelligence index called the AI100 periodically assesses the state of AI technology and makes predictions for the next century. (Image credit: Tricia Seibold)

    As part of Stanford’s ongoing 100-year study on artificial intelligence, known as the AI100, two workshops recently considered the issues of care technologies and predictive modeling to inform the future development of AI technologies.

    “We are now seeing a particular emphasis on the humanities and how they interact with AI,” said Russ Altman, Stanford professor of engineering and the faculty director of the AI100. The AI100 is a project of the Stanford Institute for Human-Centered Artificial Intelligence.

    After the first meeting of the AI100, the group planned to reconvene every five years to discuss the status of the AI industry. The idea was that reports from those meetings would capture the excitement and concerns regarding AI technologies at that time, make predictions for the next century and serve as a resource for policymakers and industry stakeholders shaping the future of AI in society.

    But the technology is moving faster than expected, and the organizers of the AI100 felt there were issues to discuss prior to the next scheduled session. The reports that resulted from those workshops paint a picture of the potential pitfalls of outsourcing our problems for technology to solve rather than addressing the causes, or allowing outdated predictive modeling to go unchecked. Together, they provide an intermediate snapshot that could guide discussions at the next full meeting, said Altman.

    “The reports capture the cyclical nature of public views and attitudes toward AI,” said Peter Stone, professor of computer science at the University of Texas at Austin, who served as study panel chair for the last report and is now chair of the standing committee. “There are times of hype and excitement with AI, and there are times of disappointment and disillusionment – we call these AI winters.”

    This longitudinal study aims to encapsulate all the ups and downs – creating a long-term view of artificial intelligence.

    Alexa doesn’t care about you

    Although artificial intelligence is widespread in healthcare apps, participants in the workshop debating AI’s capacity to care concluded that care itself isn’t something that can be encoded in technology. Based on that, they recommend that new technologies be integrated into existing human-to-human care relationships.

    “Care is not a problem to be solved; it is a fundamental part of living as humans,” said Fay Niker, a philosophy lecturer at the University of Stirling, and chair of the Coding Caring workshop. “The idea of a technical fix for something like loneliness, for example, is baffling.”

    The workshop participants frame care technologies as tools to supplement human care relationships, like those between a caregiver and a care-receiver. Technology can certainly give reminders to take medication or track health information, but it is limited in its ability to display empathy or provide emotional support, which cannot be commodified or reduced to outcome-oriented tasks.

    “We worry that meaningful human interaction could be frozen out by tech,” said Niker. “The hope is that the AI2020 report, and other work in this area, will contribute to preventing this ‘ice age’ by challenging and hence changing the culture and debate around the design and implementation of caring technologies in our societies.”

    Regulating predictive technologies

    AI technologies may be capable of learning, but they are not immune to becoming outdated, prompting participants in the second workshop to introduce the concept of “expiration dates” to govern their deployment over time. “They train on data from the past to predict the future,” said Altman. “Things change in any field, so you need to do an update or a reevaluation.”

    “It means we have to pay attention to the new data,” said David Robinson, a visiting scientist from Cornell’s College of Computing and Information Science, and one of the workshop organizers. Unless otherwise informed, the algorithm will blindly assume that the world has not changed, and will provide results without integrating newly introduced factors.
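
    Altman’s “expiration date” idea maps onto a simple engineering pattern: keep scoring the frozen model on fresh labeled data and retire it once performance drifts past a tolerance. A minimal Python sketch of that pattern follows; the window size, tolerance, and sklearn-style predict interface are illustrative assumptions, not anything prescribed by the workshop.

    ```python
    from collections import deque

    class ExpiringModel:
        """Wrap a trained model with a performance-based 'expiration date'."""

        def __init__(self, model, baseline_accuracy, window=500, tolerance=0.05):
            self.model = model
            self.baseline = baseline_accuracy   # accuracy measured at deployment time
            self.recent = deque(maxlen=window)  # rolling record of recent correctness
            self.tolerance = tolerance          # allowed drop before the model "expires"

        def predict(self, x):
            return self.model.predict([x])[0]

        def record_outcome(self, x, true_label):
            """Once ground truth arrives, log whether the model got it right."""
            self.recent.append(self.predict(x) == true_label)

        @property
        def expired(self):
            """True when accuracy on fresh data has drifted below the baseline."""
            if len(self.recent) < self.recent.maxlen:
                return False  # not enough new evidence yet
            recent_accuracy = sum(self.recent) / len(self.recent)
            return recent_accuracy < self.baseline - self.tolerance
    ```

    An expired model would then trigger the “update or reevaluation” Altman describes, rather than silently continuing to score cases.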

    Important decisions can hinge on these technologies, including risk assessment in the criminal justice system and screenings by child protection services. But Robinson stressed that it is the net combination of the algorithm results and the interpretation from those using the technology that results in a final decision. There should be as much scrutiny on the information that the AI is providing as there is on the users who are interpreting the algorithm’s results.

    Both workshops came to the conclusion that regulation is needed for AI technology, according to Altman, which should come as no surprise to those attuned to popular culture references of the field. Whether the industry can self-regulate, or what other entities should oversee the progress in the field, is still in question.

    Participants and organizers alike feel that the AI100 has a role to play in the future of AI technologies. “I hope that it really helps educate people and the general public on how they can and should interact with AI,” said Stone. Perhaps even more importantly, the outcomes from the AI reports can be referenced by those policymakers and industry insiders, shaping how these technologies are developed.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Stanford University campus.

    Stanford University

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.


     
  • richardmitnick 12:52 pm on December 6, 2019
    Tags: "Forecasting Volcanic Eruptions with Artificial Intelligence", AI, , ,   

    From Eos: “Forecasting Volcanic Eruptions with Artificial Intelligence” 


    3 December 2019
    Emily Underwood

    A machine learning algorithm automatically detects telltale signs of volcanic unrest.

    Ash from Sierra Negra, a volcano on Isla Isabela in the Galápagos Islands, drifts across the sky during an October 2005 eruption. Researchers used satellite data leading up to a 2018 eruption at Sierra Negra to test an algorithm designed to detect signals that indicate potential volcanic unrest. Credit: NASA image created by Jesse Allen, Earth Observatory, using data obtained courtesy of the MODIS Rapid Response Team


    Most of the roughly 1,400 active volcanoes around the world, including many in the United States, do not have on-site observatories. Lacking ground-level data, scientists are turning to satellites to keep tabs on volcanoes from space. Now using artificial intelligence, scientists have created a new satellite-based method of detecting warning signs of when a volcano is likely to erupt.

    Gaddes et al. took advantage of satellites that carry instruments equipped to collect imagery using interferometric synthetic aperture radar (InSAR), which can detect centimeter-scale deformations of Earth’s surface. Every time one of the satellites passes over a given volcano—typically once every 12 days—it can capture an InSAR image of the volcano from which ground movement away from or toward the satellite can be calculated.

    InSAR can often pick up the ominous expansion of the ground that occurs when magma moves within a volcano’s plumbing, but it is difficult to continuously monitor the huge number of images produced by the latest generation of SAR-equipped satellites. In addition, some volcanoes exhibit long-lasting deformation that poses no immediate threat, and new images must be compared with older ones to determine whether a deformation at a volcano is a warning sign or just business as usual.

    To solve these issues, the researchers turned to machine learning, a form of artificial intelligence that can glean subtle patterns in vast quantities of data. They developed an algorithm that can rapidly analyze InSAR data, compare current deformation to past activity, and automatically create an alert when a volcano’s unrest may be cause for concern.
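
    The published method does considerably more (it separates the interferogram time series into component signals before deciding what is new), but the alerting idea can be caricatured in a few lines: compare each new deformation rate against the volcano’s own “business as usual” history and flag departures. A toy sketch, with all thresholds invented for illustration:

    ```python
    import numpy as np

    def deformation_alerts(los_displacement_mm, baseline_epochs=30, z_threshold=3.0):
        """Flag epochs where line-of-sight deformation departs from past behavior.

        los_displacement_mm: 1-D array, one cumulative displacement value per
        satellite pass (typically one every ~12 days).
        """
        alerts = []
        rates = np.diff(los_displacement_mm)  # per-epoch deformation rate
        for t in range(baseline_epochs, len(rates)):
            history = rates[:t]               # "business as usual" for this volcano
            z = (rates[t] - history.mean()) / (history.std() + 1e-9)
            if z > z_threshold:
                alerts.append(t)              # unusually fast inflation: flag it
        return alerts
    ```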

    To test the algorithm’s viability, the team applied it to real data from the period leading up to the 2018 eruption of Sierra Negra, a volcano in the Galápagos Islands. The algorithm worked, flagging an increase in the ground’s inflation that began about a year before the eruption. Had the method been available at the time, the team writes, it would have accurately alerted researchers that Sierra Negra was likely to erupt. (Journal of Geophysical Research: Solid Earth)

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

     
  • richardmitnick 1:04 pm on December 4, 2019
    Tags: "A new algorithm trains AI to avoid bad behaviors", AI, , “Seldonian algorithm”,   

    From Stanford University Engineering: “A new algorithm trains AI to avoid bad behaviors” 


    November 22, 2019
    Tom Abate

    In a society where AI powers important decision-making, how do we minimize undesirable outcomes? | Illustration by Sarah Rieke

    Artificial intelligence has moved into the commercial mainstream thanks to the growing prowess of machine learning algorithms that enable computers to train themselves to do things like drive cars, control robots or automate decision-making.

    But as AI starts handling sensitive tasks, such as helping pick which prisoners get bail, policymakers are insisting that computer scientists offer assurances that automated systems have been designed to minimize, if not completely avoid, unwanted outcomes such as excessive risk or racial and gender bias.

    A team led by researchers at Stanford and the University of Massachusetts Amherst published a paper Nov. 22 in Science suggesting how to provide such assurances. The paper outlines a new technique that translates a fuzzy goal, such as avoiding gender bias, into the precise mathematical criteria that would allow a machine-learning algorithm to train an AI application to avoid that behavior.

    “We want to advance AI that respects the values of its human users and justifies the trust we place in autonomous systems,” said Emma Brunskill, an assistant professor of computer science at Stanford and senior author of the paper.

    Avoiding misbehavior

    The work is premised on the notion that if “unsafe” or “unfair” outcomes or behaviors can be defined mathematically, then it should be possible to create algorithms that can learn from data how to avoid these unwanted results with high confidence. The researchers also wanted to develop a set of techniques that would make it easy for users to specify what sorts of unwanted behavior they want to constrain, and to enable machine learning designers to predict with confidence that a system trained on past data can be relied upon when it is applied in real-world circumstances.

    “We show how the designers of machine learning algorithms can make it easier for people who want to build AI into their products and services to describe unwanted outcomes or behaviors that the AI system will avoid with high probability,” said Philip Thomas, an assistant professor of computer science at the University of Massachusetts Amherst and first author of the paper.
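
    Structurally, a Seldonian algorithm proposes a candidate solution on one slice of the data and only returns it if a high-confidence bound, computed on held-out safety data, shows the unwanted behavior staying under the user’s limit; otherwise it returns no solution at all. The sketch below mimics that structure with a Hoeffding bound, which is one simple way to obtain such a guarantee (the paper itself uses tighter bounds); the function names are illustrative:

    ```python
    import numpy as np

    def safety_test(unwanted_behavior_indicators, limit, delta=0.05):
        """Return True only if, with probability at least 1 - delta, the expected
        rate of the unwanted behavior is below `limit` (Hoeffding upper bound).

        unwanted_behavior_indicators: per-example 0/1 flags of the behavior,
        measured on held-out safety data the candidate was NOT trained on.
        """
        g = np.asarray(unwanted_behavior_indicators, dtype=float)
        n = len(g)
        upper_bound = g.mean() + np.sqrt(np.log(1.0 / delta) / (2.0 * n))
        return upper_bound <= limit

    def seldonian_train(train_fn, candidate_data, safety_data, behavior_fn, limit):
        """Candidate selection + safety test; refuse to deploy if the test fails."""
        candidate = train_fn(candidate_data)
        flags = [behavior_fn(candidate, example) for example in safety_data]
        if safety_test(flags, limit):
            return candidate
        return None  # "No Solution Found" -- safer than returning a risky model
    ```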

    Fairness and safety

    The researchers tested their approach by trying to improve the fairness of algorithms that predict GPAs of college students based on exam results, a common practice that can result in gender bias. Using an experimental dataset, they gave their algorithm mathematical instructions to avoid developing a predictive method that systematically overestimated or underestimated GPAs for one gender. With these instructions, the algorithm identified a better way to predict student GPAs with much less systematic gender bias than existing methods. Prior methods struggled in this regard either because they had no fairness filter built-in or because algorithms developed to achieve fairness were too limited in scope.

    The group developed another algorithm and used it to balance safety and performance in an automated insulin pump. Such pumps must decide how big or small a dose of insulin to give a patient at mealtimes. Ideally, the pump delivers just enough insulin to keep blood sugar levels steady. Too little insulin allows blood sugar levels to rise, leading to short-term discomforts such as nausea and an elevated risk of long-term complications, including cardiovascular disease. Too much and blood sugar crashes — a potentially deadly outcome.

    Machine learning can help by identifying subtle patterns in an individual’s blood sugar responses to doses, but existing methods don’t make it easy for doctors to specify outcomes that automated dosing algorithms should avoid, like low blood sugar crashes. Using a blood glucose simulator, Brunskill and Thomas showed how pumps could be trained to identify dosing tailored for that person — avoiding complications from over- or under-dosing. Though the group isn’t ready to test this algorithm on real people, it points to an AI approach that might eventually improve quality of life for diabetics.

    In their Science paper, Brunskill and Thomas use the term “Seldonian algorithm” to define their approach, a reference to Hari Seldon, a character invented by science fiction author Isaac Asimov, whose three laws of robotics begin with the injunction that “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

    While acknowledging that the field is still far from guaranteeing the three laws, Thomas said this Seldonian framework will make it easier for machine learning designers to build behavior-avoidance instructions into all sorts of algorithms, in a way that can enable them to assess the probability that trained systems will function properly in the real world.

    Brunskill said this proposed framework builds on the efforts that many computer scientists are making to strike a balance between creating powerful algorithms and developing methods to ensure their trustworthiness.

    “Thinking about how we can create algorithms that best respect values like safety and fairness is essential as society increasingly relies on AI,” Brunskill said.

    The paper also had co-authors from the University of Massachusetts Amherst and the Universidade Federal do Rio Grande do Sul.

    This work was supported in part by Adobe, the National Science Foundation and the Institute of Education Sciences.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Stanford Engineering has been at the forefront of innovation for nearly a century, creating pivotal technologies that have transformed the worlds of information technology, communications, health care, energy, business and beyond.

    The school’s faculty, students and alumni have established thousands of companies and laid the technological and business foundations for Silicon Valley. Today, the school educates leaders who will make an impact on global problems and seeks to define what the future of engineering will look like.
    Mission

    Our mission is to seek solutions to important global problems and educate leaders who will make the world a better place by using the power of engineering principles, techniques and systems. We believe it is essential to educate engineers who possess not only deep technical excellence, but the creativity, cultural awareness and entrepreneurial skills that come from exposure to the liberal arts, business, medicine and other disciplines that are an integral part of the Stanford experience.

    Our key goals are to:

    Conduct curiosity-driven and problem-driven research that generates new knowledge and produces discoveries that provide the foundations for future engineered systems
    Deliver world-class, research-based education to students and broad-based training to leaders in academia, industry and society
    Drive technology transfer to Silicon Valley and beyond with deeply and broadly educated people and transformative ideas that will improve our society and our world.

    The Future of Engineering

    The engineering school of the future will look very different from what it looks like today. So, in 2015, we brought together a wide range of stakeholders, including mid-career faculty, students and staff, to address two fundamental questions: In what areas can the School of Engineering make significant world‐changing impact, and how should the school be configured to address the major opportunities and challenges of the future?

    One key output of the process is a set of 10 broad, aspirational questions on areas where the School of Engineering would like to have an impact in 20 years. The committee also returned with a series of recommendations that outlined actions across three key areas — research, education and culture — where the school can deploy resources and create the conditions for Stanford Engineering to have significant impact on those challenges.

    Stanford University

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.


     
  • richardmitnick 8:17 am on November 25, 2019
    Tags: "Stanford and UMass Amherst develop algorithms that train AI to avoid specific misbehaviors", AI, , ,   

    From Stanford University: “Stanford, UMass Amherst develop algorithms that train AI to avoid specific misbehaviors” 


    November 21, 2019
    Tom Abate

    Robots, self-driving cars and other intelligent machines could become better-behaved thanks to a new way to help machine learning designers build AI applications with safeguards against specific, undesirable outcomes such as racial and gender bias.


    As robots, self-driving cars and other intelligent machines weave AI into everyday life, a new way of designing algorithms can help machine-learning developers build in safeguards against specific, undesirable outcomes like racial and gender bias, to help earn societal trust. (Credit: Deboki Chakravarti)

    Artificial intelligence has moved into the commercial mainstream thanks to the growing prowess of machine learning algorithms that enable computers to train themselves to do things like drive cars, control robots or automate decision-making.

    But as AI starts handling sensitive tasks, such as helping pick which prisoners get bail, policymakers are insisting that computer scientists offer assurances that automated systems have been designed to minimize, if not completely avoid, unwanted outcomes such as excessive risk or racial and gender bias.

    A team led by researchers at Stanford and the University of Massachusetts Amherst published a paper Nov. 22 in Science suggesting how to provide such assurances.

    The paper outlines a new technique that translates a fuzzy goal, such as avoiding gender bias, into the precise mathematical criteria that would allow a machine-learning algorithm to train an AI application to avoid that behavior.

    “We want to advance AI that respects the values of its human users and justifies the trust we place in autonomous systems,” said Emma Brunskill, an assistant professor of computer science at Stanford and senior author of the paper.

    Avoiding misbehavior

    The work is premised on the notion that if “unsafe” or “unfair” outcomes or behaviors can be defined mathematically, then it should be possible to create algorithms that can learn from data how to avoid these unwanted results with high confidence. The researchers also wanted to develop a set of techniques that would make it easy for users to specify what sorts of unwanted behavior they want to constrain, and to enable machine learning designers to predict with confidence that a system trained on past data can be relied upon when it is applied in real-world circumstances.

    “We show how the designers of machine learning algorithms can make it easier for people who want to build AI into their products and services to describe unwanted outcomes or behaviors that the AI system will avoid with high probability,” said Philip Thomas, an assistant professor of computer science at the University of Massachusetts Amherst and first author of the paper.

    Fairness and safety

    The researchers tested their approach by trying to improve the fairness of algorithms that predict GPAs of college students based on exam results, a common practice that can result in gender bias. Using an experimental dataset, they gave their algorithm mathematical instructions to avoid developing a predictive method that systematically overestimated or underestimated GPAs for one gender. With these instructions, the algorithm identified a better way to predict student GPAs with much less systematic gender bias than existing methods. Prior methods struggled in this regard either because they had no fairness filter built-in or because algorithms developed to achieve fairness were too limited in scope.
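
    The fairness notion used in the GPA experiment, systematic over- or underestimation for one gender, reduces to a concrete statistic: the gap in mean signed prediction error between groups. A toy check of that statistic (the data layout and the tolerance are assumptions for illustration, not the paper’s exact constraint):

    ```python
    import numpy as np

    def gender_overestimate_gap(y_true, y_pred, is_female):
        """Difference in mean signed error between the two groups.

        A positive mean error for a group means the model systematically
        overestimates that group's GPA; the gap is what a fairness
        constraint would bound.
        """
        err = np.asarray(y_pred) - np.asarray(y_true)
        mask = np.asarray(is_female, dtype=bool)
        return err[mask].mean() - err[~mask].mean()

    # Example: flag a model whose gap exceeds a user-chosen tolerance.
    # gap = gender_overestimate_gap(gpa, model.predict(X), is_female)
    # acceptable = abs(gap) < 0.05   # tolerance in GPA points (illustrative)
    ```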

    The group developed another algorithm and used it to balance safety and performance in an automated insulin pump. Such pumps must decide how big or small a dose of insulin to give a patient at mealtimes. Ideally, the pump delivers just enough insulin to keep blood sugar levels steady. Too little insulin allows blood sugar levels to rise, leading to short-term discomforts such as nausea and an elevated risk of long-term complications, including cardiovascular disease. Too much and blood sugar crashes – a potentially deadly outcome.

    Machine learning can help by identifying subtle patterns in an individual’s blood sugar responses to doses, but existing methods don’t make it easy for doctors to specify outcomes that automated dosing algorithms should avoid, like low blood sugar crashes. Using a blood glucose simulator, Brunskill and Thomas showed how pumps could be trained to identify dosing tailored for that person – avoiding complications from over- or under-dosing. Though the group isn’t ready to test this algorithm on real people, it points to an AI approach that might eventually improve quality of life for diabetics.

    In their Science paper, Brunskill and Thomas use the term “Seldonian algorithm” to define their approach, a reference to Hari Seldon, a character invented by science fiction author Isaac Asimov, whose three laws of robotics begin with the injunction that “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

    While acknowledging that the field is still far from guaranteeing the three laws, Thomas said this Seldonian framework will make it easier for machine learning designers to build behavior-avoidance instructions into all sorts of algorithms, in a way that can enable them to assess the probability that trained systems will function properly in the real world.

    Brunskill said this proposed framework builds on the efforts that many computer scientists are making to strike a balance between creating powerful algorithms and developing methods to ensure their trustworthiness.

    “Thinking about how we can create algorithms that best respect values like safety and fairness is essential as society increasingly relies on AI,” Brunskill said.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Stanford University campus.

    Stanford University

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.


     
  • richardmitnick 1:29 pm on November 8, 2019
    Tags: "Machine Learning Enhances Light-Beam Performance at the Advanced Light Source", AI, And little tweaks to enhance light-beam properties at these individual beamlines can feed back into the overall light-beam performance across the entire facility., , , Environmental science, , Many of these synchrotron facilities deliver different types of light for dozens of simultaneous experiments., , STXM-scanning transmission X-ray microscopy, ,   

    From Lawrence Berkeley National Lab: “Machine Learning Enhances Light-Beam Performance at the Advanced Light Source” 


    November 8, 2019
    Glenn Roberts Jr.

    Successful demonstration of algorithm by Berkeley Lab-UC Berkeley team shows technique could be viable for scientific light sources around the globe.

    Some members of the team that developed the machine-learning tool for the Advanced Light Source (ALS) are pictured in the ALS control room. Top row, from left: Changchun Sun, Simon Leemann, and Alex Hexemer. Bottom row, from left: Hiroshi Nishimura, C. Nathan Melton, and Yuping Lu. (Credit: Marilyn Chung/Berkeley Lab)

    Synchrotron light sources are powerful facilities that produce light in a variety of “colors,” or wavelengths – from the infrared to X-rays – by accelerating electrons to emit light in controlled beams.

    Synchrotrons like the Advanced Light Source at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) allow scientists to explore samples in a variety of ways using this light, in fields ranging from materials science, biology, and chemistry to physics and environmental science.


    This image shows the profile of an electron beam at Berkeley Lab’s Advanced Light Source synchrotron, represented as pixels measured by a charge-coupled device (CCD) sensor. When stabilized by a machine-learning algorithm, the beam has a horizontal size dimension of 49 microns (root mean squared) and a vertical size dimension of 48 microns (root mean squared). Demanding experiments require that the corresponding light-beam size be stable on time scales ranging from less than seconds to hours to ensure reliable data. (Credit: Lawrence Berkeley National Laboratory)

    Researchers have found ways to upgrade these machines to produce more intense, focused, and consistent light beams that enable new, and more complex and detailed studies across a broad range of sample types.

    But some light-beam properties still exhibit fluctuations in performance that present challenges for certain experiments.

    Addressing a decades-old problem

    Many of these synchrotron facilities deliver different types of light for dozens of simultaneous experiments. And little tweaks to enhance light-beam properties at these individual beamlines can feed back into the overall light-beam performance across the entire facility. Synchrotron designers and operators have wrestled for decades with a variety of approaches to compensate for the most stubborn of these fluctuations.

    And now, a large team of researchers at Berkeley Lab and UC Berkeley has successfully demonstrated how machine-learning tools can improve the stability of the light beams’ size for experiments via adjustments that largely cancel out these fluctuations – reducing them from a level of a few percent down to 0.4 percent, with submicron (below 1 millionth of a meter) precision.

    The tools are detailed in a study published Nov. 6 in the journal Physical Review Letters.

    This chart shows how vertical beam-size stability greatly improves when a neural network is implemented during Advanced Light Source operations. When the so-called “feed-forward” correction is implemented, the fluctuations in the vertical beam size are stabilized down to the sub-percent level (see yellow-highlighted section) from levels that otherwise range to several percent. (Credit: Lawrence Berkeley National Laboratory)

    Machine learning is a form of artificial intelligence in which computer systems analyze a set of data to build predictive programs that solve complex problems. The machine-learning algorithms used at the ALS are referred to as a form of “neural network” because they are designed to recognize patterns in the data in a way that loosely resembles human brain functions.

    In this study, researchers fed electron-beam data from the ALS, which included the positions of the magnetic devices used to produce light from the electron beam, into the neural network. The neural network recognized patterns in this data and identified how different device parameters affected the width of the electron beam. The machine-learning algorithm also recommended adjustments to the magnets to optimize the electron beam.

    Because the size of the electron beam mirrors the resulting light beam produced by the magnets, the algorithm also optimized the light beam that is used to study material properties at the ALS.
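
    Stripped of accelerator detail, the learning task described here is supervised regression: the inputs are the settings of the magnetic insertion devices around the ring (the article later cites roughly 35 parameters), and the target is the measured vertical beam size. A minimal stand-in follows; the team’s actual architecture and features are not given in the article, so treat this purely as the shape of the problem.

    ```python
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # X: one row per time step, ~35 columns of insertion-device settings
    # y: measured vertical beam size (microns) at the same time step
    def train_beam_size_model(X, y):
        model = make_pipeline(
            StandardScaler(),  # device settings come in mixed physical units
            MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000),
        )
        model.fit(X, y)
        return model  # model.predict(X_now) estimates the uncorrected source size
    ```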

    Solution could have global impact

    The successful demonstration at the ALS shows how the technique could also generally be applied to other light sources, and will be especially beneficial for specialized studies enabled by an upgrade of the ALS known as the ALS-U project.

    “That’s the beauty of this,” said Hiroshi Nishimura, a Berkeley Lab affiliate who retired last year and had engaged in early discussions and explorations of a machine-learning solution to the longstanding light-beam size-stability problem. “Whatever the accelerator is, and whatever the conventional solution is, this solution can be on top of that.”

    Steve Kevan, ALS director, said, “This is a very important advance for the ALS and ALS-U. For several years we’ve had trouble with artifacts in the images from our X-ray microscopes. This study presents a new feed-forward approach based on machine learning, and it has largely solved the problem.”

    The ALS-U project will increase the narrow focus of light beams from a level of around 100 microns down to below 10 microns and also create a higher demand for consistent, reliable light-beam properties.

    An exterior view of the Advanced Light Source dome that houses dozens of beamlines. (Credit: Roy Kaltschmidt/Berkeley Lab)

    The machine-learning technique builds upon conventional solutions that have been improved over the decades since the ALS started up in 1993, and which rely on constant adjustments to magnets along the ALS ring that compensate in real time for adjustments at individual beamlines.

    Nishimura, who had been a part of the team that brought the ALS online more than 25 years ago, said he began to study the potential application of machine-learning tools for accelerator applications about four or five years ago. His conversations extended to experts in computing and accelerators at Berkeley Lab and at UC Berkeley, and the concept began to gel about two years ago.

    Successful testing during ALS operations

    Researchers successfully tested the algorithm at two different sites around the ALS ring earlier this year. They alerted ALS users conducting experiments about the testing of the new algorithm, and asked them to give feedback on any unexpected performance issues.

    “We had consistent tests in user operations from April to June this year,” said C. Nathan Melton, a postdoctoral fellow at the ALS who joined the machine-learning team in 2018 and worked closely with Shuai Liu, a former UC Berkeley graduate student who contributed considerably to the effort and is a co-author of the study.

    Simon Leemann, deputy for Accelerator Operations and Development at the ALS and the principal investigator in the machine-learning effort, said, “We didn’t have any negative feedback to the testing. One of the monitoring beamlines the team used is a diagnostic beamline that constantly measures accelerator performance, and another was a beamline where experiments were actively running.” Alex Hexemer, a senior scientist at the ALS and program lead for computing, served as the co-lead in developing the new tool.

    The beamline with the active experiments, Beamline 5.3.2.2, uses a technique known as scanning transmission X-ray microscopy or STXM, and scientists there reported improved light-beam performance in experiments.

    The machine-learning team noted that the enhanced light-beam performance is also well-suited for advanced X-ray techniques such as ptychography, which can resolve the structure of samples down to the level of nanometers (billionths of a meter); and X-ray photon correlation spectroscopy, or XPCS, which is useful for studying rapid changes in highly concentrated materials that don’t have a uniform structure.

    Other experiments that demand a reliable, highly focused light beam of constant intensity where it interacts with the sample can also benefit from the machine-learning enhancement, Leemann noted.

    “Experiments’ requirements are getting tougher, with smaller-area scans on samples,” he said. “We have to find new ways for correcting these imperfections.”

    He noted that the core problem that the light-source community has wrestled with – and that the machine-learning tools address – is the fluctuating vertical electron beam size at the source point of the beamline.

    The source point is the point where the electron beam at the light source emits the light that travels to a specific beamline’s experiment. While the electron beam’s width at this point is naturally stable, its height (or vertical source size) can fluctuate.

    Opening the ‘black box’ of artificial intelligence

    “This is a very nice example of team science,” Leemann said, noting that the effort overcame some initial skepticism about the viability of machine learning for enhancing accelerator performance, and opened up the “black box” of how such tools can produce real benefits.

    “This is not a tool that has traditionally been a part of the accelerator community. We managed to bring people from two different communities together to fix a really tough problem.” About 15 Berkeley Lab researchers participated in the effort.

    “Machine learning fundamentally requires two things: The problem needs to be reproducible, and you need huge amounts of data,” Leemann said. “We realized we could put all of our data to use and have an algorithm recognize patterns.”

    The data showed the little blips in electron-beam performance as adjustments were made at individual beamlines, and the algorithm found a way to tune the electron beam so that it negated this impact better than conventional methods could.

    “The problem consists of roughly 35 parameters – way too complex for us to figure out ourselves,” Leemann said. “What the neural network did once it was trained – it gave us a prediction for what would happen for the source size in the machine if it did nothing at all to correct it.

    “There is an additional parameter in this model that describes how the changes we make in a certain type of magnet affects that source size. So all we then have to do is choose the parameter that – according to this neural-network prediction – results in the beam size we want to create and apply that to the machine,” Leemann added.

    The algorithm-directed system can now make corrections at a rate of up to 10 times per second, though three times a second appears to be adequate for improving performance at this stage, Leemann said.
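
    Put together, those two pieces, a prediction of the uncorrected source size plus a learned coefficient for how a corrector magnet changes that size, yield a straightforward feed-forward loop. The sketch below is hypothetical: the control-system calls and the sensitivity coefficient stand in for interfaces the article does not describe, and the model could be a regressor like the one sketched above.

    ```python
    import time

    TARGET_SIZE_UM = 48.0  # desired vertical source size (value from the study's figure)
    RATE_HZ = 3            # the article reports ~3 corrections/second is adequate

    def feed_forward_loop(model, read_device_settings, apply_corrector, sensitivity):
        """Hypothetical control loop: predict, solve for the correction, apply.

        `read_device_settings` and `apply_corrector` are placeholders for the
        real control-system interfaces; `sensitivity` is the learned coefficient
        for how the corrector magnet changes the source size (microns per unit).
        """
        while True:
            settings = read_device_settings()
            predicted = model.predict([settings])[0]   # size if we did nothing
            correction = (TARGET_SIZE_UM - predicted) / sensitivity
            apply_corrector(correction)                # cancel the predicted drift
            time.sleep(1.0 / RATE_HZ)
    ```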

    The search for new machine-learning applications

    The machine-learning team received two years of funding from the U.S. Department of Energy in August 2018 to pursue this and other machine-learning projects in collaboration with the Stanford Synchrotron Radiation Lightsource at SLAC National Accelerator Laboratory. “We have plans to keep developing this and we also have a couple of new machine-learning ideas we’d like to try out,” Leemann said.


    Nishimura said that the buzzword “artificial intelligence” seems to have trended in and out of the research community for many years, though “this time it finally seems to be something real.”

    The Advanced Light Source and Stanford Synchrotron Radiation Lightsource are DOE Office of Science User Facilities. This work involved researchers in Berkeley Lab’s Computational Research Division and was supported by the Department of Energy’s Basic Energy Sciences and Advanced Scientific Computing Research programs.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition


    Bringing Science Solutions to the World
    In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

    Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

    Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

    A U.S. Department of Energy National Laboratory Operated by the University of California.


     
  • richardmitnick 8:23 am on October 14, 2019
    Tags: "Machine learning helps UW meet “always-on” wireless connectivity", AI, , Problems with wireless connectivity,   

    From University of Washington: "Machine learning helps UW meet 'always-on' wireless connectivity"


    September 26, 2019
    Ignacio Lobos


    When a biology lecturer noticed Poll Everywhere, a classroom response app, was failing to accept some of his students’ answers, he knew he had a serious problem.

    To find out what was happening, he sought help from UW-IT and Academic Technologies. They leveraged machine learning, analytics and data-driven insights to pinpoint an issue with the wireless connectivity and fix the problem.

    For David Morton, director of UW-IT’s Network & Telecommunications Design, this particular glitch represented something larger: “Our students have much higher expectations of technology: it just needs to work all the time.”

    After all, their grades can depend on “always-on network connectivity,” he explained.

    However, it is not just students who need secure and dependable wireless networks. Faculty and staff are increasingly relying on complex applications and smart devices.

    “Maintaining reliable communications is critical to everything we’re doing,” Morton said. “So, we’re leveraging machine learning to improve our systems, and in turn improving the classroom experience for students and faculty.”

    Artificial intelligence keeps Wi-Fi humming along

    Morton’s team uses Aruba NetInsight, a cloud-based system that employs artificial intelligence, to help track the health of the UW wireless network. The system analyzes the entire network, identifies performance problems in real time, and offers recommendations on how to fix them. As it tracks performance at the UW — and at 11 other major universities that also use the application — it learns as it amasses useful data that helps all institutions with critical decisions, such as where to expand Wi-Fi.

    The glitch in the biology lecturer’s classroom was indeed complex — when a wireless connection went down, it automatically switched some students to another connection, leaving their wireless devices in limbo as the switch took place, and their answers unrecorded.

    “It would have taken us countless hours of engineering sleuthing to track the problem and create a solution to prevent it from happening again,” Morton said. “But with machine learning, we zeroed in on the issues much faster.”
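
    The correlation that exposed this glitch can be illustrated with ordinary log analysis: join client roaming events against failed submissions and keep the failures that land inside a roam window. This generic pandas sketch is not Aruba NetInsight’s actual pipeline, and the log schema is invented for illustration:

    ```python
    import pandas as pd

    def failures_during_roams(roams: pd.DataFrame, failures: pd.DataFrame,
                              limbo_seconds=10):
        """Find submission failures that land inside a client's roam window.

        roams:    columns [client, roam_time]  -- AP-to-AP transition events
        failures: columns [client, fail_time]  -- unrecorded answer submissions
        """
        merged = failures.merge(roams, on="client")
        in_limbo = (
            (merged.fail_time >= merged.roam_time)
            & (merged.fail_time <= merged.roam_time
               + pd.Timedelta(seconds=limbo_seconds))
        )
        return merged[in_limbo]  # failures plausibly caused by the roam gap
    ```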

    See the full article here.



    Please help promote STEM in your local schools.

    STEM Education Coalition

    The University of Washington is one of the world’s preeminent public universities. Our impact on individuals, on our region, and on the world is profound — whether we are launching young people into a boundless future or confronting the grand challenges of our time through undaunted research and scholarship. Ranked number 10 in the world in Shanghai Jiao Tong University rankings and educating more than 54,000 students annually, our students and faculty work together to turn ideas into impact and in the process transform lives and our world.
    So what defines us — the students, faculty and community members at the University of Washington? Above all, it’s our belief in possibility and our unshakable optimism. It’s a connection to others, both near and far. It’s a hunger that pushes us to tackle challenges and pursue progress. It’s the conviction that together we can create a world of good. Join us on the journey.

     
  • richardmitnick 10:10 am on July 30, 2019
    Tags: "Human intelligence is the key to the Artificial Intelligence age", AI, AI is the collection of interrelated technologies such as natural language processing; speech recognition; computer vision; machine learning and automated reasoning., Encouraging Australians to embrace emerging technology, Giving machines the ability to perform tasks and solve problems otherwise requiring human cognition.,   

    From University of New South Wales: “Human intelligence is the key to the Artificial Intelligence age” 


    30 Jul 2019

    Louise Templeton
    Corporate Communications
    02-9385 0857
    LOUISE.TEMPLETON@UNSW.EDU.AU

    Artificial Intelligence (AI) can enhance Australia’s wellbeing, lift the economy, improve environmental sustainability and create a more inclusive and fair society.

    A new report highlights how the nation would benefit from AI.

    A report from the Australian Council of Learned Academies (ACOLA), titled: The Effective and Ethical Development of Artificial Intelligence – An Opportunity to Improve our Wellbeing, encourages Australians to embrace emerging technology.

    The panel, co-chaired by UNSW Sydney Professor Toby Walsh, urges Australians to reflect on what AI-enabled future the nation wants, as the future impact of AI on our society will be ultimately determined by decisions taken today.

    AI is the collection of interrelated technologies, such as natural language processing, speech recognition, computer vision, machine learning and automated reasoning, that gives machines the ability to perform tasks and solve problems that would otherwise require human cognition.

    “With careful planning, AI offers great opportunities for Australia, provided we ensure that the use of the technology does not compromise our human values. As a nation, we should look to set the global example for the responsible adoption of AI,” Professor Walsh said.

    Launching the report, Australia’s Chief Scientist Dr Alan Finkel emphasized that nations had choices.

    “This report was commissioned by the National Science and Technology Council, to develop an intellectual context for our human society to turn to in deciding what living well in this new era will mean,” Dr Finkel said.

    “What kind of society do we want to be? That is the crucial question for all Australians, and for governments as our elected representatives.”

    The findings recognize the importance of having a national strategy, a community awareness campaign, safe and accessible digital infrastructure, a responsive regulatory system, and a diverse and highly skilled workforce.

    “By bringing together Australia’s leading experts from the sciences, technology and engineering, humanities, arts and social sciences, this ACOLA report comprehensively examines the key issues arising from the development and implementation of AI technologies, and importantly places the wellbeing of society at the centre of any development,” Professor Hugh Bradlow, Chair of the ACOLA Board, said.

    ACOLA’s report is the fourth in the Horizon Scanning series, each scoping the human implications of fast-evolving technologies in the decade ahead.
    The project was supported by the Australian Research Council; the Department of Industry, Innovation and Science; and the Department of Prime Minister and Cabinet.

    ACOLA’s expert working group:
    Professor Toby Walsh FAA (co-chair), Professor Neil Levy FAHA (co-chair), Professor Genevieve Bell FTSE, Professor Anthony Elliott FASSA, Professor Fiona Wood AM FAHMS, Professor James Maclaurin, Professor Iven Mareels FTSE.

    See the full article here.



    Please help promote STEM in your local schools.

    STEM Education Coalition


    Welcome to UNSW Australia (The University of New South Wales), one of Australia’s leading research and teaching universities. At UNSW, we take pride in the broad range and high quality of our teaching programs. Our teaching gains strength and currency from our research activities, strong industry links and our international nature; UNSW has a strong regional and global engagement.

    In developing new ideas and promoting lasting knowledge we are creating an academic environment where outstanding students and scholars from around the world can be inspired to excel in their programs of study and research. Partnerships with both local and global communities allow UNSW to share knowledge, debate and research outcomes. UNSW’s public events include concert performances, open days and public forums on issues such as the environment, healthcare and global politics. We encourage you to explore the UNSW website so you can find out more about what we do.

     
  • richardmitnick 8:17 am on July 26, 2019
    Tags: AI

    From National Geographic: "How artificial intelligence can tackle climate change"


    July 18, 2019
    Jackie Snow

    Steam and smoke rise from the cooling towers and chimneys of a power plant. Artificial intelligence is being used to prove the case that plants that burn carbon-based fuels aren’t profitable. natgeo.com

    The biggest challenge on the planet might benefit from machine learning to help with solutions. Here are just a few.

    Climate change is the biggest challenge facing the planet. It will need every solution possible, including technology like artificial intelligence (AI).

    Seeing a chance to help the cause, some of the biggest names in AI and machine learning—a discipline within the field—recently published a paper called Tackling Climate Change with Machine Learning. The paper, which was discussed at a workshop during a major AI conference in June, was a “call to arms” to bring researchers together, said David Rolnick, a University of Pennsylvania postdoctoral fellow and one of the authors.

    “It’s surprising how many problems machine learning can meaningfully contribute to,” says Rolnick, who also helped organize the June workshop.

    The paper offers up 13 areas where machine learning can be deployed, including energy production, CO2 removal, education, solar geoengineering, and finance. Within these fields, the possibilities include more energy-efficient buildings, creating new low-carbon materials, better monitoring of deforestation, and greener transportation. However, despite the potential, Rolnick points out that this is early days and AI can’t solve everything.

    “AI is not a silver bullet,” he says.

    And though it might not be a perfect solution, it is bringing new insights into the problem. Here are three ways machine learning can help combat climate change.

    Better climate predictions

    This push builds on the work already done by climate informatics, a discipline created in 2011 that sits at the intersection of data science and climate science. Climate informatics covers a range of topics: improving prediction of extreme events such as hurricanes; paleoclimatology, such as reconstructing past climate conditions using data collected from things like ice cores; climate downscaling, or using large-scale models to predict weather on a hyper-local level; and the socio-economic impacts of weather and climate.

    AI can also unlock new insights from the massive amounts of complex climate simulations generated by the field of climate modeling, which has come a long way since the first system was created at Princeton in the 1960s. Of the dozens of models that have since come into existence, all represent the atmosphere, oceans, land and cryosphere, or ice. But even with agreement on basic scientific assumptions, Claire Monteleoni, a computer science professor at the University of Colorado Boulder and a co-founder of climate informatics, points out that while the models generally agree in the short term, differences emerge when it comes to long-term forecasts.

    “There’s a lot of uncertainty,” Monteleoni said. “They don’t even agree on how precipitation will change in the future.”

    One project Monteleoni worked on uses machine learning algorithms to combine the predictions of the approximately 30 climate models used by the Intergovernmental Panel on Climate Change. Better predictions can help officials make informed climate policy, allow governments to prepare for change, and potentially uncover areas that could reverse some effects of climate change.
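
    In the spirit of the “learning with expert advice” methods used in this line of work, one simple way to combine roughly 30 model predictions is to keep a weight per model and shift weight toward whichever models have tracked recent observations well. A bare-bones multiplicative-weights sketch (the learning rate and squared loss are illustrative choices, not the project’s exact algorithm):

    ```python
    import numpy as np

    def ensemble_forecast(model_predictions, observations, eta=0.5):
        """Online weighted combination of climate-model predictions.

        model_predictions: array of shape (T, M) -- M models, T time steps
        observations:      array of shape (T,)   -- what actually happened
        Returns the ensemble prediction made at each step (before seeing truth).
        """
        T, M = model_predictions.shape
        weights = np.ones(M) / M
        combined = np.empty(T)
        for t in range(T):
            combined[t] = weights @ model_predictions[t]  # weighted forecast
            losses = (model_predictions[t] - observations[t]) ** 2
            weights *= np.exp(-eta * losses)              # demote bad models
            weights /= weights.sum()
        return combined
    ```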

    Showing the effects of extreme weather

    Some homeowners have already experienced the effects of a changing environment. For others, it might seem less tangible. To make it more realistic for more people, researchers from the Montreal Institute for Learning Algorithms (MILA), Microsoft, and ConscientAI Labs used generative adversarial networks (GANs), a type of AI, to simulate what homes are likely to look like after being damaged by rising sea levels and more intense storms.

    “Our goal is not to convince people climate change is real, it’s to get people who do believe it is real to do more about that,” said Victor Schmidt, a co-author of the paper and Ph.D. candidate at MILA.

    So far, MILA researchers have met with Montreal city officials and NGOs eager to use the tool. Future plans include releasing an app to show individuals what their neighborhoods and homes might look like in the future with different climate change outcomes. But the app will need more data, and Schmidt said they eventually want to let people upload photos of floods and forest fires to improve the algorithm.

    “We want to empower these communities to help,” he said.
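
    At the core of a GAN sit two networks trained against each other: a generator that edits a photo toward “flooded,” and a discriminator that tries to distinguish generated flood images from real ones. A heavily condensed PyTorch skeleton of that adversarial loop follows; the MILA project’s actual image-to-image models are far larger, so treat every layer and hyperparameter here as schematic.

    ```python
    import torch
    import torch.nn as nn

    G = nn.Sequential(  # toy "generator": maps a photo to an edited photo
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
    )
    D = nn.Sequential(  # toy "discriminator": real flood photo vs. generated
        nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
        nn.Flatten(), nn.LazyLinear(1),
    )
    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

    def train_step(house_photos, real_flood_photos):
        fake = G(house_photos)
        # Discriminator: label real flood photos 1, generated ones 0.
        d_loss = (bce(D(real_flood_photos), torch.ones(len(real_flood_photos), 1))
                  + bce(D(fake.detach()), torch.zeros(len(fake), 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator: fool the discriminator into scoring fakes as real.
        g_loss = bce(D(fake), torch.ones(len(fake), 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    ```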

    Measuring where carbon is coming from

    Carbon Tracker is an independent financial think-tank working toward the UN goal of preventing new coal plants from being built by 2020. By monitoring coal plant emissions with satellite imagery, Carbon Tracker can use the data it gathers to convince the finance industry that carbon plants aren’t profitable.

    A grant from Google is expanding the nonprofit’s satellite imagery efforts to include gas-powered plants’ emissions and get a better sense of where air pollution is coming from. While there are continuous monitoring systems near power plants that can measure CO2 emissions more directly, they do not have global reach.

    “This can be used worldwide in places that aren’t monitoring,” said Durand D’souza, a data scientist at Carbon Tracker. “And we don’t have to ask permission.”

    AI can automate the analysis of power plant imagery to provide regular updates on emissions. It also introduces new ways to measure a plant's impact, by crunching numbers on nearby infrastructure and electricity use. That's handy for gas-powered plants, which lack the easy-to-measure plumes of their coal-powered counterparts.
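    As a rough illustration of that kind of automated image analysis (a hypothetical sketch, not Carbon Tracker's pipeline), a convolutional network can be trained to regress an emissions proxy directly from satellite tiles of a plant and its surroundings:

    import torch
    import torch.nn as nn

    # Hypothetical inputs: 64x64 RGB satellite tiles centered on power plants,
    # each labeled with an emissions proxy (placeholder random data below).
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
        nn.Linear(64, 1),   # predicted emissions proxy for the tile
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    tiles = torch.randn(8, 3, 64, 64)   # stand-in for real imagery
    labels = torch.rand(8, 1) * 100     # stand-in emissions labels

    for epoch in range(10):
        loss = loss_fn(model(tiles), labels)
        optimizer.zero_grad(); loss.backward(); optimizer.step()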

    Carbon Tracker will now crunch emissions data for 4,000 to 5,000 power plants, gathering far more information than is currently available, and make it public. In the future, if a carbon tax passes, Carbon Tracker's remote sensing could help put a price on emissions and pinpoint those responsible for them.

    “Machine learning is going to help a lot in this field,” D’souza said.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The National Geographic Society has been inspiring people to care about the planet since 1888. It is one of the largest nonprofit scientific and educational institutions in the world. Its interests include geography, archaeology and natural science, and the promotion of environmental and historical conservation.

     
  • richardmitnick 7:52 am on July 15, 2019 Permalink | Reply
    Tags: AI, , , MTU-Michigan Technological University,   

    From Michigan Technological University: “AI Searches for New Nanomaterials” 

    Michigan Tech bloc

    From Michigan Technological University

    By Jenny Woodman
    Computing And Data Science
    Science And Engineering

    The researchers:

    Yoke Khin Yap
    ykyap@mtu.edu
    906-487-2900

    Susanta Ghosh
    susantag@mtu.edu
    906-487-2689

    1
    Michigan Technological University

    There’s a method to my development of new nanomaterials.

    Marilyn Monroe may have convinced previous generations that diamonds are a girl’s best friend, but in the future people may not sing the praises of sparkly carbon-based gemstones; they might be singing about new nanomaterials related to graphene and nanotubes. Thanks to a multidisciplinary collaboration using artificial intelligence (AI), physicist Yoke Khin Yap at Michigan Technological University and a team of researchers hope to develop new nanomaterials that are smaller and much stronger than diamonds.

    About Those Nanotubes…

    Nanotubes and graphene are a class of low-dimensional materials that can be composed of carbon or some combination of carbon, boron or nitrogen (B-C-N nanomaterials). The molecular bonds that join the atoms to form these nanomaterials are remarkably strong and they have wide-ranging applications, from water purification to biomedical research, from solar cells to semiconductors for computing and electronics.

    DFT, AI, and Other Helpful Acronyms

    The work happens at a nanoscale — and Yap’s team needs an atomic window. Thanks to an instrumentation grant from the National Science Foundation (NSF), Michigan Tech purchased a scanning transmission electron microscope (STEM) in 2018. According to Yap, the system “allows us to image nanomaterials at the atomic resolution, and at the same time we can touch them; we can probe them; we can characterize them in situ.”

    2
    360-Degree View of the STEM. Michigan Technological University

    But getting a material to the microscope is a long road. What if new materials could be invented before they’re seen?

    Yap’s work was inspired by Density Functional Theory (DFT), developed in the 1960s and ’70s. DFT is widely used in both academia and industry to study materials and predict their behavior at an atomic level, including structure and electronic properties. In the theory’s early days there were limitations, and DFT was not terribly accurate. According to Yap, DFT tells experimentalists important information about the properties of a new structure but offers little insight into how to make those theoretical materials a reality. He adds that DFT predictions are best suited to the nanoscale, involving 100 atoms or fewer, which makes predicting new nanomaterials the sweet spot of DFT.
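    For a flavor of what a DFT calculation looks like in practice, here is a single-point energy calculation for a hydrogen molecule using the open-source PySCF package (an illustration only, not the team's own toolchain):

    from pyscf import gto, dft

    # Build a minimal H2 molecule (bond length in angstroms).
    mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")

    # Kohn-Sham DFT with the widely used B3LYP exchange-correlation functional.
    mf = dft.RKS(mol)
    mf.xc = "b3lyp"
    energy = mf.kernel()   # total electronic energy in hartrees
    print(f"H2 ground-state energy: {energy:.4f} Ha")

    Scaling this kind of calculation up to a 100-atom candidate nanomaterial is exactly the regime Yap describes as DFT's sweet spot.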

    But AI makes the process even more interesting and sophisticated.

    Computer scientists plug in data from current published research on new materials and then compare that information using a popular type of vector-space modeling called Word Embedding. The researchers are looking for places where keywords, such as carbon nanotubes and graphene, might overlap with properties such as band gap or mobility, said Yap.

    “They use a computer to dig into all the theoretical predictions that are being published by physicists and chemists,” Yap said. “This brings together all the kinds of theory out there and we find there is a subset that potentially more people agree upon.”
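    A toy version of that literature-mining step might look like the following, using the gensim library; the four "abstracts" below are stand-ins for the thousands of published papers actually mined.

    from gensim.models import Word2Vec

    # Stand-in corpus: tokenized sentences from materials-science abstracts.
    abstracts = [
        ["carbon_nanotube", "exhibits", "high", "mobility"],
        ["graphene", "band_gap", "tunable", "by", "doping"],
        ["boron_nitride", "nanotube", "wide", "band_gap", "insulator"],
        ["graphene", "high", "mobility", "semiconductor"],
    ]

    # Learn a vector for every keyword from the contexts it appears in.
    model = Word2Vec(abstracts, vector_size=32, window=3, min_count=1, epochs=200)

    # Where do material names and property terms sit close together in the space?
    print(model.wv.similarity("graphene", "band_gap"))
    print(model.wv.most_similar("graphene", topn=3))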

    From there, the researchers take those data-mined results and feed them into a convolutional neural network (CNN), a type of deep-learning network in which the computer makes predictions by applying layer upon layer of filters, each pass revealing more patterns in the data.

    Susanta Ghosh is an assistant professor of mechanical engineering at Michigan Tech and works on new materials with Yap. He said, “The biggest challenge in the Materials by Design paradigm is the high-dimensionality of the material design space due to the vast amount of possible combinations or conditions that lead to different materials.”

    In other words, the design process is complicated by a staggeringly high number of dimensions.

    “Data-driven modeling integrated with experiments or simulations is showing tremendous promise to overcome this challenge,” Ghosh added. “Data-driven modeling, such as machine learning, is opening new possibilities for creating structure-property relations across diverse material length and time scales and enabling optimization in the microstructural space for material design.”

    The process is lengthy, and like DFT it is limited to identifying which materials are possible. Yap said further research, adding another layer of DFT modeling on the dynamics of chemical reactions, may reinforce the CNN's predictions and offer insights into how to actually fabricate the materials. He anticipates seeing new materials in the next 10 to 15 years.

    “It is complicated and it will become a very interdisciplinary collaboration between theorists, computer scientists, experimental physicists and chemists to make the new nanomaterials,” Yap said.

    This multidisciplinary approach advances existing knowledge and theory about the next generation of materials for computing, medicine, engineering and more. With AI and atomic imaging, possibilities won’t lose their shape and nanomaterials — not diamonds — are a physicist’s best friend.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Michigan Tech Campus
    Michigan Technological University (http://www.mtu.edu) is a leading public research university developing new technologies and preparing students to create the future for a prosperous and sustainable world. Michigan Tech offers more than 130 undergraduate and graduate degree programs in engineering; forest resources; computing; technology; business; economics; natural, physical and environmental sciences; arts; humanities; and social sciences.
    The College of Sciences and Arts (CSA) fills one of the most important roles on the Michigan Tech campus. We play a part in the education of every student who comes through our doors. We take pride in offering essential foundational courses in the natural sciences and mathematics, as well as the social sciences and humanities—courses that underpin every major on campus. With twelve departments, 28 majors, 30-or-so specializations, and more than 50 minors, CSA has carefully developed programs to suit many interests and skill sets. From sound design and audio technology to actuarial science, applied cognitive science and human factors to rhetoric and technical communication, the college offers many unique programs.

     
  • richardmitnick 1:06 pm on June 30, 2019 Permalink | Reply
    Tags: AI, , , ,   

    From COSMOS Magazine: “Thanks to AI, we know we can teleport qubits in the real world” 

    Cosmos Magazine bloc

    From COSMOS Magazine

    26 June 2019
    Gabriella Bernardi

    Deep learning shows its worth in the world of quantum computing.

    1
    We’re coming to terms with quantum computing, (qu)bit by (qu)bit.
    MEHAU KULYK/GETTY IMAGES

    Italian researchers have shown that it is possible to teleport a quantum bit (or qubit) in what might be called a real-world situation.

    And they did it by letting artificial intelligence do much of the thinking.

    The phenomenon of qubit transfer is not new, but this work, which was led by Enrico Prati of the Institute of Photonics and Nanotechnologies in Milan, is the first to do it in a situation where the system deviates from ideal conditions.

    Moreover, it is the first time that a class of machine-learning algorithms known as deep reinforcement learning has been applied to a quantum computing problem.

    The findings are published in a paper in the journal Communications Physics.

    One of the basic problems in quantum computing is finding a fast and reliable method to move the qubit – the basic piece of quantum information – in the machine. This piece of information is coded by a single electron that has to be moved between two positions without passing through any of the space in between.

    In the so-called “adiabatic”, or thermodynamic, quantum computing approach, this can be achieved by applying a specific sequence of laser pulses to a chain of an odd number of quantum dots – identical sites in which the electron can be placed.

    It is a purely quantum process, and a solution to the problem was invented by Nikolay Vitanov of the Helsinki Institute of Physics in 1999. Because its nature is rather distant from common-sense intuition, this solution is called a “counterintuitive” sequence.
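    A rough numerical version of that sequence is easy to write down (a toy three-site sketch under idealized assumptions, not the paper's full system). The pulse coupling the two empty dots peaks before the pulse coupling the occupied pair, and the electron ends up on dot 3 while barely occupying dot 2:

    import numpy as np
    from scipy.linalg import expm

    def pulse(t, center, width):
        return np.exp(-((t - center) / width) ** 2)   # Gaussian pulse envelope

    T, steps = 100.0, 2000
    dt = T / steps
    psi = np.array([1.0, 0.0, 0.0], dtype=complex)    # electron starts on dot 1

    for n in range(steps):
        t = n * dt
        # Counterintuitive ordering: the 2-3 coupling peaks BEFORE the 1-2 coupling.
        omega23 = pulse(t, 0.4 * T, 0.15 * T)
        omega12 = pulse(t, 0.6 * T, 0.15 * T)
        H = np.array([[0.0, omega12, 0.0],
                      [omega12, 0.0, omega23],
                      [0.0, omega23, 0.0]])
        psi = expm(-1j * H * dt) @ psi   # step the Schrodinger equation forward

    print("final populations on dots 1, 2, 3:", np.round(np.abs(psi) ** 2, 3))

    Run as written, nearly all of the population lands on dot 3; add noise to the couplings, though, and the transfer degrades, which is the non-ideal regime the Milan team targeted.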

    However, the method applies only in ideal conditions, when the electron state suffers no disturbances or perturbations.

    Thus, Prati and colleagues Riccardo Porotti and Dario Tamaschelli of the University of Milan and Marcello Restelli of the Milan Polytechnic took a different approach.

    “We decided to test deep learning’s artificial intelligence, which has already been much talked about for having defeated the world champion at the game Go, and for more serious applications such as the recognition of breast cancer, applying it to the field of quantum computers,” Prati says.

    Deep learning techniques are based on artificial neural networks arranged in different layers, each of which calculates the values for the next one so that the information is processed more and more completely.

    Usually, a set of known answers to the problem is used to “train” the network, but when these are not known, another technique called “reinforcement learning” can be used.

    In this approach two neural networks are used: an “actor” has the task of finding new solutions, and a “critic” must assess the quality of those solutions. Provided the researchers can supply a reliable way to judge the results, these two networks can examine the problem independently.
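    In skeleton form, the actor-critic pattern looks like this (a generic sketch, not the paper's exact setup; the reward function is a placeholder for simulating a pulse sequence and scoring the resulting transfer fidelity):

    import torch
    import torch.nn as nn

    state_dim, action_dim = 4, 2   # illustrative sizes for a pulse-control task

    actor = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(), nn.Linear(32, action_dim))
    critic = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(), nn.Linear(32, 1))
    opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
    opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

    def reward_fn(state, action):
        # Placeholder judge: in the real problem this would simulate the
        # proposed control pulses and return the qubit transfer fidelity.
        return -((action - state[:, :2]) ** 2).sum(dim=1, keepdim=True)

    for step in range(500):
        state = torch.randn(16, state_dim)
        dist = torch.distributions.Normal(actor(state), 0.1)  # explore around the proposal
        action = dist.sample()
        reward = reward_fn(state, action)

        value = critic(state)                  # the critic's expectation
        advantage = (reward - value).detach()  # how much better did the actor do?

        # The critic learns to predict rewards; the actor is pushed toward
        # actions that beat the critic's expectation.
        c_loss = ((reward - value) ** 2).mean()
        a_loss = -(dist.log_prob(action).sum(dim=1, keepdim=True) * advantage).mean()
        opt_c.zero_grad(); c_loss.backward(); opt_c.step()
        opt_a.zero_grad(); a_loss.backward(); opt_a.step()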

    The researchers then set up this artificial intelligence method, assigning it the task of discovering on its own how to control the qubit.

    “So, we let artificial intelligence find its own solution, without giving it preconceptions or examples,” Prati says. “It found another solution that is faster than the original one, and furthermore it adapts when there are disturbances.”

    In other words, he adds, artificial intelligence “has understood the phenomenon and generalised the result better than us”.

    “It is as if artificial intelligence was able to discover by itself how to teleport qubits regardless of the disturbance in place, even in cases where we do not already have any solution,” he explains.

    “With this work we have shown that the design and control of quantum computers can benefit from the use of artificial intelligence.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     