Tagged: Machine learning

  • richardmitnick 11:37 am on August 15, 2019
    Tags: Azure ML, Every proton collision at the Large Hadron Collider is different but only a few are special, Fermilab is the lead U.S. laboratory for the CMS experiment, Machine learning, The challenge: more data more computing power

    From Fermi National Accelerator Lab: “A glimpse into the future: accelerated computing for accelerated particles”

    FNAL Art Image by Angela Gonzales

    From Fermi National Accelerator Lab, an enduring source of strength for the US contribution to scientific research worldwide.

    August 15, 2019
    Leah Hesla

    Every proton collision at the Large Hadron Collider is different, but only a few are special. The special collisions generate particles in unusual patterns — possible manifestations of new, rule-breaking physics — or help fill in our incomplete picture of the universe.

    Finding these collisions is harder than the proverbial search for the needle in the haystack. But game-changing help is on the way. Fermilab scientists and other collaborators successfully tested a prototype machine-learning technology that speeds up processing by 30 to 175 times compared to traditional methods.

    Confronting 40 million collisions every second, scientists at the LHC use powerful, nimble computers to pluck the gems — whether it’s a Higgs particle or hints of dark matter — from the vast static of ordinary collisions.

    Rifling through simulated LHC collision data, the machine learning technology successfully learned to identify a particular postcollision pattern — a particular spray of particles flying through a detector — as it flipped through an astonishing 600 images per second. Traditional methods process less than one image per second.

    The technology could even be offered as a service on external computers. Using this offloading model would allow researchers to analyze more data more quickly and leave more LHC computing space available to do other work.

    It is a promising glimpse into how machine learning services are supporting a field in which already enormous amounts of data are only going to get bigger.

    Particles emerging from proton collisions at CERN’s Large Hadron Collider travel through this stories-high, many-layered instrument, the CMS detector. In 2026, the LHC will produce 20 times the data it does currently, and CMS is undergoing upgrades to read and process the data deluge. Photo: Maximilien Brice, CERN

    The challenge: more data, more computing power

    Researchers are currently upgrading the LHC to smash protons at five times its current rate.

    By 2026, the 17-mile circular underground machine at the European laboratory CERN will produce 20 times more data than it does now.

    LHC

    CERN map

    CERN LHC (image: Maximilien Brice and Julien Marius Ordan)

    CERN LHC particles

    The four major project collaborations: ATLAS (image: Claudia Marcelloni, CERN/ATLAS), ALICE, CMS and LHCb

    CMS is one of the particle detectors at the Large Hadron Collider, and CMS collaborators are in the midst of some upgrades of their own, enabling the intricate, stories-high instrument to take more sophisticated pictures of the LHC’s particle collisions. Fermilab is the lead U.S. laboratory for the CMS experiment.

    If LHC scientists wanted to save all the raw collision data they’d collect in a year from the High-Luminosity LHC, they’d have to find a way to store about 1 exabyte (about 1 trillion personal external hard drives), of which only a sliver may unveil new phenomena. LHC computers are programmed to select this tiny fraction, making split-second decisions about which data is valuable enough to be sent downstream for further study.

    Currently, the LHC’s computing system keeps roughly one in every 100,000 particle events. But current storage protocols won’t be able to keep up with the future data flood, which will accumulate over decades of data taking. And the higher-resolution pictures captured by the upgraded CMS detector won’t make the job any easier. It all translates into a need for more than 10 times the computing resources than the LHC has now.

    The recent prototype test shows that, with advances in machine learning and computing hardware, researchers expect to be able to winnow the data emerging from the upcoming High-Luminosity LHC when it comes online.

    “The hope here is that you can do very sophisticated things with machine learning and also do them faster,” said Nhan Tran, a Fermilab scientist on the CMS experiment and one of the leads on the recent test. “This is important, since our data will get more and more complex with upgraded detectors and busier collision environments.”

    Particle physicists are exploring the use of computers with machine learning capabilities for processing images of particle collisions at CMS, teaching them to rapidly identify various collision patterns. Image: Eamonn Maguire/Antarctic Design

    Machine learning to the rescue: the inference difference

    Machine learning in particle physics isn’t new. Physicists use machine learning for every stage of data processing in a collider experiment.

    But with machine learning technology that can chew through LHC data up to 175 times faster than traditional methods, particle physicists are ascending a game-changing step on the collision-computation course.

    The rapid rates are thanks to cleverly engineered hardware in the platform, Microsoft’s Azure ML, which speeds up a process called inference.

    To understand inference, consider an algorithm that’s been trained to recognize the image of a motorcycle: The object has two wheels and two handles that are attached to a larger metal body. The algorithm is smart enough to know that a wheelbarrow, which has similar attributes, is not a motorcycle. As the system scans new images of other two-wheeled, two-handled objects, it predicts — or infers — which are motorcycles. And as the algorithm’s prediction errors are corrected, it becomes pretty deft at identifying them. A billion scans later, it’s on its inference game.
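    For readers who want to see what the training/inference split looks like in practice, here is a minimal sketch using scikit-learn; the two toy features and the labels are invented for illustration and have nothing to do with the CMS workflow.

    # Minimal training-vs-inference sketch (illustrative data, not CMS data).
    from sklearn.neural_network import MLPClassifier
    import numpy as np

    # toy features: [number_of_wheels, has_engine]
    X_train = np.array([[2, 1], [2, 0], [4, 1], [2, 1], [1, 0]])
    y_train = np.array([1, 0, 0, 1, 0])          # 1 = motorcycle, 0 = something else

    model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)                  # training: slow, done once

    X_new = np.array([[2, 1], [4, 0]])
    print(model.predict(X_new))                  # inference: fast, repeated millions of times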

    Most machine learning platforms are built to understand how to classify images, but not physics-specific images. Physicists have to teach them the physics part, such as recognizing tracks created by the Higgs boson or searching for hints of dark matter.

    Researchers at Fermilab, CERN, MIT, the University of Washington and other collaborators trained Azure ML to identify pictures of top quarks — a short-lived elementary particle that is about 180 times heavier than a proton — from simulated CMS data. Specifically, Azure was to look for images of top quark jets, clouds of particles pulled out of the vacuum by a single top quark zinging away from the collision.
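    The collaboration’s actual networks and data pipeline are not reproduced here, but the general recipe of taking a network pretrained on everyday images and fine-tuning it on physics images looks roughly like the PyTorch sketch below; the directory layout and class names are assumptions.

    # Hedged sketch: fine-tune an ImageNet-pretrained network on two classes,
    # "top-quark jet" vs. "background", from simulated detector images.
    # The directory layout (jet_images/train/<class>/*.png) is hypothetical.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    train_set = datasets.ImageFolder("jet_images/train", transform=tfm)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    net = models.resnet18(weights="IMAGENET1K_V1")   # pretrained on ordinary photos
                                                     # (older torchvision: pretrained=True)
    net.fc = nn.Linear(net.fc.in_features, 2)        # new head: jet vs. background

    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    net.train()
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(net(images), labels)
        loss.backward()
        opt.step()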

    “We sent it the images, training it on physics data,” said Fermilab scientist Burt Holzman, a lead on the project. “And it exhibited state-of-the-art performance. It was very fast. That means we can pipeline a large number of these things. In general, these techniques are pretty good.”

    One of the techniques behind inference acceleration is to combine traditional with specialized processors, a marriage known as heterogeneous computing architecture.

    Different platforms use different architectures. The traditional processors are CPUs (central processing units). The best known specialized processors are GPUs (graphics processing units) and FPGAs (field programmable gate arrays). Azure ML combines CPUs and FPGAs.

    “The reason that these processes need to be accelerated is that these are big computations. You’re talking about 25 billion operations,” Tran said. “Fitting that onto an FPGA, mapping that on, and doing it in a reasonable amount of time is a real achievement.”

    And it’s starting to be offered as a service, too. The test was the first time anyone has demonstrated how this kind of heterogeneous, as-a-service architecture can be used for fundamental physics.

    Data from particle physics experiments are stored on computing farms like this one, the Grid Computing Center at Fermilab. Outside organizations offer their computing farms as a service to particle physics experiments, making more space available on the experiments’ servers. Photo: Reidar Hahn

    At your service

    In the computing world, using something “as a service” has a specific meaning. An outside organization provides resources — machine learning or hardware — as a service, and users — scientists — draw on those resources when needed. It’s similar to how your video streaming company provides hours of binge-watching TV as a service. You don’t need to own your own DVDs and DVD player. You use their library and interface instead.

    Data from the Large Hadron Collider is typically stored and processed on computer servers at CERN and partner institutions such as Fermilab. With machine learning offered up as easily as any other web service might be, intensive computations can be carried out anywhere the service is offered — including off site. This bolsters the labs’ capabilities with additional computing power and resources while sparing them from having to furnish their own servers.

    “The idea of doing accelerated computing has been around decades, but the traditional model was to buy a computer cluster with GPUs and install it locally at the lab,” Holzman said. “The idea of offloading the work to a farm off site with specialized hardware, providing machine learning as a service — that worked as advertised.”

    The Azure ML farm is in Virginia. It takes only 100 milliseconds for computers at Fermilab near Chicago, Illinois, to send an image of a particle event to the Azure cloud, process it, and return it. That’s a 2,500-kilometer, data-dense trip in the blink of an eye.
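    The details of the Fermilab-to-Azure plumbing are not spelled out in the article; as a rough client-side sketch, the “send the hard computation off, get the answer back” pattern is an ordinary web request. The endpoint URL, key and JSON format below are placeholders, not the real service contract.

    # Sketch of inference-as-a-service from the client side. The URL, API key and
    # JSON schema are made up; a real deployment defines its own contract.
    import requests
    import numpy as np

    ENDPOINT = "https://example-region.azurewebsites.example/score"   # placeholder URL
    API_KEY = "REPLACE_ME"

    event_image = np.random.rand(64, 64).tolist()    # stand-in for one detector image

    response = requests.post(
        ENDPOINT,
        json={"image": event_image},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=1.0,                                 # the article quotes ~100 ms round trips
    )
    response.raise_for_status()
    print(response.json())                           # e.g. {"top_quark_jet_score": 0.93}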

    “The plumbing that goes with all of that is another achievement,” Tran said. “The concept of abstracting that data as a thing you just send somewhere else, and it just comes back, was the most pleasantly surprising thing about this project. We don’t have to replace everything in our own computing center with a whole bunch of new stuff. We keep all of it, send the hard computations off and get it to come back later.”

    Scientists look forward to scaling the technology to tackle other big-data challenges at the LHC. They also plan to test other platforms, such as Amazon AWS, Google Cloud and IBM Cloud, as they explore what else can be accomplished through machine learning, which has seen rapid evolution over the past few years.

    “The models that were state-of-the-art for 2015 are standard today,” Tran said.

    As a tool, machine learning continues to give particle physics new ways of glimpsing the universe. It’s also impressive in its own right.

    “That we can take something that’s trained to discriminate between pictures of animals and people, do some modest amount of computation, and have it tell me the difference between a top quark jet and background?” Holzman said. “That’s something that blows my mind.”

    This work is supported by the DOE.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.

     
  • richardmitnick 9:33 am on August 15, 2019
    Tags: "Deepfakes: danger in the digital age", , , Infocalypse- A term used to label the age of cybercriminals digital misinformation clickbait and data misuse., Machine learning   

    From CSIROscope: “Deepfakes: danger in the digital age” 


    From CSIROscope

    15 August 2019
    Alison Donnellan

    As we dive deeper into the digital age, fake news, online deceit and the widespread use of social media are having a profound impact on every element of society, from swaying elections to manipulating scientifically proven facts.

    Deepfaking is the act of using artificial intelligence and machine learning technology to produce or alter video, image or audio content. It’s done by using the original sequence as source material to create a version of events that never occurred.

    So, what’s the deal with deepfakes?

    Once a topic only discussed in computer research labs, deepfakes were catapulted into mainstream media in 2017. This was after various online communities began swapping faces of high-profile personalities with actors in pornographic films.

    “You need a piece of machine learning to digest all of these video sequences. The machine eventually learns who the person is, how they are represented, how they move and evolve in the video,” says Dr Richard Nock, machine learning expert with our Data61 team.

    “So if you ask the machine to make a new sequence of this person, the machine is going to be able to automatically generate a new one.”

    “The piece of technology is almost always the same, which is where the name ‘deepfake’ comes from,” says Dr Nock. “It’s usually deep learning, a subset of machine learning, used to ask the machine to forge a new reality.”

    Let’s go… deeper

    As a result, deepfakes have been described as one of the contributing factors of the Infocalypse, a term used to label the age of cybercriminals, digital misinformation, clickbait and data misuse. As the technology behind the AI-generated videos improves, it is becoming increasingly difficult for audiences to distinguish fact from fiction.

    Creating a convincing deepfake is an unlikely feat for the general computer user. But an individual with advanced knowledge of machine learning, the specific software needed to digitally alter a piece of content, and access to the victim’s publicly available social media profile for photographic, video and audio content could do so.

    Now that face-morphing apps built on automated AI and machine learning are becoming more advanced, deepfake creation could become attainable for the general population in the future.

    One example of this is Snapchat’s introduction of the gender swap filter. The cost of a free download is all it takes for a Snapchat user to appear as someone else. The application’s gender swap filter completely alters the user’s appearance.

    There have been numerous instances of catfishing (fabricating an online identity to trick others into exploitative emotional or romantic relationships) via online dating apps using the technology. Some people are using the experience as a social experiment and others as a ploy to extract sensitive information.

    To deepfake or not to deepfake

    Politicians, celebrities and those in the public spotlight are the most obvious victims of deepfakes. But the rise of posting multiple videos and selfies to public internet platforms places everyone at risk.

    ‘The creation of explicit images is one example of how deepfakes are being used to harass individuals online. One AI-powered app is creating images of what women might look like, according to the algorithm, unclothed.’

    According to Dr Nock, an alternative effect of election deepfakery could be an online exodus: a segment of the population placing their trust only in the opinions of a closed circle of friends, whether that circle is physical or an online forum such as Reddit.

    “Once you’ve passed that breaking point and no longer trust an information source, most people would start retracting themselves. Refraining themselves from accessing public media content because it cannot be trusted anymore. And eventually relying on their friends, which can be limiting if people are more exposed to opinions rather than the facts.”


    The Obama deepfake was a viral hit. The video, which appears to show the US president speaking, drew more than six million views and brought to light the existence of deepfake technology alongside a warning about the trust users place in online content.

    Mitigating the threat of digital deceit

    There are three ways to prevent deepfakes, according to Dr Nock:

    1. Invent a mechanism of authenticity, whether that be a physical stamp such as blockchain or branding, to confirm that the information is from a trusted source and that the video depicts something that happened.
    2. Train machine learning to detect deepfakes created by other machines.
    3. Ensure these mechanisms are widely adopted by different information sources, as they need broad uptake in order to be successful.

    “Blockchain could work – if carefully crafted – but a watermark component would probably not,” explains Dr Nock. “Changing the format of an original document would eventually alter the watermark, while the document would obviously stay original. This would not happen with blockchain.”

    Machine learning is already detecting deepfakes. Researchers from UC Berkeley and the University of Southern California are using this method to distinguish unique head and face movements. These subtle personal quirks are currently not modeled by deepfake algorithms, and the technique returns a 92 per cent level of accuracy.
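    The Berkeley/USC method itself is not reproduced here; the sketch below only illustrates the general idea of training a classifier on per-video movement features, with invented feature names and random stand-in data (so the printed score is meaningless; it just shows the workflow).

    # Illustrative sketch: classify videos as real or deepfake from summary
    # statistics of head/face movement. Features and labels are random stand-ins.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    # hypothetical features per video: mean head tilt, blink rate, lip-sync lag, ...
    X = rng.normal(size=(500, 6))
    y = rng.integers(0, 2, size=500)             # 1 = deepfake, 0 = genuine

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))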

    While this research is comforting, bad actors will inevitably continue to reinvent and adapt AI-generated fakes.

    Machine learning is a powerful technology. And one that’s becoming more sophisticated over time. Deepfakes aside, machine learning is also bringing enormous positive benefits to areas like privacy, healthcare, transport and even self-driving cars.

    Our Data61 team acts as a network and partners with government, industry and universities to advance the technologies of AI in many areas of society and industry, such as adversarial machine learning, cybersecurity and data protection, and rich data-driven insights.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    SKA/ASKAP radio telescope at the Murchison Radio-astronomy Observatory (MRO) in the Mid West region of Western Australia

    So what can we expect these new radio projects to discover? We have no idea, but history tells us that they are almost certain to deliver some major surprises.

    Making these new discoveries may not be so simple. Gone are the days when astronomers could just notice something odd as they browse their tables and graphs.

    Nowadays, astronomers are more likely to be distilling their answers from carefully-posed queries to databases containing petabytes of data. Human brains are just not up to the job of making unexpected discoveries in these circumstances, and instead we will need to develop “learning machines” to help us discover the unexpected.

    With the right tools and careful insight, who knows what we might find.


    CSIRO, the Commonwealth Scientific and Industrial Research Organisation, is Australia’s national science agency and one of the largest and most diverse research agencies in the world.

     
  • richardmitnick 8:17 am on July 26, 2019
    Tags: Machine learning

    From National Geographic: “How artificial intelligence can tackle climate change”


    From National Geographic

    July 18, 2019
    Jackie Snow

    Steam and smoke rise from the cooling towers and chimneys of a power plant. Artificial intelligence is being used to prove the case that plants that burn carbon-based fuels aren’t profitable. natgeo.com

    The biggest challenge on the planet might benefit from machine learning to help with solutions. Here are just a few.

    Climate change is the biggest challenge facing the planet. It will need every solution possible, including technology like artificial intelligence (AI).

    Seeing a chance to help the cause, some of the biggest names in AI and machine learning—a discipline within the field—recently published a paper called Tackling Climate Change with Machine Learning. The paper, which was discussed at a workshop during a major AI conference in June, was a “call to arms” to bring researchers together, said David Rolnick, a University of Pennsylvania postdoctoral fellow and one of the authors.

    “It’s surprising how many problems machine learning can meaningfully contribute to,” says Rolnick, who also helped organize the June workshop.

    The paper offers up 13 areas where machine learning can be deployed, including energy production, CO2 removal, education, solar geoengineering, and finance. Within these fields, the possibilities include more energy-efficient buildings, creating new low-carbon materials, better monitoring of deforestation, and greener transportation. However, despite the potential, Rolnick points out that it is still early days and AI can’t solve everything.

    “AI is not a silver bullet,” he says.

    And though it might not be a perfect solution, it is bringing new insights into the problem. Here are three ways machine learning can help combat climate change.

    Better climate predictions

    This push builds on the work already done by climate informatics, a discipline created in 2011 that sits at the intersection of data science and climate science. Climate informatics covers a range of topics: improving the prediction of extreme events such as hurricanes; paleoclimatology, such as reconstructing past climate conditions from data collected from ice cores; climate downscaling, or using large-scale models to predict weather on a hyper-local level; and the socio-economic impacts of weather and climate.

    AI can also unlock new insights from the massive amounts of complex climate simulations generated by the field of climate modeling, which has come a long way since the first system was created at Princeton in the 1960s. Of the dozens of models that have since come into existence, all represent atmosphere, oceans, land, cryosphere, or ice. But, even with agreement on basic scientific assumptions, Claire Monteleoni, a computer science professor at the University of Colorado, Boulder and a co-founder of climate informatics, points out that while the models generally agree in the short term, differences emerge when it comes to long-term forecasts.

    “There’s a lot of uncertainty,” Monteleoni said. “They don’t even agree on how precipitation will change in the future.”

    One project Monteleoni worked on uses machine learning algorithms to combine the predictions of the approximately 30 climate models used by the Intergovernmental Panel on Climate Change. Better predictions can help officials make informed climate policy, allow governments to prepare for change, and potentially uncover areas that could reverse some effects of climate change.
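    Monteleoni’s algorithms are more sophisticated than this (they track which models perform best over time), but the basic move of learning a weighted combination of model outputs can be sketched as follows, with synthetic numbers standing in for model output and observations.

    # Toy sketch: learn weights that combine several climate models' predictions
    # so the blend tracks observations better than any single model. Synthetic data.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    n_months, n_models = 240, 30
    truth = np.sin(np.linspace(0, 24, n_months))              # stand-in observed anomaly
    model_preds = truth[:, None] + rng.normal(0, 0.5, (n_months, n_models))  # 30 noisy models

    blender = Ridge(alpha=1.0).fit(model_preds[:180], truth[:180])   # train on early years
    blend_error = np.abs(blender.predict(model_preds[180:]) - truth[180:]).mean()
    single_error = np.abs(model_preds[180:, 0] - truth[180:]).mean()
    print(f"one model: {single_error:.2f}, learned blend: {blend_error:.2f}")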

    Showing the effects of extreme weather

    Some homeowners have already experienced the effects of a changing environment. For others, it might seem less tangible. To make it more realistic for more people, researchers from Montreal Institute for Learning Algorithms (MILA), Microsoft, and ConscientAI Labs used GANs, a type of AI, to simulate what homes are likely to look like after being damaged by rising sea levels and more intense storms.

    “Our goal is not to convince people climate change is real, it’s to get people who do believe it is real to do more about that,” said Victor Schmidt, a co-author of the paper and Ph.D. candidate at MILA.

    So far, MILA researchers have met with Montreal city officials and NGOs eager to use the tool. Future plans include releasing an app to show individuals what their neighborhoods and homes might look like in the future with different climate change outcomes. But the app will need more data, and Schmidt said they eventually want to let people upload photos of floods and forest fires to improve the algorithm.

    “We want to empower these communities to help,” he said.

    Measuring where carbon is coming from

    Carbon Tracker is an independent financial think-tank working toward the UN goal of preventing new coal plants from being built by 2020. By monitoring coal plant emissions with satellite imagery, Carbon Tracker can use the data it gathers to convince the finance industry that carbon plants aren’t profitable.

    A grant from Google is expanding the nonprofit’s satellite imagery efforts to include gas-powered plants’ emissions and get a better sense of where air pollution is coming from. While there are continuous monitoring systems near power plants that can measure CO2 emissions more directly, they do not have global reach.

    “This can be used worldwide in places that aren’t monitoring,” said Durand D’souza, a data scientist at Carbon Tracker. “And we don’t have to ask permission.”

    AI can automate the analysis of images of power plants to get regular updates on emissions. It also introduces new ways to measure a plant’s impact, by crunching numbers of nearby infrastructure and electricity use. That’s handy for gas-powered plants that don’t have the easy-to-measure plumes that coal-powered plants have.
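    Carbon Tracker’s pipeline is not described in enough detail here to reproduce; as a rough illustration, once features such as plume extent or nearby grid activity have been pulled from imagery, estimating emissions becomes a regression problem like the one sketched below (all feature names and numbers are invented).

    # Sketch: regress a plant's emissions from image-derived and infrastructure
    # features. Feature meanings and values are hypothetical.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(2)
    # columns: plume_area_px, cooling_activity_index, nearby_grid_load, plant_capacity_mw
    X = rng.uniform(size=(1000, 4))
    y = 3.0 * X[:, 0] + 1.5 * X[:, 3] + rng.normal(0, 0.1, 1000)   # synthetic "emissions"

    model = GradientBoostingRegressor().fit(X[:800], y[:800])
    new_plant = np.array([[0.4, 0.7, 0.2, 0.9]])
    print("estimated emissions (arbitrary units):", model.predict(new_plant)[0])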

    Carbon Tracker will now crunch emissions for 4,000 to 5,000 power plants, getting much more information than is currently available, and make it public. In the future, if a carbon tax passes, Carbon Tracker’s remote sensing could help put a price on emissions and pinpoint those responsible for them.

    “Machine learning is going to help a lot in this field,” D’souza said.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The National Geographic Society has been inspiring people to care about the planet since 1888. It is one of the largest nonprofit scientific and educational institutions in the world. Its interests include geography, archaeology and natural science, and the promotion of environmental and historical conservation.

     
  • richardmitnick 2:14 pm on July 3, 2019
    Tags: "With Little Training, An algorithm called Word2vec, , Machine learning, Machine-Learning Algorithms Can Uncover Hidden Scientific Knowledge", The project was motivated by the difficulty making sense of the overwhelming amount of published studies, The team collected the 3.3 million abstracts from papers published in more than 1000 journals between 1922 and 2018.   

    From Lawrence Berkeley National Lab: “With Little Training, Machine-Learning Algorithms Can Uncover Hidden Scientific Knowledge” 


    From Lawrence Berkeley National Lab

    July 3, 2019
    Julie Chao
    jhchao@lbl.gov
    (510) 486-6491

    Berkeley Lab study finds that text mining of scientific literature can lead to new discoveries.

    (From left) Berkeley Lab researchers Vahe Tshitoyan, Anubhav Jain, Leigh Weston, and John Dagdelen used machine learning to analyze 3.3 million abstracts from materials science papers. (Credit: Marilyn Chung/Berkeley Lab)

    Sure, computers can be used to play grandmaster-level chess, but can they make scientific discoveries? Researchers at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory have shown that an algorithm with no training in materials science can scan the text of millions of papers and uncover new scientific knowledge.

    A team led by Anubhav Jain, a scientist in Berkeley Lab’s Energy Storage & Distributed Resources Division, collected 3.3 million abstracts of published materials science papers and fed them into an algorithm called Word2vec. By analyzing relationships between words the algorithm was able to predict discoveries of new thermoelectric materials years in advance and suggest as-yet unknown materials as candidates for thermoelectric materials.

    Berkeley Lab researchers found that text mining of materials science abstracts could turn up novel thermoelectric materials. (Credit: Berkeley Lab)

    “Without telling it anything about materials science, it learned concepts like the periodic table and the crystal structure of metals,” said Jain. “That hinted at the potential of the technique. But probably the most interesting thing we figured out is, you can use this algorithm to address gaps in materials research, things that people should study but haven’t studied so far.”

    The findings were published July 3 in the journal Nature. The lead author of the study, “Unsupervised Word Embeddings Capture Latent Knowledge from Materials Science Literature,” is Vahe Tshitoyan, a Berkeley Lab postdoctoral fellow now working at Google. Along with Jain, Berkeley Lab scientists Kristin Persson and Gerbrand Ceder helped lead the study.

    “The paper establishes that text mining of scientific literature can uncover hidden knowledge, and that pure text-based extraction can establish basic scientific knowledge,” said Ceder, who also has an appointment at UC Berkeley’s Department of Materials Science and Engineering.

    Tshitoyan said the project was motivated by the difficulty of making sense of the overwhelming amount of published studies. “In every research field there’s 100 years of past research literature, and every week dozens more studies come out,” he said. “A researcher can access only a fraction of that. We thought, can machine learning do something to make use of all this collective knowledge in an unsupervised manner – without needing guidance from human researchers?”

    ‘King – queen + man = ?’

    The team collected the 3.3 million abstracts from papers published in more than 1,000 journals between 1922 and 2018. Word2vec took each of the approximately 500,000 distinct words in those abstracts and turned each into a 200-dimensional vector, or an array of 200 numbers.

    “What’s important is not each number, but using the numbers to see how words are related to one another,” said Jain, who leads a group working on discovery and design of new materials for energy applications using a mix of theory, computation, and data mining. “For example you can subtract vectors using standard vector math. Other researchers have shown that if you train the algorithm on nonscientific text sources and take the vector that results from ‘king minus queen,’ you get the same result as ‘man minus woman.’ It figures out the relationship without you telling it anything.”

    Similarly, when trained on materials science text, the algorithm was able to learn the meaning of scientific terms and concepts such as the crystal structure of metals based simply on the positions of the words in the abstracts and their co-occurrence with other words. For example, just as it could solve the equation “king – queen + man,” it could figure out that for the equation “ferromagnetic – NiFe + IrMn” the answer would be “antiferromagnetic.”
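    A scaled-down sketch of this kind of training and vector arithmetic, using the gensim library on a handful of toy “abstracts” rather than the 3.3 million real ones, looks like this; results on such tiny data are meaningless and only the API is being shown.

    # Tiny sketch of Word2vec training and "A - B + C" analogy queries with gensim.
    # The corpus here is a toy stand-in, not the real abstracts.
    from gensim.models import Word2Vec   # gensim >= 4; older versions use size= instead of vector_size=

    abstracts = [
        "we study the thermoelectric power factor of bi2te3 thin films".split(),
        "nife exhibits ferromagnetic ordering at room temperature".split(),
        "irmn is a common antiferromagnetic pinning layer".split(),
    ]

    model = Word2Vec(sentences=abstracts, vector_size=200, window=5, min_count=1, sg=1)

    # analogy of the form "ferromagnetic - nife + irmn = ?"
    print(model.wv.most_similar(positive=["ferromagnetic", "irmn"],
                                negative=["nife"], topn=3))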

    Word2vec was even able to learn the relationships between elements on the periodic table when the vector for each chemical element was projected onto two dimensions.

    Mendeleev’s periodic table is on the right. Word2vec’s representation of the elements, projected onto two dimensions, is on the left. (Credit: Berkeley Lab)

    Predicting discoveries years in advance

    So if Word2vec is so smart, could it predict novel thermoelectric materials? A good thermoelectric material can efficiently convert heat to electricity and is made of materials that are safe, abundant and easy to produce.

    The Berkeley Lab team took the top thermoelectric candidates suggested by the algorithm, which ranked each compound by the similarity of its word vector to that of the word “thermoelectric.” Then they ran calculations to verify the algorithm’s predictions.
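    That ranking step reduces to a similarity lookup once a model has been trained; here is a self-contained toy version (again with a made-up corpus and a made-up candidate list) of scoring compounds against the word “thermoelectric”.

    # Toy sketch of ranking candidate materials by cosine similarity of their
    # word vectors to the vector for "thermoelectric". Corpus and names are examples.
    from gensim.models import Word2Vec

    abstracts = [
        "we study the thermoelectric power factor of bi2te3 thin films".split(),
        "snse single crystals show record thermoelectric efficiency".split(),
        "irmn is a common antiferromagnetic pinning layer".split(),
    ]
    model = Word2Vec(sentences=abstracts, vector_size=200, min_count=1, sg=1)

    candidates = ["bi2te3", "snse", "irmn"]      # thousands of formulas in practice
    scores = {c: float(model.wv.similarity(c, "thermoelectric")) for c in candidates}
    for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: {score:.3f}")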

    Of the top 10 predictions, they found all had computed power factors slightly higher than the average of known thermoelectrics; the top three candidates had power factors at above the 95th percentile of known thermoelectrics.

    Next they tested if the algorithm could perform experiments “in the past” by giving it abstracts only up to, say, the year 2000. Again, of the top predictions, a significant number turned up in later studies – four times more than if materials had just been chosen at random. For example, three of the top five predictions trained using data up to the year 2008 have since been discovered and the remaining two contain rare or toxic elements.

    The results were surprising. “I honestly didn’t expect the algorithm to be so predictive of future results,” Jain said. “I had thought maybe the algorithm could be descriptive of what people had done before but not come up with these different connections. I was pretty surprised when I saw not only the predictions but also the reasoning behind the predictions, things like the half-Heusler structure, which is a really hot crystal structure for thermoelectrics these days.”

    He added: “This study shows that if this algorithm were in place earlier, some materials could have conceivably been discovered years in advance.” Along with the study the researchers are releasing the top 50 thermoelectric materials predicted by the algorithm. They’ll also be releasing the word embeddings needed for people to make their own applications if they want to search on, say, a better topological insulator material.

    Up next, Jain said the team is working on a smarter, more powerful search engine, allowing researchers to search abstracts in a more useful way.

    The study was funded by Toyota Research Institute. Other study co-authors are Berkeley Lab researchers John Dagdelen, Leigh Weston, Alexander Dunn, and Ziqin Rong, and UC Berkeley researcher Olga Kononova.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Bringing Science Solutions to the World

    In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

    Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

    Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

    A U.S. Department of Energy National Laboratory Operated by the University of California.


     
  • richardmitnick 1:06 pm on June 30, 2019
    Tags: Machine learning

    From COSMOS Magazine: “Thanks to AI, we know we can teleport qubits in the real world” 


    From COSMOS Magazine

    26 June 2019
    Gabriella Bernardi

    Deep learning shows its worth in the world of quantum computing.

    We’re coming to terms with quantum computing, (qu)bit by (qu)bit.
    MEHAU KULYK/GETTY IMAGES

    Italian researchers have shown that it is possible to teleport a quantum bit (or qubit) in what might be called a real-world situation.

    And they did it by letting artificial intelligence do much of the thinking.

    The phenomenon of qubit transfer is not new, but this work, which was led by Enrico Prati of the Institute of Photonics and Nanotechnologies in Milan, is the first to do it in a situation where the system deviates from ideal conditions.

    Moreover, it is the first time that a class of machine-learning algorithms known as deep reinforcement learning has been applied to a quantum computing problem.

    The findings are published in a paper in the journal Communications Physics.

    One of the basic problems in quantum computing is finding a fast and reliable method to move the qubit – the basic piece of quantum information – in the machine. This piece of information is coded by a single electron that has to be moved between two positions without passing through any of the space in between.

    In the so-called “adiabatic”, or thermodynamic, quantum computing approach, this can be achieved by applying a specific sequence of laser pulses to a chain of an odd number of quantum dots – identical sites in which the electron can be placed.

    It is a purely quantum process and a solution to the problem was invented by Nikolay Vitanov of the Helsinki Institute of Physics in 1999. Given its nature, rather distant from the intuition of common sense, this solution is called a “counterintuitive” sequence.

    However, the method applies only in ideal conditions, when the electron state suffers no disturbances or perturbations.

    Thus, Prati and colleagues Riccardo Porotti and Dario Tamaschelli of the University of Milan and Marcello Restelli of the Milan Polytechnic, took a different approach.

    “We decided to test the deep learning’s artificial intelligence, which has already been much talked about for having defeated the world champion at the game Go, and for more serious applications such as the recognition of breast cancer, applying it to the field of quantum computers,” Prati says.

    Deep learning techniques are based on artificial neural networks arranged in different layers, each of which calculates the values for the next one so that the information is processed more and more completely.

    Usually, a set of known answers to the problem is used to “train” the network, but when these are not known, another technique called “reinforcement learning” can be used.

    In this approach two neural networks are used: an “actor” has the task of finding new solutions, and a “critic” must assess the quality of these solutions. Provided the researchers can supply a reliable way to judge the respective results, these two networks can examine the problem independently.
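    As a concrete, heavily simplified picture of that actor-critic structure, here is a minimal PyTorch sketch; the state vector, discrete actions and dummy reward are stand-ins for the pulse-control problem, not the researchers' implementation.

    # Minimal actor-critic sketch. The "state" and discrete "actions" are placeholders
    # for control parameters and pulse choices; everything here is illustrative.
    import torch
    import torch.nn as nn

    state_dim, n_actions = 4, 3
    actor = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(),
                          nn.Linear(32, n_actions), nn.Softmax(dim=-1))   # proposes actions
    critic = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(),
                           nn.Linear(32, 1))                              # scores how good a state is
    opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

    def choose_action(state):
        dist = torch.distributions.Categorical(actor(state))
        action = dist.sample()
        return action, dist.log_prob(action)

    def update(log_prob, state, reward, next_state, done, gamma=0.99):
        value = critic(state)
        with torch.no_grad():                                    # bootstrapped critic target
            target = reward + gamma * critic(next_state) * (1.0 - done)
        advantage = (target - value).detach()
        critic_loss = (target - value).pow(2).mean()             # critic: improve its judgement
        actor_loss = -(log_prob * advantage).sum()               # actor: reinforce well-judged actions
        opt.zero_grad(); (actor_loss + critic_loss).backward(); opt.step()

    # one made-up interaction, standing in for "apply a pulse, observe the result"
    state = torch.randn(state_dim)
    action, log_prob = choose_action(state)
    next_state, reward, done = torch.randn(state_dim), 1.0, 0.0  # pretend environment response
    update(log_prob, state, reward, next_state, done)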

    The researchers then set up this artificial intelligence method, assigning it the task of discovering on its own how to control the qubit.

    “So, we let artificial intelligence find its own solution, without giving it preconceptions or examples,” Prati says. “It found another solution that is faster than the original one, and furthermore it adapts when there are disturbances.”

    In other words, he adds, artificial intelligence “has understood the phenomenon and generalised the result better than us”.

    “It is as if artificial intelligence was able to discover by itself how to teleport qubits regardless of the disturbance in place, even in cases where we do not already have any solution,” he explains.

    “With this work we have shown that the design and control of quantum computers can benefit from the using of artificial intelligence.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 12:32 pm on May 27, 2019
    Tags: DLR's German Remote Sensing Data Center (DFD), Machine learning, the Leibniz Computer Centre (LRZ)

    From DLR German Aerospace Center: Terra_Byte – Top computing power for researching global change 


    From DLR German Aerospace Center

    Contacts

    Falk Dambowsky
    German Aerospace Center (DLR)
    Media Relations
    Tel.: +49 2203 601-3959

    Prof. Dr Stefan Dech
    German Aerospace Center (DLR)
    Earth Observation Center (EOC) – German Remote Sensing Data Center
    Tel.: +49 8153 28-2885
    Fax: +49 8153 28-3444

    Dr. rer. nat. Vanessa Keuck
    German Aerospace Center (DLR)
    Programme Strategy Space Research and Technology
    Tel.: +49 228 601-5555

    Dr Ludger Palm
    Leibniz Computer Centre (LRZ)
    Tel.: +49 89 35831-8792

    Germany’s SuperMUC-NG supercomputer goes live. DatacenterDynamics

    One of Europe’s largest supercomputing centres – the Leibniz Computer Centre (LRZ) of the Bavarian Academy of Sciences – and Europe’s largest space research institution – the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) – will work together to evaluate the vast quantities of data acquired by Earth observation satellites alongside other global data sources, such as social networks, on the state of our planet on a daily basis.

    “The collaboration between DLR and the LRZ marks a milestone in the development of future-oriented research within Bavaria, a hub for science! This project illustrates the wealth of resources that the Munich research landscape has to offer in this area,” said Bavaria’s Minister of Science and Arts/Culture, Bernd Sibler, at the signing of the cooperation agreement between the partner institutions on 27 May 2019 in Garching. With this cooperation, DLR and the LRZ are pooling their vast expertise in the fields of satellite-based Earth observation and supercomputing.

    “To understand the processes of global change and their development we must be able to evaluate the data from our satellites as effectively as possible,” stressed Hansjörg Dittus, DLR Executive Board Member for Space Research and Technology. “In future, the cooperation between DLR and the LRZ will make it possible to analyse vast quantities of data using the latest methods independently and highly efficiently, to aid in our understanding of global change and its consequences. Examples of this are increasing urbanisation, the expansion of agricultural land use across the globe at the expense of natural ecosystems and the rapid changes occurring in the Earth’s polar regions and in the atmosphere, which will have an undisputed impact on humankind. We will contribute our innovations and technology from space research, as well as our own sensor data to the analysis.”

    Dieter Kranzlmüller, Director of the LRZ, says, “The collaboration between these two leading research institutions brings together two partners that complement each other perfectly and contribute their relevant expertise, resources and research topics. The Leibniz Computing Centre has proven experience as an innovative provider of IT services and a high-performance computing centre. It is also a reliable and capable partner for Bavarian universities and will, in future, cooperate with DLR and its institutes in Oberpfaffenhofen.”

    Huge volumes of Earth observation data

    Every day, Earth observation satellites generate vast quantities of data at such high resolution that conventional evaluation methods have long been pushed to their limits. “Only the combination of the online availability of a wide range of historical and current data stocks with cutting-edge supercomputing systems will make it possible for our researchers to derive high-resolution global information that will enable us to make statements about the development and evolution of Earth. Artificial intelligence methods are playing an increasingly important role in fully automated analysis. This enables us to identify phenomena and developments in ways that would be difficult to detect using conventional methods,” says Stefan Dech, Director of the DLR German Remote Sensing Data Center. “This cooperation is key for the DLR institutes in Oberpfaffenhofen involved in research into satellite-based Earth observation. We can now carry out a range of global methodological and geoscientific analyses that, until now, have only been possible for example cases due to the sheer quantity of data and limited computing power. The technological data concept jointly developed by DLR and the LRZ is particularly important, as it will link the LRZ up with DLR’s German Satellite Data Archive in Oberpfaffenhofen, and, in addition to making global data stocks available online, will link historical data from our archive and DLR’s own data,” continues Dech.

    A challenge for data analysis

    To cite one example, the volume of data from the European Earth observation programme Copernicus has already exceeded 10 petabytes.


    ESA Sentinels (Copernicus)

    One petabyte is equivalent to the content of around 223,000 DVDs – which would weigh approximately 3.5 tonnes. By 2024, the Sentinel satellites of the Copernicus programme will have produced over 40 petabytes of data. These will be supplemented by even more petabytes worth of data from national Earth observation missions, such as DLR’s TerraSAR-X and TanDEM-X radar satellites and US Landsat data.

    DLR TerraSAR-X Satellite

    DLR TanDEM-X satellite

    NASA LandSat 8

    However, it is not only the large amounts of data from the satellite missions that are currently presenting scientists with challenges, but also data on global change that are published on social networks. While these are valuable sources, challenges arise because these data are extremely disparate, their accuracy is uncertain and they are only available for a limited period of time.

    DLR researchers are thus increasingly using artificial intelligence (AI) and machine learning methods to identify trends in global change and analyses of natural disasters and environmental contexts in global and regional time series spanning several decades. But these methods require that the necessary data be available online, on high-performance data analytics platforms (HPDAs). The technical objective of this collaboration is to set up such a platform, providing researchers with access to all of the necessary Earth observation data via DLR’s German Satellite Data Archive (D-SDA) in Oberpfaffenhofen and data distribution points of various providers of freely available satellite data.

    DLR’s German Remote Sensing Data Center (DFD) will coordinate the activities of the participating DLR institutes. In addition to the DFD, the Remote Sensing Technology Institute, the Institute for Atmospheric Physics and the Microwaves and Radar Institute in Oberpfaffenhofen are involved in the project. The Institute of Data Science in Jena and the Simulation and Software Technology Facility in Cologne are also involved in the implementation of the technology.

    Cooperation on global change

    As part of the collaboration, DLR will address issues relating to environmental development and global change, methodological and algorithmic process development in physical modelling and artificial intelligence, the management of long-term archives and the processing of large data volumes.

    The LRZ focuses on the research and implementation of operational, scalable, secure and reliable IT services and technologies, the optimisation of processes and procedures, supercomputing and cloud computing, as well as the use of artificial intelligence and Big Data methods. The LRZ’s existing IT systems (including the SuperMUC-NG supercomputer) and its experience with energy-efficient supercomputing will also prove useful.

    The plan is to make around 40 petabytes available online for thousands of computing cores. DLR and the LRZ are arranging joint investment in the project, with the first stage of expansion planned for late 2020. The new HPDA platform will be integrated into the LRZ’s existing infrastructure in Garching, near Munich. Most of the data on the platform will also be freely and openly available to scientists from Bavarian universities and higher education institutions.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    DLR is the national aeronautics and space research centre of the Federal Republic of Germany. Its extensive research and development work in aeronautics, space, energy, transport and security is integrated into national and international cooperative ventures. In addition to its own research, as Germany’s space agency, DLR has been given responsibility by the federal government for the planning and implementation of the German space programme. DLR is also the umbrella organisation for the nation’s largest project management agency.

    DLR has approximately 8000 employees at 16 locations in Germany: Cologne (headquarters), Augsburg, Berlin, Bonn, Braunschweig, Bremen, Goettingen, Hamburg, Juelich, Lampoldshausen, Neustrelitz, Oberpfaffenhofen, Stade, Stuttgart, Trauen, and Weilheim. DLR also has offices in Brussels, Paris, Tokyo and Washington D.C.

     
  • richardmitnick 6:15 pm on May 22, 2019
    Tags: Machine learning

    From ASCR Discovery: “Lessons machine-learned” 

    ASCR – Advancing Science Through Computing

    From ASCR Discovery

    May 2019

    The University of Arizona’s Joshua Levine is using his Department of Energy Early Career Research Program award to combine machine learning and topology data-analysis tools to better understand trends within climate simulations. These map pairs represent data from January 1950 (top) and January 2010. The left panels depict near-surface air temperatures from hot (red) to cool (blue). In the multicolored images, Levine has used topological, or shape-based, data analysis to organize and color-code the temperature data into a tree-like hierarchy. As the time passes, the data behavior around the North Pole (right panels) breaks into smaller chunks. These changes highlight the need for machine-learning tools to understand how these structures evolve over time. Images courtesy of Joshua Levine, University of Arizona, with data from CMIP6/ESGF.

    Quantifying the risks that buried nuclear waste poses to soil and water near the Department of Energy’s (DOE’s) Hanford site in Washington state is not easy. Researchers can’t measure the earth’s permeability, a key factor in how far chemicals might travel, and mathematical models of how substances move underground are incomplete, says Paris Perdikaris of the University of Pennsylvania.

    But where traditional experimental and computational tools fall short, artificial intelligence algorithms can help, building their own inferences based on patterns in the data. “We can’t directly measure the quantities we’re interested in,” he says. “But using this underlying mathematical structure, we can construct machine-learning algorithms that can predict what we care about.”

    Perdikaris’ project is one of several sponsored by the DOE Early Career Research Program that apply machine-learning methods. One piece of his challenge is combining disparate data types such as images, simulations and time-resolved sensor information to find patterns. He will also constrain these models using physics and math, so the resulting predictions respect the underlying science and don’t make spurious connections based on data artifacts. “The byproduct of this is that you can significantly reduce the amount of data you need to make robust predictions. So you can save a lot in data efficiency terms.”
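    A minimal sketch of what a physics-constrained loss can look like is shown below, assuming a toy one-dimensional equation du/dx + u = 0 stands in for the real subsurface physics; the data, network size and equation are illustrative only, not Perdikaris’ actual models.

    # Minimal physics-constrained ("physics-informed") training sketch in PyTorch.
    # The toy physics here is du/dx + u = 0; everything is illustrative.
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    # a handful of (expensive) measurements ...
    x_data = torch.tensor([[0.0], [0.5], [1.0]])
    u_data = torch.exp(-x_data)                      # pretend sensor readings

    # ... plus many cheap "collocation" points where only the physics is enforced
    x_phys = torch.linspace(0.0, 1.0, 50).reshape(-1, 1).requires_grad_(True)

    for step in range(2000):
        u = net(x_phys)
        du_dx = torch.autograd.grad(u.sum(), x_phys, create_graph=True)[0]
        physics_loss = ((du_dx + u) ** 2).mean()     # residual of du/dx + u = 0
        data_loss = ((net(x_data) - u_data) ** 2).mean()
        loss = data_loss + physics_loss              # physics term regularizes scarce data
        opt.zero_grad(); loss.backward(); opt.step()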

    Another key obstacle is quantifying the uncertainty within these calculations. Missing aspects of the physical model or physical data can affect the prediction’s quality. Besides studying subsurface transport, such algorithms could also be useful for designing new materials.

    Machine learning belongs to a branch of artificial intelligence algorithms that already support our smartphone assistants, manage our home devices and curate our movie and music playlists. Many machine-learning algorithms depend on tools known as neural networks, which mimic the human brain’s ability to filter, classify and draw insights from the patterns within data. Machine-learning methods could help scientists interpret a range of information. In some disciplines, experiments generate more data than researchers can hope to analyze on their own. In others, scientists might be looking for insights about their data and observations.

    But industry’s tools alone won’t solve science’s problems. Today’s machine-learning algorithms, though powerful, make inferences researchers can’t verify against established theory. And such algorithms might flag experimental noise as meaningful. But with algorithms designed to handle science’s tenets, machine learning could boost computational efficiency, allow researchers to compare, integrate and improve physical models, and shift the ways that scientists work.

    Much of industrial artificial intelligence work started with distinguishing, say, cats from Corvettes – analyzing millions of digital images in which data are abundant and have regular, pixelated structures. But with science, researchers don’t have the same luxury. Unlike the ubiquitous digital photos and language snippets that have powered image and voice recognition, scientific data can be expensive to generate, such as in molecular research experiments or large-scale simulations, says Argonne National Laboratory’s Prasanna Balaprakash.

    With his early-career award, he’s designing machine-learning methods that incorporate scientific knowledge. “How do we leverage that? How do we bring in the physics, the domain knowledge, so that an algorithm doesn’t need a lot of data to learn?” He’s also focused on adapting machine-learning algorithms to accept a wider range of data types, including graph-like structures used for encoding molecules or large-scale traffic network scenarios.

    Balaprakash also is exploring ways to automate the development of new machine-learning algorithms on supercomputers – a neural network for designing new neural networks. Writing these algorithms requires a lot of trial-and-error work, and a neural network built with one data type often can’t be used on a new data type.

    Although some fields have data bottlenecks, in other situations scientific instruments generate gobs of data – gigabytes, even petabytes, of results that are beyond human capability to review and analyze. Machine learning could help researchers sift this information and glean important insights. For example, experiments on Sandia National Laboratories’ Z machine, which compresses energy to produce X-rays and to study nuclear fusion, spew out data about material properties under these extreme conditions.

    Sandia Z machine

    When superheated, samples studied in the Z machine mix in a complex process that researchers don’t fully understand yet, says Sandia’s Eric Cyr. He’s exploring data-driven algorithms that can divine an initial model of this mixing, giving theoretical physicists a starting point to work from. In addition, combining machine-learning tools with simulation data could help researchers streamline their use of the Z machine, reducing the number of experiments needed to achieve accurate results and minimizing costs.

    To reach that goal, Cyr focuses on scalable machine-learning algorithms, a technology known as layer-parallel methods. Today’s machine-learning algorithms have expanded from a handful of processing layers to hundreds. As researchers spread these layers over multiple graphics processing units (GPUs), the computational efficiency eventually breaks down. Cyr’s algorithms would split the neural-network layers across processors as the algorithm trains on the problem of interest, he says. “That way if you want to double the number of layers, basically make your neural network twice as deep, you can use twice as many processors and do it in the same amount of time.”
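
    The sketch below shows the basic move of placing different layers on different processors, using plain model parallelism in PyTorch. It is not the multigrid-style layer-parallel training Cyr is developing; in this naive form the two devices still wait on each other, which is exactly the sequential bottleneck that layer-parallel methods aim to break. It only illustrates how activations and gradients flow across processors; the device names and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

# Pick two devices; fall back to CPU if fewer than two GPUs are available.
dev0 = torch.device("cuda:0" if torch.cuda.device_count() >= 2 else "cpu")
dev1 = torch.device("cuda:1" if torch.cuda.device_count() >= 2 else "cpu")

class SplitNet(nn.Module):
    """First half of the layers lives on dev0, second half on dev1."""
    def __init__(self):
        super().__init__()
        self.block0 = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                                    nn.Linear(256, 256), nn.ReLU()).to(dev0)
        self.block1 = nn.Sequential(nn.Linear(256, 256), nn.ReLU(),
                                    nn.Linear(256, 10)).to(dev1)

    def forward(self, x):
        x = self.block0(x.to(dev0))
        return self.block1(x.to(dev1))   # activations hop between processors

model = SplitNet()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(32, 64)
y = torch.randint(0, 10, (32,))

out = model(x)
loss = nn.functional.cross_entropy(out, y.to(out.device))
loss.backward()                          # gradients flow back across both devices
opt.step()
```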

    With problems such as climate and weather modeling, researchers struggle to incorporate the vast range of scales, from globe-circling currents to local eddies. To tackle this problem, Oklahoma State University’s Omer San will apply machine learning to study turbulence in these types of geophysical flows. Researchers must construct a computational grid to run these simulations, but they have to define the scale of the mesh, perhaps 100 kilometers across, to encompass the globe and produce a calculation of manageable size. At that scale, it’s impossible to simulate a range of smaller factors, such as vortices just a few meters wide that can produce important, outsized effects across the whole system because of nonlinear interactions. Machine learning could provide a way to add back in some of these fine details, San says, like software that sharpens a blurry photo.
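
    A stripped-down version of the “sharpen a blurry photo” idea: learn a correction from what a coarse grid can see to the fine-scale detail it misses, using paired coarse and fine fields as training data. The one-dimensional synthetic signal and the random-forest regressor below are stand-ins chosen for brevity; a geophysical application would use filtered high-resolution simulation output and a model suited to the flow physics.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: a "fine" field and its coarse-grained (blurred) version.
n = 4096
fine = np.cumsum(rng.normal(size=n))                       # wiggly 1-D signal
coarse = np.convolve(fine, np.ones(32) / 32, mode="same")  # what a coarse mesh sees
subgrid = fine - coarse                                    # detail the coarse model misses

# Features: a local patch of the coarse field; target: the missing fine-scale part.
half = 8
X = np.stack([coarse[i - half:i + half] for i in range(half, n - half)])
y = subgrid[half:n - half]

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X[:3000], y[:3000])
print("held-out R^2:", model.score(X[3000:], y[3000:]))
```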

    Machine learning also could help guide researchers as they choose from the available closure models, or ways to model smaller-scale features, as they examine various flow types. It could be a decision-support system, San says, using local data to determine whether Model A or Model B is a better choice. His group also is examining ways to connect existing numerical methods within neural networks, to allow those techniques to partially inform the systems during the learning process, rather than doing blind analysis. San wants “to connect all of these dots: physics, numerics and the learning framework.”

    Machine learning also promises to help researchers extend the use of mathematical strategies that already support data analysis. At the University of Arizona, Joshua Levine is combining machine learning with topological data-analysis tools.

    These strategies capture data’s shape, which can be useful for visualizing and understanding climate patterns, such as surface temperatures over time. Levine wants to extend topology, which helps researchers analyze a single simulation, to multiple climate simulations with different parameters to understand them as a whole.

    As climate scientists use different models, they often struggle to figure out which ones are correct. “More importantly, we don’t always know where they agree and disagree,” Levine says. “It turns out agreement is a little bit more tractable as a problem.” Researchers can do coarse comparisons – calculating the average temperature across the Earth and checking the models to see if those simple numbers agree. But that basic comparison says little about what happened within a simulation.

    Topology can help match those average values with their locations, Levine says. “So it’s not just that it was hotter over the last 50 years, but maybe it was much hotter in Africa over the last 50 years than it was in South America.”
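
    One very simple topological operation that captures this “where, not just how much” information is to threshold a temperature-trend field and group the cells above the threshold into connected regions, then report each region’s size, mean trend, and location. The sketch below does only that; Levine’s work uses richer topological summaries across many simulations, and the gridded trend field here is synthetic.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Hypothetical temperature-trend field on a coarse lat/lon grid (degrees C per 50 yr).
trend = rng.normal(loc=0.5, scale=0.4, size=(90, 180))
trend[30:50, 80:120] += 1.5          # a strongly warming patch (stand-in for a continent)

# Superlevel set: cells warming faster than a threshold, grouped into connected regions.
hot = trend > 1.0
labels, n_regions = ndimage.label(hot)

for region_id in range(1, n_regions + 1):
    mask = labels == region_id
    if mask.sum() < 20:              # ignore tiny speckle components
        continue
    rows, cols = np.nonzero(mask)
    print(f"region {region_id}: {mask.sum()} cells, "
          f"mean trend {trend[mask].mean():.2f}, "
          f"centered near grid ({rows.mean():.0f}, {cols.mean():.0f})")
```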

    All of these projects involve blending machine learning with other disciplines to capitalize on each area’s relative strengths. Computational physics, for example, is built on well-defined principles and mathematical models. Such models provide a good baseline for study, Penn’s Perdikaris says. “But they’re a little bit sterilized and they don’t directly reflect the complexity of the real world.” By contrast, up to now machine learning has only relied on data and observations, he says, throwing away a scientist’s physical knowledge of the world. “Bridging the two approaches will be key in advancing our understanding and enhancing our ability to analyze and predict complex phenomena in the future.”

    Although Argonne’s Balaprakash notes that machine learning has been oversold in some cases, he also believes it will be a transformative research tool, much like the Hubble telescope was for astronomy. “It’s a really promising research area.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of the U.S. Department of Energy

     
  • richardmitnick 12:01 pm on May 10, 2019 Permalink | Reply
    Tags: "Painting a fuller picture of how antibiotics act", An additional mechanism that helps some antibiotics kill bacteria., , “We wanted to fundamentally understand which previously undescribed metabolic pathways might be important for us to understand how antibiotics kill.”, “White-box” machine-learning, Exploiting this mechanism could help researchers to discover new drugs that could be used along with antibiotics to enhance their killing ability the researchers say., Machine learning, , Some of the metabolic byproducts of antibiotics are toxic and help contribute to killing the cells., The findings suggest that it may be possible to enhance the effects of some antibiotics by delivering them along with other drugs that stimulate metabolic activity.   

    From MIT News: “Painting a fuller picture of how antibiotics act” 

    MIT News

    From MIT News

    May 9, 2019
    Anne Trafton

    1
    MIT biological engineers used a novel machine-learning approach to discover a mechanism that helps certain antibiotics kill bacteria. Image: Chelsea Turner, MIT

    Most antibiotics work by interfering with critical functions such as DNA replication or construction of the bacterial cell wall. However, these mechanisms represent only part of the full picture of how antibiotics act.

    In a new study of antibiotic action, MIT researchers developed a new machine-learning approach to discover an additional mechanism that helps some antibiotics kill bacteria. This secondary mechanism involves activating the bacterial metabolism of nucleotides that the cells need to replicate their DNA.

    “There are dramatic energy demands placed on the cell as a result of the drug stress. These energy demands require a metabolic response, and some of the metabolic byproducts are toxic and help contribute to killing the cells,” says James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering, and the senior author of the study. Collins is also the faculty co-lead of the Abdul Latif Jameel Clinic for Machine Learning in Health.

    Exploiting this mechanism could help researchers to discover new drugs that could be used along with antibiotics to enhance their killing ability, the researchers say.

    Jason Yang, an IMES research scientist, is the lead author of the paper, which appears in the May 9 issue of Cell. Other authors include Sarah Wright, a recent MIT MEng recipient; Meagan Hamblin, a former Broad Institute research technician; Miguel Alcantar, an MIT graduate student; Allison Lopatkin, an IMES postdoc; Douglas McCloskey and Lars Schrubbers of the Novo Nordisk Foundation Center for Biosustainability; Sangeeta Satish and Amir Nili, both recent graduates of Boston University; Bernhard Palsson, a professor of bioengineering at the University of California at San Diego; and Graham Walker, an MIT professor of biology.

    “White-box” machine-learning

    Collins and Walker have studied the mechanisms of antibiotic action for many years, and their work has shown that antibiotic treatment tends to create a great deal of cellular stress that makes huge energy demands on bacterial cells. In the new study, Collins and Yang decided to take a machine-learning approach to investigate how this happens and what the consequences are.

    Before they began their computer modeling, the researchers performed hundreds of experiments in E. coli. They treated the bacteria with one of three antibiotics (ampicillin, ciprofloxacin, or gentamicin), and in each experiment they also added one of about 200 different metabolites, including an array of amino acids, carbohydrates, and nucleotides (the building blocks of DNA). For each combination of antibiotic and metabolite, they measured the effects on cell survival.

    “We used a diverse set of metabolic perturbations so that we could see the effects of perturbing nucleotide metabolism, amino acid metabolism, and other kinds of metabolic subnetworks,” Yang says. “We wanted to fundamentally understand which previously undescribed metabolic pathways might be important for us to understand how antibiotics kill.”

    Many other researchers have used machine-learning models to analyze data from biological experiments, by training an algorithm to generate predictions based on experimental data. However, these models are typically “black-box,” meaning that they don’t reveal the mechanisms that underlie their predictions.

    To get around that problem, the MIT team took a novel approach that they call “white-box” machine-learning. Instead of feeding their data directly into a machine-learning algorithm, they first ran it through a genome-scale computer model of E. coli metabolism that had been characterized by Palsson’s lab. This allowed them to generate an array of “metabolic states” described by the data. Then, they fed these states into a machine-learning algorithm, which was able to identify links between the different states and the outcomes of antibiotic treatment.
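
    Structurally, the two-stage pipeline can be sketched as: a mechanistic model first translates each experimental condition into interpretable metabolic-state features, then a simple, inspectable predictor is fit on those features so that whatever it selects maps back onto named pathways. In the sketch below the genome-scale simulation is replaced by a placeholder function, and the lethality data are synthetic, constructed so that the first pathway dominates purely to show how the selected coefficients become readable; none of the names or numbers come from the MIT study.

```python
import numpy as np
from sklearn.linear_model import Lasso

PATHWAYS = ["purine_synthesis", "pyrimidine_synthesis", "glycolysis", "TCA_cycle"]

def simulate_metabolic_state(antibiotic, metabolite, rng):
    """Placeholder for the mechanistic simulation step: map an experimental
    condition to interpretable pathway-level features. A real pipeline would
    run a genome-scale metabolic model here instead of returning random numbers."""
    return rng.normal(size=len(PATHWAYS))

rng = np.random.default_rng(0)
conditions = [("ciprofloxacin", f"metabolite_{i}") for i in range(200)]

# Stage 1: the mechanistic model turns raw conditions into metabolic-state features.
X = np.array([simulate_metabolic_state(a, m, rng) for a, m in conditions])

# Hypothetical lethality measurements, built so the first pathway matters (illustration only).
y = 0.8 * X[:, 0] + 0.1 * rng.normal(size=len(conditions))

# Stage 2: a sparse linear model whose coefficients map back onto named pathways.
model = Lasso(alpha=0.05).fit(X, y)
for pathway, coef in zip(PATHWAYS, model.coef_):
    print(f"{pathway:22s} coefficient {coef:+.2f}")
```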

    Because the researchers already knew the experimental conditions that produced each state, they were able to determine which metabolic pathways were responsible for higher levels of cell death.

    “What we demonstrate here is that by having the network simulations first interpret the data and then having the machine-learning algorithm build a predictive model for our antibiotic lethality phenotypes, the items that get selected by that predictive model themselves directly map onto pathways that we’ve been able to experimentally validate, which is very exciting,” Yang says.

    Markus Covert, an associate professor of bioengineering at Stanford University, says the study is an important step toward showing that machine learning can be used to uncover the biological mechanisms that link inputs and outputs.

    “Biology, especially for medical applications, is all about mechanism,” says Covert, who was not involved in the research. “You want to find something that is druggable. For the typical biologist, it hasn’t been meaningful to find these kinds of links without knowing why the inputs and outputs are linked.”

    Metabolic stress

    This model yielded the novel discovery that nucleotide metabolism, especially metabolism of purines such as adenine, plays a key role in antibiotics’ ability to kill bacterial cells. Antibiotic treatment leads to cellular stress, which causes cells to run low on purine nucleotides. The cells’ efforts to ramp up production of these nucleotides, which are necessary for copying DNA, boost the cells’ overall metabolism and lead to a buildup of harmful metabolic byproducts that can kill the cells.

    “We now believe what’s going on is that in response to this very severe purine depletion, cells turn on purine metabolism to try to deal with that, but purine metabolism itself is very energetically expensive and so this amplifies the energetic imbalance that the cells are already facing,” Yang says.

    The findings suggest that it may be possible to enhance the effects of some antibiotics by delivering them along with other drugs that stimulate metabolic activity. “If we can move the cells to a more energetically stressful state, and induce the cell to turn on more metabolic activity, this might be a way to potentiate antibiotics,” Yang says.

    The “white-box” modeling approach used in this study could also be useful for studying how different types of drugs affect diseases such as cancer, diabetes, or neurodegenerative diseases, the researchers say. They are now using a similar approach to study how tuberculosis survives antibiotic treatment and becomes drug-resistant.

    The research was funded by the Defense Threat Reduction Agency, the National Institutes of Health, the Novo Nordisk Foundation, the Paul G. Allen Frontiers Group, the Broad Institute of MIT and Harvard, and the Wyss Institute for Biologically Inspired Engineering.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 2:21 pm on April 16, 2019 Permalink | Reply
    Tags: , , , Machine learning, , Natural Sciences, The Brendan Iribe Center for Computer Science and Engineering, UMIACS-University of Maryland Institute for Advanced Computer Studies,   

    From University of Maryland CMNS: “University of Maryland Launches Center for Machine Learning” 

    U Maryland bloc

    From University of Maryland


    CMNS

    April 16, 2019

    Abby Robinson
    301-405-5845
    abbyr@umd.edu

    The University of Maryland recently launched a multidisciplinary center that uses powerful computing tools to address challenges in big data, computer vision, health care, financial transactions and more.

    The University of Maryland Center for Machine Learning will unify and enhance numerous activities in machine learning already underway on the Maryland campus.

    1
    University of Maryland computer science faculty member Thomas Goldstein (on left, with visiting graduate student) is a member of the new Center for Machine Learning. Goldstein’s research focuses on large-scale optimization and distributed algorithms for big data. Photo: John T. Consoli.

    Machine learning uses algorithms and statistical models so that computer systems can effectively perform a task without explicit instructions, relying instead on patterns and inference. At UMD, for example, computer vision experts are “training” computers to identify and match key facial characteristics by having machines analyze millions of images publicly available on social media.

    Researchers at UMD are exploring other applications such as groundbreaking work in cancer genomics; powerful algorithms to improve the selection process for organ transplants; and an innovative system that can quickly find, translate and summarize information from almost any language in the world.

    “We wanted to capitalize on the significant strengths we already have in machine learning, provide additional support, and embrace fresh opportunities arising from new facilities and partnerships,” said Mihai Pop, professor of computer science and director of the University of Maryland Institute for Advanced Computer Studies (UMIACS).

    The center officially launched with a workshop last month featuring talks and panel discussions from machine learning experts in auditory systems, biology and medicine, business, chemistry, natural language processing, and security.

    Initial funding for the center comes from the College of Computer, Mathematical, and Natural Sciences (CMNS) and UMIACS, which will provide technical and administrative support.

    An inaugural partner of the center, financial and technology leader Capital One, provided additional support, including endowing three faculty positions in machine learning and computer science. Those positions received matching funding from the state’s Maryland E-Nnovation Initiative.

    Capital One has also provided funding for research projects that align with the organization’s need to stay on the cutting edge in areas like fraud detection and enhancing the customer experience with more personalized, real-time features.

    “We are proud to be a part of the launch of the University of Maryland Center for Machine Learning, and are thrilled to extend our partnership with the university in this field,” said Dave Castillo, the company’s managing vice president at the Center for Machine Learning and Emerging Technology. “At Capital One, we believe forward-leaning technologies like machine learning can provide our customers greater protection, security, confidence and control of their finances. We look forward to advancing breakthrough work with the University of Maryland in years to come.”

    3
    University of Maryland computer science faculty members David Jacobs (left) and Furong Huang (right) are part of the new Center for Machine Learning. Jacobs is an expert in computer vision and is the center’s interim director; Huang is conducting research in neural networks. Photo: John T. Consoli.

    David Jacobs, a professor of computer science with an appointment in UMIACS, will serve as interim director of the new center.

    To jumpstart the center’s activities, Jacobs has recruited a core group of faculty members in computer science and UMIACS: John Dickerson, Soheil Feizi, Thomas Goldstein, Furong Huang and Aravind Srinivasan.

    Faculty members from mathematics, chemistry, biology, physics, linguistics, and data science are also heavily involved in machine learning applications, and Jacobs said he expects many of them to be active in the center through direct or affiliate appointments.

    “We want the center to be a focal point across the campus where faculty, students, and visiting scholars can come to learn about the latest technologies and theoretical applications based in machine learning,” he said.

    Key to the center’s success will be a robust computational infrastructure that is needed to perform complex computations involving massive amounts of data.

    This is where UMIACS plays an important role, Jacobs said, with the institute’s technical staff already supporting multiple machine learning activities in computer vision and computational linguistics.

    Plans call for CMNS, UMIACS and other organizations to invest substantially in new computing resources for the machine learning center, Jacobs added.

    4
    The Brendan Iribe Center for Computer Science and Engineering. Photo: John T. Consoli.

    The center will be located in the Brendan Iribe Center for Computer Science and Engineering, a new state-of-the-art facility at the entrance to campus that will be officially dedicated later this month. In addition to the very latest in computing resources, the Brendan Iribe Center promotes collaboration and connectivity through its open design and multiple meeting areas.

    The Brendan Iribe Center is directly adjacent to the university’s Discovery District, where researchers working in Capital One’s Tech Incubator and other tech startups can interact with UMD faculty members and students on topics related to machine learning.

    Amitabh Varshney, professor of computer science and dean of CMNS, said the center will be a valuable resource for the state of Maryland and the region—both for students seeking the latest knowledge and skills and for companies wanting professional development training for their employees.

    “We have new educational activities planned by the college that include professional master’s programs in machine learning and data science and analytics,” Varshney said. “We want to leverage our location near numerous federal agencies and private corporations that are interested in expanding their workforce capabilities in these areas.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Maryland Campus

    About CMNS

    The thirst for new knowledge is a fundamental and defining characteristic of humankind. It is also at the heart of scientific endeavor and discovery. As we seek to understand our world, across a host of complexly interconnected phenomena and over scales of time and distance that were virtually inaccessible to us a generation ago, our discoveries shape that world. At the forefront of many of these discoveries is the College of Computer, Mathematical, and Natural Sciences (CMNS).

    CMNS is home to 12 major research institutes and centers and to 10 academic departments: astronomy, atmospheric and oceanic science, biology, cell biology and molecular genetics, chemistry and biochemistry, computer science, entomology, geology, mathematics, and physics.

    Our Faculty

    Our faculty are at the cutting edge over the full range of these disciplines. Our physicists fill in major gaps in our fundamental understanding of matter, participating in the recent Higgs boson discovery, and demonstrating the first-ever teleportation of information between atoms. Our astronomers probe the origin of the universe with one of the world’s premier radio observatories, and have just discovered water on the moon. Our computer scientists are developing the principles for guaranteed security and privacy in information systems.

    Our Research

    Driven by the pursuit of excellence, the University of Maryland has enjoyed a remarkable rise in accomplishment and reputation over the past two decades. By any measure, Maryland is now one of the nation’s preeminent public research universities and on a path to become one of the world’s best. To fulfill this promise, we must capitalize on our momentum, fully exploit our competitive advantages, and pursue ambitious goals with great discipline and entrepreneurial spirit. This promise is within reach. This strategic plan is our working agenda.

    The plan is comprehensive, bold, and action oriented. It sets forth a vision of the University as an institution unmatched in its capacity to attract talent, address the most important issues of our time, and produce the leaders of tomorrow. The plan will guide the investment of our human and material resources as we strengthen our undergraduate and graduate programs and expand research, outreach and partnerships, become a truly international center, and enhance our surrounding community.

    Our success will benefit Maryland in the near and long term, strengthen the State’s competitive capacity in a challenging and changing environment and enrich the economic, social and cultural life of the region. We will be a catalyst for progress, the State’s most valuable asset, and an indispensable contributor to the nation’s well-being. Achieving the goals of Transforming Maryland requires broad-based and sustained support from our extended community. We ask our stakeholders to join with us to make the University an institution of world-class quality with world-wide reach and unparalleled impact as it serves the people and the state of Maryland.

    Our researchers are also at the cusp of the new biology for the 21st century, with bioscience emerging as a key area in almost all CMNS disciplines. Entomologists are learning how climate change affects the behavior of insects, and earth science faculty are coupling physical and biosphere data to predict that change. Geochemists are discovering how our planet evolved to support life, and biologists and entomologists are discovering how evolutionary processes have operated in living organisms. Our biologists have learned how human generated sound affects aquatic organisms, and cell biologists and computer scientists use advanced genomics to study disease and host-pathogen interactions. Our mathematicians are modeling the spread of AIDS, while our astronomers are searching for habitable exoplanets.

    Our Education

    CMNS is also a national resource for educating and training the next generation of leaders. Many of our major programs are ranked among the top 10 of public research universities in the nation. CMNS offers every student a high-quality, innovative and cross-disciplinary educational experience that is also affordable. Strongly committed to making science and mathematics studies available to all, CMNS actively encourages and supports the recruitment and retention of women and minorities.

    Our Students

    Our students have the unique opportunity to work closely with first-class faculty in state-of-the-art labs both on and off campus, conducting real-world, high-impact research on some of the most exciting problems of modern science. 87% of our undergraduates conduct research and/or hold internships while earning their bachelor’s degree. CMNS degrees command respect around the world, and open doors to a wide variety of rewarding career options. Many students continue on to graduate school; others find challenging positions in high-tech industry or federal laboratories, and some join professions such as medicine, teaching, and law.

     
  • richardmitnick 1:20 pm on March 4, 2019 Permalink | Reply
    Tags: , , Completely doing away with wind variability is next to impossible, , , Google claims that Machine Learning and AI would indeed make wind power more predictable and hence more useful, Google has announced in its official blog post that it has enhanced the feasibility of wind energy by using AI software created by its UK subsidiary DeepMind, Google is working to make the algorithm more refined so that any discrepancy that might occur could be nullified, Machine learning, , Unpredictability in delivering power at set time frame continues to remain a daunting challenge before the sector   

    From Geospatial World: “Google and DeepMind predict wind energy output using AI” 

    From Geospatial World

    03/04/2019
    Aditya Chaturvedi

    1
    Image Courtesy: Unsplash

    Google has announced in its official blog post that it has enhanced the feasibility of wind energy by using AI software created by its UK subsidiary DeepMind.

    Renewable energy is the path toward lowering carbon emissions and achieving sustainability, so it is imperative to extract the optimum energy output from renewable sources.

    Renewable technologies will be at the forefront of climate change mitigation and of addressing global warming; however, their full potential has yet to be harnessed owing to a slew of obstacles. Wind energy has emerged as a crucial source of renewable energy in the past decade, as declining turbine costs have led to the gradual mainstreaming of wind power. Still, unpredictability in delivering power within a set time frame remains a daunting challenge for the sector.

    The Google and DeepMind project could change this by overcoming a limitation that has hobbled wind energy adoption.

    With the help of DeepMind’s Machine Learning algorithms, Google has been able to predict the wind energy output of the farms that it uses for its Green Energy initiatives.

    “DeepMind and Google started applying machine learning algorithms to 700 megawatts of wind power capacity in the central United States. These wind farms—part of Google’s global fleet of renewable energy projects—collectively generate as much electricity as is needed by a medium-sized city”, the blog says.

    Google is optimistic that it can accurately predict and schedule energy output, which gives wind power an advantage over non-time-based deliveries.

    3
    Image Courtesy: Google/ DeepMind

    Using a neural network trained on weather forecasts and historical turbine data, the DeepMind system has been configured to predict wind power output 36 hours in advance.

    Based on these predictions, the model recommends the best way to fulfill, and even exceed, delivery commitments 24 hours in advance. This matters because energy sources that can deliver a set amount of power over a defined period of time are usually more valuable to the grid.
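
    A minimal sketch of the forecasting step described above: a regression model trained on weather-forecast features and recent turbine output to predict farm power 36 hours ahead. The synthetic data, feature windows, and gradient-boosting model below are stand-ins; DeepMind’s actual system is a neural network trained on real forecast and turbine data, and the 24-hour commitment logic would sit on top of a forecast like this.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = 5000

# Synthetic stand-ins for forecast wind speed (m/s) and farm output (MW).
forecast_wind = 8 + 3 * np.sin(np.arange(hours) / 24) + rng.normal(0, 1, hours)
power = np.clip(0.4 * forecast_wind ** 2 + rng.normal(0, 3, hours), 0, 100)

horizon = 36   # predict 36 hours ahead, as in the DeepMind setup
rows, targets = [], []
for t in range(48, hours - horizon):
    rows.append(np.concatenate([
        forecast_wind[t + horizon - 3:t + horizon],  # forecast around delivery time
        power[t - 24:t],                             # last 24 h of turbine history
    ]))
    targets.append(power[t + horizon])

X, y = np.array(rows), np.array(targets)
split = int(0.8 * len(X))
model = GradientBoostingRegressor().fit(X[:split], y[:split])
print("held-out R^2:", model.score(X[split:], y[split:]))
```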

    Google is working to refine the algorithm further so that any discrepancies that occur can be corrected. To date, Google claims that its machine learning algorithms have boosted the value of the wind energy generated by 20%, ‘compared to the baseline scenario of no time-based commitments to the grid’, the blog says.

    4
    Image Courtesy: Google

    Completely doing away with wind variability is next to impossible, but Google claims that Machine Learning and AI would indeed make wind power more predictable and hence more useful.

    This approach could open up new avenues and make wind farm data more reliable and precise. When the productivity of wind farms is greatly increased and their output can be predicted as well as calculated, wind power will have the capability to match conventional electricity sources.

    Google is hopeful that the power of Machine Learning and AI would boost the mass adoption of wind power and turn it into a popular alternative to traditional sources of electricity over the years.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    http://www.geospatialworld.net

    With an average of 55,000+ unique visitors per month, http://www.geospatialworld.net is easily the number one media portal in geospatial domain; and is a reliable source of information for professionals in 150+ countries. The website, which integrates text, graphics and video elements, is an interactive medium for geospatial industry stakeholders to connect through several innovative features, including news, videos, guest blogs, case studies, articles, interviews, business listings and events.

    600,000+ annual unique visitors

     