Tagged: Deep Learning

  • richardmitnick 8:10 am on November 8, 2019
    Tags: "Researchers convert 2D images into 3D using deep learning", Deep Learning, Deep-Z is an artificial intelligence-based framework that can digitally refocus a 2D fluorescence microscope image to produce 3D slices.

    From UCLA Newsroom: “Researchers convert 2D images into 3D using deep learning” 


    From UCLA Newsroom

    November 7, 2019
    Nikki Lin
    310-206-8278
    nlin@cnsi.ucla.edu

    Illustration represents Deep-Z, an artificial intelligence-based framework that can digitally refocus a 2D fluorescence microscope image (at bottom) to produce 3D slices (at left). Ozcan Lab/UCLA.

    A UCLA research team has devised a technique that extends the capabilities of fluorescence microscopy, which allows scientists to precisely label parts of living cells and tissue with dyes that glow under special lighting. The researchers use artificial intelligence to turn two-dimensional images into stacks of virtual three-dimensional slices showing activity inside organisms.

    In a study published in Nature Methods, the scientists also reported that their framework, called “Deep-Z,” was able to fix errors or aberrations in images, such as when a sample is tilted or curved. Further, they demonstrated that the system could take 2D images from one type of microscope and virtually create 3D images of the sample as if they were obtained by another, more advanced microscope.

    “This is a very powerful new method that is enabled by deep learning to perform 3D imaging of live specimens, with the least exposure to light, which can be toxic to samples,” said senior author Aydogan Ozcan, UCLA chancellor’s professor of electrical and computer engineering and associate director of the California NanoSystems Institute at UCLA.

    In addition to sparing specimens from potentially damaging doses of light, this system could offer biologists and life science researchers a new tool for 3D imaging that is simpler, faster and much less expensive than current methods. The opportunity to correct for aberrations may allow scientists studying live organisms to collect data from images that otherwise would be unusable. Investigators could also gain virtual access to expensive and complicated equipment.

    This research builds on an earlier technique Ozcan and his colleagues developed that allowed them to render 2D fluorescence microscope images in super-resolution. Both techniques advance microscopy by relying upon deep learning — using data to “train” a neural network, a computer system inspired by the human brain.

    Deep-Z was taught using experimental images from a scanning fluorescence microscope, which takes pictures focused at multiple depths to achieve 3D imaging of samples. In thousands of training runs, the neural network learned how to take a 2D image and infer accurate 3D slices at different depths within a sample. Then, the framework was tested blindly — fed with images that were not part of its training, with the virtual images compared to the actual 3D slices obtained from a scanning microscope, providing an excellent match.
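
    To give a concrete, deliberately simplified picture of the kind of training described above, here is a hypothetical PyTorch sketch. It is not the authors' Deep-Z code; the tiny network, the random stand-in data and the L1 loss are assumptions chosen only to illustrate the mapping being learned: a 2D image plus a requested refocus depth goes in, a virtual slice comes out, and the result is compared against a mechanically scanned slice.

    import torch
    import torch.nn as nn

    class RefocusNet(nn.Module):
        """Toy CNN mapping (2D image, per-pixel target depth) -> virtual slice. Placeholder only."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )
        def forward(self, image, depth):
            return self.net(torch.cat([image, depth], dim=1))

    model = RefocusNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()

    for step in range(200):                              # stand-in for the thousands of training runs
        image = torch.rand(4, 1, 64, 64)                 # stand-in 2D fluorescence images
        depth = torch.full((4, 1, 64, 64), 0.3)          # requested refocus depth, encoded per pixel
        target = torch.rand(4, 1, 64, 64)                # stand-in mechanically scanned slice at that depth
        pred = model(image, depth)                       # inferred virtual slice
        loss = loss_fn(pred, target)                     # penalize mismatch with the true slice
        opt.zero_grad(); loss.backward(); opt.step()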

    Ozcan and his colleagues applied Deep-Z to images of C. elegans, a roundworm that is a common model in neuroscience because of its simple and well-understood nervous system. Converting a 2D movie of a worm to 3D, frame by frame, the researchers were able to track the activity of individual neurons within the worm body. And starting with one or two 2D images of C. elegans taken at different depths, Deep-Z produced virtual 3D images that allowed the team to identify individual neurons within the worm, matching a scanning microscope’s 3D output, except with much less light exposure to the living organism.

    The researchers also found that Deep-Z could produce 3D images from 2D surfaces where samples were tilted or curved — even though the neural network was trained only with 3D slices that were perfectly parallel to the surface of the sample.

    “This feature was actually very surprising,” said Yichen Wu, a UCLA graduate student who is co-first author of the publication. “With it, you can see through curvature or other complex topology that is very challenging to image.”

    In other experiments, Deep-Z was trained with images from two types of fluorescence microscopes: wide-field, which exposes the entire sample to a light source; and confocal, which uses a laser to scan a sample part by part. Ozcan and his team showed that their framework could then use 2D wide-field microscope images of samples to produce 3D images nearly identical to ones taken with a confocal microscope.

    This conversion is valuable because the confocal microscope creates images that are sharper, with more contrast, compared to the wide field. On the other hand, the wide-field microscope captures images at less expense and with fewer technical requirements.

    “This is a platform that is generally applicable to various pairs of microscopes, not just the wide-field-to-confocal conversion,” said co-first author Yair Rivenson, UCLA assistant adjunct professor of electrical and computer engineering. “Every microscope has its own advantages and disadvantages. With this framework, you can get the best of both worlds by using AI to connect different types of microscopes digitally.”

    Other authors are graduate students Hongda Wang and Yilin Luo, postdoctoral fellow Eyal Ben-David and Laurent Bentolila, scientific director of the California NanoSystems Institute’s Advanced Light Microscopy and Spectroscopy Laboratory, all of UCLA; and Christian Pritz of Hebrew University of Jerusalem in Israel.

    The research was supported by the Koç Group, the National Science Foundation and the Howard Hughes Medical Institute. Imaging was performed at CNSI’s Advanced Light Microscopy and Spectroscopy Laboratory and Leica Microsystems Center of Excellence.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    UCLA Campus

    For nearly 100 years, UCLA has been a pioneer, persevering through impossibility, turning the futile into the attainable.

    We doubt the critics, reject the status quo and see opportunity in dissatisfaction. Our campus, faculty and students are driven by optimism. It is not naïve; it is essential. And it has fueled every accomplishment, allowing us to redefine what’s possible, time after time.

    This can-do perspective has brought us 12 Nobel Prizes, 12 Rhodes Scholarships, more NCAA titles than any university and more Olympic medals than most nations. Our faculty and alumni helped create the Internet and pioneered reverse osmosis. And more than 100 companies have been created based on technology developed at UCLA.

     
  • richardmitnick 7:56 am on July 2, 2019
    Tags: Deep Learning

    From PPPL: “Artificial intelligence accelerates efforts to develop clean, virtually limitless fusion energy” 

    From PPPL

    April 17, 2019 [Just found this in social media]
    John Greenwald

    Depiction of fusion research on a doughnut-shaped tokamak enhanced by artificial intelligence. (Depiction by Eliot Feibush/PPPL and Julian Kates-Harbeck/Harvard University)

    Artificial intelligence (AI), a branch of computer science that is transforming scientific inquiry and industry, could now speed the development of safe, clean and virtually limitless fusion energy for generating electricity. A major step in this direction is under way at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University, where a team of scientists working with a Harvard graduate student is for the first time applying deep learning — a powerful new version of the machine learning form of AI — to forecast sudden disruptions that can halt fusion reactions and damage the doughnut-shaped tokamaks that house the reactions.

    Promising new chapter in fusion research

    “This research opens a promising new chapter in the effort to bring unlimited energy to Earth,” Steve Cowley, director of PPPL, said of the findings, which are reported in the current issue of Nature magazine. “Artificial intelligence is exploding across the sciences and now it’s beginning to contribute to the worldwide quest for fusion power.”

    Fusion, which drives the sun and stars, is the fusing of light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei — that generates energy. Scientists are seeking to replicate fusion on Earth for an abundant supply of power for the production of electricity.

    Crucial to demonstrating the ability of deep learning to forecast disruptions — the sudden loss of confinement of plasma particles and energy — has been access to huge databases provided by two major fusion facilities: the DIII-D National Fusion Facility that General Atomics operates for the DOE in California, the largest facility in the United States, and the Joint European Torus (JET) in the United Kingdom, the largest facility in the world, which is managed by EUROfusion, the European Consortium for the Development of Fusion Energy. Support from scientists at JET and DIII-D has been essential for this work.

    DOE DIII-D Tokamak

    Joint European Torus, at the Culham Centre for Fusion Energy in the United Kingdom

    The vast databases have enabled reliable predictions of disruptions on tokamaks other than those on which the system was trained — in this case from the smaller DIII-D to the larger JET. The achievement bodes well for the prediction of disruptions on ITER, a far larger and more powerful tokamak that will have to apply capabilities learned on today’s fusion facilities.


    ITER Tokamak in Saint-Paul-lès-Durance, which is in southern France

    The deep learning code, called the Fusion Recurrent Neural Network (FRNN), also opens possible pathways for controlling as well as predicting disruptions.

    Most intriguing area of scientific growth

    “Artificial intelligence is the most intriguing area of scientific growth right now, and to marry it to fusion science is very exciting,” said Bill Tang, a principal research physicist at PPPL, coauthor of the paper and lecturer with the rank and title of professor in the Princeton University Department of Astrophysical Sciences who supervises the AI project. “We’ve accelerated the ability to predict with high accuracy the most dangerous challenge to clean fusion energy.”

    Unlike traditional software, which carries out prescribed instructions, deep learning learns from its mistakes. Accomplishing this seeming magic are neural networks, layers of interconnected nodes — mathematical algorithms — that are “parameterized,” or weighted by the program to shape the desired output. For any given input the nodes seek to produce a specified output, such as correct identification of a face or accurate forecasts of a disruption. Training kicks in when a node fails to achieve this task: the weights automatically adjust themselves for fresh data until the correct output is obtained.
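
    As a toy illustration of that weight-adjustment loop (my own example, unrelated to the paper), the snippet below fits a single artificial "node" by nudging its weights toward the correct output each time it misses:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                      # 200 inputs, 3 features each
    y = (X @ np.array([1.5, -2.0, 0.5]) > 0) * 1.0     # the "correct output" for each input

    w = np.zeros(3)                                    # the weights to be learned
    for epoch in range(50):
        for x_i, y_i in zip(X, y):
            p = 1.0 / (1.0 + np.exp(-x_i @ w))         # the node's current output
            w += 0.1 * (y_i - p) * x_i                 # adjust the weights when the output misses the target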

    A key feature of deep learning is its ability to capture high-dimensional rather than one-dimensional data. For example, while non-deep learning software might consider the temperature of a plasma at a single point in time, the FRNN considers profiles of the temperature developing in time and space. “The ability of deep learning methods to learn from such complex data makes them an ideal candidate for the task of disruption prediction,” said collaborator Julian Kates-Harbeck, a physics graduate student at Harvard University and a DOE-Office of Science Computational Science Graduate Fellow who was lead author of the Nature paper and chief architect of the code.
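
    The fragment below is not the FRNN code but a schematic stand-in, with made-up layer sizes, showing what feeding time-evolving profiles into a recurrent network looks like in practice: at each time step the model ingests a whole 1D profile plus a few scalar signals and emits a running disruption score.

    import torch
    import torch.nn as nn

    class ToyDisruptionPredictor(nn.Module):
        """Hypothetical recurrent predictor: per-time-step profiles in, alarm score out."""
        def __init__(self, profile_bins=32, n_scalars=4, hidden=64):
            super().__init__()
            self.encode = nn.Linear(profile_bins + n_scalars, hidden)
            self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)
        def forward(self, x):                           # x: (batch, time, profile_bins + n_scalars)
            h, _ = self.rnn(torch.relu(self.encode(x)))
            return torch.sigmoid(self.head(h))          # disruption score at every time step

    model = ToyDisruptionPredictor()
    x = torch.rand(8, 500, 36)                          # 8 shots, 500 time steps, 32 profile bins + 4 scalars
    scores = model(x)                                   # (8, 500, 1): a rising score should precede a disruption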

    Training and running neural networks relies on graphics processing units (GPUs), computer chips first designed to render 3D images. Such chips are ideally suited for running deep learning applications and are widely used by companies to produce AI capabilities such as understanding spoken language and observing road conditions by self-driving cars.

    Kates-Harbeck trained the FRNN code on more than two terabytes (10¹² bytes) of data collected from JET and DIII-D. After running the software on Princeton University’s Tiger cluster of modern GPUs, the team placed it on Titan, a supercomputer at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility, and other high-performance machines.

    Tiger Dell Linux supercomputer at Princeton University

    ORNL Cray XK7 Titan Supercomputer, once the fastest in the world, now No.9 on the TOP500

    A demanding task

    Distributing the network across many computers was a demanding task. “Training deep neural networks is a computationally intensive problem that requires the engagement of high-performance computing clusters,” said Alexey Svyatkovskiy, a coauthor of the Nature paper who helped convert the algorithms into a production code and now is at Microsoft. “We put a copy of our entire neural network across many processors to achieve highly efficient parallel processing,” he said.
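
    Conceptually, that data-parallel scheme can be mimicked in a few lines (a toy simulation, nothing to do with the actual FRNN or MPI code): each "worker" computes a gradient on its own shard of the batch, the gradients are averaged, and every replica applies the same update.

    import numpy as np

    rng = np.random.default_rng(1)
    w = np.zeros(5)                                      # shared model weights (one copy per worker)
    X, y = rng.normal(size=(256, 5)), rng.normal(size=256)

    def local_gradient(w, Xb, yb):                       # least-squares gradient on one worker's shard
        return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

    n_workers = 4
    for step in range(100):
        shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
        grads = [local_gradient(w, Xb, yb) for Xb, yb in shards]   # computed in parallel in reality
        w -= 0.01 * np.mean(grads, axis=0)               # "all-reduce": average, then identical update everywhere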

    The software further demonstrated its ability to predict true disruptions within the 30-millisecond time frame that ITER will require, while reducing the number of false alarms. The code now is closing in on the ITER requirement of 95 percent correct predictions with fewer than 3 percent false alarms. While the researchers say that only live experimental operation can demonstrate the merits of any predictive method, their paper notes that the large archival databases used in the predictions “cover a wide range of operational scenarios and thus provide significant evidence as to the relative strengths of the methods considered in this paper.”
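
    One simple way such a requirement can be scored (an illustrative sketch only; the paper's evaluation is more elaborate) is to count an alarm as a true positive only if it fires at least 30 milliseconds before the disruption, and to count any alarm on a non-disruptive shot as a false alarm:

    def score_alarms(shots, lead_time_ms=30.0):
        """shots: list of dicts with 'alarm_ms' (first alarm time or None)
        and 'disrupt_ms' (disruption time or None)."""
        tp = fn = fa = clean = 0
        for s in shots:
            if s["disrupt_ms"] is not None:                        # disruptive shot
                if s["alarm_ms"] is not None and \
                   s["disrupt_ms"] - s["alarm_ms"] >= lead_time_ms:
                    tp += 1                                        # warned early enough
                else:
                    fn += 1                                        # missed, or warned too late
            else:                                                  # non-disruptive shot
                if s["alarm_ms"] is not None:
                    fa += 1
                else:
                    clean += 1
        return {"detection_rate": tp / max(tp + fn, 1),
                "false_alarm_rate": fa / max(fa + clean, 1)}

    # Example: one timely warning, one late warning, one false alarm.
    print(score_alarms([
        {"alarm_ms": 900.0, "disrupt_ms": 1000.0},
        {"alarm_ms": 990.0, "disrupt_ms": 1000.0},
        {"alarm_ms": 500.0, "disrupt_ms": None},
    ]))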

    From prediction to control

    The next step will be to move from prediction to the control of disruptions. “Rather than predicting disruptions at the last moment and then mitigating them, we would ideally use future deep learning models to gently steer the plasma away from regions of instability with the goal of avoiding most disruptions in the first place,” Kates-Harbeck said. Highlighting this next step is Michael Zarnstorff, who recently moved from deputy director for research at PPPL to chief science officer for the laboratory. “Control will be essential for post-ITER tokamaks – in which disruption avoidance will be an essential requirement,” Zarnstorff noted.

    Progressing from AI-enabled accurate predictions to realistic plasma control will require more than one discipline. “We will combine deep learning with basic, first-principle physics on high-performance computers to zero in on realistic control mechanisms in burning plasmas,” said Tang. “By control, one means knowing which ‘knobs to turn’ on a tokamak to change conditions to prevent disruptions. That’s in our sights and it’s where we are heading.”

    Support for this work comes from the Department of Energy Computational Science Graduate Fellowship Program of the DOE Office of Science and National Nuclear Security Administration; from Princeton University’s Institute for Computational Science and Engineering (PICSciE); and from Laboratory Directed Research and Development funds that PPPL provides. The authors wish to acknowledge assistance with high-performance supercomputing from Bill Wichser and Curt Hillegas at PICSciE; Jack Wells at the Oak Ridge Leadership Computing Facility; Satoshi Matsuoka and Rio Yokota at the Tokyo Institute of Technology; and Tom Gibbs at NVIDIA Corp.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    PPPL campus

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University. PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

     
  • richardmitnick 1:06 pm on June 30, 2019
    Tags: Deep Learning

    From COSMOS Magazine: “Thanks to AI, we know we can teleport qubits in the real world” 

    From COSMOS Magazine

    26 June 2019
    Gabriella Bernardi

    Deep learning shows its worth in the world of quantum computing.

    We’re coming to terms with quantum computing, (qu)bit by (qu)bit.
    MEHAU KULYK/GETTY IMAGES

    Italian researchers have shown that it is possible to teleport a quantum bit (or qubit) in what might be called a real-world situation.

    And they did it by letting artificial intelligence do much of the thinking.

    The phenomenon of qubit transfer is not new, but this work, which was led by Enrico Prati of the Institute of Photonics and Nanotechnologies in Milan, is the first to do it in a situation where the system deviates from ideal conditions.

    Moreover, it is the first time that a class of machine-learning algorithms known as deep reinforcement learning has been applied to a quantum computing problem.

    The findings are published in a paper in the journal Communications Physics.

    One of the basic problems in quantum computing is finding a fast and reliable method to move the qubit – the basic piece of quantum information – in the machine. This piece of information is coded by a single electron that has to be moved between two positions without passing through any of the space in between.

    In the so-called “adiabatic”, or thermodynamic, quantum computing approach, this can be achieved by applying a specific sequence of laser pulses to a chain of an odd number of quantum dots – identical sites in which the electron can be placed.

    It is a purely quantum process and a solution to the problem was invented by Nikolay Vitanov of the Helsinki Institute of Physics in 1999. Given its nature, rather distant from the intuition of common sense, this solution is called a “counterintuitive” sequence.
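
    To make the counterintuitive ordering concrete, here is an illustrative simulation written for this post rather than taken from the paper: in a three-dot chain, the pulse coupling the two empty dots peaks before the pulse coupling the occupied dot, yet the electron's population still ends up on the far dot. The pulse shapes and parameters below are arbitrary choices.

    import numpy as np
    from scipy.integrate import solve_ivp

    def pulse(t, center, width=15.0, amp=1.0):
        return amp * np.exp(-((t - center) / width) ** 2)

    def schrodinger(t, psi):
        # Counterintuitive ordering: the 2-3 coupling peaks before the 1-2 coupling.
        o12 = pulse(t, center=60.0)                      # coupling between dots 1 and 2
        o23 = pulse(t, center=40.0)                      # coupling between dots 2 and 3
        H = np.array([[0.0, o12, 0.0],
                      [o12, 0.0, o23],
                      [0.0, o23, 0.0]])
        return -1j * H @ psi

    psi0 = np.array([1.0 + 0j, 0.0, 0.0])                # electron starts on dot 1
    sol = solve_ivp(schrodinger, (0.0, 100.0), psi0, rtol=1e-8, atol=1e-8)
    populations = np.abs(sol.y[:, -1]) ** 2
    print(populations)  # for slow, strong pulses the population of dot 3 should approach 1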

    However, the method applies only in ideal conditions, when the electron state suffers no disturbances or perturbations.

    Thus, Prati and colleagues Riccardo Porotti and Dario Tamaschelli of the University of Milan and Marcello Restelli of the Milan Polytechnic, took a different approach.

    “We decided to test the deep learning’s artificial intelligence, which has already been much talked about for having defeated the world champion at the game Go, and for more serious applications such as the recognition of breast cancer, applying it to the field of quantum computers,” Prati says.

    Deep learning techniques are based on artificial neural networks arranged in different layers, each of which calculates the values for the next one so that the information is processed more and more completely.

    Usually, a set of known answers to the problem is used to “train” the network, but when these are not known, another technique called “reinforcement learning” can be used.

    In this approach two neural networks are used: an “actor” has the task of finding new solutions, and a “critic” must assess the quality of these solutions. Provided the researchers supply a reliable way to judge the results, the two networks can examine the problem independently.
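
    A bare-bones actor-critic loop looks like the sketch below. It is a generic illustration on a toy task, not the researchers' algorithm or their quantum-control setting; the network sizes, the Gaussian policy and the reward are all assumptions made for the example.

    import torch
    import torch.nn as nn

    actor = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))   # proposes an action
    critic = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # scores the state
    opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

    for step in range(2000):
        state = torch.rand(64, 1)                        # toy "problem instance"
        mean = actor(state)
        dist = torch.distributions.Normal(mean, 0.1)
        action = dist.sample()
        reward = -(action - 2.0 * state).pow(2)          # toy goal: output twice the state
        value = critic(state)
        advantage = (reward - value).detach()            # the critic's judgement of the outcome
        actor_loss = -(dist.log_prob(action) * advantage).mean()
        critic_loss = (reward.detach() - value).pow(2).mean()
        opt.zero_grad(); (actor_loss + critic_loss).backward(); opt.step()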

    The researchers then set up this artificial intelligence method, assigning it the task of discovering on its own how to control the qubit.

    “So, we let artificial intelligence find its own solution, without giving it preconceptions or examples,” Prati says. “It found another solution that is faster than the original one, and furthermore it adapts when there are disturbances.”

    In other words, he adds, artificial intelligence “has understood the phenomenon and generalised the result better than us”.

    “It is as if artificial intelligence was able to discover by itself how to teleport qubits regardless of the disturbance in place, even in cases where we do not already have any solution,” he explains.

    “With this work we have shown that the design and control of quantum computers can benefit from the use of artificial intelligence.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:17 am on November 22, 2017
    Tags: Deep Learning, Reconstructing a hologram to form a microscopic image, This deep-learning–based framework opens up myriad opportunities to design fundamentally new coherent imaging systems, UCLA engineers use deep learning to reconstruct holograms and improve optical microscopy

    From UCLA: “UCLA engineers use deep learning to reconstruct holograms and improve optical microscopy” 


    UCLA Newsroom

    November 20, 2017

    Nikki Lin
    310-206-8278
    nlin@cnsi.ucla.edu

    The technique developed at UCLA uses deep learning to produce high-resolution pictures from lower-resolution microscopic images. Ozcan Research Group/UCLA

    A form of machine learning called deep learning is one of the key technologies behind recent advances in applications like real-time speech recognition and automated image and video labeling.

    The approach, which uses multi-layered artificial neural networks to automate data analysis, also has shown significant promise for health care: It could be used, for example, to automatically identify abnormalities in patients’ X-rays, CT scans and other medical images and data.

    In two new papers, UCLA researchers report that they have developed new uses for deep learning: reconstructing a hologram to form a microscopic image of an object and improving optical microscopy.

    Their new holographic imaging technique produces better images than current methods that use multiple holograms, and it’s easier to implement because it requires fewer measurements and performs computations faster.

    The research was led by Aydogan Ozcan, an associate director of the UCLA California NanoSystems Institute and the Chancellor’s Professor of Electrical and Computer Engineering at the UCLA Henry Samueli School of Engineering and Applied Science; and by postdoctoral scholar Yair Rivenson and graduate student Yibo Zhang, both of UCLA’s electrical and computer engineering department.

    For one study (PDF), published in Light: Science and Applications, the researchers produced holograms of Pap smears, which are used to screen for cervical cancer, and blood samples, as well as breast tissue samples. In each case, the neural network learned to extract and separate the features of the true image of the object from undesired light interference and from other physical byproducts of the image reconstruction process.

    “These results are broadly applicable to any phase recovery and holographic imaging problem, and this deep-learning–based framework opens up myriad opportunities to design fundamentally new coherent imaging systems, spanning different parts of the electromagnetic spectrum, including visible wavelengths and even X-rays,” said Ozcan, who also is an HHMI Professor at the Howard Hughes Medical Institute.

    Another advantage of the new approach was that it was achieved without any modeling of light–matter interaction or a solution of the wave equation, which can be challenging and time-consuming to model and calculate for each individual sample and form of light.

    “This is an exciting achievement since traditional physics-based hologram reconstruction methods have been replaced by a deep-learning–based computational approach,” Rivenson said.

    Other members of the team were UCLA researchers Harun Günaydin and Da Teng, both members of Ozcan’s lab.

    In the second study, published in the journal Optica, the researchers used the same deep-learning framework to improve the resolution and quality of optical microscopic images.

    That advance could help diagnosticians or pathologists looking for very small-scale abnormalities in a large blood or tissue sample, and Ozcan said it represents the powerful opportunities for deep learning to improve optical microscopy for medical diagnostics and other fields in engineering and the sciences.

    Ozcan’s research is supported by the National Science Foundation–funded Precise Advanced Technologies and Health Systems for Underserved Populations and by the NSF, as well as the Army Research Office, the National Institutes of Health, the Howard Hughes Medical Institute, the Vodafone Americas Foundation and the Mary Kay Foundation.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    UCLA Campus

    For nearly 100 years, UCLA has been a pioneer, persevering through impossibility, turning the futile into the attainable.

    We doubt the critics, reject the status quo and see opportunity in dissatisfaction. Our campus, faculty and students are driven by optimism. It is not naïve; it is essential. And it has fueled every accomplishment, allowing us to redefine what’s possible, time after time.

    This can-do perspective has brought us 12 Nobel Prizes, 12 Rhodes Scholarships, more NCAA titles than any university and more Olympic medals than most nations. Our faculty and alumni helped create the Internet and pioneered reverse osmosis. And more than 100 companies have been created based on technology developed at UCLA.

     
  • richardmitnick 8:07 am on July 10, 2017
    Tags: Deep Learning, ECG-electrocardiogram, Stanford Machine Learning Group

    From Stanford: “Stanford computer scientists develop an algorithm that diagnoses heart arrhythmias with cardiologist-level accuracy” 

    Stanford University

    July 6, 2017
    Taylor Kubota


    Stanford researchers say their algorithm could bring quick, accurate diagnoses of heart arrhythmias to people without ready access to cardiologists. Video by Kurt Hickman.

    We train a 34-layer convolutional neural network (CNN) to detect arrhythmias in arbitrary length ECG time-series.

    The network takes as input a time-series of raw ECG signal, and outputs a sequence of label predictions. The 30-second-long ECG signal is sampled at 200 Hz, and the model outputs a new prediction once every second. We arrive at an architecture which is 33 layers of convolution followed by a fully connected layer and a softmax.

    To make the optimization of such a deep model tractable, we use residual connections and batch-normalization. The depth increases both the non-linearity of the computation as well as the size of the context window for each classification decision. Stanford Machine Learning Group.
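
    The sketch below is a drastically shortened, hypothetical version of such an architecture, not the Stanford group's code: 1D convolutions with batch normalization and residual (skip) connections, progressively downsampling the 200 Hz signal and emitting a sequence of class scores. The real 34-layer network keeps downsampling until it produces one prediction per second, with a softmax over the 14 rhythm classes.

    import torch
    import torch.nn as nn

    class ResBlock1d(nn.Module):
        """Residual block: conv -> BN -> ReLU -> conv -> BN, plus a skip connection."""
        def __init__(self, channels, stride=1):
            super().__init__()
            self.conv1 = nn.Conv1d(channels, channels, 15, stride=stride, padding=7)
            self.bn1 = nn.BatchNorm1d(channels)
            self.conv2 = nn.Conv1d(channels, channels, 15, padding=7)
            self.bn2 = nn.BatchNorm1d(channels)
            self.skip = nn.MaxPool1d(stride) if stride > 1 else nn.Identity()
        def forward(self, x):
            out = self.bn2(self.conv2(torch.relu(self.bn1(self.conv1(x)))))
            return torch.relu(out + self.skip(x))

    class ToyECGNet(nn.Module):
        def __init__(self, n_classes=14):
            super().__init__()
            self.stem = nn.Conv1d(1, 32, 15, padding=7)
            self.blocks = nn.Sequential(*[ResBlock1d(32, stride=2) for _ in range(4)])
            self.head = nn.Conv1d(32, n_classes, 1)      # per-time-step class scores
        def forward(self, x):
            return self.head(self.blocks(self.stem(x)))  # apply softmax over dim 1 at inference

    model = ToyECGNet()
    ecg = torch.rand(2, 1, 6000)                          # two 30-second clips sampled at 200 Hz
    logits = model(ecg)                                   # (2, 14, 375); the real model downsamples further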

    We collect and annotate a dataset of 64,121 ECG records from 29,163 patients. Stanford Machine Learning Group.

    The ECG data is sampled at a frequency of 200 Hz and is collected from a single-lead, noninvasive and continuous monitoring device called the Zio Patch (iRhythm Technologies), which has a wear period of up to 14 days. Each ECG record in the training set is 30 seconds long and can contain more than one rhythm type. Each record is annotated by a clinical ECG expert: the expert highlights segments of the signal and marks them as corresponding to one of the 14 rhythm classes.

    We collect a test set of 336 records from 328 unique patients. For the test set, ground truth annotations for each record were obtained by a committee of three board-certified cardiologists; there are three committees responsible for different splits of the test set. The cardiologists discussed each individual record as a group and came to a consensus labeling. For each record in the test set we also collect 6 individual annotations from cardiologists not participating in the group. This is used to assess performance of the model compared to an individual cardiologist.

    A new algorithm developed by Stanford computer scientists can sift through hours of heart rhythm data generated by some wearable monitors to find sometimes life-threatening irregular heartbeats, called arrhythmias. The algorithm, detailed in an arXiv paper, performs better than trained cardiologists, and has the added benefit of being able to sort through data from remote locations where people don’t have routine access to cardiologists.

    “One of the big deals about this work, in my opinion, is not just that we do abnormality detection but that we do it with high accuracy across a large number of different types of abnormalities,” said Awni Hannun, a graduate student and co-lead author of the paper. “This is definitely something that you won’t find to this level of accuracy anywhere else.”

    People suspected to have an arrhythmia will often get an electrocardiogram (ECG) in a doctor’s office. However, if an in-office ECG doesn’t reveal the problem, the doctor may prescribe the patient a wearable ECG that monitors the heart continuously for two weeks. The resulting hundreds of hours of data would then need to be inspected second by second for any indications of problematic arrhythmias, some of which are extremely difficult to differentiate from harmless heartbeat irregularities.

    Researchers in the Stanford Machine Learning Group, led by Andrew Ng, an adjunct professor of computer science, saw this as a data problem. They set out to develop a deep learning algorithm to detect 14 types of arrhythmia from ECG signals. They collaborated with the heartbeat monitor company iRhythm to collect a massive dataset that they used to train a deep neural network model. In seven months, it was able to diagnose these arrhythmias about as accurately as cardiologists and outperform them in most cases.

    The researchers believe that this algorithm could someday help make cardiologist-level arrhythmia diagnosis and treatment more accessible to people who are unable to see a cardiologist in person. Ng thinks this is just one of many opportunities for deep learning to improve patients’ quality of care and help doctors save time.

    Building a heartbeat interpreter

    The group trained their algorithm on data collected from iRhythm’s wearable ECG monitor. Patients wear a small chest patch for two weeks and carry out their normal day-to-day activities while the device records each heartbeat for analysis. The group took approximately 30,000 30-second clips from various patients that represented a variety of arrhythmias.

    “The differences in the heartbeat signal can be very subtle but have massive impact in how you choose to tackle these detections,” said Pranav Rajpurkar, a graduate student and co-lead author of the paper. “For example, two forms of the arrhythmia known as second-degree atrioventricular block look very similar, but one requires no treatment while the other requires immediate attention.”

    To test accuracy of the algorithm, the researchers gave a group of three expert cardiologists 300 undiagnosed clips and asked them to reach a consensus about any arrhythmias present in the recordings. Working with these annotated clips, the algorithm could then predict how those cardiologists would label every second of other ECGs with which it was presented, in essence, giving a diagnosis.

    Success and the future

    The group had six different cardiologists, working individually, diagnose the same 300-clip set. The researchers then compared which more closely matched the consensus opinion – the algorithm or the cardiologists working independently. They found that the algorithm is competitive with the cardiologists, and able to outperform cardiologists on most arrhythmias.

    “There was always an element of suspense when we were running the model and waiting for the result to see if it was going to do better than the experts,” said Rajpurkar. “And we had these exciting moments over and over again as we pushed the model closer and closer to expert performance and then finally went beyond it.”

    In addition to cardiologist-level accuracy, the algorithm has the advantage that it does not get fatigued and can make arrhythmia detections instantaneously and continuously.

    Long term, the group hopes this algorithm could be a step toward expert-level arrhythmia diagnosis for people who don’t have access to a cardiologist, as in many parts of the developing world and in other rural areas. More immediately, the algorithm could be part of a wearable device that at-risk people keep on at all times that would alert emergency services to potentially deadly heartbeat irregularities as they’re happening.

    Additional authors of the paper include Masoumeh Haghpanahi and Codie Bourn of iRhythm. Additional information is available at the project website.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

     
  • richardmitnick 11:47 am on April 6, 2017
    Tags: Deep Learning

    From Technion: “Deep Learning” 

    Israel Institute of Technology

    April 6, 2017

    Researchers from the Technion Computer Science Department introduce unprecedented theoretical foundation to one of the hottest scientific fields today – deep learning.

    In a recent article, Prof. Elad and his PhD students Vardan Papyan and Yaniv Romano introduce a broad theory explaining many of the important aspects of multi-layered neural networks, which are the essence of deep learning.

    Initial seed ideas in the 1940s and 1950s, elementary applications in the 1960s, promising signs in the 1980s, a massive decline and stagnation in the 1990s, followed by dramatic awakening development in the past decade. This, in a nutshell, is the story of one of the hottest scientific fields in data sciences – neural networks, and more specifically, deep learning.

    Deep learning fascinates major companies including Google, Facebook, Microsoft, LinkedIn, IBM and Mobileye. According to Technion Professor Michael Elad, this area came to life in the past decade following a series of impressive breakthroughs. However, “while empirical research charged full speed ahead and made surprising achievements, the required theoretical analysis trailed behind and has not, until now, managed to catch up with the rapid development in the field. Now I am happy to announce that we have highly significant results in this area that close this gap.”

    “One could say that up to now, we have been working with a black box called a neural network,” Elad explains. “This box has been serving us very well, but no one was able to identify the reasons and conditions for its success. In our study, we managed to open it up, analyze it and provide a theoretical explanation for the origins of its success. Now, armed with this new perspective, we can answer fundamental questions such as failure modes in this system and ways to overcome them. We believe that the proposed analysis will lead to major breakthroughs in the coming few years.”

    But first a brief background explanation.

    Convolutional neural networks, and more broadly, multi-layered neural networks, pose an engineering approach that provides the computer with a potential for learning that brings it close to human reasoning. Ray Kurzweil, Google’s chief futurist in this field, believes that by 2029 computerized systems will be able to demonstrate not only impressive cognitive abilities, but even genuine emotional intelligence, such as understanding a sense of humor and human emotions. Deloitte has reported that the field of deep learning is growing at a dizzying rate of 25% per year, and is expected to become a 43 billion USD industry per year by 2020.

    Neural networks, mainly those with a feed-forward structure that are currently at the forefront of research in the fields of machine learning and artificial intelligence, are systems that perform rapid, efficient and accurate cataloging of data. To some extent, these artificial systems are reminiscent of the human brain and, like the brain, they are made up of layers of neurons interconnected by synapses. The first layer of the network receives the input and “filters” it for the second, deeper layer, which performs additional filtering, and so on and so forth. Thus the information is diffused through a deep and intricate artificial network, at the end of which the desired output is obtained.

    If, for example, the task is to identify faces, the first layers will take the initial information and extract basic features such as the boundaries between the different areas in the face image; the next layers will identify more specific elements such as eyebrows, pupils and eyelids; while the deeper layers of the network will identify more complex parts of the face, such as the eyes; the end result will be the identification of a particular face, i.e., of a specific person. “Obviously the process is far more complex, but this is the principle: each layer is a sort of filter that transmits processed information to the next layer at an increasing level of abstraction. In this context, the term ‘deep learning’ refers to the multiple layers in the neural network, a structure that has been empirically found to be especially effective for identification tasks.

    The hierarchical structure of these networks enables them to analyze complex information, identify patterns in this information, categorize it, and more. Their greatness lies in the fact that they can learn from examples, i.e. if we feed them millions of tagged images of people, cats, dogs and trees, the network can learn to identify the various categories in new images, and do so at unprecedented levels of accuracy, in comparison with previous approaches in machine learning.”

    The first artificial neural network was presented by McCulloch and Pitts in 1943. In the 1960s, Frank Rosenblatt from Cornell University introduced the first learning algorithm for which convergence could be proven. In the 1980s, important empirical achievements were added to this development.

    It was clear to all the scientists engaged in this field in those years that there was great potential here, but they were utterly discouraged by the many failures, and the field went into a long period of hibernation. Then, less than a decade ago, there was a great revival. Why? “Because of the dramatic surge in computing capabilities, making it possible to run more daring algorithms on far more data. Suddenly, these networks succeeded in highly complex tasks: identifying handwritten digits (with accuracy of 99% and above), identifying emotions such as sadness, humor and anger in a given text and more.” One of the key figures in this revival was Yann LeCun, a professor from NYU who insisted on studying these networks, even at times when the task seemed hopeless. Prof. LeCun, together with Prof. Geoffrey Hinton and Prof. Yoshua Bengio from Canada, are the founding fathers of this revolutionary technology.

    Real Time Translation

    In November 2012, Rick Rashid, director of research at Microsoft, introduced the simultaneous translation system developed by the company on the basis of deep learning. At a lecture in China, Rashid spoke in English and his words underwent a computerized process of translation, so that the Chinese audience would hear the lecture in their own language in real time. The mistakes in the process were few – one mistake per 14 words on average. This is in comparison with a rate of 1:4, which was considered acceptable and even successful several years earlier. This translation process is used today by Skype, among others, and in Microsoft’s various products.

    Beating the World Champion

    Google did not sit idly by. It recruited the best minds in the field, including the aforementioned Geoffrey Hinton, and has actually become one of the leading research centers in this regard. The Google Brain project was established on a system of unprecedented size and power, based on 16,000 computer cores producing around 100 trillion inter-neuronal interactions. This project, which was established for the purpose of image content analysis, quickly spread to the rest of the technologies used by Google. Google’s AlphaGo system, which is based on a convolutional neural network, managed to beat the world champion at the game of Go. The young Facebook, with the help of the aforementioned Yann LeCun, has already made significant inroads into the field of deep learning, with extremely impressive achievements such as identifying people in photos. The objective, according to Facebook CEO Mark Zuckerberg, is to create computerized systems that will be superior to human beings in terms of vision, hearing, language and thinking.

    Today, no one doubts that deep learning is a dramatic revolution when it comes to speed of calculation and processing huge amounts of data with a high level of accuracy. Moreover, the applications of this revolution are already being used in a huge variety of areas: encryption, intelligence, autonomous vehicles (Mobileye’s solution is based on this technology), object recognition in stills and video, speech recognition and more.

    Back to the Foundations

    Surprisingly enough, however, the great progress described above has not included a basic theoretical understanding that explains the source of these networks’ effectiveness. Theory, as in many other cases in the history of technology, has lagged behind practice.

    This is where Prof. Elad’s group enters the picture, with a new article that presents a basic and in-depth theoretical explanation for deep learning. The people responsible for the discovery are Prof. Elad and his three doctoral students: Vardan Papyan, Jeremias Sulam and Yaniv Romano. Surprisingly, this team came to this field almost by accident, from research in a different arena: sparse representations. Sparse representations are a universal information model that describes data as molecules formed from the combination of a small number of atoms (hence the term ‘sparse’). This model has been tremendously successful over the past two decades and has led to significant breakthroughs in signal and image processing, machine learning, and other fields.

    So, how does this model relate to deep neural networks? It turns out that the principle of sparseness continues to play a major role, and even more so in this case. “Simply put, in our study we propose a hierarchical mathematical model for the representation of the treated information, whereby atoms are connected to each other and form molecules, just as before, except that now the assembly process continues: molecules form cells, cells form tissues, which in turn form organs and, in the end, the complete body – a body of information – is formed. The neural network’s job is to break up the complete information into its components in order to understand the data and its origin.”
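
    For readers who want to see the flavour of that connection, here is a heavily simplified sketch (my own illustration, with random stand-in dictionaries, not the paper's analysis): if data are built hierarchically from sparse combinations of "atoms", then estimating each layer's sparse representation by a thresholded projection onto that layer's dictionary has exactly the shape of a feed-forward pass through a network with ReLU-like nonlinearities.

    import numpy as np

    rng = np.random.default_rng(0)

    def soft_threshold(v, theta):
        """Keep only coefficients that stand out; this one-sided version is just ReLU(v - theta)."""
        return np.maximum(v - theta, 0.0)

    # Random dictionaries standing in for learned layer weights: atoms -> molecules -> cells ...
    D1 = rng.normal(size=(100, 80))                      # layer-1 dictionary (signal dim 100, 80 atoms)
    D2 = rng.normal(size=(80, 60))                       # layer-2 dictionary
    x = rng.normal(size=100)                             # the observed "body of information"

    # Layered thresholding pursuit: estimate each layer's sparse representation in turn.
    gamma1 = soft_threshold(D1.T @ x, theta=1.0)         # same computation as a dense layer followed by ReLU
    gamma2 = soft_threshold(D2.T @ gamma1, theta=1.0)    # deeper layer: sparser, more abstract building blocks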

    Papyan and Sulam created the initial infrastructure in two articles completed in June 2016, while in the follow-up work Papyan and Romano diverted the discussion to deep learning and neural networks. The final article, as noted, puts forward the theoretical infrastructure that explains the operating principles of deep neural networks and their success in learning tasks.

    “We can illustrate the significance of our discovery using an analogy to the world of physics,” says Prof. Elad. “Imagine an astrophysicist who monitors the movement of celestial objects in search of the trajectories of stars. To explain these trajectories, and even predict them, he will define a specific mathematical model. In order for the model to be in line with reality, he will find that it is necessary to add complementary elements to it – black holes and antimatter, which will be investigated later using experimental tools.

    “We took the same path: We started from the real scenario of data being processed by a multi-layered neural network, and formulated a mathematical model for the data to be processed. This model enabled us to show that one possible way to decompose the data into its building blocks is the feed-forward neural network, but this could now be accompanied by an accurate prediction of its performance. Here, however, and unlike the astrophysical analogy, we can not only analyze and predict reality but also improve the studied systems, since they are under our control.”

    Prof. Elad emphasizes that “our expertise in this context is related to handling signals and images, but the theoretical paradigm that we present in the article could be relevant to any field, from cyberspace to autonomous navigation, from deciphering emotion in a text to speech recognition. The field of deep learning has made huge advances even without us, but the theoretical infrastructure that we are providing here closes much of the enormous gap between theory and practice that existed in this field, and I have no doubt that our work will provide a huge boost to the practical aspects of deep learning.”

    About the Doctoral Students

    When Vardan Papyan completed his master’s degree, supervised by Prof. Elad, he didn’t intend to continue studying towards a PhD. However, during the final MSc exam, the examiners determined that his work was almost a complete doctoral thesis. After consulting with the Dean of the Computer Science Faculty and the Dean of the Technion’s Graduate School, it was decided to admit him to the direct Ph.D. track with the understanding that he would complete his doctorate within less than a year.

    Yaniv Romano, a student in the direct Ph.D. track, has already won several prestigious awards. In the summer of 2015, he spent several months as an intern at Google Mountain View, USA, and left an unforgettable impression with his original solution to the single-image super-resolution problem, which is being considered for several of Google’s products.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Technion Campus

    A science and technology research university, among the world’s top ten,
    dedicated to the creation of knowledge and the development of human capital and leadership,
    for the advancement of the State of Israel and all humanity.

     
  • richardmitnick 3:51 pm on May 22, 2016
    Tags: Deep Learning

    From SA: “Unveiling the Hidden Layers of Deep Learning” 

    Scientific American

    May 20, 2016
    Amanda Montañez

    Credit: Daniel Smilkov and Shan Carter

    In a recent Scientific American article entitled “Springtime for AI: The Rise of Deep Learning,” computer scientist Yoshua Bengio explains why complex neural networks are the key to true artificial intelligence as people have long envisioned it. It seems logical that the way to make computers as smart as humans is to program them to behave like human brains. However, given how little we know of how the brain functions, this task seems more than a little daunting. So how does deep learning work?

    This visualization by Jen Christiansen explains the basic structure and function of neural networks.

    Graphic by Jen Christiansen; PUNCHSTOCK (faces)

    Evidently, these so-called “hidden layers” play a key role in breaking down visual components to decode the image as a whole. And we know there is an order to how the layers act: from input to output, each layer handles increasingly complex information. But beyond that, the hidden layers—as their name suggests—are shrouded in mystery.

    As part of a recent collaborative project called TensorFlow, Daniel Smilkov and Shan Carter created a neural network playground, which aims to demystify the hidden layers by allowing users to interact and experiment with them.

    Visualization by Daniel Smilkov and Shan Carter
    Click the image to launch the interactive [in the original article].

    There is a lot going on in this visualization, and I was recently fortunate enough to hear Fernanda Viégas and Martin Wattenberg break some of it down in their keynote talk at OpenVisConf. (Fernanda and Martin were part of the team behind TensorFlow, which is a much more complex, open-source tool for using neural networks in real-world applications.)

    Rather than something as complicated as faces, the neural network playground uses blue and orange points scattered within a field to “teach” the machine how to find and echo patterns. The user can select different dot-arrangements of varying degrees of complexity, and manipulate the learning system by adding new hidden layers, as well as new neurons within each layer. Then, each time the user hits the “play” button, she can watch as the background color gradient shifts to approximate the arrangement of blue and orange dots. As the pattern becomes more complex, additional neurons and layers help the machine to complete the task more successfully.
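
    The same experiment can be reproduced outside the browser in a few lines (an informal sketch; the playground runs its own small in-browser network, and the dataset, layer sizes and training settings below are arbitrary choices): generate two interleaved spirals and train a tiny network with two hidden layers to separate the orange points from the blue ones.

    import numpy as np
    import torch
    import torch.nn as nn

    # Two interleaved spirals, the hardest of the playground's dot arrangements.
    n = 500
    t = np.sqrt(np.random.rand(n)) * 3 * np.pi
    spiral = lambda sign: np.stack([sign * t * np.cos(t), sign * t * np.sin(t)], axis=1)
    pts = np.vstack([spiral(+1), spiral(-1)]) + 0.2 * np.random.randn(2 * n, 2)
    X = torch.tensor(pts, dtype=torch.float32)
    y = torch.tensor([0] * n + [1] * n, dtype=torch.float32).unsqueeze(1)

    # Two hidden layers of eight neurons each, as one might configure in the playground.
    net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 8), nn.Tanh(), nn.Linear(8, 1))
    opt = torch.optim.Adam(net.parameters(), lr=0.01)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(3000):
        loss = loss_fn(net(X), y)                        # blue-vs-orange classification error
        opt.zero_grad(); loss.backward(); opt.step()     # weights shift, the decision boundary curls around the spirals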

    The machine struggles to decode this more complex spiral pattern.

    Besides the neuron layers, the machine has other meaningful features, such as the connections among the neurons. The connections appear as either blue or orange lines, blue being positive—that is, the output for each neuron is the same as its content—and orange being negative—meaning the output is the opposite of each neuron’s values. Additionally, the thickness and opacity of the connection lines indicate the confidence of the prediction each neuron is making, much like the connections in our brains strengthen as we advance through a learning process.

    Interestingly, as we get better at building neural networks for machines, we may end up revealing new information about how our own brains work. Visualizing and playing with the hidden layers seems like a great way to facilitate this process while also making the concept of deep learning accessible to a wider audience.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     