Tagged: MIT Technology Review

  • richardmitnick 5:42 pm on January 8, 2020 Permalink | Reply
    Tags: "NASA’s new exoplanet hunter found its first potentially habitable world", , , , , MIT Technology Review, , , Planet TOI 700 d   

    From MIT Technology Review: “NASA’s new exoplanet hunter found its first potentially habitable world” 


    January 8, 2020
    Neel V. Patel

    TOI 700 d

    NASA’s Transiting Exoplanet Survey Satellite (TESS) has just found a new potentially habitable exoplanet the size of Earth, located about 100 light-years away. It’s the first potentially habitable exoplanet the telescope has found since it was launched in April 2018.

    NASA/MIT TESS replaced Kepler in search for exoplanets

    It’s called TOI 700 d [science paper: https://arxiv.org/abs/2001.00952]. It orbits a red dwarf star about 40% less massive than the sun, with roughly half its surface temperature. The planet itself is about 1.2 times the size of Earth and circles its host star every 37 days, receiving close to 86% of the sunlight Earth does.

    Most notably, TOI 700 d is in what’s thought to be its star’s habitable zone, meaning it’s at a distance where temperatures ought to be moderate enough to support liquid water on the surface. This raises hopes TOI 700 d could be amenable to life—even though no one can agree on what it means for a planet to be habitable.

    A set of 20 different simulations of TOI 700 d suggests the planet is rocky and has an atmosphere that helps it retain water, but there’s a chance it might simply be a gaseous mini-Neptune. We won’t know for sure until follow-up observations are made with sharper instruments, such as the upcoming James Webb Space Telescope, which is planned for launch in March 2021.

    NASA/ESA/CSA Webb Telescope annotated

    TESS finds exoplanets using the tried-and-true transit technique: watching for the tiny, periodic dips in a star’s brightness as a planet crosses in front of it.

    Planet transit. NASA/Ames
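
    As a rough illustration of the numbers involved, here is a short calculation (not from the article) of the brightness dip TESS would see for a planet like TOI 700 d, and of its equilibrium temperature from the reported 86% insolation. The stellar radius of 0.42 solar radii is an assumed placeholder, since the article gives only the star's approximate mass and temperature.

```python
R_EARTH_KM = 6_371.0
R_SUN_KM = 695_700.0

# Assumed stellar radius for a small red dwarf (not stated in the article).
r_star = 0.42 * R_SUN_KM
r_planet = 1.2 * R_EARTH_KM            # from the article: ~1.2 Earth radii

# Transit method: the fraction of starlight blocked is (R_planet / R_star)^2.
depth = (r_planet / r_star) ** 2
print(f"transit depth ~ {depth * 100:.3f}% dip in brightness")

# Equilibrium temperature scales with insolation as S^(1/4); Earth's is ~255 K.
T_EQ_EARTH = 255.0
insolation = 0.86                      # from the article: 86% of Earth's sunlight
print(f"equilibrium temperature ~ {T_EQ_EARTH * insolation ** 0.25:.0f} K")
```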

    Data from NASA’s Spitzer Space Telescope was also used to refine measurements of the planet’s size and orbit.

    NASA/Spitzer Infrared Telescope

    TESS is NASA’s newest exoplanet-hunting space telescope, the successor to the renowned Kepler Space Telescope, which was used to find some 2,600 exoplanets.

    NASA/Kepler Telescope, and K2 March 7, 2009 until November 15, 2018

    TESS, able to survey 85% of the night sky (400 times more than Kepler could monitor), is about to finish its primary two-year mission but has fallen woefully short of expectations. NASA initially thought TESS would find more than 20,000 transiting exoplanets, but with only months left it has identified just 1,588 candidates. Even so, the telescope’s mission will almost surely be extended.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 12:31 pm on December 20, 2019 Permalink | Reply
    Tags: "These are the stars the Pioneer and Voyager spacecraft will encounter", , , , , , , , , , MIT Technology Review, NASA Pioneer 10 and 11,   

    From MIT Technology Review: “These are the stars the Pioneer and Voyager spacecraft will encounter” 


    Dec 20, 2019
    Emerging Technology from the arXiv

    As four NASA spacecraft exit our solar system, a 3D map [below] of the Milky Way reveals which stars they’re likely to pass close to tens of thousands of years from now.

    The Laniakea supercluster of galaxies. R. Brent Tully, Hélène Courtois, Yehuda Hoffman & Daniel Pomarède, Nature, http://www.nature.com/nature/journal/v513/n7516/full/nature13674.html. The Milky Way is the red dot.

    Milky Way. NASA/JPL-Caltech/ESO/R. Hurt. The central bar is visible in this image.

    NASA Pioneer 10

    NASA Pioneer 11

    NASA/Voyager 1

    NASA/Voyager 2

    During the 1970s, NASA launched four of the most important spacecraft ever built. When Pioneer 10 began its journey to Jupiter, astronomers did not even know whether it was possible to pass through the asteroid belt unharmed.

    The inner solar system, from the Sun to Jupiter, including the asteroid belt (the white donut-shaped cloud), the Hildas (the orange “triangle” just inside the orbit of Jupiter), the Jupiter trojans (green), and the near-Earth asteroids. The group that leads Jupiter is called the “Greeks” and the trailing group the “Trojans” (Murray and Dermott, Solar System Dynamics, p. 107). The image is based on the JPL DE-405 ephemeris and the Minor Planet Center asteroid database of July 6, 2006, looking down on the ecliptic plane as seen on August 14, 2006. Mdf at English Wikipedia.

    Only after it emerged safe was Pioneer 11 sent on its way.

    Both sent back the first close-up pictures of Jupiter, with Pioneer 11 continuing to Saturn. Voyager 1 and 2 later took even more detailed measurements, and extended the exploration of the solar system to Uranus and Neptune.

    All four of these spacecraft are now on their way out of the solar system, heading into interstellar space at a rate of about 10 kilometers per second. They will travel about a parsec (3.26 light-years) every 100,000 years, and that raises an important question: What stars will they encounter next?
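
    A quick back-of-the-envelope check of that figure, assuming a steady 10 kilometers per second:

```python
KM_PER_PARSEC = 3.086e13
SECONDS_PER_YEAR = 3.156e7
speed_km_s = 10.0                      # approximate heliocentric speed of the spacecraft

years_per_parsec = KM_PER_PARSEC / speed_km_s / SECONDS_PER_YEAR
print(f"{years_per_parsec:,.0f} years to cover one parsec")   # roughly 98,000 years
```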

    This is harder to answer than it seems. Stars are not stationary but moving rapidly through interstellar space. Without knowing their precise velocity, it’s impossible to say which ones our interstellar travelers are on course to meet.

    Enter Coryn Bailer-Jones at the Max Planck Institute for Astronomy in Germany and Davide Farnocchia at the Jet Propulsion Laboratory in Pasadena, California. The two researchers have performed this calculation using a new 3D map of star positions and velocities throughout the Milky Way.

    Max Planck Institute for Astronomy


    Max Planck Institute for Astronomy campus, Heidelberg, Baden-Württemberg, Germany

    NASA JPL


    NASA JPL-Caltech Campus

    This has allowed them to work out for the first time which stars the spacecraft will rendezvous with in the coming millennia. “The closest encounters for all spacecraft take place at separations between 0.2 and 0.5 parsecs within the next million years,” they say.

    Their results were made possible by the observations of a space telescope called Gaia.

    ESA/GAIA satellite

    Since 2014, Gaia has sat some 1.5 million kilometers from Earth, recording the positions of 1 billion stars, planets, comets, asteroids, quasars, and so on. At the same time, it has been measuring the velocities of the brightest 150 million of these objects.

    The result is a three-dimensional map of the Milky Way and the way astronomical objects within it are moving. It is the latest incarnation of this map, Gaia Data Release 2 or GDR2, that Bailer-Jones and Farnocchia have used for their calculations.

    ESA/Gaia/DPAC

    The map makes it possible to project the future positions of stars in our neighborhood and to compare them with the future positions of the Pioneer and Voyager spacecraft, calculated using their last known positions and velocities.
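
    A minimal sketch of that comparison, assuming straight-line motion for both star and spacecraft; the full calculation also integrates orbits through the galaxy's gravitational potential, which this toy version ignores. The positions and velocities below are made-up placeholder numbers, not Gaia values.

```python
import numpy as np

# Hypothetical values: positions in parsecs, velocities in parsecs per year.
star_pos = np.array([50.0, 20.0, -10.0])
star_vel = np.array([-2.0e-5, 1.0e-5, 0.0])
craft_pos = np.array([0.0, 0.0, 0.0])
craft_vel = np.array([1.0e-5, 0.5e-5, -0.2e-5])   # ~10 km/s is about 1e-5 pc/yr

# With linear motion the separation is |dp + dv*t|, minimized at t = -(dp.dv)/(dv.dv).
dp = star_pos - craft_pos
dv = star_vel - craft_vel
t_min = -np.dot(dp, dv) / np.dot(dv, dv)
d_min = np.linalg.norm(dp + dv * t_min)
print(f"closest approach: {d_min:.2f} pc, {t_min:,.0f} years from now")
```

    Repeating this closest-approach calculation for every star in the catalogue yields the encounter list described below.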

    This information yields a list of stars that the spacecraft will encounter in the coming millennia. Bailer-Jones and Farnocchia define a close encounter as flying within 0.2 or 0.3 parsecs.

    The first spacecraft to encounter another star will be Pioneer 10 in 90,000 years. It will approach the orange-red star HIP 117795 in the constellation of Cassiopeia at a distance of 0.231 parsecs. Then, in 303,000 years, Voyager 1 will pass a star called TYC 3135-52-1 at a distance of 0.3 parsecs. And in 900,000 years, Pioneer 11 will pass a star called TYC 992-192-1 at a distance of 0.245 parsecs.

    These fly-bys are all at a distance of less than one light-year and in some cases might even graze the orbits of the stars’ most distant comets.

    Voyager 2 is destined for a lonelier future. According to the team’s calculations, it will never come within 0.3 parsecs of another star in the next 5 million years, although it is predicted to come within 0.6 parsecs of a star called Ross 248 in the constellation Andromeda in 42,000 years.

    Andromeda Galaxy (Messier 31) with its satellite galaxy Messier 32. Copyright Terry Hancock.

    Milkdromeda: Andromeda (left) in Earth’s night sky in 3.75 billion years. NASA.

    These interstellar explorers will eventually collide with or be captured by other stars. It’s not possible yet to say which ones these will be, but Bailer-Jones and Farnocchia have an idea of the time involved. “The timescale for the collision of a spacecraft with a star is of order 10^20 years, so the spacecraft have a long future ahead of them,” they conclude.

    The Pioneer and Voyager spacecraft will soon be joined by another interstellar traveler. The New Horizons spacecraft that flew past Pluto in 2015 is heading out of the solar system but may yet execute a maneuver so that it intercepts a Kuiper Belt object on its way.

    NASA/New Horizons spacecraft

    Kuiper Belt. Minor Planet Center

    After that last course correction takes place, Bailer-Jones and Farnocchia will be able to work out its final destination.

    Ref: “Future stellar flybys of the Voyager and Pioneer spacecraft,” https://arxiv.org/abs/1912.03503

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 1:40 pm on September 24, 2019 Permalink | Reply
    Tags: For a long time there have been doubts that quantum machines would ever be able to outstrip classical computers at anything., MIT Technology Review, Quantum computing is something we need to start preparing for now., Quantum supremacy?, We still haven’t had confirmation from Google about what it’s done.

    From MIT Technology Review: “Here’s what quantum supremacy does—and doesn’t—mean for computing” 


    Sep 24, 2019
    Martin Giles


    Google has reportedly demonstrated for the first time that a quantum computer is capable of performing a task beyond the reach of even the most powerful conventional supercomputer in any practical time frame—a milestone known in the world of computing as “quantum supremacy.”

    The ominous-sounding term, which was coined by theoretical physicist John Preskill in 2012, evokes an image of Darth Vader–like machines lording it over other computers. And the news has already produced some outlandish headlines, such as one on the Infowars website that screamed, “Google’s ‘Quantum Supremacy’ to Render All Cryptography and Military Secrets Breakable.” Political figures have been caught up in the hysteria, too: Andrew Yang, a presidential candidate, tweeted that “Google achieving quantum computing is a huge deal. It means, among many other things, that no code is uncrackable.”

    Nonsense. It doesn’t mean that at all. Google’s achievement is significant, but quantum computers haven’t suddenly turned into computing colossi that will leave conventional machines trailing in the dust. Nor will they be laying waste to conventional cryptography in the near future—though in the longer term, they could pose a threat we need to start preparing for now.

    Here’s a guide to what Google appears to have achieved—and an antidote to the hype surrounding quantum supremacy.

    What do we know about Google’s experiment?

    We still haven’t had confirmation from Google about what it’s done. The information about the experiment comes from a paper titled “Quantum Supremacy Using a Programmable Superconducting Processor,” which was briefly posted on a NASA website before being taken down. Its existence was revealed in a report in the Financial Times—and a copy of the paper can be found here.

    The experiment is a pretty arcane one, but it required a great deal of computational effort. Google’s team used a quantum processor code-named Sycamore to prove that the figures pumped out by a random number generator were indeed truly random. They then worked out how long it would take Summit, the world’s most powerful supercomputer, to do the same task.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    The difference was stunning: while the quantum machine polished it off in 200 seconds, the researchers estimated that the classical computer would need 10,000 years.

    When the paper is formally published, other researchers may start poking holes in the methodology, but for now it appears that Google has scored a computing first by showing that a quantum machine can indeed outstrip even the most powerful of today’s supercomputers. “There’s less doubt now that quantum computers can be the future of high-performance computing,” says Nick Farina, the CEO of quantum hardware startup EeroQ.

    Why are quantum computers so much faster than classical ones?

    In a classical computer, bits that carry information represent either a 1 or a 0; but quantum bits, or qubits—which take the form of subatomic particles such as photons and electrons—can be in a kind of combination of 1 and 0 at the same time, a state known as “superposition.” Unlike bits, qubits can also influence one another through a phenomenon known as “entanglement,” which baffled even Einstein, who called it “spooky action at a distance.”

    Thanks to these properties, which are described in more detail in our quantum computing explainer, adding just a few extra qubits to a system increases its processing power exponentially. Crucially, quantum machines can crunch through large amounts of data in parallel, which helps them outpace classical machines that process data sequentially. That’s the theory. In practice, researchers have been laboring for years to prove conclusively that a quantum computer can do something even the most capable conventional one can’t. Google’s effort has been led by John Martinis, who has done pioneering work in the use of superconducting circuits to generate qubits.
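
    Those two properties can be illustrated with a small state-vector sketch in plain linear algebra (no quantum hardware involved): a Hadamard gate puts one qubit into superposition, a CNOT gate entangles it with a second, and the final loop shows why each extra qubit doubles the amount of classical data needed to describe the machine.

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)                         # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)    # Hadamard: creates superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)                 # entangles two qubits

# Start in |00>, put the first qubit into superposition, then entangle the pair.
state = np.kron(zero, zero)
state = np.kron(H, np.eye(2)) @ state
state = CNOT @ state
print(np.round(state, 3))   # [0.707 0 0 0.707]: the Bell state (|00> + |11>)/sqrt(2)

# Each added qubit doubles the number of amplitudes needed to track the state.
for n in (2, 10, 53):
    print(f"{n} qubits -> state vector of length {2 ** n:,}")
```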

    Doesn’t this speedup mean quantum machines can overtake other computers now?

    No. Google picked a very narrow task. Quantum computers still have a long way to go before they can best classical ones at most things—and they may never get there. But researchers I’ve spoken to since the paper appeared online say Google’s experiment is still significant because for a long time there have been doubts that quantum machines would ever be able to outstrip classical computers at anything.

    Until now, research groups have been able to reproduce the results of quantum machines with around 40 qubits on classical systems. Google’s Sycamore processor, which harnessed 53 qubits for the experiment, suggests that such emulation has reached its limits. “We’re entering an era where exploring what a quantum computer can do will now require a physical quantum computer … You won’t be able to credibly reproduce results anymore on a conventional emulator,” explains Simon Benjamin, a quantum researcher at the University of Oxford.
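
    The arithmetic behind that emulation limit is straightforward: a brute-force classical emulation stores one complex amplitude per basis state, so memory grows as 2^n. A rough estimate, assuming 16 bytes per double-precision complex amplitude:

```python
BYTES_PER_AMPLITUDE = 16   # one double-precision complex number

for n_qubits in (40, 53):
    n_amplitudes = 2 ** n_qubits
    terabytes = n_amplitudes * BYTES_PER_AMPLITUDE / 1e12
    print(f"{n_qubits} qubits: {n_amplitudes:.2e} amplitudes ~ {terabytes:,.0f} TB")

# 40 qubits needs ~18 TB (feasible on a large cluster); 53 qubits needs ~144,000 TB.
```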

    Isn’t Andrew Yang right that our cryptographic defenses can now be blown apart?

    Again, no. That’s a wild exaggeration. The Google paper makes clear that while its team has been able to show quantum supremacy in a narrow sampling task, we’re still a long way from developing a quantum computer capable of implementing Shor’s algorithm, which was developed in the 1990s to help quantum machines factor massive numbers. Today’s most popular encryption methods can be broken only by factoring such numbers—a task that would take conventional machines many thousands of years.
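
    For readers who want to see why factoring is the weak point, the number theory at the core of Shor's algorithm can be sketched classically: factoring N reduces to finding the period of a^x mod N, and it is only that period-finding step that a quantum computer speeds up. The brute-force version below works for toy numbers but is hopeless for the thousand-bit moduli used in real cryptography.

```python
from math import gcd

def factor_via_period(N, a):
    """Classical sketch of the reduction used by Shor's algorithm."""
    assert gcd(a, N) == 1
    # Find the period r of f(x) = a^x mod N by brute force.
    # This is the exponentially expensive step that a quantum computer accelerates.
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    if r % 2 == 1:
        return None          # unlucky choice of a; try another
    x = pow(a, r // 2, N)
    # The nontrivial factors of N divide gcd(x - 1, N) and gcd(x + 1, N).
    return gcd(x - 1, N), gcd(x + 1, N)

print(factor_via_period(15, 7))   # (3, 5)
print(factor_via_period(21, 2))   # (7, 3)
```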

    But this quantum gap shouldn’t be cause for complacency, because things like financial and health records that are going to be kept for decades could eventually become vulnerable to hackers with a machine capable of running a code-busting algorithm like Shor’s. Researchers are already hard at work on novel encryption methods that will be able to withstand such attacks (see our explainer on post-quantum cryptography for more details).

    Why aren’t quantum computers as supreme as “quantum supremacy” makes them sound?

    The main reason is that they still make far more errors than classical ones. Qubits’ delicate quantum state lasts for mere fractions of a second and can easily be disrupted by even the slightest vibration or tiny change in temperature—phenomena known as “noise” in quantum-speak. This causes mistakes to creep into calculations. Qubits also have a Tinder-like tendency to want to couple with plenty of others. Such “crosstalk” between them can also produce errors.

    Google’s paper suggests it has found a novel way to cut down on crosstalk, which could help pave the way for more reliable machines. But today’s quantum computers still resemble early supercomputers in the amount of hardware and complexity needed to make them work, and they can tackle only very esoteric tasks. We’re not yet even at a stage equivalent to ENIAC, the first general-purpose electronic computer, which was put to work in 1945.

    So what’s the next quantum milestone to aim for?

    Besting conventional computers at solving a real-world problem—a feat that some researchers refer to as “quantum advantage.” The hope is that quantum computers’ immense processing power will help uncover new pharmaceuticals and materials, enhance artificial-intelligence applications, and lead to advances in other fields such as financial services, where they could be applied to things like risk management.

    If researchers can’t demonstrate a quantum advantage in at least one of these kinds of applications soon, the bubble of inflated expectations that’s blowing up around quantum computing could quickly burst.

    When I asked Google’s Martinis about this in an interview for a story last year, he was clearly aware of the risk. “As soon as we get to quantum supremacy,” he told me, “we’re going to want to show that a quantum machine can do something really useful.” Now it’s time for his team and other researchers to step up to that pressing challenge.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 12:22 pm on September 21, 2019 Permalink | Reply
    Tags: "Google researchers have reportedly achieved 'quantum supremacy'", MIT Technology Review, , Quantum Computing-still a long way off.   

    From MIT Technology Review: “Google researchers have reportedly achieved ‘quantum supremacy’”


    Sep 20, 2019

    The news: According to a report in the Financial Times, a team of researchers from Google led by John Martinis has demonstrated quantum supremacy for the first time. This is the point at which a quantum computer is shown to be capable of performing a task that’s beyond the reach of even the most powerful conventional supercomputer. The claim appeared in a paper that was posted on a NASA website, but the paper was then taken down. Google did not respond to a request for comment from MIT Technology Review.

    Why NASA? Google struck an agreement last year to use supercomputers available to NASA as benchmarks for its supremacy experiments. According to the Financial Times report, the paper said that Google’s quantum processor was able to perform a calculation in three minutes and 20 seconds that would take today’s most advanced supercomputer, known as Summit, around 10,000 years.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    In the paper, the researchers said that, to their knowledge, the experiment “marks the first computation that can only be performed on a quantum processor.”

    Quantum speed up: Quantum machines are so powerful because they harness quantum bits, or qubits. Unlike classical bits, which are either a 1 or a 0, qubits can be in a kind of combination of both at the same time. Thanks to other quantum phenomena, which are described in our explainer here, quantum computers can crunch large amounts of data in parallel that conventional machines have to work through sequentially. Scientists have been working for years to demonstrate that the machines can definitively outperform conventional ones.

    How significant is this milestone? Very. Speaking at MIT Technology Review’s EmTech conference in Cambridge, Massachusetts, this week, before news of Google’s paper came out, Will Oliver, an MIT professor and quantum specialist, likened the computing milestone to the Wright brothers’ first flight at Kitty Hawk. He said it would give added impetus to research in the field, which should help quantum machines achieve their promise more quickly. Their immense processing power could ultimately help researchers and companies discover new drugs and materials, create more efficient supply chains, and turbocharge AI.

    But, but: It’s not clear what task Google’s quantum machine was working on, but it’s likely to be a very narrow one. In an emailed comment to MIT Technology Review, Dario Gil of IBM, which is also working on quantum computers, says an experiment that was probably designed around a very narrow quantum sampling problem doesn’t mean the machines will rule the roost.

    IBM’s iconic image of a quantum computer.

    “In fact quantum computers will never reign ‘supreme’ over classical ones,” says Gil, “but will work in concert with them, since each have their specific strengths.” For many problems, classical computers will remain the best tool to use.

    And another but: Quantum computers are still a long way from being ready for mainstream use. The machines are notoriously prone to errors, because even the slightest change in temperature or tiny vibration can destroy the delicate state of qubits. Researchers are working on machines that will be easier to build, manage, and scale, and some computers are now available via the computing cloud. But it could still be many years before quantum computers that can tackle a wide range of problems are widely available.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 10:09 am on August 22, 2019 Permalink | Reply
    Tags: "A super-secure quantum internet just took another step closer to reality", MIT Technology Review, Quantum … what?,   

    From MIT Technology Review: “A super-secure quantum internet just took another step closer to reality” 


    Scientists have managed to send a record-breaking amount of data in quantum form, using a strange unit of quantum information called a qutrit.

    The news: Quantum tech promises to allow data to be sent securely over long distances. Scientists have already shown it’s possible to transmit information both on land and via satellites using quantum bits, or qubits. Now physicists at the University of Science and Technology of China and the University of Vienna in Austria have found a way to ship even more data using something called quantum trits, or qutrits.

    Qutrits? Oh, come on, you’ve just made that up: Nope, they’re real. Conventional bits used to encode everything from financial records to YouTube videos are streams of electrical or photonic pulses that can represent either a 1 or a 0. Qubits, which are typically electrons or photons, can carry more information because they can be polarized in two directions at once, so they can represent both a 1 and a 0 at the same time. Qutrits, which can be polarized in three different dimensions simultaneously, can carry even more information. In theory, this can then be transmitted using quantum teleportation.
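
    A quick way to quantify the difference, counting classical bits per carrier and the size of the joint state space (a generic calculation, not tied to the experiment itself):

```python
from math import log2

# Information content of a single d-level quantum system, in classical bits.
for name, d in [("bit or qubit", 2), ("qutrit", 3)]:
    print(f"one {name}: {log2(d):.3f} bits")

# The joint state space grows as d**n, so qutrits pack more into each particle.
n = 10
print(f"{n} qubits span a {2 ** n}-dimensional space; {n} qutrits span {3 ** n}")
```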

    Quantum … what? Quantum teleportation is a method for shipping data that relies on an almost-mystical phenomenon called entanglement. Entangled quantum particles can influence one another’s state, even if they are continents apart. In teleportation, a sender and receiver each receive one of a pair of entangled qubits. The sender measures the interaction of their qubit with another one that holds data they want to send. By applying the results of this measurement to the other entangled qubit, the receiver can work out what information has been transmitted. (For a more detailed look at quantum teleportation, see our explainer here.)
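
    For readers who want to see the bookkeeping, here is a minimal NumPy simulation of standard qubit teleportation as described above. It is an illustration of the basic protocol, not the qutrit scheme from the paper, which adds the extra measurement information discussed next.

```python
import numpy as np

# Qubit 0: Alice's data qubit; qubit 1: Alice's half of the pair; qubit 2: Bob's half.
I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                       # the state Alice wants to send

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # entangled pair shared in advance
state = np.kron(psi, bell)                       # full 8-amplitude state vector

# Alice entangles her data qubit with her half of the pair, then measures both.
state = kron(CNOT, I) @ state                    # CNOT: control qubit 0, target qubit 1
state = kron(H, I, I) @ state

# Each of the four outcomes occurs with probability 1/4, so sample one and project.
m0, m1 = rng.integers(0, 2), rng.integers(0, 2)
bob = state.reshape(2, 2, 2)[m0, m1, :]
bob /= np.linalg.norm(bob)

# Bob applies the corrections dictated by Alice's two classical measurement bits.
if m1: bob = X @ bob
if m0: bob = Z @ bob

print(abs(np.vdot(bob, psi)))                    # ~1.0: Bob now holds Alice's state
```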

    Measuring progress: Getting this to work with qubits isn’t easy—and harnessing qutrits is even harder because of that extra dimension. But the researchers, who include Jian-Wei Pan, a Chinese pioneer of quantum communication, say they have cracked the problem by tweaking the first part of the teleportation process so that senders have more measurement information to pass on to receivers. This will make it easier for the latter to work out what data has been teleported over. The research was published in the journal Physical Review Letters.

    Deterring hackers: This might seem rather esoteric, but it has huge implications for cybersecurity. Hackers can snoop on conventional bits flowing across the internet without leaving a trace. But interfering with quantum units of information causes them to lose their delicate quantum state, leaving a telltale sign of hacking. If qutrits can be harnessed at scale, they could form the backbone of an ultra-secure quantum internet that could be used to send highly sensitive government and commercial data.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 11:40 am on July 27, 2019 Permalink | Reply
    Tags: "A new tool uses AI to spot text written by AI", , MIT Technology Review   

    From MIT Technology Review: “A new tool uses AI to spot text written by AI”


    July 26, 2019

    AI algorithms can generate text convincing enough to fool the average human—potentially providing a way to mass-produce fake news, bogus reviews, and phony social accounts. Thankfully, AI can now be used to identify fake text, too.

    The news: Researchers from Harvard University and the MIT-IBM Watson AI Lab have developed a new tool for spotting text that has been generated using AI. Called the Giant Language Model Test Room (GLTR), it exploits the fact that AI text generators rely on statistical patterns in text, as opposed to the actual meaning of words and sentences. In other words, the tool can tell if the words you’re reading seem too predictable to have been written by a human hand.

    The context: Misinformation is increasingly being automated, and the technology required to generate fake text and imagery is advancing fast. AI-powered tools such as this one may become valuable weapons in the fight against fake news, deepfakes, and Twitter bots.

    Faking it: Researchers at OpenAI recently demonstrated an algorithm capable of dreaming up surprisingly realistic passages. They fed huge amounts of text into a large machine-learning model, which learned to pick up statistical patterns in those words. The Harvard team developed their tool using a version of the OpenAI code that was released publicly.

    How predictable: GLTR highlights words that are statistically likely to appear after the preceding word in the text. As shown in the passage above (from Infinite Jest), the most predictable words are green; less predictable are yellow and red; and least predictable are purple. When tested on snippets of text written by OpenAI’s algorithm, it finds a lot of predictability. Genuine news articles and scientific abstracts contain more surprises.
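
    The underlying idea is easy to reproduce. The sketch below uses the publicly released GPT-2 model via the Hugging Face transformers library to rank how predictable each token in a passage is, then buckets the ranks into GLTR-style bands (top 10, top 100, top 1,000, rarer). It illustrates the approach; it is not the GLTR tool itself.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The quick brown fox jumps over the lazy dog."
ids = tokenizer.encode(text, return_tensors="pt")

with torch.no_grad():
    logits = model(ids).logits                 # shape: (1, sequence length, vocabulary)

buckets = {"top 10": 0, "top 100": 0, "top 1000": 0, "rarer": 0}
for pos in range(ids.shape[1] - 1):
    next_id = ids[0, pos + 1].item()
    # Rank of the token that actually appears next, under the model's prediction.
    rank = (logits[0, pos].argsort(descending=True) == next_id).nonzero().item()
    if rank < 10:     buckets["top 10"] += 1
    elif rank < 100:  buckets["top 100"] += 1
    elif rank < 1000: buckets["top 1000"] += 1
    else:             buckets["rarer"] += 1

print(buckets)   # machine-generated text skews heavily toward the "top 10" bucket
```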

    Mind and machine: The researchers behind GLTR carried out another experiment as well. They asked Harvard students to identify AI-generated text—first without the tool, and then with the help of its highlighting. The students were able to spot only half of all fakes on their own, but 72% when given the tool. “Our goal is to create human and AI collaboration systems,” says Sebastian Gehrmann, a PhD student involved in the work.

    If you’re interested, you can try it out for yourself.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 1:51 pm on May 14, 2019 Permalink | Reply
    Tags: "How AI could save lives without spilling medical secrets", AI algorithms trained on data from different hospitals could potentially diagnose illness; prevent disease; and extend lives., AI algorithms will need vast amounts of medical data on which to train before machine learning can deliver powerful new ways to spot and understand the cause of disease., MIT Technology Review, The first big test for a platform that lets AI algorithms learn from private patient data is under way at Stanford Medical School., The sensitivity of private patient data is a looming problem.   

    From MIT Technology Review: “How AI could save lives without spilling medical secrets”


    May 14, 2019
    Will Knight

    Ariel Davis

    The first big test for a platform that lets AI algorithms learn from private patient data is under way at Stanford Medical School.

    The potential for artificial intelligence to transform health care is huge, but there’s a big catch.

    AI algorithms will need vast amounts of medical data on which to train before machine learning can deliver powerful new ways to spot and understand the cause of disease. That means imagery, genomic information, or electronic health records—all potentially very sensitive information.

    That’s why researchers are working on ways to let AI learn from large amounts of medical data while making it very hard for that data to leak.

    One promising approach is now getting its first big test at Stanford Medical School in California. Patients there can choose to contribute their medical data to an AI system that can be trained to diagnose eye disease without ever actually accessing their personal details.

    Participants submit ophthalmology test results and health record data through an app. The information is used to train a machine-learning model to identify signs of eye disease (such as diabetic retinopathy and glaucoma) in the images. But the data is protected by technology developed by Oasis Labs, a startup spun out of UC Berkeley, which guarantees that the information cannot be leaked or misused. The startup was granted permission by Stanford Medical School to start the trial last week, in collaboration with researchers at UC Berkeley, Stanford and ETH Zürich.

    The sensitivity of private patient data is a looming problem. AI algorithms trained on data from different hospitals could potentially diagnose illness, prevent disease, and extend lives. But in many countries medical records cannot easily be shared and fed to these algorithms for legal reasons. Research on using AI to spot disease in medical images or data usually involves relatively small data sets, which greatly limits the technology’s promise.

    “It is very exciting to be able to do this with real clinical data,” says Dawn Song, cofounder of Oasis Labs and a professor at UC Berkeley. “We can really show that this works.”

    Oasis stores the private patient data on a secure chip, designed in collaboration with other researchers at Berkeley. The data remains within the Oasis cloud; outsiders are able to run algorithms on the data, and receive the results, without its ever leaving the system. A smart contract—software that runs on top of a blockchain—is triggered when a request to access the data is received. This software logs how the data was used and also checks to make sure the machine-learning computation was carried out correctly.

    “This will show we can help patients contribute data in a privacy-protecting way,” says Song. She says that the eye disease model will become more accurate as more data is collected.

    Such technology could also make it easier to apply AI to other sensitive information, such as financial records or individuals’ buying habits or web browsing data. Song says the plan is to expand the medical applications before looking to other domains.

    “The whole notion of doing computation while keeping data secret is an incredibly powerful one,” says David Evans, who specializes in machine learning and security at the University of Virginia. When applied across hospitals and patient populations, for instance, machine learning might unlock completely new ways of tying disease to genomics, test results, and other patient information.

    “You would love it if a medical researcher could learn on everyone’s medical records,” Evans says. “You could do an analysis and tell if a drug is working or not. But you can’t do that today.”

    Despite the potential Oasis represents, Evans is cautious. Storing data in secure hardware creates a potential point of failure, he notes. If the company that makes the hardware is compromised, then all the data handled this way will also be vulnerable. Blockchains are relatively unproven, he adds.

    “There’s a lot of different tech coming together,” he says of Oasis’s approach. “Some is mature, and some is cutting-edge and has challenges.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 10:14 am on May 1, 2019 Permalink | Reply
    Tags: MIT Technology Review, MIT's Sertac Karaman and Vivienne Sze developed the new chip, New chips

    From MIT Technology Review: “This chip was demoed at Jeff Bezos’s secretive tech conference. It could be key to the future of AI.”


    Photographs by Tony Luong

    May 1, 2019
    Will Knight

    Artificial Intelligence

    On a dazzling morning in Palm Springs, California, recently, Vivienne Sze took to a small stage to deliver perhaps the most nerve-racking presentation of her career.

    MIT’s Sertac Karaman and Vivienne Sze developed the new chip.

    She knew the subject matter inside out. She was there to tell the audience about the chips being developed in her lab at MIT, which promise to bring powerful artificial intelligence to a multitude of devices where power is limited, beyond the reach of the vast data centers where most AI computations take place. However, the event—and the audience—gave Sze pause.

    The setting was MARS, an elite, invite-only conference where robots stroll (or fly) through a luxury resort, mingling with famous scientists and sci-fi authors. Just a few researchers are invited to give technical talks, and the sessions are meant to be both awe-inspiring and enlightening. The crowd, meanwhile, consisted of about 100 of the world’s most important researchers, CEOs, and entrepreneurs. MARS is hosted by none other than Amazon’s founder and chairman, Jeff Bezos, who sat in the front row.

    “It was, I guess you’d say, a pretty high-caliber audience,” Sze recalls with a laugh.

    Other MARS speakers would introduce a karate-chopping robot, drones that flap like large, eerily silent insects, and even optimistic blueprints for Martian colonies. Sze’s chips might seem more modest; to the naked eye, they’re indistinguishable from the chips you’d find inside any electronic device. But they are arguably a lot more important than anything else on show at the event.

    New capabilities

    Newly designed chips, like the ones being developed in Sze’s lab, may be crucial to future progress in AI—including stuff like the drones and robots found at MARS. Until now, AI software has largely run on graphics chips, but new hardware could make AI algorithms more powerful, which would unlock new applications. New AI chips could make warehouse robots more common or let smartphones create photo-realistic augmented-reality scenery.

    Sze’s chips are both extremely efficient and flexible in their design, something that is crucial for a field that’s evolving incredibly quickly.

    The microchips are designed to squeeze more out of the “deep learning” AI algorithms that have already turned the world upside down. And in the process, they may inspire those algorithms themselves to evolve. “We need new hardware because Moore’s law has slowed down,” Sze says, referring to the axiom coined by Intel cofounder Gordon Moore that predicted that the number of transistors on a chip will double roughly every 18 months—leading to a commensurate performance boost in computer power.


    This law is now running into the physical limits that come with engineering components at an atomic scale, and that is spurring new interest in alternative architectures and approaches to computing.

    The high stakes that come with investing in next-generation AI chips, and maintaining America’s dominance in chipmaking overall, aren’t lost on the US government. Sze’s microchips are being developed with funding from a Defense Advanced Research Projects Agency (DARPA) program meant to help develop new AI chip designs (see The out-there AI ideas designed to keep the US ahead of China).

    But innovation in chipmaking has been spurred mostly by the emergence of deep learning, a very powerful way for machines to learn to perform useful tasks. Instead of giving a computer a set of rules to follow, a machine basically programs itself. Training data is fed into a large, simulated artificial neural network, which is then tweaked so that it produces the desired result. With enough training, a deep-learning system can find subtle and abstract patterns in data. The technique is applied to an ever-growing array of practical tasks, from face recognition on smartphones to predicting disease from medical images.
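
    The “programs itself” step is nothing more than repeated small adjustments to the network’s weights. Here is a self-contained toy example that trains a two-layer network on the XOR function with plain NumPy; it has nothing to do with the chips themselves, but it shows the training loop that deep-learning hardware is built to accelerate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: XOR, a task no purely linear model can learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A small two-layer network; the weights start random and get "tweaked" by training.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error with respect to each weight.
    grad_out = (p - y) * p * (1 - p)
    dW2 = h.T @ grad_out
    db2 = grad_out.sum(0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)
    dW1 = X.T @ grad_h
    db1 = grad_h.sum(0)
    # Nudge every weight a little in the direction that reduces the error.
    lr = 0.5
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 3))   # approaches [[0], [1], [1], [0]] after training
```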

    The new chip race

    Deep learning is not so reliant on Moore’s law. Neural nets run many mathematical computations in parallel, so they run far more effectively on the specialized video game graphics chips that perform parallel computations for rendering 3D imagery. But microchips designed specifically for the computations that underpin deep learning should be even more powerful.

    The potential for new chip architectures to improve AI has stirred up a level of entrepreneurial activity that the chip industry hasn’t seen in decades (see The race to power AI’s silicon brains and China has never had a real chip industry. AI may change that).


    Big tech companies hoping to harness and commercialize AI, including Google, Microsoft, and (yes) Amazon, are all working on their own deep-learning chips. Many smaller companies are developing new chips, too. “It’s impossible to keep track of all the companies jumping into the AI-chip space,” says Mike Delmer, a microchip analyst at the Linley Group, an analyst firm. “I’m not joking that we learn about a new one nearly every week.”

    The real opportunity, says Sze, isn’t building the most powerful deep-learning chips possible. Power efficiency matters because AI also needs to run beyond the reach of large data centers, relying only on the power available on the device itself. This is known as operating on the “edge.”

    “AI will be everywhere—and figuring out ways to make things more energy efficient will be extremely important,” says Naveen Rao, vice president of the Artificial Intelligence group at Intel.

    For example, Sze’s hardware is more efficient partly because it physically reduces the bottleneck between where data is stored and where it’s analyzed, but also because it uses clever schemes for reusing data. Before joining MIT, Sze pioneered this approach for improving the efficiency of video compression while at Texas Instruments.
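
    A back-of-the-envelope estimate shows why that reuse matters so much for energy. Using a blocked matrix multiply as a generic stand-in (this is not the specific dataflow of Sze's chips): if a tile of the operands fits in on-chip memory, each value fetched from off-chip DRAM is reused many times instead of once.

```python
N = 1024           # matrix dimension for C = A @ B
T = 64             # tile edge that fits in on-chip memory
BYTES = 2          # e.g. 16-bit weights and activations

# Worst case with no reuse: every multiply re-fetches one A and one B element.
naive_traffic = 2 * N ** 3 * BYTES
# With T x T tiling, each fetched element is reused roughly T times on-chip.
tiled_traffic = naive_traffic // T

print(f"no reuse : {naive_traffic / 1e9:.1f} GB of off-chip traffic")
print(f"tiled    : {tiled_traffic / 1e9:.2f} GB of off-chip traffic ({T}x less)")
```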

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 2:52 pm on April 30, 2019 Permalink | Reply
    Tags: Formation of the Moon, MIT Technology Review

    From MIT Technology Review: “The moon may be made from a magma ocean that once covered Earth”


    Apr 30, 2019


    There are a number of theories about where the moon came from. Our best guess is that it was formed when the Earth was hit by a large object known as Theia. The impact threw up huge amounts of debris into orbit, which eventually coalesced to form the moon.

    Theia impacts Earth. Smithsonian Magazine.

    There’s a problem with this theory. The mathematical models show that most of the material that makes up the moon should come from Theia. But samples from the Apollo missions show that most of the material on the moon came from Earth.

    A paper out earlier this week in Nature Geoscience has a possible explanation. The research, led by Natsuki Hosono from the Japan Agency for Marine-Earth Science and Technology, suggests that the Earth at the time of impact was covered in hot magma rather than a hard outer crust.

    Magma on a planetary surface could be dislodged much more easily than a solid crust could be, so it’s plausible that when Theia struck the Earth, molten material originating from Earth flew up into space and then hardened into the moon.

    This theory relies a lot on the timing of the formation of the moon. The Earth would have to have been in a sweet spot of magma heat and consistency for the theory to be true.

    Additionally, as Jay Melosh at Purdue University explains in a comment article alongside the research, this new simulation still doesn’t check all the boxes needed to get our lunar observations in line with our theories. But it is an important step in getting closer to a solution.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 3:22 pm on March 13, 2019 Permalink | Reply
    Tags: "Quantum computing should supercharge this machine-learning technique", , Certain machine-learning tasks could be revolutionized by more powerful quantum computers., , MIT Technology Review,   

    From MIT Technology Review: “Quantum computing should supercharge this machine-learning technique”


    March 13, 2019
    Will Knight

    The machine-learning experiment was performed using this IBM Q quantum computer.

    Certain machine-learning tasks could be revolutionized by more powerful quantum computers.

    Quantum computing and artificial intelligence are both ridiculously hyped. But it seems the two may indeed combine to open up new possibilities.

    In a research paper published today in the journal Nature, researchers from IBM and MIT show how an IBM quantum computer can accelerate a specific type of machine-learning task called feature matching. The team says that future quantum computers should allow machine learning to hit new levels of complexity.

    As first imagined decades ago, quantum computers were seen as a different way to compute information. In principle, by exploiting the strange, probabilistic nature of physics at the quantum, or atomic, scale, these machines should be able to perform certain kinds of calculations at speeds far beyond those possible with any conventional computer (see “What is a quantum computer?”). There is a huge amount of excitement about their potential at the moment, as they are finally on the cusp of reaching a point where they will be practical.

    At the same time, because we don’t yet have large quantum computers, it isn’t entirely clear how they will outperform ordinary supercomputers—or, in other words, what they will actually do (see “Quantum computers are finally here. What will we do with them?”).

    Feature matching is a technique that converts data into a mathematical representation that lends itself to machine-learning analysis. The resulting machine learning depends on the efficiency and quality of this process. Using a quantum computer, it should be possible to perform this on a scale that was hitherto impossible.
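
    A purely classical analogue makes the idea concrete: map the raw data into a richer feature space, then let a simple linear model do the rest. The sketch below uses scikit-learn on a toy data set; the scheme in the Nature paper replaces the hand-written feature map with a quantum circuit, which this snippet does not attempt.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A data set that is not linearly separable in its raw two-dimensional form.
X, y = make_circles(n_samples=400, factor=0.4, noise=0.08, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

def feature_map(X):
    # Map each point (x1, x2) into a higher-dimensional feature space.
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

raw = LogisticRegression().fit(Xtr, ytr).score(Xte, yte)
mapped = LogisticRegression().fit(feature_map(Xtr), ytr).score(feature_map(Xte), yte)
print(f"accuracy on raw features:    {raw:.2f}")    # roughly chance
print(f"accuracy on mapped features: {mapped:.2f}") # near perfect
```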

    The MIT-IBM researchers performed their simple calculation using a two-qubit quantum computer. Because the machine is so small, it doesn’t prove that bigger quantum computers will have a fundamental advantage over conventional ones, but it suggests that this would be the case. The largest quantum computers available today have around 50 qubits, although not all of them can be used for computation because of the need to correct for errors that creep in as a result of the fragile nature of these quantum bits.

    “We are still far off from achieving quantum advantage for machine learning,” the IBM researchers, led by Jay Gambetta, write in a blog post. “Yet the feature-mapping methods we’re advancing could soon be able to classify far more complex data sets than anything a classical computer could handle. What we’ve shown is a promising path forward.”

    “We’re at a stage where we don’t have applications next month or next year, but we are in a very good position to explore the possibilities,” says Xiaodi Wu, an assistant professor at the University of Maryland’s Joint Center for Quantum Information and Computer Science. Wu says he expects practical applications to be discovered within a year or two.

    Quantum computing and AI are hot right now. Just a few weeks ago, Xanadu, a quantum computing startup based in Toronto, came up with an almost identical approach to that of the MIT-IBM researchers, which the company posted online. Maria Schuld, a machine-learning researcher at Xanadu, says the recent work may be the start of a flurry of research papers that combine the buzzwords “quantum” and “AI.”

    “There is a huge potential,” she says.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     