Tagged: MIT Technology Review

  • richardmitnick 3:58 pm on July 22, 2020 Permalink | Reply
    Tags: "Why Japan is emerging as NASA’s most important space partner", Artemis, , , , , JAXA's proposed lunar rover with Toyota, MIT Technology Review,   

    From MIT Technology Review: “Why Japan is emerging as NASA’s most important space partner” 

    From MIT Technology Review

    July 22, 2020
    Neel V. Patel

    Japan provides a few major advantages in helping the US get back to the moon. In return, it will get its own chance to set foot on the lunar surface.

    Conceptual art of JAXA’s proposed lunar rover with Toyota. Credit: Toyota/JAXA

    The first time the US went to the moon, it put down an estimated $283 billion to do it alone. That’s not the case with Artemis, the new NASA program to send humans back. Although it’s a US-led initiative, Artemis is meant to be a much more collaborative effort than Apollo. Japan is quickly emerging as one of the most important partners for this program—perhaps the most important.

    Although NASA has teased for quite some time the idea of a pretty ambitious role for Japan in Artemis, that talk finally became real on July 9, when the two countries signed a formal agreement regarding further collaboration in human exploration. It gives NASA a much-needed partner for Artemis—without which the agency would find it much more difficult to meet the long-term goals of establishing a sustainable permanent presence on the moon.

    The US-Japanese space relationship goes back a long time, says John Logsdon, a space policy expert at George Washington University: “Japan has been basically our best international partner over the last 40-plus years.” It may have declined to work on the space shuttle program in the 1970s, but it reversed course in the early 1980s and signed on with the International Space Station program.

    Since then, Japan’s space capabilities have progressed rapidly. The country found a reliable launch vehicle in the H-IIA rocket, built by Mitsubishi, and JAXA, its space agency, has found success in a number of high-profile science missions, like HALCA (the first space-based mission for very long baseline interferometry, in which multiple telescopes are used simultaneously to study astronomical objects), Hayabusa (the first asteroid sample return mission), the lunar probe SELENE, IKAROS (the first successful demonstration of solar sail technology in interplanetary space), and Hayabusa2 (expected to return to Earth with samples from the asteroid Ryugu in December). Since 1990, 12 Japanese astronauts have been in space.
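    The payoff of very long baseline interferometry is resolution: an interferometer resolves angular detail at roughly the observing wavelength divided by the baseline between its antennas, which is why putting one antenna in orbit, as HALCA did, sharpens the view. A rough Python sketch, with illustrative wavelength and baseline values rather than HALCA specifications:

```python
# Rough illustration of why space VLBI helps: angular resolution
# scales as wavelength / baseline. Values below are illustrative,
# not mission specifications.
import math

def resolution_mas(wavelength_m: float, baseline_m: float) -> float:
    """Diffraction-limited resolution (lambda / B) in milliarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600 * 1000  # radians -> mas

wavelength = 0.06            # 6 cm radio wavelength (assumed)
ground_baseline = 1.0e7      # ~10,000 km, about Earth's diameter
space_baseline = 3.0e7       # ~30,000 km with an orbiting antenna

print(resolution_mas(wavelength, ground_baseline))  # ~1.2 mas
print(resolution_mas(wavelength, space_baseline))   # ~0.4 mas
```

    Tripling the baseline by going to orbit triples the resolution at the same wavelength, with no larger dish required.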


    JAXA Selene Kaguya lunar probe

    JAXA IKAROS spacecraft

    JAXA/Hayabusa 2 Credit: JAXA/Akihiro Ikeshita

    So the country has a spaceflight pedigree superior to that of most other American allies, and is more than capable of building and deploying the types of spaceflight technologies that could push a lunar exploration program forward (NASA, after all, is working on an Artemis budget that is much slimmer than Apollo’s). In return, Japan gets to participate in a major human exploration program and likely send its own astronauts to the moon via NASA missions, without having to pay for and develop a lunar mission of its own.

    What exactly will Japan do for Artemis? Specific details about the new agreement were not released, but we already know the country is sending a couple of science payloads on Artemis 1 (an uncrewed mission around the moon) and Artemis 2 (crewed, but only a flyby). Back in January, Yoshikazu Shoji, the director of international relations and research at JAXA, told the public that JAXA wanted to help in the development of Gateway, NASA’s upcoming lunar space station that will facilitate deep space exploration. JAXA could contribute to the Habitation and Logistics Outpost (HALO) module, developing life support and power elements, said Shoji. It can also help in delivering cargo, supplies, and parts to Gateway as it’s being built, through its upcoming HTV-X spaceflight vehicle (the successor to the current HTV that supports the ISS).

    For the moon itself, JAXA can provide more data that helps future Artemis missions land more safely. JAXA’s Smart Lander for Investigating Moon (SLIM) mission, slated for 2022, will demonstrate brand-new precision lunar landing technology that could prove very useful later on for both crewed and robotic landers. Japan is also working with Canada and the European Space Agency on Heracles, a robotic transport system that could deliver cargo to the moon or help bring back valuable resources mined there. Heracles is still under development, but it’s aimed at supporting the Artemis program and Gateway in the long run.

    The biggest thing Japan might contribute, however, is a pressurized lunar rover that astronauts could use to cruise around the moon. Last week, Mark Kirasich, acting director of NASA’s Advanced Exploration Systems, unveiled some of NASA’s plans for Artemis, outlining specific proposals for the agency to work with JAXA and its commercial partner, Toyota, to build out this RV-like vehicle for astronauts to use in some of the later lunar missions. Japan’s strong auto industry means the country already has expertise in developing technologies like this, Kirasich said. JAXA and Toyota would like to have this platform ready for launch by 2029.

    Besides helping offset technology costs, having a partner like Japan “is good for the stability of Artemis,” says Logsdon. “International cooperation is popular in Congress, and I think that’s true for most of the public as well.” These agreements mean that funding is more secure, and for a space program that has long-term goals, this is pretty important.

    It also gives the US a trusted ally that can act as a bulwark against another burgeoning space power in the region: China.

    According to Kaitlyn Johnson, an aerospace security expert at the Center for Strategic & International Studies, Japan can provide more regional stability that offsets China’s influence, both in space and in related technology sectors like defense. While the civilian and defense sides of the US space program are almost completely split from one another, that’s not so much the case in countries like Japan. “There’s a lot of technological sharing between agencies within other countries,” she says. It’s likely that work on Artemis will fill some basic knowledge gaps in space defense for Japan too, such as how to identify a stalking satellite.

    The relationship between the two countries in space, says Johnson, is similar to what we see for intelligence sharing among the Five Eyes nations (the US, Australia, Canada, New Zealand, and the UK). “That relationship has extended beyond intelligence into a lot of areas in national security, including space,” she says. “We’re seeing Japan get the similar trusted-ally treatment.”

    Defense benefits aside, space exploration is simply more achievable with partners, and Japan is just a natural fit. “Japan has been at the forefront of technological change for a long time,” says Johnson. “If the world is really serious about exploring space and establishing a presence on other bodies like the moon, I do believe we have to go at those goals together, and share the burdens and resources together.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

  • richardmitnick 2:59 pm on January 29, 2020 Permalink | Reply
    Tags: "This is the highest-resolution photo of the sun ever taken", , DKIST-Daniel K. Inouye Solar Telescope, MIT Technology Review,   

    From MIT Technology Review: “This is the highest-resolution photo of the sun ever taken” 

    From MIT Technology Review

    One of the first images of the surface of the sun, taken by the Daniel K. Inouye Solar Telescope

    Daniel K. Inouye Solar Telescope, DKIST, atop the Haleakala volcano on the Pacific island of Maui, Hawaii, USA, at an altitude of 3,084 m (10,118 ft).

    Astronomers turned on the Daniel K. Inouye Solar Telescope in Maui and managed to snap this incredible view of the sun. And there’s more to come.

    Jan 29, 2020
    Neel V. Patel

    Astronomers have just released the highest-resolution image of the sun. Taken by the Daniel K. Inouye Solar Telescope in Maui, it gives us an unprecedented view of our nearest star and brings us closer to solving several long-standing mysteries.

    The new image demonstrates the telescope’s potential power. It shows off a surface that’s divided up into discrete, Texas-size cells, like cracked sections in the desert soil. You can see plasma oozing off the surface, rising into the air before sinking back into darker lanes.

    “We have now seen the smallest details on the largest object in our solar system,” says Thomas Rimmele, the director of DKIST. The new image was taken December 10, when the telescope achieved first light.

    When formal observations begin in July, DKIST, with its 13-foot mirror, will be the most powerful solar telescope in the world. Located on Haleakalā (the tallest summit on Maui), the telescope will be able to observe structures on the surface of the sun as small as 18.5 miles (30 kilometers). This resolution is over five times better than that of DKIST’s predecessor, the Richard B. Dunn Solar Telescope in New Mexico.
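    The quoted 18.5-mile figure is consistent with back-of-the-envelope diffraction-limit arithmetic. A quick sketch, assuming a 4-meter aperture and a visible-light wavelength (the exact observing wavelength is an assumption here):

```python
# Back-of-the-envelope check of DKIST's quoted resolution: the
# diffraction limit theta ~ 1.22 * lambda / D, projected onto the
# sun's surface at 1 AU. Wavelength is an illustrative assumption.
D = 4.0              # primary mirror diameter in meters (~13 ft)
wavelength = 500e-9  # visible light, meters (assumed)
AU = 1.496e11        # Earth-sun distance in meters

theta = 1.22 * wavelength / D          # angular resolution, radians
feature_size_km = theta * AU / 1000    # smallest resolvable solar feature

print(round(feature_size_km))  # ~23 km, the same ballpark as the quoted ~30 km
```

    The remaining gap between ~23 km and the quoted ~30 km plausibly reflects the actual observing wavelengths and real-world optical performance.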

    DKIST was specifically designed to make precise measurements of the sun’s magnetic field throughout the corona (the outermost region of its atmosphere) and answer questions like why the corona is millions of degrees hotter than the sun’s surface.

    Each of the “cells” on the surface of the sun is roughly the size of Texas. The resolution of DKIST is about 18.5 miles.

    Other instruments coming online in the next six months will also collect data pertaining to temperature, velocity, and solar structures. A new solar cycle is about to start, which means there will be a wealth of solar activity to spot.

    To observe the sun, you can’t just build a telescope the old-fashioned way. DKIST boasts one of the world’s most complex solar adaptive-optics systems. It uses deformable mirrors to offset distortions caused by Earth’s atmosphere; the shape of the mirror adjusts 2,000 times per second. Staring at the sun also makes the telescope hot enough to melt metal. To cool it down, the DKIST team has to use a swimming pool’s worth of ice and 7.5 miles of pipe-distributed coolant.
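    The 2,000-corrections-per-second figure matters because adaptive optics is a feedback loop: each cycle the deformable mirror cancels a fraction of the measured wavefront error. A toy model, with purely illustrative gain and error values (not DKIST parameters), shows how quickly such a loop converges:

```python
# Toy model of closed-loop adaptive optics: each cycle the deformable
# mirror removes a fraction (the loop gain) of the measured wavefront
# error. Gain and error magnitudes are illustrative only.
def residual_error(initial_error_nm: float, gain: float, cycles: int) -> float:
    error = initial_error_nm
    for _ in range(cycles):
        error *= (1.0 - gain)  # mirror cancels `gain` of the current error
    return error

# At 2,000 corrections per second, 20 cycles take only 10 ms:
err = residual_error(initial_error_nm=500.0, gain=0.3, cycles=20)
print(err)  # residual falls below 1 nm
```

    The speed is the point: atmospheric turbulence changes on millisecond timescales, so the loop must converge faster than the atmosphere evolves.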

    There’s a good reason why we need to take a closer look at the sun. When the solar atmosphere releases its magnetic energy, it results in explosive phenomena like solar flares that hurl ultra-energized particles through the solar system in all directions, including ours. This “space weather” can wreak havoc on things like GPS and electrical grids.


    Learning more about solar activity could give us more notice of when hazardous space weather is due to hit.

    The telescope’s history is not without controversy. Haleakalā is important to the culture of Native Hawaiians, who protested the construction of DKIST in the summer of 2015. The DKIST team addressed those concerns in various ways, such as launching a $20 million program at Maui College to teach science in conjunction with Hawaiian culture, and reserving 2% of telescope time for Native Hawaiians.

    The plan is to keep DKIST operational for at least four solar cycles, or around 44 years. “We’re now in the final sprint of what has been a very long marathon,” says Rimmele. “These first images are really just the very beginning.”

    See the full article here.



  • richardmitnick 5:42 pm on January 8, 2020 Permalink | Reply
    Tags: "NASA’s new exoplanet hunter found its first potentially habitable world", , , , , MIT Technology Review, , , Planet TOI 700 d   

    From MIT Technology Review: “NASA’s new exoplanet hunter found its first potentially habitable world” 

    MIT Technology Review
    From MIT Technology Review

    Neel V. Patel

    TOI 700 d

    NASA’s Transiting Exoplanet Survey Satellite (TESS) has just found a new potentially habitable exoplanet the size of Earth, located about 100 light-years away. It’s the first potentially habitable exoplanet the telescope has found since it was launched in April 2018.

    NASA/MIT TESS replaced Kepler in search for exoplanets

    It’s called TOI 700 d [science paper: https://arxiv.org/abs/2001.00952]. It orbits a red dwarf star with about 40% of the sun’s mass and size and about half its surface temperature. The planet itself is about 1.2 times the size of Earth and orbits its host star every 37 days, receiving close to 86% of the amount of sunlight Earth does.

    Most notably, TOI 700 d is in what’s thought to be its star’s habitable zone, meaning it’s at a distance where temperatures ought to be moderate enough to support liquid water on the surface. This raises hopes TOI 700 d could be amenable to life—even though no one can agree on what it means for a planet to be habitable.
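    The quoted numbers hang together: Kepler’s third law in solar units gives the orbital distance from the period and stellar mass, and the star’s dimness then gives the insolation. A rough consistency check, using approximate stellar parameters reported for TOI 700 (treat the mass and luminosity values as assumptions for illustration):

```python
# Rough consistency check of TOI 700 d's orbit and insolation using
# Kepler's third law in solar units: a^3 = M * P^2, with a in AU,
# M in solar masses, and P in years. Stellar parameters are
# approximate published values, used here only for illustration.
M_star = 0.415   # stellar mass, solar masses (assumed)
L_star = 0.023   # stellar luminosity, solar units (assumed)
P_days = 37.4    # orbital period of TOI 700 d

P_years = P_days / 365.25
a_au = (M_star * P_years**2) ** (1.0 / 3.0)  # semi-major axis, AU
insolation = L_star / a_au**2                # flux relative to Earth

print(round(a_au, 3), round(insolation, 2))  # ~0.16 AU, ~0.86x Earth
```

    An orbit far tighter than Mercury’s still lands in the habitable zone, because the host star is so much fainter than the sun.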

    A set of 20 different simulations meant to model TOI 700 d suggest the planet is rocky and has an atmosphere that helps it retain water, but there’s a chance it might simply be a gaseous mini-Neptune. We won’t know for sure until follow-up observations are made with some sharper instruments, such as the upcoming James Webb Space Telescope, which is planned for launch in March 2021.

    NASA/ESA/CSA Webb Telescope annotated

    TESS finds exoplanets using the tried-and-true technique of looking for objects as they’re transiting in front of their host stars.

    Planet transit. NASA/Ames
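    The transit signal itself is simple geometry: the fractional dip in starlight is roughly the square of the planet-to-star radius ratio. A minimal sketch (the red-dwarf radius below is an assumed illustrative value):

```python
# The transit signal is the fractional dip in starlight, roughly
# (planet radius / star radius) squared. Radii are illustrative.
R_EARTH_KM = 6371.0
R_SUN_KM = 696_000.0

def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    return (planet_radius_km / star_radius_km) ** 2

# Earth-size planet crossing a sun-like star: a dip of less than 0.01%.
print(transit_depth(R_EARTH_KM, R_SUN_KM))
# The same planet crossing a small red dwarf (~0.4 solar radii, assumed)
# blocks several times more light, which is one reason surveys like TESS
# favor nearby small stars.
print(transit_depth(R_EARTH_KM, 0.4 * R_SUN_KM))
```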

    Data from NASA’s Spitzer Space Telescope was also used to get some closer measurements of the planet’s size and orbit.

    NASA/Spitzer Infrared Telescope

    TESS is NASA’s newest exoplanet-hunting space telescope, the successor to the renowned Kepler Space Telescope, which was used to find some 2,600 exoplanets.

    NASA/Kepler Telescope, and K2 March 7, 2009 until November 15, 2018

    TESS, able to survey 85% of the night sky (400 times more than what Kepler could monitor), is about to finish its primary two-year mission but has fallen woefully short of expectations. NASA initially thought TESS would find more than 20,000 transiting exoplanets, but with only months left it has identified just 1,588 candidates. Even so, the telescope’s mission will almost surely be extended.

    See the full article here.



  • richardmitnick 12:31 pm on December 20, 2019 Permalink | Reply
    Tags: "These are the stars the Pioneer and Voyager spacecraft will encounter", , , , , , , , , , MIT Technology Review, NASA Pioneer 10 and 11,   

    From MIT Technology Review: “These are the stars the Pioneer and Voyager spacecraft will encounter” 

    MIT Technology Review
    From MIT Technology Review

    Dec 20, 2019
    Emerging Technology from the arXiv

    As four NASA spacecraft exit our solar system, a 3D map [below] of the Milky Way reveals which stars they’re likely to visit tens of thousands of years from now.

    Laniakea supercluster of galaxies. Credit: R. Brent Tully, Hélène Courtois, Yehuda Hoffman & Daniel Pomarède, Nature, http://www.nature.com/nature/journal/v513/n7516/full/nature13674.html. The Milky Way is the red dot.

    Milky Way. Credit: NASA/JPL-Caltech/ESO/R. Hurt. The bar is visible in this image.

    NASA Pioneer 10

    NASA Pioneer 11

    NASA/Voyager 1

    NASA/Voyager 2

    During the 1970s, NASA launched four of the most important spacecraft ever built. When Pioneer 10 began its journey to Jupiter, astronomers did not even know whether it was possible to pass through the asteroid belt unharmed.

    The inner solar system, from the sun to Jupiter, including the asteroid belt (the white donut-shaped cloud), the Hildas (the orange “triangle” just inside Jupiter’s orbit), the Jupiter trojans (green), and the near-Earth asteroids. The group leading Jupiter is called the “Greeks” and the trailing group the “Trojans” (Murray and Dermott, Solar System Dynamics, p. 107). Based on the JPL DE-405 ephemeris and the Minor Planet Center asteroid database as of July 6, 2006, viewed looking down on the ecliptic plane. Credit: Mdf at English Wikipedia.

    Only after it emerged safe was Pioneer 11 sent on its way.

    Both sent back the first close-up pictures of Jupiter, with Pioneer 11 continuing to Saturn. Voyager 1 and 2 later took even more detailed measurements, and extended the exploration of the solar system to Uranus and Neptune.

    All four of these spacecraft are now on their way out of the solar system, heading into interstellar space at a rate of about 10 kilometers per second. They will travel about a parsec (3.26 light-years) every 100,000 years, and that raises an important question: What stars will they encounter next?
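    The parsec-per-100,000-years figure is easy to verify with unit arithmetic:

```python
# Quick check of the article's figure: at ~10 km/s, a spacecraft
# covers about one parsec every 100,000 years.
SECONDS_PER_YEAR = 3.156e7
KM_PER_PARSEC = 3.086e13

speed_km_s = 10.0
years = 100_000

distance_pc = speed_km_s * SECONDS_PER_YEAR * years / KM_PER_PARSEC
print(round(distance_pc, 2))  # ~1.02 parsecs
```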

    This is harder to answer than it seems. Stars are not stationary but moving rapidly through interstellar space. Without knowing their precise velocity, it’s impossible to say which ones our interstellar travelers are on course to meet.

    Enter Coryn Bailer-Jones at the Max Planck Institute for Astronomy in Germany and Davide Farnocchia at the Jet Propulsion Laboratory in Pasadena, California. These guys have performed this calculation using a new 3D map of star positions and velocities throughout the Milky Way.

    Max Planck Institute for Astronomy

    Max Planck Institute for Astronomy campus, Heidelberg, Baden-Württemberg, Germany


    NASA JPL-Caltech Campus

    This has allowed them to work out for the first time which stars the spacecraft will rendezvous with in the coming millennia. “The closest encounters for all spacecraft take place at separations between 0.2 and 0.5 parsecs within the next million years,” they say.

    Their results were made possible by the observations of a space telescope called Gaia.

    ESA/GAIA satellite

    Since 2014, Gaia has sat some 1.5 million kilometers from Earth, recording the positions of 1 billion stars, planets, comets, asteroids, quasars, and so on. At the same time, it has been measuring the velocities of the brightest 150 million of these objects.

    The result is a three-dimensional map of the Milky Way and the way astronomical objects within it are moving. It is the latest incarnation of this map, Gaia Data Release 2 or GDR2, that Bailer-Jones and Farnocchia have used for their calculations.


    The map makes it possible to project the future positions of stars in our neighborhood and to compare them with the future positions of the Pioneer and Voyager spacecraft, calculated using their last known positions and velocities.

    This information yields a list of stars that the spacecraft will encounter in the coming millennia. Bailer-Jones and Farnocchia define a close encounter as flying within 0.2 or 0.3 parsecs.
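    Under the simplifying assumption that both spacecraft and star move in straight lines, the minimum separation has a closed form: minimize |r0 + v·t| over time, where r0 and v are the star’s position and velocity relative to the spacecraft. A sketch with hypothetical numbers (not values from the Bailer-Jones and Farnocchia paper):

```python
# Sketch of a closest-approach calculation under straight-line motion:
# given the star's position and velocity relative to the spacecraft,
# minimize |r0 + v*t|. All numbers here are hypothetical.
import numpy as np

def closest_approach(r0_pc: np.ndarray, v_pc_per_kyr: np.ndarray):
    """Return (time of closest approach in kyr, minimum distance in pc)."""
    t_min = -np.dot(r0_pc, v_pc_per_kyr) / np.dot(v_pc_per_kyr, v_pc_per_kyr)
    d_min = np.linalg.norm(r0_pc + v_pc_per_kyr * t_min)
    return t_min, d_min

# Hypothetical star 5 pc away, drifting so that it passes 3 pc from the probe:
r0 = np.array([3.0, 4.0, 0.0])    # relative position, parsecs
v = np.array([0.0, -0.05, 0.0])   # relative velocity, parsecs per kyr

t, d = closest_approach(r0, v)
print(t, d)  # closest pass of 3.0 pc, 80,000 years from now
```

    The real calculation additionally propagates Gaia’s measurement uncertainties and the stars’ accelerations in the galactic potential, but this linear core is what turns the Gaia map into a flyby list.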

    The first spacecraft to encounter another star will be Pioneer 10 in 90,000 years. It will approach the orange-red star HIP 117795 in the constellation of Cassiopeia at a distance of 0.231 parsecs. Then, in 303,000 years, Voyager 1 will pass a star called TYC 3135-52-1 at a distance of 0.3 parsecs. And in 900,000 years, Pioneer 11 will pass a star called TYC 992-192-1 at a distance of 0.245 parsecs.

    These fly-bys are all at a distance of less than one light-year and in some cases might even graze the orbits of the stars’ most distant comets.

    Voyager 2 is destined for a lonelier future. According to the team’s calculations, it will never come within 0.3 parsecs of another star in the next 5 million years, although it is predicted to come within 0.6 parsecs of a star called Ross 248 in the constellation Andromeda in 42,000 years.

    Andromeda Galaxy (Messier 31) with Messier 32, a satellite galaxy. Copyright Terry Hancock.

    Milkdromeda: Andromeda (on the left) in Earth’s night sky in 3.75 billion years. Credit: NASA.

    These interstellar explorers will eventually collide with or be captured by other stars. It’s not possible yet to say which ones these will be, but Bailer-Jones and Farnocchia have an idea of the time involved. “The timescale for the collision of a spacecraft with a star is of order 10^20 years, so the spacecraft have a long future ahead of them,” they conclude.

    The Pioneer and Voyager spacecraft will soon be joined by another interstellar traveler. The New Horizons spacecraft that flew past Pluto in 2015 is heading out of the solar system but may yet execute a maneuver so that it intercepts a Kuiper Belt object on its way.

    NASA/New Horizons spacecraft

    Kuiper Belt. Minor Planet Center

    After that last course correction takes place, Bailer-Jones and Farnocchia will be able to work out its final destination.

    Ref: arxiv.org/abs/1912.03503: “Future stellar flybys of the Voyager and Pioneer spacecraft”

    See the full article here.



  • richardmitnick 1:40 pm on September 24, 2019 Permalink | Reply
    Tags: For a long time there have been doubts that quantum machines would ever be able to outstrip classical computers at anything., MIT Technology Review, Quantum computing is something we need to start preparing for now., Quantum supremacy?, We still haven’t had confirmation from Google about what it’s done.

    From MIT Technology Review: “Here’s what quantum supremacy does—and doesn’t—mean for computing” 

    MIT Technology Review
    From MIT Technology Review

    Sep 24, 2019
    Martin Giles


    Google has reportedly demonstrated for the first time that a quantum computer is capable of performing a task beyond the reach of even the most powerful conventional supercomputer in any practical time frame—a milestone known in the world of computing as “quantum supremacy.”

    The ominous-sounding term, which was coined by theoretical physicist John Preskill in 2012, evokes an image of Darth Vader–like machines lording it over other computers. And the news has already produced some outlandish headlines, such as one on the Infowars website that screamed, “Google’s ‘Quantum Supremacy’ to Render All Cryptography and Military Secrets Breakable.” Political figures have been caught up in the hysteria, too: Andrew Yang, a presidential candidate, tweeted that “Google achieving quantum computing is a huge deal. It means, among many other things, that no code is uncrackable.”

    Nonsense. It doesn’t mean that at all. Google’s achievement is significant, but quantum computers haven’t suddenly turned into computing colossi that will leave conventional machines trailing in the dust. Nor will they be laying waste to conventional cryptography in the near future—though in the longer term, they could pose a threat we need to start preparing for now.

    Here’s a guide to what Google appears to have achieved—and an antidote to the hype surrounding quantum supremacy.

    What do we know about Google’s experiment?

    We still haven’t had confirmation from Google about what it’s done. The information about the experiment comes from a paper titled “Quantum Supremacy Using a Programmable Superconducting Processor,” which was briefly posted on a NASA website before being taken down. Its existence was revealed in a report in the Financial Times—and a copy of the paper can be found here.

    The experiment is a pretty arcane one, but it required a great deal of computational effort. Google’s team used a quantum processor code-named Sycamore to prove that the figures pumped out by a random number generator were indeed truly random. They then worked out how long it would take Summit, the world’s most powerful supercomputer, to do the same task.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    The difference was stunning: while the quantum machine polished it off in 200 seconds, the researchers estimated that the classical computer would need 10,000 years.

    When the paper is formally published, other researchers may start poking holes in the methodology, but for now it appears that Google has scored a computing first by showing that a quantum machine can indeed outstrip even the most powerful of today’s supercomputers. “There’s less doubt now that quantum computers can be the future of high-performance computing,” says Nick Farina, the CEO of quantum hardware startup EeroQ.

    Why are quantum computers so much faster than classical ones?

    In a classical computer, bits that carry information represent either a 1 or a 0; but quantum bits, or qubits—which take the form of subatomic particles such as photons and electrons—can be in a kind of combination of 1 and 0 at the same time, a state known as “superposition.” Unlike bits, qubits can also influence one another through a phenomenon known as “entanglement,” which baffled even Einstein, who called it “spooky action at a distance.”

    Thanks to these properties, which are described in more detail in our quantum computing explainer, adding just a few extra qubits to a system increases its processing power exponentially. Crucially, quantum machines can crunch through large amounts of data in parallel, which helps them outpace classical machines that process data sequentially. That’s the theory. In practice, researchers have been laboring for years to prove conclusively that a quantum computer can do something even the most capable conventional one can’t. Google’s effort has been led by John Martinis, who has done pioneering work in the use of superconducting circuits to generate qubits.
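    The exponential scaling is concrete: describing an n-qubit register classically takes a vector of 2^n complex amplitudes. A minimal numpy sketch, simulating Hadamard gates applied to every qubit of a small register to produce an equal superposition of all bitstrings:

```python
# Why a few extra qubits matter: an n-qubit state is a vector of 2^n
# complex amplitudes. Applying a Hadamard gate to every qubit of
# |00...0> puts the register into an equal superposition of all
# 2^n bitstrings at once.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard

def uniform_superposition(n_qubits: int) -> np.ndarray:
    state = np.zeros(2**n_qubits)
    state[0] = 1.0                  # start in |00...0>
    op = H
    for _ in range(n_qubits - 1):
        op = np.kron(op, H)         # tensor a Hadamard onto each qubit
    return op @ state

state = uniform_superposition(4)
print(len(state))                      # 16 amplitudes for 4 qubits
print(np.allclose(state**2, 1 / 16))   # every outcome equally likely: True
```

    Each added qubit doubles the vector, which is why simulating ~40 qubits strains classical machines and 53 qubits pushed past them.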

    Doesn’t this speedup mean quantum machines can overtake other computers now?

    No. Google picked a very narrow task. Quantum computers still have a long way to go before they can best classical ones at most things—and they may never get there. But researchers I’ve spoken to since the paper appeared online say Google’s experiment is still significant because for a long time there have been doubts that quantum machines would ever be able to outstrip classical computers at anything.

    Until now, research groups have been able to reproduce the results of quantum machines with around 40 qubits on classical systems. Google’s Sycamore processor, which harnessed 53 qubits for the experiment, suggests that such emulation has reached its limits. “We’re entering an era where exploring what a quantum computer can do will now require a physical quantum computer … You won’t be able to credibly reproduce results anymore on a conventional emulator,” explains Simon Benjamin, a quantum researcher at the University of Oxford.

    Isn’t Andrew Yang right that our cryptographic defenses can now be blown apart?

    Again, no. That’s a wild exaggeration. The Google paper makes clear that while its team has been able to show quantum supremacy in a narrow sampling task, we’re still a long way from developing a quantum computer capable of implementing Shor’s algorithm, which was developed in the 1990s to help quantum machines factor massive numbers. Today’s most popular encryption methods can be broken only by factoring such numbers—a task that would take conventional machines many thousands of years.
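    The reason Shor’s algorithm threatens factoring-based encryption is that factoring N reduces to finding the period of f(x) = a^x mod N, and the quantum part of Shor’s algorithm finds that period exponentially faster. A toy classical sketch for N = 15, where brute-force period finding is trivial, shows how the period yields the factors:

```python
# The classical skeleton of Shor's algorithm: factoring N reduces to
# finding the period r of f(x) = a^x mod N. A quantum computer finds
# r exponentially faster; here we brute-force it for a toy N.
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n), found by brute force."""
    value, r = a % n, 1
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_classical(n: int, a: int):
    r = find_period(a, n)
    if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
        # The period splits n into two nontrivial factors.
        return gcd(pow(a, r // 2) - 1, n), gcd(pow(a, r // 2) + 1, n)
    return None  # unlucky choice of a; try another

print(shor_classical(15, 7))  # (3, 5): the period r = 4 reveals the factors
```

    For the 2,048-bit numbers used in real RSA keys, the brute-force loop above is hopeless; only the quantum period-finding step would make it tractable, and no machine close to that capability exists today.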

    But this quantum gap shouldn’t be cause for complacency, because things like financial and health records that are going to be kept for decades could eventually become vulnerable to hackers with a machine capable of running a code-busting algorithm like Shor’s. Researchers are already hard at work on novel encryption methods that will be able to withstand such attacks (see our explainer on post-quantum cryptography for more details).

    Why aren’t quantum computers as supreme as “quantum supremacy” makes them sound?

    The main reason is that they still make far more errors than classical ones. Qubits’ delicate quantum state lasts for mere fractions of a second and can easily be disrupted by even the slightest vibration or tiny change in temperature—phenomena known as “noise” in quantum-speak. This causes mistakes to creep into calculations. Qubits also have a Tinder-like tendency to want to couple with plenty of others. Such “crosstalk” between them can also produce errors.
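    A simple way to see why these errors bite: if each gate succeeds with probability (1 − p), the chance that an entire circuit runs without a single error decays exponentially with gate count. The error rates below are purely illustrative, not Sycamore’s measured figures:

```python
# Why noise limits circuit size: if each gate succeeds with probability
# (1 - p), the chance the whole circuit runs error-free decays
# exponentially with gate count. Error rates here are illustrative.
def circuit_fidelity(p_error: float, n_gates: int) -> float:
    return (1.0 - p_error) ** n_gates

# Even a 0.5% per-gate error rate leaves little signal after ~1,000 gates:
for n in (100, 500, 1000):
    print(n, round(circuit_fidelity(0.005, n), 3))
```

    This is why researchers care so much about both raising per-gate fidelity and, eventually, quantum error correction.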

    Google’s paper suggests it has found a novel way to cut down on crosstalk, which could help pave the way for more reliable machines. But today’s quantum computers still resemble early supercomputers in the amount of hardware and complexity needed to make them work, and they can tackle only very esoteric tasks. We’re not yet even at a stage equivalent to the ENIAC, the first general-purpose electronic computer, which was put to work in 1945.

    So what’s the next quantum milestone to aim for?

    Besting conventional computers at solving a real-world problem—a feat that some researchers refer to as “quantum advantage.” The hope is that quantum computers’ immense processing power will help uncover new pharmaceuticals and materials, enhance artificial-intelligence applications, and lead to advances in other fields such as financial services, where they could be applied to things like risk management.

    If researchers can’t demonstrate a quantum advantage in at least one of these kinds of applications soon, the bubble of inflated expectations that’s blowing up around quantum computing could quickly burst.

    When I asked Google’s Martinis about this in an interview for a story last year, he was clearly aware of the risk. “As soon as we get to quantum supremacy,” he told me, “we’re going to want to show that a quantum machine can do something really useful.” Now it’s time for his team and other researchers to step up to that pressing challenge.

    See the full article here.



  • richardmitnick 12:22 pm on September 21, 2019 Permalink | Reply
    Tags: "Google researchers have reportedly achieved 'quantum supremacy'", MIT Technology Review, , Quantum Computing-still a long way off.   

    From MIT Technology Review: “Google researchers have reportedly achieved ‘quantum supremacy’” 

    MIT Technology Review
    From MIT Technology Review

    Sep 20, 2019

    The news: According to a report in the Financial Times, a team of researchers from Google led by John Martinis has demonstrated quantum supremacy for the first time. This is the point at which a quantum computer is shown to be capable of performing a task that’s beyond the reach of even the most powerful conventional supercomputer. The claim appeared in a paper that was posted on a NASA website but was subsequently taken down. Google did not respond to a request for comment from MIT Technology Review.

    Why NASA? Google struck an agreement last year to use supercomputers available to NASA as benchmarks for its supremacy experiments. According to the Financial Times report, the paper said that Google’s quantum processor was able to perform a calculation in three minutes and 20 seconds that would take today’s most advanced supercomputer, known as Summit, around 10,000 years.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy
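The gap between those two figures is easier to appreciate as a ratio. A quick back-of-the-envelope calculation, using only the times quoted in the report above, puts the claimed speedup at around a billionfold:

```python
# Back-of-the-envelope: the claimed quantum-vs-classical speedup,
# using the figures quoted from the Financial Times report.
quantum_seconds = 3 * 60 + 20                     # 3 minutes 20 seconds = 200 s
summit_seconds = 10_000 * 365.25 * 24 * 3600      # ~10,000 years in seconds

speedup = summit_seconds / quantum_seconds
print(f"claimed speedup: ~{speedup:.1e}x")        # roughly 1.6e9 — about a billionfold
```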

    In the paper, the researchers said that, to their knowledge, the experiment “marks the first computation that can only be performed on a quantum processor.”

    Quantum speed up: Quantum machines are so powerful because they harness quantum bits, or qubits. Unlike classical bits, which are either a 1 or a 0, qubits can be in a kind of combination of both at the same time. Thanks to other quantum phenomena, which are described in our explainer here, quantum computers can crunch large amounts of data in parallel that conventional machines have to work through sequentially. Scientists have been working for years to demonstrate that the machines can definitively outperform conventional ones.
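The amplitude picture behind that claim is easy to sketch numerically. The snippet below is an illustrative NumPy sketch (not code from any real quantum platform): it represents a qubit as a normalized complex amplitude vector, reads off measurement probabilities, and shows why the state vector grows exponentially with qubit count — the source of the parallelism described above.

```python
import numpy as np

# A qubit is a normalized 2-component complex amplitude vector.
# Measurement probabilities are the squared magnitudes of the amplitudes.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # equal superposition of 0 and 1
probs = np.abs(plus) ** 2
print(probs)             # [0.5 0.5] — a 50/50 chance of reading out 0 or 1

# n qubits span 2**n amplitudes, which is where the parallelism comes from:
# one operation on the state vector updates all 2**n amplitudes at once.
n = 10
register = np.zeros(2 ** n, dtype=complex)
register[0] = 1.0                                     # the all-zeros basis state
print(register.size)     # 1024 amplitudes for just 10 qubits
```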

How significant is this milestone? Very. In a discussion of quantum computing at MIT Technology Review’s EmTech conference in Cambridge, Massachusetts, this week before news of Google’s paper came out, Will Oliver, an MIT professor and quantum specialist, likened the computing milestone to the first flight of the Wright brothers at Kitty Hawk in aviation. He said it would give added impetus to research in the field, which should help quantum machines achieve their promise more quickly. Their immense processing power could ultimately help researchers and companies discover new drugs and materials, create more efficient supply chains, and turbocharge AI.

    But, but: It’s not clear what task Google’s quantum machine was working on, but it’s likely to be a very narrow one. In an emailed comment to MIT Technology Review, Dario Gil of IBM, which is also working on quantum computers, says an experiment that was probably designed around a very narrow quantum sampling problem doesn’t mean the machines will rule the roost.

IBM’s iconic image of a quantum computer

    “In fact quantum computers will never reign ‘supreme’ over classical ones,” says Gil, “but will work in concert with them, since each have their specific strengths.” For many problems, classical computers will remain the best tool to use.

    And another but: Quantum computers are still a long way from being ready for mainstream use. The machines are notoriously prone to errors, because even the slightest change in temperature or tiny vibration can destroy the delicate state of qubits. Researchers are working on machines that will be easier to build, manage, and scale, and some computers are now available via the computing cloud. But it could still be many years before quantum computers that can tackle a wide range of problems are widely available.

See the full article here.

  • richardmitnick 10:09 am on August 22, 2019 Permalink | Reply
Tags: "A super-secure quantum internet just took another step closer to reality", MIT Technology Review, Quantum … what?

    From MIT Technology Review: “A super-secure quantum internet just took another step closer to reality” 

From MIT Technology Review

    Scientists have managed to send a record-breaking amount of data in quantum form, using a strange unit of quantum information called a qutrit.

    The news: Quantum tech promises to allow data to be sent securely over long distances. Scientists have already shown it’s possible to transmit information both on land and via satellites using quantum bits, or qubits. Now physicists at the University of Science and Technology of China and the University of Vienna in Austria have found a way to ship even more data using something called quantum trits, or qutrits.

Qutrits? Oh, come on, you’ve just made that up: Nope, they’re real. Conventional bits used to encode everything from financial records to YouTube videos are streams of electrical or photonic pulses that can represent either a 1 or a 0. Qubits, which are typically electrons or photons, can carry more information because they can be polarized in two directions at once, so they can represent both a 1 and a 0 at the same time. Qutrits, which can be polarized in three different dimensions simultaneously, can carry even more information. In theory, this information can then be transmitted using quantum teleportation.
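The information gain from that extra dimension is easy to quantify: a symbol with three distinguishable states carries log2(3) ≈ 1.585 bits rather than 1, and the joint state space of many qutrits grows as 3^n rather than 2^n. A quick check:

```python
import math

# Information per symbol grows with the number of distinguishable states:
bits_per_qubit = math.log2(2)    # 1.0 bit
bits_per_qutrit = math.log2(3)   # ~1.585 bits — about 58% more per particle

# And the joint state space grows exponentially faster for qutrits:
n = 10
print(2 ** n)   # 1024 qubit basis states
print(3 ** n)   # 59049 qutrit basis states
```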

    Quantum … what? Quantum teleportation is a method for shipping data that relies on an almost-mystical phenomenon called entanglement. Entangled quantum particles can influence one another’s state, even if they are continents apart. In teleportation, a sender and receiver each receive one of a pair of entangled qubits. The sender measures the interaction of their qubit with another one that holds data they want to send. By applying the results of this measurement to the other entangled qubit, the receiver can work out what information has been transmitted. (For a more detailed look at quantum teleportation, see our explainer here.)
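The qubit version of the protocol just described can be simulated directly with state vectors. The sketch below is illustrative NumPy, not code from the paper: it runs the standard teleportation circuit (entangle, measure, correct) and verifies that for every possible two-bit measurement outcome, the receiver’s correction recovers the original state exactly.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# q0 holds the state to teleport; q1 (sender) and q2 (receiver) share a Bell pair.
psi = np.array([0.6, 0.8], dtype=complex)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)

# The sender entangles q0 with q1 (a CNOT), then applies a Hadamard to q0.
P0 = np.diag([1, 0]).astype(complex)
P1 = np.diag([0, 1]).astype(complex)
cnot01 = kron(P0, I2, I2) + kron(P1, X, I2)
state = kron(H, I2, I2) @ (cnot01 @ state)

# For each possible measurement result (m0, m1) on q0 and q1, the receiver's
# correction Z^m0 X^m1 turns their qubit back into the original state.
recovered = []
for m0 in (0, 1):
    for m1 in (0, 1):
        block = state[4 * m0 + 2 * m1 : 4 * m0 + 2 * m1 + 2]  # receiver's amplitudes
        block = block / np.linalg.norm(block)
        fix = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
        recovered.append(fix @ block)

print(all(np.allclose(r, psi) for r in recovered))  # True for all four outcomes
```

Note that only the two classical measurement bits travel from sender to receiver; the quantum state itself is never transmitted, which is what makes the scheme tamper-evident.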

    Measuring progress: Getting this to work with qubits isn’t easy—and harnessing qutrits is even harder because of that extra dimension. But the researchers, who include Jian-Wei Pan, a Chinese pioneer of quantum communication, say they have cracked the problem by tweaking the first part of the teleportation process so that senders have more measurement information to pass on to receivers. This will make it easier for the latter to work out what data has been teleported over. The research was published in the journal Physical Review Letters.

    Deterring hackers: This might seem rather esoteric, but it has huge implications for cybersecurity. Hackers can snoop on conventional bits flowing across the internet without leaving a trace. But interfering with quantum units of information causes them to lose their delicate quantum state, leaving a telltale sign of hacking. If qutrits can be harnessed at scale, they could form the backbone of an ultra-secure quantum internet that could be used to send highly sensitive government and commercial data.

See the full article here.

  • richardmitnick 11:40 am on July 27, 2019 Permalink | Reply
Tags: "A new tool uses AI to spot text written by AI", MIT Technology Review

From MIT Technology Review: “A new tool uses AI to spot text written by AI” 

From MIT Technology Review

    July 26, 2019

    AI algorithms can generate text convincing enough to fool the average human—potentially providing a way to mass-produce fake news, bogus reviews, and phony social accounts. Thankfully, AI can now be used to identify fake text, too.

    The news: Researchers from Harvard University and the MIT-IBM Watson AI Lab have developed a new tool for spotting text that has been generated using AI. Called the Giant Language Model Test Room (GLTR), it exploits the fact that AI text generators rely on statistical patterns in text, as opposed to the actual meaning of words and sentences. In other words, the tool can tell if the words you’re reading seem too predictable to have been written by a human hand.

The context: Misinformation is increasingly being automated, and the technology required to generate fake text and imagery is advancing fast. AI-powered tools such as this one may become valuable weapons in the fight against fake news, deepfakes, and Twitter bots.

    Faking it: Researchers at OpenAI recently demonstrated an algorithm capable of dreaming up surprisingly realistic passages. They fed huge amounts of text into a large machine-learning model, which learned to pick up statistical patterns in those words. The Harvard team developed their tool using a version of the OpenAI code that was released publicly.

    How predictable: GLTR highlights words that are statistically likely to appear after the preceding word in the text. As shown in the passage above (from Infinite Jest), the most predictable words are green; less predictable are yellow and red; and least predictable are purple. When tested on snippets of text written by OpenAI’s algorithm, it finds a lot of predictability. Genuine news articles and scientific abstracts contain more surprises.
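That rank-based coloring is simple to reproduce in outline. The sketch below mimics the thresholds used in the published GLTR demo (top 10 green, top 100 yellow, top 1,000 red, everything beyond purple); the sample rank lists are invented purely for illustration:

```python
# A minimal sketch of GLTR-style highlighting: given the rank each observed
# word had in a language model's next-word prediction, bucket it by color.
# The thresholds (10 / 100 / 1000) follow the published GLTR demo.

def color(rank: int) -> str:
    if rank <= 10:
        return "green"      # highly predictable
    if rank <= 100:
        return "yellow"
    if rank <= 1000:
        return "red"
    return "purple"         # a genuine surprise

def fraction_predictable(ranks):
    """Share of words in the model's top 10 — high values suggest machine text."""
    return sum(r <= 10 for r in ranks) / len(ranks)

# Hypothetical per-word ranks for a machine-generated vs. a human-written passage:
machine = [1, 2, 1, 5, 3, 1, 8, 2, 1, 4]
human = [1, 40, 3, 870, 12, 2400, 7, 150, 1, 95]
print(fraction_predictable(machine))   # 1.0 — every word was a top-10 guess
print(fraction_predictable(human))     # 0.4
```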

    Mind and machine: The researchers behind GLTR carried out another experiment as well. They asked Harvard students to identify AI-generated text—first without the tool, and then with the help of its highlighting. The students were able to spot only half of all fakes on their own, but 72% when given the tool. “Our goal is to create human and AI collaboration systems,” says Sebastian Gehrmann, a PhD student involved in the work.

    If you’re interested, you can try it out for yourself.

See the full article here.

  • richardmitnick 1:51 pm on May 14, 2019 Permalink | Reply
    Tags: "How AI could save lives without spilling medical secrets", AI algorithms trained on data from different hospitals could potentially diagnose illness; prevent disease; and extend lives., AI algorithms will need vast amounts of medical data on which to train before machine learning can deliver powerful new ways to spot and understand the cause of disease., MIT Technology Review, The first big test for a platform that lets AI algorithms learn from private patient data is under way at Stanford Medical School., The sensitivity of private patient data is a looming problem.   

From MIT Technology Review: “How AI could save lives without spilling medical secrets” 

From MIT Technology Review

    May 14, 2019
    Will Knight

    Ariel Davis

    The first big test for a platform that lets AI algorithms learn from private patient data is under way at Stanford Medical School.

    The potential for artificial intelligence to transform health care is huge, but there’s a big catch.

    AI algorithms will need vast amounts of medical data on which to train before machine learning can deliver powerful new ways to spot and understand the cause of disease. That means imagery, genomic information, or electronic health records—all potentially very sensitive information.

    That’s why researchers are working on ways to let AI learn from large amounts of medical data while making it very hard for that data to leak.

    One promising approach is now getting its first big test at Stanford Medical School in California. Patients there can choose to contribute their medical data to an AI system that can be trained to diagnose eye disease without ever actually accessing their personal details.

    Participants submit ophthalmology test results and health record data through an app. The information is used to train a machine-learning model to identify signs of eye disease (such as diabetic retinopathy and glaucoma) in the images. But the data is protected by technology developed by Oasis Labs, a startup spun out of UC Berkeley, which guarantees that the information cannot be leaked or misused. The startup was granted permission by Stanford Medical School to start the trial last week, in collaboration with researchers at UC Berkeley, Stanford and ETH Zürich.

    The sensitivity of private patient data is a looming problem. AI algorithms trained on data from different hospitals could potentially diagnose illness, prevent disease, and extend lives. But in many countries medical records cannot easily be shared and fed to these algorithms for legal reasons. Research on using AI to spot disease in medical images or data usually involves relatively small data sets, which greatly limits the technology’s promise.

“It is very exciting to be able to do this with real clinical data,” says Dawn Song, cofounder of Oasis Labs and a professor at UC Berkeley. “We can really show that this works.”

    Oasis stores the private patient data on a secure chip, designed in collaboration with other researchers at Berkeley. The data remains within the Oasis cloud; outsiders are able to run algorithms on the data, and receive the results, without its ever leaving the system. A smart contract—software that runs on top of a blockchain—is triggered when a request to access the data is received. This software logs how the data was used and also checks to make sure the machine-learning computation was carried out correctly.
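The control flow described above can be sketched in outline. Everything in the snippet is hypothetical — none of these names come from Oasis Labs’ actual API — but it illustrates the pattern the article describes: every computation request is logged, the computation runs where the data lives, and only aggregate results ever leave.

```python
# Hypothetical sketch of the access pattern described above. The names are
# invented for illustration, not taken from Oasis Labs' API. The pattern:
# each request is logged (standing in for the smart contract), training runs
# against data that never leaves the secure environment, and only the
# aggregate result is returned to the requester.

audit_log = []  # stands in for the on-chain smart-contract log

def run_on_private_data(requester: str, train_fn, private_records):
    audit_log.append({"requester": requester, "fn": train_fn.__name__})
    result = train_fn(private_records)   # would execute inside the secure hardware
    return result                        # only the aggregate result leaves

def train_eye_model(records):
    # Placeholder for the real machine-learning training step.
    return {"n_examples": len(records), "model": "eye-disease-v0"}

records = [{"image": f"scan_{i}.png", "label": i % 2} for i in range(100)]
summary = run_on_private_data("stanford-trial", train_eye_model, records)
print(summary["n_examples"])   # 100 — the raw records themselves are never returned
print(audit_log[0]["fn"])      # train_eye_model — every access leaves a trace
```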

    “This will show we can help patients contribute data in a privacy-protecting way,” says Song. She says that the eye disease model will become more accurate as more data is collected.

    Such technology could also make it easier to apply AI to other sensitive information, such as financial records or individuals’ buying habits or web browsing data. Song says the plan is to expand the medical applications before looking to other domains.

    “The whole notion of doing computation while keeping data secret is an incredibly powerful one,” says David Evans, who specializes in machine learning and security at the University of Virginia. When applied across hospitals and patient populations, for instance, machine learning might unlock completely new ways of tying disease to genomics, test results, and other patient information.

“You would love it if a medical researcher could learn on everyone’s medical records,” Evans says. “You could do an analysis and tell if a drug is working or not. But you can’t do that today.”

    Despite the potential Oasis represents, Evans is cautious. Storing data in secure hardware creates a potential point of failure, he notes. If the company that makes the hardware is compromised, then all the data handled this way will also be vulnerable. Blockchains are relatively unproven, he adds.

    “There’s a lot of different tech coming together,” he says of Oasis’s approach. “Some is mature, and some is cutting-edge and has challenges.”

See the full article here.

  • richardmitnick 10:14 am on May 1, 2019 Permalink | Reply
Tags: MIT Technology Review, MIT's Sertac Karaman and Vivienne Sze developed the new chip, New chips

From MIT Technology Review: “This chip was demoed at Jeff Bezos’s secretive tech conference. It could be key to the future of AI.” 

From MIT Technology Review

    Photographs by Tony Luong

    May 1, 2019
    Will Knight


    On a dazzling morning in Palm Springs, California, recently, Vivienne Sze took to a small stage to deliver perhaps the most nerve-racking presentation of her career.

MIT’s Sertac Karaman and Vivienne Sze developed the new chip

    She knew the subject matter inside-out. She was to tell the audience about the chips, being developed in her lab at MIT, that promise to bring powerful artificial intelligence to a multitude of devices where power is limited, beyond the reach of the vast data centers where most AI computations take place. However, the event—and the audience—gave Sze pause.

    The setting was MARS, an elite, invite-only conference where robots stroll (or fly) through a luxury resort, mingling with famous scientists and sci-fi authors. Just a few researchers are invited to give technical talks, and the sessions are meant to be both awe-inspiring and enlightening. The crowd, meanwhile, consisted of about 100 of the world’s most important researchers, CEOs, and entrepreneurs. MARS is hosted by none other than Amazon’s founder and chairman, Jeff Bezos, who sat in the front row.

    “It was, I guess you’d say, a pretty high-caliber audience,” Sze recalls with a laugh.

    Other MARS speakers would introduce a karate-chopping robot, drones that flap like large, eerily silent insects, and even optimistic blueprints for Martian colonies. Sze’s chips might seem more modest; to the naked eye, they’re indistinguishable from the chips you’d find inside any electronic device. But they are arguably a lot more important than anything else on show at the event.

    New capabilities

    Newly designed chips, like the ones being developed in Sze’s lab, may be crucial to future progress in AI—including stuff like the drones and robots found at MARS. Until now, AI software has largely run on graphical chips, but new hardware could make AI algorithms more powerful, which would unlock new applications. New AI chips could make warehouse robots more common or let smartphones create photo-realistic augmented-reality scenery.

    Sze’s chips are both extremely efficient and flexible in their design, something that is crucial for a field that’s evolving incredibly quickly.

    The microchips are designed to squeeze more out of the “deep learning” AI algorithms that have already turned the world upside down. And in the process, they may inspire those algorithms themselves to evolve. “We need new hardware because Moore’s law has slowed down,” Sze says, referring to the axiom coined by Intel cofounder Gordon Moore that predicted that the number of transistors on a chip will double roughly every 18 months—leading to a commensurate performance boost in computer power.


This law is now running into the physical limits that come with engineering components at an atomic scale, and it is spurring new interest in alternative architectures and approaches to computing.

    The high stakes that come with investing in next-generation AI chips, and maintaining America’s dominance in chipmaking overall, aren’t lost on the US government. Sze’s microchips are being developed with funding from a Defense Advanced Research Projects Agency (DARPA) program meant to help develop new AI chip designs (see The out-there AI ideas designed to keep the US ahead of China).

    But innovation in chipmaking has been spurred mostly by the emergence of deep learning, a very powerful way for machines to learn to perform useful tasks. Instead of giving a computer a set of rules to follow, a machine basically programs itself. Training data is fed into a large, simulated artificial neural network, which is then tweaked so that it produces the desired result. With enough training, a deep-learning system can find subtle and abstract patterns in data. The technique is applied to an ever-growing array of practical tasks, from face recognition on smartphones to predicting disease from medical images.
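The “tweaking” step described above is gradient descent: nudge every weight in the direction that reduces the error on the training data, over and over. A minimal illustrative example — a toy network learning the XOR function, a standard textbook exercise rather than anything from the chips discussed here:

```python
import numpy as np

# A tiny two-layer network trained by gradient descent to learn XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer: 8 units
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.1
for step in range(20000):
    h = np.tanh(X @ W1 + b1)          # forward pass: hidden activations
    p = sigmoid(h @ W2 + b2)          # network's predictions
    # Backpropagate the error and nudge every weight downhill.
    grad_out = p - y                  # cross-entropy gradient at the output
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(0)
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(0)

print(np.round(p.ravel(), 2))   # close to [0, 1, 1, 0] — the desired result
```

The point of specialized AI chips is that the matrix multiplications in this loop, repeated billions of times at far larger scale, dominate the cost of deep learning.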

    The new chip race

    Deep learning is not so reliant on Moore’s law. Neural nets run many mathematical computations in parallel, so they run far more effectively on the specialized video game graphics chips that perform parallel computations for rendering 3D imagery. But microchips designed specifically for the computations that underpin deep learning should be even more powerful.

    The potential for new chip architectures to improve AI has stirred up a level of entrepreneurial activity that the chip industry hasn’t seen in decades (see The race to power AI’s silicon brains and China has never had a real chip industry. AI may change that).


Big tech companies hoping to harness and commercialize AI, including Google, Microsoft, and (yes) Amazon, are all working on their own deep-learning chips. Many smaller companies are developing new chips, too. “It’s impossible to keep track of all the companies jumping into the AI-chip space,” says Mike Delmer, a microchip analyst at the Linley Group, an analyst firm. “I’m not joking that we learn about a new one nearly every week.”

The real opportunity, says Sze, isn’t building the most powerful deep-learning chips possible. Power efficiency is important because AI also needs to run beyond the reach of large data centers, relying only on the power available on the device itself. This is known as operating on the “edge.”

    “AI will be everywhere—and figuring out ways to make things more energy efficient will be extremely important,” says Naveen Rao, vice president of the Artificial Intelligence group at Intel.

    For example, Sze’s hardware is more efficient partly because it physically reduces the bottleneck between where data is stored and where it’s analyzed, but also because it uses clever schemes for reusing data. Before joining MIT, Sze pioneered this approach for improving the efficiency of video compression while at Texas Instruments.

See the full article here.
