Tagged: MIT Technology Review

  • richardmitnick 1:51 pm on May 14, 2019 Permalink | Reply
    Tags: "How AI could save lives without spilling medical secrets", AI algorithms trained on data from different hospitals could potentially diagnose illness; prevent disease; and extend lives., AI algorithms will need vast amounts of medical data on which to train before machine learning can deliver powerful new ways to spot and understand the cause of disease., MIT Technology Review, The first big test for a platform that lets AI algorithms learn from private patient data is under way at Stanford Medical School., The sensitivity of private patient data is a looming problem.   

    From M.I.T. Technology Review: “How AI could save lives without spilling medical secrets” 

    MIT Technology Review
    From M.I.T. Technology Review

    May 14, 2019
    Will Knight

    Image credit: Ariel Davis

    The first big test for a platform that lets AI algorithms learn from private patient data is under way at Stanford Medical School.

    The potential for artificial intelligence to transform health care is huge, but there’s a big catch.

    AI algorithms will need vast amounts of medical data on which to train before machine learning can deliver powerful new ways to spot and understand the cause of disease. That means imagery, genomic information, or electronic health records—all potentially very sensitive information.

    That’s why researchers are working on ways to let AI learn from large amounts of medical data while making it very hard for that data to leak.

    One promising approach is now getting its first big test at Stanford Medical School in California. Patients there can choose to contribute their medical data to an AI system that can be trained to diagnose eye disease without ever actually accessing their personal details.

    Participants submit ophthalmology test results and health record data through an app. The information is used to train a machine-learning model to identify signs of eye disease (such as diabetic retinopathy and glaucoma) in the images. But the data is protected by technology developed by Oasis Labs, a startup spun out of UC Berkeley, which guarantees that the information cannot be leaked or misused. The startup was granted permission by Stanford Medical School to start the trial last week, in collaboration with researchers at UC Berkeley, Stanford and ETH Zürich.

    The sensitivity of private patient data is a looming problem. AI algorithms trained on data from different hospitals could potentially diagnose illness, prevent disease, and extend lives. But in many countries medical records cannot easily be shared and fed to these algorithms for legal reasons. Research on using AI to spot disease in medical images or data usually involves relatively small data sets, which greatly limits the technology’s promise.

    “It is very exciting to be able to do this with real clinical data,” says Dawn Song, cofounder of Oasis Labs and a professor at UC Berkeley. “We can really show that this works.”

    Oasis stores the private patient data on a secure chip, designed in collaboration with other researchers at Berkeley. The data remains within the Oasis cloud; outsiders are able to run algorithms on the data, and receive the results, without its ever leaving the system. A smart contract—software that runs on top of a blockchain—is triggered when a request to access the data is received. This software logs how the data was used and also checks to make sure the machine-learning computation was carried out correctly.
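
    The article describes this flow only at a high level. As a rough sketch of the pattern (hypothetical names throughout; this is not the Oasis Labs API), the idea is that computation travels to the data: a requester submits a training job, the request is logged, the job runs next to the records inside the secure environment, and only aggregate results come back out.

```python
# Illustrative sketch only: hypothetical names, not the Oasis Labs API.
# Shows the general "compute goes to the data" pattern the article describes:
# raw records stay inside the secure environment; callers get results and an audit log.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class SecureDataEnclave:
    """Toy stand-in for a secure-hardware data store."""
    _records: List[dict]                      # private patient records (never returned)
    audit_log: List[str] = field(default_factory=list)

    def run_job(self, job_name: str, train_fn: Callable[[List[dict]], dict]) -> dict:
        # A smart contract would log the request and verify the computation;
        # here we just append to an audit log.
        self.audit_log.append(f"job={job_name} records_used={len(self._records)}")
        result = train_fn(self._records)      # computation happens next to the data
        return result                         # only aggregate results leave the enclave


def train_eye_model(records: List[dict]) -> dict:
    # Placeholder "training": return an aggregate statistic, not raw data.
    positives = sum(r["label"] for r in records)
    return {"model_version": 1, "positive_rate": positives / len(records)}


enclave = SecureDataEnclave(_records=[{"image_id": i, "label": i % 2} for i in range(10)])
print(enclave.run_job("retinopathy-screen", train_eye_model))
print(enclave.audit_log)
```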

    “This will show we can help patients contribute data in a privacy-protecting way,” says Song. She says that the eye disease model will become more accurate as more data is collected.

    Such technology could also make it easier to apply AI to other sensitive information, such as financial records or individuals’ buying habits or web browsing data. Song says the plan is to expand the medical applications before looking to other domains.

    “The whole notion of doing computation while keeping data secret is an incredibly powerful one,” says David Evans, who specializes in machine learning and security at the University of Virginia. When applied across hospitals and patient populations, for instance, machine learning might unlock completely new ways of tying disease to genomics, test results, and other patient information.

    “You would love it if a medical researcher could learn on everyone’s medical records,” Evans says. “You could do an analysis and tell if a drug is working or not. But you can’t do that today.”

    Despite the potential Oasis represents, Evans is cautious. Storing data in secure hardware creates a potential point of failure, he notes. If the company that makes the hardware is compromised, then all the data handled this way will also be vulnerable. Blockchains are relatively unproven, he adds.

    “There’s a lot of different tech coming together,” he says of Oasis’s approach. “Some is mature, and some is cutting-edge and has challenges.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 10:14 am on May 1, 2019 Permalink | Reply
    Tags: , MIT Technology Review, MIT’s Sertac Karaman and Vivienne Sze developed the new chip, New chips

    From M.I.T. Technology Review: “This chip was demoed at Jeff Bezos’s secretive tech conference. It could be key to the future of AI.” 

    MIT Technology Review
    From M.I.T. Technology Review

    Photographs by Tony Luong

    May 1, 2019
    Will Knight

    Artificial Intelligence

    On a dazzling morning in Palm Springs, California, recently, Vivienne Sze took to a small stage to deliver perhaps the most nerve-racking presentation of her career.

    MIT’s Sertac Karaman and Vivienne Sze developed the new chip

    She knew the subject matter inside-out. She was to tell the audience about the chips, being developed in her lab at MIT, that promise to bring powerful artificial intelligence to a multitude of devices where power is limited, beyond the reach of the vast data centers where most AI computations take place. However, the event—and the audience—gave Sze pause.

    The setting was MARS, an elite, invite-only conference where robots stroll (or fly) through a luxury resort, mingling with famous scientists and sci-fi authors. Just a few researchers are invited to give technical talks, and the sessions are meant to be both awe-inspiring and enlightening. The crowd, meanwhile, consisted of about 100 of the world’s most important researchers, CEOs, and entrepreneurs. MARS is hosted by none other than Amazon’s founder and chairman, Jeff Bezos, who sat in the front row.

    “It was, I guess you’d say, a pretty high-caliber audience,” Sze recalls with a laugh.

    Other MARS speakers would introduce a karate-chopping robot, drones that flap like large, eerily silent insects, and even optimistic blueprints for Martian colonies. Sze’s chips might seem more modest; to the naked eye, they’re indistinguishable from the chips you’d find inside any electronic device. But they are arguably a lot more important than anything else on show at the event.

    New capabilities

    Newly designed chips, like the ones being developed in Sze’s lab, may be crucial to future progress in AI—including stuff like the drones and robots found at MARS. Until now, AI software has largely run on graphical chips, but new hardware could make AI algorithms more powerful, which would unlock new applications. New AI chips could make warehouse robots more common or let smartphones create photo-realistic augmented-reality scenery.

    Sze’s chips are both extremely efficient and flexible in their design, something that is crucial for a field that’s evolving incredibly quickly.

    The microchips are designed to squeeze more out of the “deep learning” AI algorithms that have already turned the world upside down. And in the process, they may inspire those algorithms themselves to evolve. “We need new hardware because Moore’s law has slowed down,” Sze says, referring to the axiom coined by Intel cofounder Gordon Moore that predicted that the number of transistors on a chip will double roughly every 18 months—leading to a commensurate performance boost in computer power.


    This law is now running into the physical limits that come with engineering components at an atomic scale. And it is spurring new interest in alternative architectures and approaches to computing.

    The high stakes that come with investing in next-generation AI chips, and maintaining America’s dominance in chipmaking overall, aren’t lost on the US government. Sze’s microchips are being developed with funding from a Defense Advanced Research Projects Agency (DARPA) program meant to help develop new AI chip designs (see The out-there AI ideas designed to keep the US ahead of China).

    But innovation in chipmaking has been spurred mostly by the emergence of deep learning, a very powerful way for machines to learn to perform useful tasks. Instead of giving a computer a set of rules to follow, a machine basically programs itself. Training data is fed into a large, simulated artificial neural network, which is then tweaked so that it produces the desired result. With enough training, a deep-learning system can find subtle and abstract patterns in data. The technique is applied to an ever-growing array of practical tasks, from face recognition on smartphones to predicting disease from medical images.
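
    For readers who want to see what “a machine basically programs itself” means in practice, here is a deliberately tiny, self-contained sketch of that loop (plain numpy, learning the toy XOR function; real systems differ mainly in scale, not in kind):

```python
# A deliberately tiny version of the training loop described above (numpy only).
# The "network" here learns XOR; real deep-learning systems differ mainly in scale.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # training data
Y = np.array([[0], [1], [1], [0]], dtype=float)               # desired result

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)                # hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)                # output layer
lr = 0.5

for step in range(5000):
    H = np.tanh(X @ W1 + b1)                  # forward pass
    P = 1 / (1 + np.exp(-(H @ W2 + b2)))
    loss = np.mean((P - Y) ** 2)

    dP = 2 * (P - Y) / len(X)                 # backward pass: "tweaking" the network
    dZ2 = dP * P * (1 - P)
    dW2, db2 = H.T @ dZ2, dZ2.sum(0)
    dZ1 = (dZ2 @ W2.T) * (1 - H ** 2)
    dW1, db1 = X.T @ dZ1, dZ1.sum(0)

    W1, b1, W2, b2 = W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2
    if step % 1000 == 0:
        print(f"step {step}: loss {loss:.4f}")    # loss should fall steadily

print("predictions:", P.round(3).ravel())         # should approach [0, 1, 1, 0]
```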

    The new chip race

    Deep learning is not so reliant on Moore’s law. Neural nets run many mathematical computations in parallel, so they run far more effectively on the specialized video game graphics chips that perform parallel computations for rendering 3D imagery. But microchips designed specifically for the computations that underpin deep learning should be even more powerful.

    The potential for new chip architectures to improve AI has stirred up a level of entrepreneurial activity that the chip industry hasn’t seen in decades (see The race to power AI’s silicon brains and China has never had a real chip industry. AI may change that).


    Big tech companies hoping to harness and commercialize AI, including Google, Microsoft, and (yes) Amazon, are all working on their own deep-learning chips. Many smaller companies are developing new chips, too. “It’s impossible to keep track of all the companies jumping into the AI-chip space,” says Mike Delmer, a microchip analyst at the Linley Group. “I’m not joking that we learn about a new one nearly every week.”

    The real opportunity, says Sze, isn’t building the most powerful deep-learning chips possible but the most efficient ones. Power efficiency matters because AI also needs to run beyond the reach of large data centers, relying only on the power available on the device itself. This is known as operating on the “edge.”

    “AI will be everywhere—and figuring out ways to make things more energy efficient will be extremely important,” says Naveen Rao, vice president of the Artificial Intelligence group at Intel.

    For example, Sze’s hardware is more efficient partly because it physically reduces the bottleneck between where data is stored and where it’s analyzed, but also because it uses clever schemes for reusing data. Before joining MIT, Sze pioneered this approach for improving the efficiency of video compression while at Texas Instruments.
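
    The article doesn’t spell out those reuse schemes, but a back-of-envelope model (our simplification, not the actual chip’s dataflow) shows why reuse matters so much: if every multiply in a convolution had to fetch its weight from off-chip memory, the traffic would be tens of thousands of times larger than when each weight is fetched once and reused across output positions.

```python
# Back-of-envelope illustration of data reuse (our simplification, not the actual chip design).
# Count off-chip weight fetches for one 3x3 convolution layer, with and without reuse.

H = W = 224          # output feature-map height and width
C_in, C_out, K = 64, 64, 3

multiplies = H * W * C_out * C_in * K * K          # total multiply-accumulates
fetch_every_time = multiplies                      # naive: refetch the weight per multiply
fetch_once = C_out * C_in * K * K                  # reuse: each weight loaded once, kept on-chip

print(f"multiply-accumulates : {multiplies:,}")
print(f"weight fetches, naive: {fetch_every_time:,}")
print(f"weight fetches, reuse: {fetch_once:,}")
print(f"reduction factor     : {fetch_every_time // fetch_once:,}x")   # = H * W = 50,176
```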

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 2:52 pm on April 30, 2019 Permalink | Reply
    Tags: , Formation of the Moon, MIT Technology Review   

    From M.I.T. Technology Review: “The moon may be made from a magma ocean that once covered Earth” 

    MIT Technology Review
    From M.I.T. Technology Review

    Apr 30, 2019


    There are a number of theories about where the moon came from. Our best guess is that it was formed when the Earth was hit by a large object known as Theia. The impact threw up huge amounts of debris into orbit, which eventually coalesced to form the moon.

    Theia impacts Earth. Smithsonian Magazine

    There’s a problem with this theory. The mathematical models show that most of the material that makes up the moon should come from Theia. But samples from the Apollo missions show that most of the material on the moon came from Earth.

    A paper out earlier this week in Nature Geoscience has a possible explanation. The research, led by Natsuki Hosono from the Japan Agency for Marine-Earth Science and Technology, suggests that the Earth at the time of impact was covered in hot magma rather than a hard outer crust.

    Magma on a planetary surface could be dislodged much more easily than a solid crust could be, so it’s plausible that when Theia struck the Earth, molten material originating from Earth flew up into space and then hardened into the moon.

    This theory relies a lot on the timing of the formation of the moon. The Earth would have to have been in a sweet spot of magma heat and consistency for the theory to be true.

    Additionally, as Jay Melosh at Purdue University explains in a comment article alongside the research, this new simulation still doesn’t check all the boxes needed to get our lunar observations in line with our theories. But it is an important step in getting closer to a solution.

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 3:22 pm on March 13, 2019 Permalink | Reply
    Tags: "Quantum computing should supercharge this machine-learning technique", , Certain machine-learning tasks could be revolutionized by more powerful quantum computers., , MIT Technology Review,   

    From M.I.T Technology Review: “Quantum computing should supercharge this machine-learning technique” 

    MIT Technology Review
    From M.I.T Technology Review

    March 13, 2019
    Will Knight

    The machine-learning experiment was performed using this IBM Q quantum computer.

    Certain machine-learning tasks could be revolutionized by more powerful quantum computers.

    Quantum computing and artificial intelligence are both hyped ridiculously. But it seems that combining the two may indeed open up new possibilities.

    In a research paper published today in the journal Nature, researchers from IBM and MIT show how an IBM quantum computer can accelerate a specific type of machine-learning task called feature matching. The team says that future quantum computers should allow machine learning to hit new levels of complexity.

    As first imagined decades ago, quantum computers were seen as a different way to compute information. In principle, by exploiting the strange, probabilistic nature of physics at the quantum, or atomic, scale, these machines should be able to perform certain kinds of calculations at speeds far beyond those possible with any conventional computer (see “What is a quantum computer?”). There is a huge amount of excitement about their potential at the moment, as they are finally on the cusp of reaching a point where they will be practical.

    At the same time, because we don’t yet have large quantum computers, it isn’t entirely clear how they will outperform ordinary supercomputers—or, in other words, what they will actually do (see “Quantum computers are finally here. What will we do with them?”).

    Feature matching is a technique that converts data into a mathematical representation that lends itself to machine-learning analysis. The resulting machine learning depends on the efficiency and quality of this process. Using a quantum computer, it should be possible to perform this mapping on a scale that was hitherto impossible.
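
    A purely classical analogy (our example, not the quantum circuit in the Nature paper) shows what a feature map buys you: data that no straight line can separate becomes linearly separable once each point is mapped into a higher-dimensional representation. A quantum feature map aims to do the same kind of embedding, but into a Hilbert space too large for classical machines to handle directly.

```python
# Classical toy analogy for feature mapping (not the quantum feature map from the paper).
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1], dtype=float)          # XOR labels: not linearly separable in 2D

def feature_map(x):
    # Map (x1, x2) -> (1, x1, x2, x1*x2); the product term makes XOR linearly separable.
    return np.array([1.0, x[0], x[1], x[0] * x[1]])

Phi = np.vstack([feature_map(x) for x in X])        # data in the new representation
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # fit a linear model in feature space

print("predictions:", np.sign(Phi @ w))             # matches y exactly
```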

    The MIT-IBM researchers performed their simple calculation using a two-qubit quantum computer. Because the machine is so small, it doesn’t prove that bigger quantum computers will have a fundamental advantage over conventional ones, but it suggests that this would be the case. The largest quantum computers available today have around 50 qubits, although not all of them can be used for computation because of the need to correct for errors that creep in as a result of the fragile nature of these quantum bits.

    “We are still far off from achieving quantum advantage for machine learning,” the IBM researchers, led by Jay Gambetta, write in a blog post. “Yet the feature-mapping methods we’re advancing could soon be able to classify far more complex data sets than anything a classical computer could handle. What we’ve shown is a promising path forward.”

    “We’re at a stage where we don’t have applications next month or next year, but we are in a very good position to explore the possibilities,” says Xiaodi Wu, an assistant professor at the University of Maryland’s Joint Center for Quantum Information and Computer Science. Wu says he expects practical applications to be discovered within a year or two.

    Quantum computing and AI are hot right now. Just a few weeks ago, Xanadu, a quantum computing startup based in Toronto, came up with an almost identical approach to that of the MIT-IBM researchers, which the company posted online. Maria Schuld, a machine-learning researcher at Xanadu, says the recent work may be the start of a flurry of research papers that combine the buzzwords “quantum” and “AI.”

    “There is a huge potential,” she says.

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 3:21 pm on July 11, 2018 Permalink | Reply
    Tags: , MIT Technology Review   

    From M.I.T. Technology Review: “The US may have just pulled even with China in the race to build supercomputing’s next big thing”

    MIT Technology Review
    From M.I.T Technology Review

    July 11, 2018
    Martin Giles

    Image credit: Ms. Tech

    The US may have just pulled even with China in the race to build supercomputing’s next big thing.

    The two countries are vying to create an exascale computer that could lead to significant advances in many scientific fields.

    There was much celebrating in America last month when the US Department of Energy unveiled Summit, the world’s fastest supercomputer.

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    Now the race is on to achieve the next significant milestone in processing power: exascale computing.

    This involves building a machine within the next few years that’s capable of a billion billion calculations per second, or one exaflop, which would make it five times faster than Summit (see chart). Every person on Earth would have to do a calculation every second of every day for just over four years to match what an exascale machine will be able to do in a flash.
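
    That comparison is easy to check, assuming a world population of roughly 7.6 billion (the figure commonly used in 2018):

```python
# Sanity-check the "every person on Earth for about four years" comparison.
exaflop = 1e18                      # calculations per second for an exascale machine
population = 7.6e9                  # assumed world population, ~2018
seconds_per_year = 365 * 24 * 3600

years = exaflop / population / seconds_per_year
print(f"{years:.1f} years of one calculation per second per person")   # ~4.2 years
```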

    Chart: Top500 / MIT Technology Review

    This phenomenal power will enable researchers to run massively complex simulations that spark advances in many fields, from climate science to genomics, renewable energy, and artificial intelligence. “Exascale computers are powerful scientific instruments, much like [particle] colliders or giant telescopes,” says Jack Dongarra, a supercomputing expert at the University of Tennessee.

    The machines will also be useful in industry, where they will be used for things like speeding up product design and identifying new materials. The military and intelligence agencies will be keen to get their hands on the computers, which will be used for national security applications, too.

    The race to hit the exascale milestone is part of a burgeoning competition for technological leadership between China and the US. (Japan and Europe are also working on their own computers; the Japanese hope to have a machine running in 2021 and the Europeans in 2023.)

    In 2015, China unveiled a plan to produce an exascale machine by the end of 2020, and multiple reports over the past year or so have suggested it’s on track to achieve its ambitious goal. But in an interview with MIT Technology Review, Depei Qian, a professor at Beihang University in Beijing who helps manage the country’s exascale effort, explained it could fall behind schedule. “I don’t know if we can still make it by the end of 2020,” he said. “There may be a year or half a year’s delay.”

    Teams in China have been working on three prototype exascale machines, two of which use homegrown chips derived from work on existing supercomputers the country has developed. The third uses licensed processor technology. Qian says that the pros and cons of each approach are still being evaluated, and that a call for proposals to build a fully functioning exascale computer has been pushed back.

    Given the huge challenges involved in creating such a powerful computer, timetables can easily slip, which could make an opening for the US. China’s initial goal forced the American government to accelerate its own road map and commit to delivering its first exascale computer in 2021, two years ahead of its original target. The American machine, called Aurora, is being developed for the Department of Energy’s Argonne National Laboratory in Illinois. Supercomputing company Cray is building the system for Argonne, and Intel is making chips for the machine.

    Depiction of ANL ALCF Cray Shasta Aurora supercomputer

    To boost supercomputers’ performance, engineers working on exascale systems around the world are using parallelism, which involves packing many thousands of chips into millions of processing units known as cores. Finding the best way to get all these to work in harmony requires time-consuming experimentation.

    Moving data between processors, and into and out of storage, also soaks up a lot of energy, which means the cost of operating a machine over its lifetime can exceed the cost of building it. The DoE has set an upper limit of 40 megawatts of power for an exascale computer, which would roughly translate into an electricity budget of $40 million a year.
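
    The electricity figure follows from the power cap, assuming an average price of roughly 11 cents per kilowatt-hour (our assumption; actual rates vary):

```python
# Where the ~$40 million a year figure comes from (assuming ~$0.11 per kWh).
power_mw = 40
hours_per_year = 365 * 24                    # 8,760
price_per_kwh = 0.11                         # assumed average electricity price, USD

annual_kwh = power_mw * 1000 * hours_per_year
annual_cost = annual_kwh * price_per_kwh
print(f"{annual_kwh:,.0f} kWh/year -> ${annual_cost:,.0f}/year")   # roughly $38.5 million
```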

    To lower power consumption, engineers are placing three-dimensional stacks of memory chips as close as possible to compute cores to reduce the distance data has to travel, explains Steve Scott, the chief technology officer of Cray. And they’re increasingly using flash memory, which uses less power than alternative systems such as disk storage. Reducing these power needs makes it cheaper to store data at various points during a calculation, and that saved data can help an exascale machine recover quickly if a glitch occurs.

    Such advances have helped the team behind Aurora. “We’re confident of [our] ability to deliver it in 2021,” says Scott.

    More US machines will follow. In April the DoE announced a request for proposals worth up to $1.8 billion for two more exascale computers to come online between 2021 and 2023. These are expected to cost $400 million to $600 million each, with the remaining money being used to upgrade Aurora or even create a follow-on machine.

    Both China and America are also funding work on software for exascale machines. China reportedly has teams working on some 15 application areas, while in the US, teams are working on 25, including applications in fields such as astrophysics and materials science. “Our goal is to deliver as many breakthroughs as possible,” says Katherine Yelick, the associate director for computing sciences at Lawrence Berkeley National Laboratory, who is part of the leadership team coordinating the US initiative.

    While there’s plenty of national pride wrapped up in the race to get to exascale first, the work Yelick and other researchers are doing is a reminder that raw exascale computing power isn’t the true test of success here; what really matters is how well it’s harnessed to solve some of the world’s toughest problems.

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 4:10 pm on December 7, 2017 Permalink | Reply
    Tags: , MIT Technology Review, , , Quantum Simulation Could Shed Light on the Origins of Life   

    From MIT Tech Review: “Quantum Simulation Could Shed Light on the Origins of Life” 

    MIT Technology Review
    M.I.T Technology Review

    December 7, 2017
    No writer credit

    For decades computer scientists have created artificial life to test ideas about evolution. Doing so on a quantum computer could help capture the role quantum mechanics may have played.

    What role does quantum mechanics play in the machinery of life? Nobody is quite sure, but in recent years, physicists have begun to investigate all kinds of possibilities. In the process, they have gathered evidence suggesting that quantum mechanics plays an important role in photosynthesis, in bird navigation, and perhaps in our sense of smell.

    There is even a speculative line of thought that quantum processes must have governed the origin of life itself and the formulation of the genetic code. The work to study these questions is ongoing and involves careful observation of the molecules of life.

    But there is another way to approach this question from the bottom up. Computer scientists have long toyed with artificial life forms built from computer code. This code lives in a silicon-based landscape where its fitness is measured against some selection criteria.

    The process of quantum evolution and the creation of artificial quantum life. No image credit.

    It reproduces by combining with other code or by the mutation of its own code. And the fittest code has more offspring while the least fit dies away. In other words, the code evolves. Computer scientists have used this approach to study various aspects of life, evolution, and the emergence of complexity.

    This is an entirely classical process following ordinary Newtonian steps, one after the other. The real world, on the other hand, includes quantum mechanics and the strange phenomena that it allows. That’s how the question arises of whether quantum mechanics can play a role in evolution and even in the origin of life itself.

    So an important first step is to reproduce this process of evolution in the quantum world, creating artificial quantum life forms. But is this possible?

    Today we get an answer thanks to the work of Unai Alvarez-Rodriguez and a few pals at the University of the Basque Country in Spain. These guys have created a quantum version of artificial life for the first time. And they say their results are the first examples of quantum evolution that allows physicists to explore the way complexity emerges in the quantum world.

    The experiment is simple in principle. The team think of quantum life as consisting of two parts—a genotype and a phenotype. Just as with carbon-based life, the quantum genotype contains the quantum information that describes the individual—its genetic code. The genotype is the part of the quantum life unit that is transmitted from one generation to the next.

    The phenotype, on the other hand, is the manifestation of the genotype that interacts with the real world—the “body” of the individual. “This state, together with the information it encodes, is degraded during the lifetime of the individual,” say Alvarez-Rodriguez and co.

    So each unit of quantum life consists of two qubits—one representing the genotype and the other the phenotype. “The goal is to reproduce the characteristic processes of Darwinian evolution, adapted to the language of quantum algorithms and quantum computing,” say the team.

    The first step in the evolutionary process is reproduction. Alvarez-Rodriguez and co do this using the process of entanglement, which allows the transmission of quantum states from one object to another. In this case, they entangle the genotype qubit with a blank state, and then transfer its quantum information.

    The next stage is survival, which depends on the phenotype. Alvarez-Rodriguez and co do this by transferring an aspect of the genotype state to another blank state, which becomes the phenotype. The phenotype then interacts with the environment and eventually dissipates.

    This process is equivalent to aging and dying, and the time it takes depends on the genotype. Those that live longer are implicitly better suited to their environment and are preferentially reproduced in the next generation.

    There is another important aspect of evolution—how individuals differ from each other. In ordinary evolution, variation occurs in two ways. The first is through sexual recombination, where the genotype from two individuals combines. The second is by mutation, where random changes occur in the genotype during the reproductive process.

    Alvarez-Rodriguez and co employ this second type of variation in their quantum world. When the quantum information is transferred from one generation to the next, the team introduce a random change—in this case a rotation of the quantum state. And this, in turn, determines the phenotype and how it interacts with its environment.
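
    To make the selection-plus-mutation loop concrete, here is a toy, entirely classical caricature of it (our simplification; the real experiment transfers states by entanglement on IBM hardware, which this numpy sketch does not attempt to reproduce):

```python
# Toy classical caricature of the scheme described above (our simplification;
# the actual experiment ran entangling circuits on IBM's cloud quantum computer).
import numpy as np

rng = np.random.default_rng(1)

def phenotype_lifetime(theta):
    # Genotype |psi> = cos(theta)|0> + sin(theta)|1>; let the |0> population
    # set how long the phenotype survives its (dissipative) environment.
    return np.cos(theta) ** 2

population = rng.uniform(0, np.pi / 2, size=20)        # initial genotype angles

for generation in range(10):
    fitness = phenotype_lifetime(population)
    # Longer-lived phenotypes are preferentially copied into the next generation...
    parents = rng.choice(population, size=population.size, p=fitness / fitness.sum())
    # ...and each copy picks up a small rotation: the "mutation" of the quantum state.
    population = parents + rng.normal(0, 0.05, size=parents.size)

print(f"mean lifetime after selection: {phenotype_lifetime(population).mean():.2f}")
```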

    So that’s the theory. The experiment itself is tricky because quantum computers are still in their infancy. Nevertheless, Alvarez-Rodriguez and co have made use of the IBM QX, a superconducting quantum computer at IBM’s T.J. Watson Laboratories that the company has made publicly accessible via the cloud. The company claims that some 40,000 individuals have signed up to use the service and have together run some 275,000 quantum algorithms through the device.

    Alvarez-Rodriguez and co used the five-qubit version of the machine, which runs quantum algorithms that allow two-qubit interactions. However, the system imposes some limitations on the process of evolution that the team want to run. For example, it does not allow the variations introduced during the reproductive process to be random.

    Instead, the team run the experiment several times, introducing a different known rotation in each run, and then look at the results together. In total, they run the experiment thousands of times to get a good sense of the outcomes.

    In general, the results match the theoretical predictions with high fidelity. “The experiments reproduce the characteristic properties of the sought quantum natural selection scenario,” say Alvarez-Rodriguez and co.

    And the team say that the mutations have an important impact on the outcomes: “[They] significantly improved the fidelity of the quantum algorithm outcome.” That’s not so different from the classical world, where mutations help species adapt to changing environments.

    Of course, there are important caveats. The limitations of IBM’s quantum computer raise important questions about whether the team has really simulated evolution. But these issues should be ironed out in the near future.

    All this work is the result of the team’s long focus on quantum life. Back in 2015, we reported on the team’s work in simulating quantum life on a classical computer. Now they have taken the first step in testing these ideas on a real quantum computer.

    And the future looks bright. Quantum computer technology is advancing rapidly, which should allow Alvarez-Rodriguez and co to create quantum life in more complex environments. IBM, for example, has a 20-qubit processor online and is testing a 50-qubit version.

    That will make possible a variety of new experiments on quantum life. The most obvious will include the ability for quantum life forms to interact with each other and perhaps reproduce by sexual recombination—in other words, by combining elements of their genotypes. Another possibility will be to allow the quantum life forms to move and see how this influences their interactions and fitness for survival.

    Just what will emerge isn’t clear. But Alvarez-Rodriguez and co hope their quantum life forms will become important models for exploring the emergence of complexity in the quantum world.

    Eventually, that should feed into our understanding of the role of quantum processes in carbon-based life forms and the origin of life itself. The ensuing debate will be fascinating to watch.

    Ref: Quantum Artificial Life in an IBM Quantum Computer

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
    • stewarthoughblog 10:48 pm on December 7, 2017 Permalink | Reply

      It is always nice for computer simulations to be developed for virtually all scientific phenomena. However, the true realism is the critical measure of their relevance and veracity. Since there are no known methods for origin of life development sequences, only possible scenarios for the complexity progression essential for a first organism are possible following some logical progression of events.

      Consequently, the computer code can simulate some logical process and provide a learning method for what logically must have happened, but its relevance to reality is unknown without experimental verification.


  • richardmitnick 4:58 pm on December 4, 2017 Permalink | Reply
    Tags: A new study links extreme heat during early childhood to lower earnings as an adult, , , , Global Warming May Harm Children for Life, MIT Technology Review   

    From MIT Tech Review: “Global Warming May Harm Children for Life” 

    MIT Technology Review
    M.I.T Technology Review

    December 4, 2017
    James Temple

    A baby sits on a Tel Aviv beach on a hot summer’s day. Uriel Sinai | Getty Images

    A new study links extreme heat during early childhood to lower earnings as an adult.

    A growing body of research concludes that rising global temperatures increase the risk of heat stress and stroke, decrease productivity and economic output, widen global wealth disparities, and can trigger greater violence.

    Now a new study by researchers at Stanford, the University of California, Berkeley, and the U.S. Department of the Treasury suggests that even short periods of extreme heat can carry long-term consequences for children and their financial future. Specifically, heat waves during an individual’s early childhood, including the period before birth, can affect his or her earnings three decades later, according to the paper, published on Monday in Proceedings of the National Academy of Sciences. Every day that temperatures rise above 32 ˚C, or just shy of 90 ˚F, from conception to the age of one is associated with a 0.1 percent decrease in average income at the age of 30.

    The researchers don’t directly tackle the tricky question of how higher temperatures translate to lower income, noting only that fetuses and infants are “especially sensitive to hot temperatures because their thermoregulatory and sympathetic nervous systems are not fully developed.” Earlier studies have linked extreme temperatures during this early life period with lower birth weight and higher infant mortality, and a whole field of research has developed around what’s known as the “developmental origins of health and disease paradigm,” which traces the impacts of early health shocks into adulthood.

    There are several pathways through which higher temperatures could potentially lead to lower adult earnings, including reduced cognition, ongoing health issues that increase days missed from school or work, and effects on non-cognitive traits such as ambition, assertiveness, or self-control, says Maya Rossin-Slater, a coauthor of the study and assistant professor in Stanford’s department of health research and policy.

    The bigger danger here is that global warming will mean many more days with a mean temperature above 32 ˚C—specifically, an increase from one per year in the average U.S. county today to around 43 annually by around 2070, according to an earlier UN report cited in the study.

    For workers who would otherwise make $50,000 annually, a single day of extreme heat during their first 21 months would cut their salary by $50. But 43 such days would translate to $2,150. Multiply that by the total population experiencing such events, and it quickly adds up to a huge economic impact. A greater proportion of citizens failing to reach their full earnings potential implies lower overall productivity and economic output.
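
    The per-worker arithmetic is easy to reproduce:

```python
# Reproduce the per-worker arithmetic in the paragraph above.
salary = 50_000
loss_per_hot_day = 0.001 * salary            # 0.1 percent per day above 32 C
hot_days_2070 = 43

print(f"one hot day : ${loss_per_hot_day:,.0f}")                    # $50
print(f"43 hot days : ${hot_days_2070 * loss_per_hot_day:,.0f}")    # $2,150
```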

    All of that comes on top of the ways that high temperatures directly hit the economy, mainly by decreasing human productivity and agricultural yields, according to other research. Unchecked climate change could reduce average global income by around 23 percent in 2100, and as much as 75 percent in the world’s poorest countries, according to research by UC Berkeley public policy professor Solomon Hsiang and coauthors in a 2015 Nature paper (see “Hotter Days Will Drive Global Inequality”). Notably, that excludes the devastating economic impacts of things like hurricanes and sea level rise.

    “We know that high temperatures have numerous damaging consequences for current economic productivity, at the time that the high temperatures occur,” Hsiang said in an e-mail to MIT Technology Review. “This study demonstrates a new way in which high temperatures today reduce economic productivity far into the future, by weakening our labor force.”

    The good news, at least for certain nations and demographic groups, is that air-conditioning nearly eliminates this observed effect, based on the authors’ analysis of U.S. Census data that captures how air-conditioning penetration increased in U.S. counties over time. But that could point to one more way that rising global temperatures will disproportionally harm impoverished nations, or perhaps already have.

    “In poor countries in hot climates that don’t have air-conditioning, we could imagine these effects being even more dramatic,” Rossin-Slater says.

    The study explored the results for 12 million people born in the United States between 1969 and 1977, incorporating adult earnings information from newly available data in the U.S. Census Bureau’s Longitudinal Employer Household Dynamics program. The researchers sought to isolate the impact of temperature, and control for other variables, by using “fine-scale” daily weather data and county-level birth information.

    “This study makes it very clear to see how climate change in the next few decades can affect our grandchildren, even if populations in the distant future figure out how to cool things back down,” Hsiang said.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 2:04 pm on December 2, 2017 Permalink | Reply
    Tags: and Cameras Will Never Be the Same, , Lenses Are Being Reinvented, MIT Technology Review, ,   

    From MIT Tech Review: “Lenses Are Being Reinvented, and Cameras Will Never Be the Same” 

    MIT Technology Review
    M.I.T Technology Review

    December 1, 2017
    No writer credit

    “Metalenses” created with photolithography could change the nature of imaging and optical processing.

    Lenses are almost as old as civilization itself. The ancient Egyptians, Greeks, and Babylonians all developed lenses made from polished quartz and used them for simple magnification. Later, 17th-century scientists combined lenses to make telescopes and microscopes, instruments that changed our view of the universe and our position within it.

    Now lenses are being reinvented by the process of photolithography, which carves subwavelength features onto flat sheets of glass. Today, Alan She and pals at Harvard University in Massachusetts show how to arrange these features in ways that scatter light with greater control than has ever been possible. They say the resulting “metalenses” are set to revolutionize imaging and usher in a new era of optical processing.

    Lens making has always been a tricky business. It is generally done by pouring molten glass, or silicon dioxide, into a mold and allowing it to set before grinding and polishing it into the required shape. This is a time-consuming business that is significantly different from the manufacturing processes for light-sensing components on microchips.

    Metalenses are carved onto wafers of silicon dioxide in a process like that used to make silicon chips. No image credit.

    So a way of making lenses on chips in the same way would be hugely useful. It would allow lenses to be fabricated in the same plants as other microelectronic components, even at the same time.

    She and co show how this process is now possible. The key idea is that tiny features, smaller than the wavelength of light, can manipulate it. For example, white light can be broken into its component colors by reflecting it off a surface into which are carved a set of parallel trenches that have the same scale as the wavelength of light.

    Metalenses can produce high-quality images.

    Physicists have played with so-called diffraction gratings for centuries. But photolithography makes it possible to take the idea much further by creating a wider range of features and varying their shape and orientation.

    Since the 1960s, photolithography has produced ever smaller features on silicon chips. In 1970, this technique could carve shapes in silicon with a scale of around 10 micrometers. By 1985, feature size had dropped to one micrometer, and by 1998, to 250 nanometers. Today, the chip industry makes features around 10 nanometers in size.

    Visible light has a wavelength of 400 to 700 nanometers, so the chip industry has been able to make features of this size for some time. But only recently have researchers begun to investigate how these features can be arranged on flat sheets of silicon dioxide to create metalenses that bend light.

    The process begins with a silicon dioxide wafer onto which is deposited a thin layer of silicon covered in a photoresist pattern. The silicon below is then carved away using ultraviolet light. Washing away the remaining photoresist leaves the unexposed silicon in the desired shape.

    She and co use this process to create a periodic array of silicon pillars on glass that scatter visible light as it passes through. And by carefully controlling the spacing between the pillars, the team can bring the light to a focus.

    Specific pillar spacings determine the precise optical properties of this lens. For example, the researchers can control chromatic aberration to determine where light of different colors comes to a focus.
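
    One textbook way to see how pillar spacing sets the focus (our illustration of the principle, not the Harvard team’s design files) is the phase profile a flat lens must impose: at each radius, the pillars must delay the light by just enough that every path arrives at the focal point in step.

```python
# Textbook flat-lens phase profile (illustration of the principle, not the paper's design).
import numpy as np

wavelength = 550e-9                 # green light, metres
f = 50e-3                           # focal length: 50 mm, as for the lenses in the article
r = np.linspace(0, 10e-3, 5)        # radial positions out to a 10 mm radius (20 mm diameter)

# Phase each point must impose so every path reaches the focus in step, wrapped to [0, 2*pi)
phi = (2 * np.pi / wavelength) * (f - np.sqrt(r**2 + f**2)) % (2 * np.pi)

for radius, phase in zip(r, phi):
    print(f"r = {radius*1e3:4.1f} mm -> required phase = {phase:5.2f} rad")
# In a metalens, the local pillar size and spacing are chosen to supply this phase at each radius.
```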

    In imaging lenses, chromatic aberration must be minimized—it otherwise produces the colored fringes around objects viewed through cheap toy telescopes. But in spectrographs, different colors must be brought to focus in different places. She and co can do either.

    Neither do these lenses suffer from spherical aberration, a common problem with ordinary lenses caused by their three-dimensional spherical shape. Metalenses do not have this problem because they are flat. Indeed, they are similar to the theoretical “ideal lenses” that undergraduate physicists study in optics courses.

    Of course, physicists have been able to make flat lenses, such as Fresnel lenses, for decades. But they have always been hard to make.

    The key advance here is that metalenses, because they can be fabricated in the same way as microchips, can be mass-produced with subwavelength surface features. She and co make dozens of them on a single silica wafer. Each of these lenses is less than a micrometer thick, with a diameter of 20 millimeters and a focal length of 50 millimeters.

    “We envision a manufacturing transition from using machined or moulded optics to lithographically patterned optics, where they can be mass produced with the similar scale and precision as IC chips,” say She and co.

    And they can do this with chip fabrication technology that is more than a decade old. That will give old fab plants a new lease on life. “State-of-the-art equipment is useful, but not necessarily required,” say She and co.

    Metalenses have a wide range of applications. The most obvious is imaging. Flat lenses will make imaging systems thinner and simpler. But crucially, since metalenses can be fabricated in the same process as the electronic components for sensing light, they will be cheaper.

    So cameras for smartphones, laptops, and augmented-reality imaging systems will suddenly become smaller and less expensive to make. They could even be printed onto the end of optical fibers to act as endoscopes.

    Astronomers could have some fun too. These lenses are significantly lighter and thinner than the behemoths they have launched into orbit in observatories such as the Hubble Space Telescope. A new generation of space-based astronomy and Earth observing beckons.

    But it is within chips themselves that this technology could have the biggest impact. The technique makes it possible to build complex optical bench-type systems into chips for optical processing.

    And there are further advances in the pipeline. One possibility is to change the properties of metalenses in real time using electric fields. That raises the prospect of lenses that change focal length with voltage—or, more significant, that switch light.

    Science paper:
    Alan She, Shuyan Zhang, Samuel Shian, David R. Clarke, Federico Capasso
    Large Area Metalenses: Design, Characterization, and Mass Manufacturing. No Journal reference.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 1:23 pm on November 8, 2017 Permalink | Reply
    Tags: , CSAIL-MIT’s Computer Science and Artificial Intelligence Lab, Daniela Rus, MIT Technology Review, More Evidence that Humans and Machines Are Better When They Team Up, ,   

    From M.I.T. Technology Review: Women in STEM - Daniela Rus: “More Evidence that Humans and Machines Are Better When They Team Up”

    MIT Technology Review
    M.I.T Technology Review

    November 8, 2017
    Will Knight

    By worrying about job displacement, we might end up missing a huge opportunity for technological amplification.

    MIT computer scientist Daniela Rus. Justin Saglio

    Instead of just fretting about how robots and AI will eliminate jobs, we should explore new ways for humans and machines to collaborate, says Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL).

    “I believe people and machines should not be competitors, they should be collaborators,” Rus said during her keynote at EmTech MIT 2017, an annual event hosted by MIT Technology Review.

    How technology will impact employment in coming years has become a huge question for economists, policy-makers, and technologists. And, as one of the world’s preeminent centers of robotics and artificial intelligence, CSAIL has a big stake in driving coming changes.

    There is some disagreement among experts about how significantly jobs will be affected by automation and AI, and about how this will be offset by the creation of new business opportunities. Last week, Rus and others at MIT organized an event called AI and the Future of Work, where some speakers gave more dire warnings about the likely upheaval ahead (see “Is AI About to Decimate White Collar Jobs?”).

    The potential for AI to augment human skills is often mentioned, but it has been researched relatively little. Rus talked about a study by researchers from Harvard University comparing the ability of expert doctors and AI software to diagnose cancer in patients. They found that doctors perform significantly better than the software, but doctors together with software were better still.

    Rus pointed to the potential for AI to augment human capabilities in law and in manufacturing, where smarter automated systems might enable the production of goods to be highly customized and more distributed.

    Robotics might end up augmenting human abilities in some surprising ways. For instance, Rus pointed to a project at MIT that involves using the technology in self-driving cars to help people with visual impairment to navigate. She also speculated that brain-computer interfaces, while still relatively crude today, might have a huge impact on future interactions with robots.

    Although Rus is bullish on the future of work, she said two economic phenomena do give her cause for concern. One is the decreasing quality of many jobs, something that is partly shaped by automation; and the other is the flat gross domestic product of the United States, which impacts the emergence of new economic opportunities.

    But because AI is still so limited, she said she expects it to mostly eliminate routine and boring elements of work. “There is still a lot to be done in this space,” Rus said. “I am wildly excited about offloading my routine tasks to machines so I can focus on things that are interesting.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 9:54 am on November 1, 2017 Permalink | Reply
    Tags: , Deep neural networks, Google Researchers Have a New Alternative to Traditional Neural Networks, MIT Technology Review   

    From M.I.T Technology Review: “Google Researchers Have a New Alternative to Traditional Neural Networks” 

    MIT Technology Review
    M.I.T Technology Review

    November 1st, 2017
    Jamie Condliffe

    Image credit: Jingyi Wang

    Say hello to the capsule network.

    AI has enjoyed huge growth in the past few years, and much of that success is owed to deep neural networks, which provide the smarts behind some of AI’s most impressive tricks like image recognition. But there is growing concern that some of the fundamental tenets that have made those systems so successful may not be able to overcome the major problems facing AI—perhaps the biggest of which is a need for huge quantities of data from which to learn.

    Google’s Geoff Hinton seems to be among those who are concerned: Wired reports that he has now unveiled a new take on traditional neural networks that he calls capsule networks. In a pair of new papers—one published on the arXiv, the other on OpenReview—Hinton and a handful of colleagues explain how they work.

    Their approach uses small groups of neurons, collectively known as capsules, which are organized into layers to identify things in video or images. When several capsules in one layer agree on having detected something, they activate a capsule at a higher level—and so on, until the network is able to make a judgment about what it sees. Each of those capsules is designed to detect a specific feature in an image in such a way that it can recognize that feature in different scenarios, such as from varying angles.
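
    The “agreement” step is the heart of the idea. The stripped-down numpy sketch below follows the routing-by-agreement scheme in that spirit (the dimensions, initialization, and iteration count are illustrative choices of ours, not values from the papers): predictions from lower-level capsules that agree with the emerging higher-level output have their couplings strengthened, and the length of the squashed output vector plays the role of “this capsule is active.”

```python
# Stripped-down routing-by-agreement between one layer of capsules and the next
# (dimensions, initialization, and iteration count are illustrative, not the papers' values).
import numpy as np

rng = np.random.default_rng(0)
num_lower, num_upper, dim = 6, 3, 4

# u_hat[i, j] is lower capsule i's prediction for upper capsule j's output vector.
u_hat = rng.normal(0, 1, (num_lower, num_upper, dim))

def squash(v, axis=-1):
    # Shrinks short vectors toward 0 and long ones toward unit length;
    # the resulting length acts as the capsule's activation probability.
    norm2 = np.sum(v**2, axis=axis, keepdims=True)
    return (norm2 / (1 + norm2)) * v / np.sqrt(norm2 + 1e-9)

b = np.zeros((num_lower, num_upper))            # routing logits
for _ in range(3):                              # a few routing iterations
    c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coefficients (softmax)
    s = np.einsum("ij,ijd->jd", c, u_hat)       # weighted sum of predictions per upper capsule
    v = squash(s)                               # upper-capsule outputs
    b += np.einsum("ijd,jd->ij", u_hat, v)      # agreement: strengthen couplings that match v

print("upper-capsule activations (vector lengths):", np.linalg.norm(v, axis=1).round(3))
```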

    Hinton claims that the approach, which has been in the making for decades, should enable his networks to require less data than regular neural nets in order to recognize objects in new situations. In the papers published so far, capsule networks have been shown to keep up with regular neural networks when it comes to identifying handwritten characters, and make fewer errors when trying to recognize previously observed toys from different angles. In other words, he’s published the results because he’s got his capsules to work as well as, or slightly better than, regular ones (albeit more slowly, for now).

    Now, then, comes the interesting part. Will these systems provide a compelling alternative to traditional neural networks, or will they stall? Right now it’s impossible to tell, but we can expect the machine learning community to implement the work, and fast, in order to find out. Either way, those concerned about the limitations of current AI systems can be heartened by the fact that researchers are pushing the boundaries to build new deep learning alternatives.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     