Tagged: MIT Technology Review

  • richardmitnick 4:10 pm on December 7, 2017
    Tags: MIT Technology Review, Quantum Simulation Could Shed Light on the Origins of Life

    From MIT Tech Review: “Quantum Simulation Could Shed Light on the Origins of Life” 

    MIT Technology Review

    December 7, 2017
    No writer credit

    For decades computer scientists have created artificial life to test ideas about evolution. Doing so on a quantum computer could help capture the role quantum mechanics may have played.

    What role does quantum mechanics play in the machinery of life? Nobody is quite sure, but in recent years, physicists have begun to investigate all kinds of possibilities. In the process, they have gathered evidence suggesting that quantum mechanics plays an important role in photosynthesis, in bird navigation, and perhaps in our sense of smell.

    There is even a speculative line of thought that quantum processes must have governed the origin of life itself and the formulation of the genetic code. The work to study these questions is ongoing and involves careful observation of the molecules of life.

    But there is another way to approach this question from the bottom up. Computer scientists have long toyed with artificial life forms built from computer code. This code lives in a silicon-based landscape where its fitness is measured against some selection criteria.

    The process of quantum evolution and the creation of artificial quantum life. No image credit.

    It reproduces by combining with other code or by the mutation of its own code. And the fittest code has more offspring while the least fit dies away. In other words, the code evolves. Computer scientists have used this approach to study various aspects of life, evolution, and the emergence of complexity.

    This is an entirely classical process following ordinary Newtonian steps, one after the other. The real world, on the other hand, includes quantum mechanics and the strange phenomena that it allows. That’s how the question arises of whether quantum mechanics can play a role in evolution and even in the origin of life itself.

    So an important first step is to reproduce this process of evolution in the quantum world, creating artificial quantum life forms. But is this possible?

    Today we get an answer thanks to the work of Unai Alvarez-Rodriguez and a few pals at the University of the Basque Country in Spain. These guys have created a quantum version of artificial life for the first time. And they say their results are the first examples of quantum evolution that allows physicists to explore the way complexity emerges in the quantum world.

    The experiment is simple in principle. The team think of quantum life as consisting of two parts—a genotype and a phenotype. Just as with carbon-based life, the quantum genotype contains the quantum information that describes the individual—its genetic code. The genotype is the part of the quantum life unit that is transmitted from one generation to the next.

    The phenotype, on the other hand, is the manifestation of the genotype that interacts with the real world—the “body” of the individual. “This state, together with the information it encodes, is degraded during the lifetime of the individual,” say Alvarez-Rodriguez and co.

    So each unit of quantum life consists of two qubits—one representing the genotype and the other the phenotype. “The goal is to reproduce the characteristic processes of Darwinian evolution, adapted to the language of quantum algorithms and quantum computing,” say the team.

    The first step in the evolutionary process is reproduction. Alvarez-Rodriguez and co do this using the process of entanglement, which allows the transmission of quantum states from one object to another. In this case, they entangle the genotype qubit with a blank state, and then transfer its quantum information.

    The next stage is survival, which depends on the phenotype. Alvarez-Rodriguez and co do this by transferring an aspect of the genotype state to another blank state, which becomes the phenotype. The phenotype then interacts with the environment and eventually dissipates.

    This process is equivalent to aging and dying, and the time it takes depends on the genotype. Those that live longer are implicitly better suited to their environment and are preferentially reproduced in the next generation.

    There is another important aspect of evolution—how individuals differ from each other. In ordinary evolution, variation occurs in two ways. The first is through sexual recombination, where the genotype from two individuals combines. The second is by mutation, where random changes occur in the genotype during the reproductive process.

    Alvarez-Rodriguez and co employ this second type of variation in their quantum world. When the quantum information is transferred from one generation to the next, the team introduce a random change—in this case a rotation of the quantum state. And this, in turn, determines the phenotype and how it interacts with its environment.
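    This reproduce-and-mutate cycle can be sketched numerically. The following is a toy statevector simulation in plain NumPy, not the team's actual circuit or the IBM hardware; the genotype angle and mutation scale are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def genotype(theta):
        """Single-qubit genotype state cos(t/2)|0> + sin(t/2)|1>."""
        return np.array([np.cos(theta / 2), np.sin(theta / 2)])

    def reproduce(g):
        """Entangle the genotype with a blank |0> qubit via a CNOT,
        spreading its quantum information over two qubits (a cloning-like
        step; exact cloning of an unknown state is forbidden)."""
        blank = np.array([1.0, 0.0])
        state = np.kron(g, blank)            # |g>|0>
        cnot = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]])
        return cnot @ state                  # cos(t/2)|00> + sin(t/2)|11>

    def mutate(theta, scale=0.1):
        """Small random rotation of the genotype angle (the 'mutation')."""
        return theta + rng.normal(0.0, scale)

    theta = np.pi / 3
    child = reproduce(genotype(theta))
    theta_next = mutate(theta)               # genotype for the next generation

    # The |11> amplitude of the entangled pair equals sin(theta/2)
    print(np.isclose(child[3], np.sin(theta / 2)))   # True
    ```

    The point of the sketch is only the structure of the cycle: copy by entanglement, perturb by rotation, repeat.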

    So that’s the theory. The experiment itself is tricky because quantum computers are still in their infancy. Nevertheless, Alvarez-Rodriguez and co have made use of the IBM QX, a superconducting quantum computer at IBM’s T.J. Watson Laboratories that the company has made publicly accessible via the cloud. The company claims that some 40,000 individuals have signed up to use the service and have together run some 275,000 quantum algorithms through the device.

    Alvarez-Rodriguez and co used the five-qubit version of the machine, which runs quantum algorithms that allow two-qubit interactions. However, the system imposes some limitations on the process of evolution that the team want to run. For example, it does not allow the variations introduced during the reproductive process to be random.

    Instead, the team run the experiment several times, introducing a different known rotation in each run, and then look at the results together. In total, they run the experiment thousands of times to get a good sense of the outcomes.

    In general, the results match the theoretical predictions with high fidelity. “The experiments reproduce the characteristic properties of the sought quantum natural selection scenario,” say Alvarez-Rodriguez and co.

    And the team say that the mutations have an important impact on the outcomes: “[They] significantly improved the fidelity of the quantum algorithm outcome.” That’s not so different from the classical world, where mutations help species adapt to changing environments.

    Of course, there are important caveats. The limitations of IBM’s quantum computer raise important questions about whether the team has really simulated evolution. But these issues should be ironed out in the near future.

    All this work is the result of the team’s long focus on quantum life. Back in 2015, we reported on the team’s work in simulating quantum life on a classical computer. Now they have taken the first step in testing these ideas on a real quantum computer.

    And the future looks bright. Quantum computer technology is advancing rapidly, which should allow Alvarez-Rodriguez and co to create quantum life in more complex environments. IBM, for example, has a 20-qubit processor online and is testing a 50-qubit version.

    That will make possible a variety of new experiments on quantum life. The most obvious will include the ability for quantum life forms to interact with each other and perhaps reproduce by sexual recombination—in other words, by combining elements of their genotypes. Another possibility will be to allow the quantum life forms to move and see how this influences their interactions and fitness for survival.

    Just what will emerge isn’t clear. But Alvarez-Rodriguez and co hope their quantum life forms will become important models for exploring the emergence of complexity in the quantum world.

    Eventually, that should feed into our understanding of the role of quantum processes in carbon-based life forms and the origin of life itself. The ensuing debate will be fascinating to watch.

    Ref: Quantum Artificial Life in an IBM Quantum Computer

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

    • stewarthoughblog 10:48 pm on December 7, 2017

      It is always nice for computer simulations to be developed for virtually all scientific phenomena. However, true realism is the critical measure of their relevance and veracity. Since there are no known methods for origin-of-life development sequences, only possible scenarios for the complexity progression essential for a first organism can be proposed, following some logical progression of events.

      Consequently, the computer code can simulate some logical process and provide a learning method for what logically must have happened, but its relevance to reality is unknown without experimental verification.


  • richardmitnick 4:58 pm on December 4, 2017
    Tags: A new study links extreme heat during early childhood to lower earnings as an adult, Global Warming May Harm Children for Life, MIT Technology Review

    From MIT Tech Review: “Global Warming May Harm Children for Life” 

    MIT Technology Review

    December 4, 2017
    James Temple

    A baby sits on a Tel Aviv beach on a hot summer’s day. Uriel Sinai | Getty Images

    A new study links extreme heat during early childhood to lower earnings as an adult.

    A growing body of research concludes that rising global temperatures increase the risk of heat stress and stroke, decrease productivity and economic output, widen global wealth disparities, and can trigger greater violence.

    Now a new study by researchers at Stanford, the University of California, Berkeley, and the U.S. Department of the Treasury suggests that even short periods of extreme heat can carry long-term consequences for children and their financial future. Specifically, heat waves during an individual’s early childhood, including the period before birth, can affect his or her earnings three decades later, according to the paper, published on Monday in Proceedings of the National Academy of Sciences. Every day that temperatures rise above 32 ˚C, or just shy of 90 ˚F, from conception to the age of one is associated with a 0.1 percent decrease in average income at the age of 30.

    The researchers don’t directly tackle the tricky question of how higher temperatures translate to lower income, noting only that fetuses and infants are “especially sensitive to hot temperatures because their thermoregulatory and sympathetic nervous systems are not fully developed.” Earlier studies have linked extreme temperatures during this early life period with lower birth weight and higher infant mortality, and a whole field of research has developed around what’s known as the “developmental origins of health and disease paradigm,” which traces the impacts of early health shocks into adulthood.

    There are several pathways through which higher temperatures could potentially lead to lower adult earnings, including reduced cognition, ongoing health issues that increase days missed from school or work, and effects on non-cognitive traits such as ambition, assertiveness, or self-control, says Maya Rossin-Slater, a coauthor of the study and assistant professor in Stanford’s department of health research and policy.

    The bigger danger here is that global warming will mean many more days with a mean temperature above 32 ˚C—specifically, an increase from one per year in the average U.S. county today to around 43 annually by around 2070, according to an earlier UN report cited in the study.

    For workers who would otherwise make $50,000 annually, a single day of extreme heat during their first 21 months would cut their salary by $50. But 43 such days would translate to $2,150. Multiply that by the total population experiencing such events, and it quickly adds up to a huge economic impact. A greater proportion of citizens failing to reach their full earnings potential implies lower overall productivity and economic output.
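    The arithmetic behind those figures is simple to check; the $50,000 salary is the article's hypothetical, and the 43 hot days is the projection cited above.

    ```python
    base_salary = 50_000    # hypothetical annual earnings at age 30
    loss_per_day = 0.001    # 0.1 percent average income decrease per extreme-heat day

    one_day = base_salary * loss_per_day          # a single hot day in early childhood
    many_days = base_salary * loss_per_day * 43   # projected annual hot days by ~2070

    print(one_day)    # 50.0
    print(many_days)  # 2150.0
    ```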

    All of that comes on top of the ways that high temperatures directly hit the economy, mainly by decreasing human productivity and agricultural yields, according to other research. Unchecked climate change could reduce average global income by around 23 percent in 2100, and as much as 75 percent in the world’s poorest countries, according to research by UC Berkeley public policy professor Solomon Hsiang and coauthors in a 2015 Nature paper (see “Hotter Days Will Drive Global Inequality”). Notably, that excludes the devastating economic impacts of things like hurricanes and sea level rise.

    “We know that high temperatures have numerous damaging consequences for current economic productivity, at the time that the high temperatures occur,” Hsiang said in an e-mail to MIT Technology Review. “This study demonstrates a new way in which high temperatures today reduce economic productivity far into the future, by weakening our labor force.”

    The good news, at least for certain nations and demographic groups, is that air-conditioning nearly eliminates this observed effect, based on the authors’ analysis of U.S. Census data that captures how air-conditioning penetration increased in U.S. counties over time. But that could point to one more way that rising global temperatures will disproportionally harm impoverished nations, or perhaps already have.

    “In poor countries in hot climates that don’t have air-conditioning, we could imagine these effects being even more dramatic,” Rossin-Slater says.

    The study explored the results for 12 million people born in the United States between 1969 and 1977, incorporating adult earnings information from newly available data in the U.S. Census Bureau’s Longitudinal Employer Household Dynamics program. The researchers sought to isolate the impact of temperature, and control for other variables, by using “fine-scale” daily weather data and county-level birth information.

    “This study makes it very clear to see how climate change in the next few decades can affect our grandchildren, even if populations in the distant future figure out how to cool things back down,” Hsiang said.

    See the full article here.

  • richardmitnick 2:04 pm on December 2, 2017
    Tags: Lenses Are Being Reinvented and Cameras Will Never Be the Same, MIT Technology Review

    From MIT Tech Review: “Lenses Are Being Reinvented, and Cameras Will Never Be the Same” 

    MIT Technology Review

    December 1, 2017
    No writer credit

    “Metalenses” created with photolithography could change the nature of imaging and optical processing.

    Lenses are almost as old as civilization itself. The ancient Egyptians, Greeks, and Babylonians all developed lenses made from polished quartz and used them for simple magnification. Later, 17th-century scientists combined lenses to make telescopes and microscopes, instruments that changed our view of the universe and our position within it.

    Now lenses are being reinvented by the process of photolithography, which carves subwavelength features onto flat sheets of glass. Today, Alan She and pals at Harvard University in Massachusetts show how to arrange these features in ways that scatter light with greater control than has ever been possible. They say the resulting “metalenses” are set to revolutionize imaging and usher in a new era of optical processing.

    Lens making has always been a tricky business. It is generally done by pouring molten glass, or silicon dioxide, into a mold and allowing it to set before grinding and polishing it into the required shape. This is a time-consuming process that is significantly different from the manufacturing processes for light-sensing components on microchips.

    Metalenses are carved onto wafers of silicon dioxide in a process like that used to make silicon chips. No image credit.

    So a way of making lenses on chips in the same way would be hugely useful. It would allow lenses to be fabricated in the same plants as other microelectronic components, even at the same time.

    She and co show how this process is now possible. The key idea is that tiny features, smaller than the wavelength of light, can manipulate it. For example, white light can be broken into its component colors by reflecting it off a surface into which are carved a set of parallel trenches that have the same scale as the wavelength of light.

    Metalenses can produce high quality images

    Physicists have played with so-called diffraction gratings for centuries. But photolithography makes it possible to take the idea much further by creating a wider range of features and varying their shape and orientation.

    Since the 1960s, photolithography has produced ever smaller features on silicon chips. In 1970, this technique could carve shapes in silicon with a scale of around 10 micrometers. By 1985, feature size had dropped to one micrometer, and by 1998, to 250 nanometers. Today, the chip industry makes features around 10 nanometers in size.

    Visible light has a wavelength of 400 to 700 nanometers, so the chip industry has been able to make features of this size for some time. But only recently have researchers begun to investigate how these features can be arranged on flat sheets of silicon dioxide to create metalenses that bend light.

    The process begins with a silicon dioxide wafer onto which is deposited a thin layer of silicon covered in a photoresist pattern. The silicon below is then carved away using ultraviolet light. Washing away the remaining photoresist leaves the unexposed silicon in the desired shape.

    She and co use this process to create a periodic array of silicon pillars on glass that scatter visible light as it passes through. And by carefully controlling the spacing between the pillars, the team can bring the light to a focus.

    Specific pillar spacings determine the precise optical properties of this lens. For example, the researchers can control chromatic aberration to determine where light of different colors comes to a focus.
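    A rough sense of what the pillar pattern has to encode comes from the standard hyperbolic target phase for an aberration-free flat lens. The sketch below uses that textbook formula with the paper's 50 mm focal length and 20 mm diameter; the wavelength is an illustrative choice, and this is not necessarily the authors' exact design.

    ```python
    import numpy as np

    wavelength = 550e-9   # green light, metres (illustrative assumption)
    focal = 50e-3         # 50 mm focal length, as reported
    radius = 10e-3        # half of the 20 mm lens diameter

    r = np.linspace(0.0, radius, 5)
    # Hyperbolic target phase for a flat lens:
    #   phi(r) = (2*pi/lambda) * (f - sqrt(r^2 + f^2))
    # Pillar spacing and shape at radius r must impart this phase (mod 2*pi).
    phi = (2 * np.pi / wavelength) * (focal - np.hypot(r, focal))
    phi_wrapped = np.mod(phi, 2 * np.pi)

    print(phi[0])  # 0.0 at the lens centre; the delay grows toward the edge
    ```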

    In imaging lenses, chromatic aberration must be minimized—it otherwise produces the colored fringes around objects viewed through cheap toy telescopes. But in spectrographs, different colors must be brought to focus in different places. She and co can do either.

    Neither do these lenses suffer from spherical aberration, a common problem with ordinary lenses caused by their three-dimensional spherical shape. Metalenses do not have this problem because they are flat. Indeed, they are similar to the theoretical “ideal lenses” that undergraduate physicists study in optics courses.

    Of course, physicists have been able to make flat lenses, such as Fresnel lenses, for decades. But they have always been hard to make.

    The key advance here is that metalenses, because they can be fabricated in the same way as microchips, can be mass-produced with subwavelength surface features. She and co make dozens of them on a single silica wafer. Each of these lenses is less than a micrometer thick, with a diameter of 20 millimeters and a focal length of 50 millimeters.

    “We envision a manufacturing transition from using machined or moulded optics to lithographically patterned optics, where they can be mass produced with the similar scale and precision as IC chips,” say She and co.

    And they can do this with chip fabrication technology that is more than a decade old. That will give old fab plants a new lease on life. “State-of-the-art equipment is useful, but not necessarily required,” say She and co.

    Metalenses have a wide range of applications. The most obvious is imaging. Flat lenses will make imaging systems thinner and simpler. But crucially, since metalenses can be fabricated in the same process as the electronic components for sensing light, they will be cheaper.

    So cameras for smartphones, laptops, and augmented-reality imaging systems will suddenly become smaller and less expensive to make. They could even be printed onto the end of optical fibers to act as endoscopes.

    Astronomers could have some fun too. These lenses are significantly lighter and thinner than the behemoths they have launched into orbit in observatories such as the Hubble Space Telescope. A new generation of space-based astronomy and Earth observing beckons.

    But it is within chips themselves that this technology could have the biggest impact. The technique makes it possible to build complex optical bench-type systems into chips for optical processing.

    And there are further advances in the pipeline. One possibility is to change the properties of metalenses in real time using electric fields. That raises the prospect of lenses that change focal length with voltage—or, more significantly, that switch light.

    Science paper:
    Alan She, Shuyan Zhang, Samuel Shian, David R. Clarke, Federico Capasso
    Large Area Metalenses: Design, Characterization, and Mass Manufacturing. No Journal reference.

    See the full article here.

  • richardmitnick 1:23 pm on November 8, 2017
    Tags: CSAIL-MIT’s Computer Science and Artificial Intelligence Lab, Daniela Rus, MIT Technology Review, More Evidence that Humans and Machines Are Better When They Team Up

    From MIT Tech Review: Women in STEM - Daniela Rus: “More Evidence that Humans and Machines Are Better When They Team Up”

    MIT Technology Review

    November 8, 2017
    Will Knight

    By worrying about job displacement, we might end up missing a huge opportunity for technological amplification.

    MIT computer scientist Daniela Rus. Justin Saglio

    Instead of just fretting about how robots and AI will eliminate jobs, we should explore new ways for humans and machines to collaborate, says Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL).

    “I believe people and machines should not be competitors, they should be collaborators,” Rus said during her keynote at EmTech MIT 2017, an annual event hosted by MIT Technology Review.

    How technology will impact employment in coming years has become a huge question for economists, policy-makers, and technologists. And, as one of the world’s preeminent centers of robotics and artificial intelligence, CSAIL has a big stake in driving coming changes.

    There is some disagreement among experts about how significantly jobs will be affected by automation and AI, and about how this will be offset by the creation of new business opportunities. Last week, Rus and others at MIT organized an event called AI and the Future of Work, where some speakers gave more dire warnings about the likely upheaval ahead (see “Is AI About to Decimate White Collar Jobs?”).

    The potential for AI to augment human skills is often mentioned, but it has been researched relatively little. Rus talked about a study by researchers from Harvard University comparing the ability of expert doctors and AI software to diagnose cancer in patients. They found that doctors perform significantly better than the software, but doctors together with software were better still.

    Rus pointed to the potential for AI to augment human capabilities in law and in manufacturing, where smarter automated systems might enable the production of goods to be highly customized and more distributed.

    Robotics might end up augmenting human abilities in some surprising ways. For instance, Rus pointed to a project at MIT that involves using the technology in self-driving cars to help people with visual impairment to navigate. She also speculated that brain-computer interfaces, while still relatively crude today, might have a huge impact on future interactions with robots.

    Although Rus is bullish on the future of work, she said two economic phenomena do give her cause for concern. One is the decreasing quality of many jobs, something that is partly shaped by automation; the other is the flat growth of gross domestic product in the United States, which affects the emergence of new economic opportunities.

    But because AI is still so limited, she said she expects it mostly to eliminate routine and boring elements of work. “There is still a lot to be done in this space,” Rus said. “I am wildly excited about offloading my routine tasks to machines so I can focus on things that are interesting.”

    See the full article here.

  • richardmitnick 9:54 am on November 1, 2017
    Tags: Deep neural networks, Google Researchers Have a New Alternative to Traditional Neural Networks, MIT Technology Review

    From MIT Tech Review: “Google Researchers Have a New Alternative to Traditional Neural Networks”

    MIT Technology Review

    November 1, 2017
    Jamie Condliffe

    Image credit: Jingyi Wang

    Say hello to the capsule network.

    AI has enjoyed huge growth in the past few years, and much of that success is owed to deep neural networks, which provide the smarts behind some of AI’s most impressive tricks like image recognition. But there is growing concern that some of the fundamental tenets that have made those systems so successful may not be able to overcome the major problems facing AI—perhaps the biggest of which is a need for huge quantities of data from which to learn.

    Google’s Geoff Hinton is seemingly among those who are concerned: Wired reports that he has now unveiled a new take on the traditional neural network that he calls the capsule network. In a pair of new papers—one published on the arXiv, the other on OpenReview—Hinton and a handful of colleagues explain how they work.

    Their approach uses small groups of neurons, collectively known as capsules, which are organized into layers to identify things in video or images. When several capsules in one layer agree on having detected something, they activate a capsule at a higher level—and so on, until the network is able to make a judgement about what it sees. Each of those capsules is designed to detect a specific feature in an image in such a way that it can recognize them in different scenarios, like from varying angles.

    Hinton claims that the approach, which has been in the making for decades, should enable his networks to require less data than regular neural nets in order to recognize objects in new situations. In the papers published so far, capsule networks have been shown to keep up with regular neural networks when it comes to identifying handwritten characters, and make fewer errors when trying to recognize previously observed toys from different angles. In other words, he’s published the results because he’s got his capsules to work as well as, or slightly better than, regular ones (albeit more slowly, for now).
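    In the papers, a capsule's "agreement" signal is carried by the length of its output vector, which is kept between 0 and 1 by a squashing non-linearity so it can be read as a probability that the feature is present. A small sketch of that function (the example vectors are illustrative):

    ```python
    import numpy as np

    def squash(s, eps=1e-8):
        """Capsule non-linearity: shrinks short vectors toward length 0 and
        long vectors toward length 1, preserving their direction."""
        norm2 = np.sum(s ** 2)
        norm = np.sqrt(norm2) + eps
        return (norm2 / (1.0 + norm2)) * (s / norm)

    weak = squash(np.array([0.1, 0.0]))     # weak evidence for a feature
    strong = squash(np.array([100.0, 0.0])) # strong evidence

    print(np.linalg.norm(weak))    # ~0.0099 (near 0)
    print(np.linalg.norm(strong))  # ~0.9999 (near 1)
    ```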

    Now, then, comes the interesting part. Will these systems provide a compelling alternative to traditional neural networks, or will they stall? Right now it’s impossible to tell, but we can expect the machine learning community to implement the work, and fast, in order to find out. Either way, those concerned about the limitations of current AI systems can be heartened by the fact that researchers are pushing the boundaries to build new deep learning alternatives.

    See the full article here.

  • richardmitnick 10:27 am on July 12, 2017
    Tags: Micius satellite, MIT Technology Review, Teleportation achieved

    From MIT Tech Review: “First Object Teleported from Earth to Orbit” 

    MIT Technology Review

    July 10, 2017
    No writer credit found

    Researchers in China have teleported a photon from the ground to a satellite orbiting more than 500 kilometers above.

    Last year, a Long March 2D rocket took off from the Jiuquan Satellite Launch Centre in the Gobi Desert carrying a satellite called Micius, named after an ancient Chinese philosopher who died in 391 B.C. The rocket placed Micius in a Sun-synchronous orbit so that it passes over the same point on Earth at the same time each day.

    Micius is a highly sensitive photon receiver that can detect the quantum states of single photons fired from the ground. That’s important because it should allow scientists to test the technological building blocks for various quantum feats such as entanglement, cryptography, and teleportation.

    Micius satellite. https://www.fusecrunch.com/chinas-first-quantum-satellite.html

    Today, the Micius team announced the results of its first experiments. The team created the first satellite-to-ground quantum network, in the process smashing the record for the longest distance over which entanglement has been measured. And they’ve used this quantum network to teleport the first object from the ground to orbit.

    Teleportation has become a standard operation in quantum optics labs around the world. The technique relies on the strange phenomenon of entanglement. This occurs when two quantum objects, such as photons, form at the same instant and point in space and so share the same existence. In technical terms, they are described by the same wave function.

    No image caption or credit.

    The curious thing about entanglement is that this shared existence continues even when the photons are separated by vast distances. So a measurement on one immediately influences the state of the other, regardless of the distance between them.

    Back in the 1990s, scientists realized they could use this link to transmit quantum information from one point in the universe to another. The idea is to “download” all the information associated with one photon in one place and transmit it over an entangled link to another photon in another place.

    This second photon then takes on the identity of the first. To all intents and purposes, it becomes the first photon. That’s the nature of teleportation and it has been performed many times in labs on Earth.
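    The protocol behind this (entangle, measure, correct) fits in a small statevector calculation. The following is a toy NumPy sketch of standard qubit teleportation, not the satellite experiment; for brevity it post-selects the one measurement outcome that needs no correction on the receiving qubit.

    ```python
    import numpy as np

    I = np.eye(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    def cnot(n, control, target):
        """CNOT on an n-qubit register (qubit 0 is the leftmost)."""
        dim = 2 ** n
        U = np.zeros((dim, dim))
        for i in range(dim):
            bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
            if bits[control]:
                bits[target] ^= 1
            j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
            U[j, i] = 1.0
        return U

    # State to teleport on qubit 0 (arbitrary amplitudes a, b)
    a, b = 0.6, 0.8
    psi = np.array([a, b])

    # Entangled (Bell) pair shared between qubits 1 (sender) and 2 (receiver)
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
    state = np.kron(psi, bell)

    # Sender's operations: CNOT(0 -> 1), then Hadamard on qubit 0
    state = cnot(3, 0, 1) @ state
    state = np.kron(np.kron(H, I), I) @ state

    # Measure qubits 0 and 1; post-select outcome (0, 0), which needs
    # no correction on qubit 2
    proj = state.reshape(2, 2, 2)[0, 0, :]
    out = proj / np.linalg.norm(proj)

    print(np.allclose(out, psi))  # True: qubit 2 now carries a|0> + b|1>
    ```

    In the general protocol the sender transmits the two classical measurement bits, and the receiver applies the corresponding Pauli correction; the post-selection here just avoids that bookkeeping.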

    Teleportation is a building block for a wide range of technologies. “Long-distance teleportation has been recognized as a fundamental element in protocols such as large-scale quantum networks and distributed quantum computation,” says the Chinese team.

    In theory, there should be no maximum distance over which this can be done. But entanglement is a fragile thing because photons interact with matter in the atmosphere or inside optical fibers, causing the entanglement to be lost.

    As a result, the distance over which scientists have measured entanglement or performed teleportation is severely limited. “Previous teleportation experiments between distant locations were limited to a distance on the order of 100 kilometers, due to photon loss in optical fibers or terrestrial free-space channels,” says the team.

    But Micius changes all that because it orbits at an altitude of 500 kilometers, and for most of this distance, any photons making the journey travel through a vacuum. To minimize the amount of atmosphere in the way, the Chinese team set up its ground station in Ngari in Tibet at an altitude of over 4,000 meters. So the distance from the ground to the satellite varies from 1,400 kilometers when it is near the horizon to 500 kilometers when it is overhead.

    To perform the experiment, the Chinese team created entangled pairs of photons on the ground at a rate of about 4,000 per second. They then beamed one of these photons to the satellite, which passed overhead every day at midnight. They kept the other photon on the ground.

    Finally, they measured the photons on the ground and in orbit to confirm that entanglement was taking place, and that they were able to teleport photons in this way. Over 32 days, they sent millions of photons and found positive results in 911 cases. “We report the first quantum teleportation of independent single-photon qubits from a ground observatory to a low Earth orbit satellite—through an up-link channel—with a distance up to 1400 km,” says the Chinese team.

    This is the first time that any object has been teleported from Earth to orbit, and it smashes the record for the longest distance for entanglement.

    That’s impressive work that sets the stage for much more ambitious goals in the future. “This work establishes the first ground-to-satellite up-link for faithful and ultra-long-distance quantum teleportation, an essential step toward global-scale quantum internet,” says the team.

    It also shows China’s clear lead in a field that, until recently, was led by Europe and the U.S.—Micius would surely have been impressed. But an important question now is how the West will respond.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

  • richardmitnick 2:17 pm on July 11, 2017 Permalink | Reply
    Tags: Amygdala, Kay Tye, MIT Technology Review, Neurons, , ,   

    From MIT Tech Review: Women in STEM – “How the Brain Seeks Pleasure and Avoids Pain” Kay Tye 

    MIT Technology Review
    M.I.T Technology Review

    June 27, 2017
    Amanda Schaffer

    Neuroscientist Kay Tye

    As a child, Kay Tye was immersed in a life of science. “I grew up in my mom’s lab,” she says. At the age of five or six, she earned 25 cents a box for “restocking” bulk-ordered pipette tips into boxes for sterilization as her mother, an acclaimed biochemist at Cornell University, probed the genetics of yeast. (Tye’s father is a theoretical physicist known for his work on cosmic inflation and superstring theory.)

    Today, Tye runs her own neuroscience lab at MIT. Under large black lights reminiscent of a fashion shoot, she and her team at the Picower Institute for Learning and Memory can observe how mice behave when particular brain circuits are turned on or off. Nearby, they can record the mice’s neural activity as the animals move toward a particular stimulus, like sugar water, or away, if they’re crossing a floor that delivers mild electric shocks. Elsewhere, they create brain slices to test in vitro, since these samples retain their physiological activity, even outside the body, for up to eight hours.

    Tye has been at the forefront of efforts to pinpoint the sources of anxiety and other emotions in the brain by analyzing how groups of neurons work together in circuits to process information. In particular, her work has contributed to a profound shift in researchers’ understanding of the amygdala, a brain area that has been thought of as central to fear responses: she has found that signaling in the amygdala can in fact reduce anxiety as well as increase it. To gain such insights, she has also made crucial advances in a technique, called optogenetics, that allows researchers to activate or suppress particular neural circuits in lab animals using light. Optogenetics was developed by Stanford neuroscientist and psychiatrist Karl Deisseroth, and it represented a breakthrough in efforts to determine the role of specific parts of the brain. While Tye was working in his laboratory as a postdoc, she demonstrated, for the first time, that it was possible to pinpoint and control specific groups of neurons that were sending signals to specific target neurons.

    No image caption or credit.

    This fine-grained approach is important because drugs that treat conditions like anxiety currently do not target specific circuits, let alone individual neurons; rather, they operate throughout the brain, which often leads to undesirable side effects. Tye’s research may eventually help open the door to drugs that affect only specific neural circuits, reducing anxiety with fewer side effects.

    Such work has earned formal accolades, including a Presidential Early Career Award for Scientists and Engineers from President Obama, a Freedman Prize for neuroscience, and a TR35 award, recognizing outstanding researchers under the age of 35. Tye has also won high praise from others in her field who admire the creative breadth of her ambition. “She’s not afraid to ask the most fundamental questions, the ones most other scientists shy away from,” says Sheena Josselyn of the University of Toronto and the Hospital for Sick Children Research Institute.

    The questions she takes on involve emotions and phenomena that loom large in human experience, such as reward-seeking, loneliness, and compulsive overeating. Her goal is to understand their neural basis—to bridge the gap between brain, as understood by neuroscientists, and the mind, as conceived more expansively by psychiatrists, psychologists, and other students of human behavior.

    Would-be novelist

    Though it might seem as if Tye was born to be a scientist, she says her choice of career was anything but inevitable. In high school, she was ambivalent about science and gravitated instead toward writing; she wrote plays, short stories, and poetry. “In my mind, I was going to be a novelist,” she recalls.

    Still, while applying to college, she included MIT on her list, partly to humor her parents, Bik-Kwoon Tye and Henry Tye, both of whom had earned PhDs there in 1974. And when she received an acceptance letter, her father found it hard to disguise his feelings as his eyes welled with tears. “I’d never in my life seen my dad cry,” she says. She decided that she ought to give scientific learning a more dedicated try. She also convinced herself (with parental encouragement) that focusing on the natural world would give her more to write about down the road.

    As a freshman at MIT, Tye joined the lab of Suzanne Corkin, who was working with H.M., one of the most famous patients in the history of neuroscience. H.M., whose name was revealed to be Henry Molaison upon his death in 2008, suffered from profound amnesia after an operation that removed much of his medial temporal lobes to treat seizures; studying his condition allowed researchers to probe the neural underpinnings of memory. One of Tye’s roles in the group was to make H.M. a peanut butter and jelly sandwich for lunch. He would eat it and then, moments later, with crumbs still on his face, ask, “Did we have lunch yet?”

    Researchers troubleshoot behavioral boxes in which mice learn to form positive and negative associations with sounds. No image credit.

    “It made me appreciate that these basic functions, like memory, that are so key to who we are have biological substrates in the brain,” she says. Neuroscience can be intimidating and filled with jargon, she adds. But the experience with H.M., along with an inspiring introductory psychology class taught by Steven Pinker, “made it seem worth it to slog through the all-nighters” to understand the biological mechanisms behind psychological constructs.

    Still, after graduation, Tye wanted to make sure she was “looking around,” thinking about who she was and who she wanted to be. So she spent a year backpacking in Australia, where she worked on a farm, lived in a yoga ashram, taught yoga, camped out on the beach, and worked on a novel. She found that writing was “hard and lonely.” She enjoyed teaching yoga but didn’t see it as a satisfying career path.

    “I came out of that year surprisingly ready to go to grad school,” she says. Diving back into the academic world, she initially struggled to find a lab that would accept her and almost dropped out after her first year. But she found a mentor in Patricia Janak, who became her advisor, and earned a PhD in neuroscience at the University of California, San Francisco, in 2008.

    A surprise in the amygdala

    In 2009, Tye joined Deisseroth’s lab at Stanford. Deisseroth had already developed optogenetics, which gave researchers a much more precise way to identify the contributions of individual neurons within a circuit. Along with others in the lab, Tye used optogenetics to probe the connection between two parts of the amygdala, an almond-shaped region that is crucial to anxiety and fear. She first identified neurons in one area (known as the basolateral amygdala) that formed connections to neurons in another amygdalar area (known as the central nucleus) by sending out projections of nerve fibers. When she stimulated those basolateral amygdala neurons, she was able to reduce anxiety in mice. That is, she could cause the animals to spend more time in open spaces and less time cowering to the side. This was surprising, because when researchers stimulated the amygdala as a whole, the mice’s behavior grew more anxious.

    At first, everyone asked, “Are you sure you’re using the tool right? What’s going on?” she recalls. But after meticulous validation, in 2011, Tye and the group published their results in Nature, showing that some circuitry within the amygdala helps to calm animals down. This paper also represented a breakthrough in optogenetic technique. For the first time, researchers were able to zero in on and manipulate a specific part of a brain circuit: particular groups of neurons communicating with known target neurons. The technique, known as optogenetic projection-specific manipulation, is now considered one of the key tools of neuroscience.

    In 2012, Tye came to MIT as an assistant professor of brain and cognitive sciences at the Picower, continuing her work on anxiety. While setting up her lab, she targeted neurons within the amygdala that seemed to have the opposite effect on mouse anxiety, causing it to increase. These brain cells are also located in the basolateral amygdala, but they send projections to a nearby region known as the ventral hippocampus. When Tye stimulated this circuit using optogenetics, the mice avoided open spaces, apparently suffering from anxiety. (When she inhibited activity in these connections, the animals hung out in the open again, their anxiety seemingly alleviated.) Tye proposed that neighboring neurons in the amygdala can have opposite effects on animals’ behavior, depending on the targets to which they send signals.

    Tye lab grad students Chris Leppla and Caitlin Vander Weele and postdocs Praneeth Namburi and Stephen Allsop. No image credit.

    Threats and rewards

    At the time, most researchers studying the amygdala still tended to focus mainly on its role in fear. Yet Tye suspected that activity in this part of the brain might encode a stimulus as either rewarding or threatening, good or bad, helping individuals decide how to respond. “There are many stimuli we encounter in our daily lives that are ambiguous,” says Conor Liston of the Brain and Mind Research Institute at Weill Cornell. “A social interaction, for example, can be either threatening or rewarding, and we need brain circuits devoted to differentiating which is which.”

    By looking at the relative strength of the currents passing through two glutamate receptors known to indicate synaptic strength, Tye discovered that different neural connections in mice were reinforced depending on whether a particular stimulus was linked to a reward or a threat. When mice learned to associate a sound with a treat of sugar, she found stronger synaptic input to the neurons in the basolateral amygdala that were sending information to the nucleus accumbens, which is part of the brain’s reward circuitry. On the other hand, when mice learned to associate the sound with mild electric shocks to their feet, input signals grew stronger in circuits leading from the basolateral amygdala to the centromedial amygdala, which is involved in pain and fear. In addition, she demonstrated a trade-off: when one of these circuits grew more active, the other grew less so. In other words, she had found how the brain encodes information that allows mice to differentiate between stimuli that are rewarding and those that are potentially harmful. The results were published in Nature in 2015.

    In recent work, Tye also probed the circuitry involved in making split-second decisions when both threatening and rewarding cues are present at the same time. She and her team focused this time on connections between the amygdala and the prefrontal cortex, an area responsible for higher-order thinking. (Specifically, they examined interactions between the basolateral amygdala and the prelimbic medial prefrontal cortex.) Using optogenetics and other techniques, they showed that this circuitry was active when the animals were simultaneously exposed to a potential sugar treat and a potential electric shock and had to make a decision about how to behave. Her results, which appeared in April in Nature Neuroscience, help illuminate how animals figure out what to do in the face of complex and sometimes contradictory cues.

    Grad student Caitlin Vander Weele examines magnified images of brain slices to verify that a calcium sensor is targeting a specific type of neuron. No image credit.

    Cravings and compulsions

    As a graduate student, Tye had worked with researchers focused on addiction, but she was more interested in natural rewards, like sugar, than in substances that are regularly abused. In 2012, New York City mayor Michael Bloomberg announced a plan to limit the portion size of sodas sold in movie theaters, stadiums, and fast-food restaurants. Tye found herself wondering what exactly, at a brain level, causes people to crave sugary treats, above and beyond the normal drive to satisfy hunger.

    So she delved into the neural circuitry. In a paper published in 2015 in Cell, she and her team focused on neurons in the lateral hypothalamus (LH), a brain area involved in drives like hunger, and studied their projections into another region, called the ventral tegmental area (VTA), known to play a role in both motivation and addiction. Using optogenetics, she and her team showed that turning on specific LH-VTA connections caused the mice to gorge on sugar, while turning them off reduced the compulsive overeating.

    On her desktop, Tye loads a video demonstration featuring a mouse with a cable for light transmission attached to its brain. The video shows the mouse moving around, casually at first. Then, when the laser light is turned on to activate specific neurons in the LH-VTA circuit, the animal becomes frantic, running and licking the floor. Soon after, it brings its empty paws up to its mouth and does a pantomime of tasting and nibbling. “It engages in this complicated motor sequence and pretends to eat, which is crazy because there’s no food,” says Tye. In other words, turning the circuit on causes the animal to behave compulsively. Turning it off has the opposite effect.

    Crucially, though, while switching off this circuit prevents compulsive behavior, it does not affect normal eating. That is, it is possible to define a brain-based difference between at least some healthy and unhealthy drives to eat. This suggests that it might be possible to develop targeted drugs or even some form of biofeedback that could someday help people reduce unhealthy cravings without blocking ordinary hunger.

    Another recent finding, about loneliness, arose serendipitously from a project that postdoc Gillian Matthews had begun as a graduate student at Imperial College London with Mark Ungless. Matthews noticed that mice that had been isolated for 24 hours during experiments displayed stronger neural signaling in the brain’s dorsal raphe nucleus, which participates in reward signaling—and actively sought out the company of other mice. After she moved to Tye’s lab at MIT, Matthews and Tye developed the theory that the animals were craving interaction. In further experiments, they used optogenetics to turn off the signaling pathway in the dorsal raphe nucleus. Mice subjected to this treatment did not seem to seek out additional social interaction following time by themselves.

    Ultimately, Tye hopes that she and her team can speak to fundamental human questions, like why some people prefer to spend more time alone while others crave greater social contact.

    A lab without drama

    Though Tye’s lab is interested in the origins of phenomena like fear and compulsion, it is notable for its own lack of tension and conflict. Stephen Allsop, a postdoc who has worked with her for five years (several of which were spent as a graduate student), says that she stresses close collaboration among team members and oversees an upbeat, supportive culture: “It’s amazing how little drama we have in this lab.”

    “Along with scientific integrity, I make the positive, collaborative, open culture of my research group—and the happiness of the individuals within it—my top priority,” says Tye. “Scientific excellence is a close second.” Strong relationships with professors and mentors are part of the draw of science, she adds.

    Indeed, she says, they are second only to the bonds between parents and children. In 2013, Tye and her husband, Jim Wagner, a software developer, had a daughter, Keeva, who has already accompanied her to conferences around the world. Their son, Jet, was born last year. And the children have found a place in her lab, much as she found a niche in her mother’s (though they have yet to earn paid positions). As she told Nature when Keeva was still an infant: “If my daughter all of a sudden needs to be picked up, I bring her to my lab meeting or meet with people while I bounce her. If she has a total meltdown, then sometimes I have to bail and follow up later.”

    But while she may be easygoing as a parent and a lab leader, Tye finds plenty of drama in neuroscience itself, and she keeps returning to its central questions because they are so enticing. Though she says she reads fewer novels now than she used to, she still seems compelled by the kinds of mysteries a writer might probe: Why does a hero set out on a journey? Why does the chatter in his or her head go awry and lead to gloomy soliloquizing or anxious self-sabotage? Like a novelist, she exhibits tremendous creative breadth. “There is something special about science,” she says. “Your new work is based on what you did previously. And if you’re lucky, you can help shape the future.”


  • richardmitnick 4:02 pm on May 5, 2017 Permalink | Reply
    Tags: , Astrophysicists Turn GPS Satellite Constellation into Giant Dark Matter Detector, , , , , MIT Technology Review   

    From MIT Tech Review: “Astrophysicists Turn GPS Satellite Constellation into Giant Dark Matter Detector” 

    MIT Technology Review
    M.I.T. Technology Review

    May 4, 2017
    Emerging Technology from the arXiv
    If Earth is sweeping through an ocean of dark matter, the effects should be visible in clock data from GPS satellites.


    The Global Positioning System consists of 31 Earth-orbiting satellites, each carrying an atomic clock that sends a highly accurate timing signal to the ground. Anybody with an appropriate receiver can work out their position to within a few meters by comparing the arrival time of signals from three or more satellites.

    And this system can be improved further. GPS accuracy can be made much higher by combining the satellite signals with ones produced on the ground. Geophysicists, for example, use this technique to determine the position of ground stations to within a few millimeters. In this way, they can measure the tiny movements of entire continents.

    This is an impressive endeavor. Geophysicists routinely measure the difference between GPS signals and clocks on the ground with an accuracy of less than 0.1 nanoseconds. They also archive this data, providing a detailed record of how GPS signals have changed over time. This archive opens the possibility of using the data for other, more exotic studies.
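As a back-of-the-envelope illustration of the positioning arithmetic described above, the sketch below recovers a receiver location from satellite signal travel times using Gauss-Newton least squares. Everything here is a toy: the satellite coordinates are illustrative rather than real ephemerides, and a real GPS receiver must also solve for its own clock bias and correct for atmospheric delays.

```python
import math

C = 299_792_458.0  # speed of light, m/s

# four toy satellite positions (meters) and the receiver's true location
sats = [
    (15_600e3,  7_540e3, 20_140e3),
    (18_760e3,  2_750e3, 18_610e3),
    (17_610e3, 14_630e3, 13_480e3),
    (19_170e3,    610e3, 13_320e3),
]
receiver = (6_371e3, 0.0, 0.0)   # a point on Earth's surface

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# ranges inferred from signal travel times (idealized: no clock bias, no noise)
travel_times = [dist(s, receiver) / C for s in sats]
ranges = [t * C for t in travel_times]

def solve3x3(A, b):
    """Cramer's rule for a 3x3 linear system."""
    def det(m):
        return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    d = det(A)
    out = []
    for col in range(3):
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = b[r]
        out.append(det(M) / d)
    return out

# Gauss-Newton iteration: linearize |x - s_i| = rho_i around the current guess
x = [6_000e3, 0.0, 0.0]          # rough initial guess near the surface
for _ in range(10):
    J, r = [], []
    for s, rho in zip(sats, ranges):
        d = dist(x, s)
        J.append([(x[k] - s[k]) / d for k in range(3)])
        r.append(d - rho)
    # normal equations: (J^T J) dx = -J^T r
    JtJ = [[sum(J[i][a] * J[i][b] for i in range(4)) for b in range(3)]
           for a in range(3)]
    Jtr = [sum(J[i][a] * r[i] for i in range(4)) for a in range(3)]
    dx = solve3x3(JtJ, [-v for v in Jtr])
    x = [x[k] + dx[k] for k in range(3)]

print([round(v) for v in x])     # recovers the receiver position
print(f"0.1 ns of timing error ~ {0.1e-9 * C * 100:.0f} cm of range")
```

The last line also shows why the 0.1-nanosecond timing accuracy quoted above matters: at the speed of light, a tenth of a nanosecond corresponds to about three centimeters of range.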

    Today Benjamin Roberts at the University of Nevada and a few pals say they have used this data to find out whether GPS satellites may have been influenced by dark matter, the mysterious invisible stuff that astrophysicists think fills our galaxy. In effect, these guys have turned the Global Positioning System into an astrophysical observatory of truly planetary proportions.

    The theory behind dark matter is based on observations of the way galaxies rotate. This spinning motion is so fast that it should send stars flying off into extragalactic space.

    But this doesn’t happen. Instead, a mysterious force must somehow hold the stars in place. The theory is that this force is gravity generated by invisible stuff that doesn’t show up in astronomical observations. In other words, dark matter.

    If this theory is correct, dark matter should fill our galaxy, too, and as the sun makes its stately orbit round the galactic center, Earth should plough through a great ocean of dark matter.

    There’s no obvious sign of this stuff, which makes physicists think it must interact very weakly with ordinary visible matter. But they hypothesize that if dark matter exists in small atomic-sized lumps, it might occasionally hit atomic nuclei head on, transferring its energy to visible matter.

    That’s why astrophysicists have built giant observatories in underground mines to look for the tell-tale energy released in these collisions. So far, they’ve seen nothing. Or at least, there is no consensus that anybody has seen evidence of dark matter. So other ways to look for dark matter are desperately needed.

    Enter Roberts and co, who start with a different vision of what dark matter may consist of. Instead of small particles, dark matter may take the form of topological defects in space-time left over from the Big Bang. These would be glitches in the fabric of the universe, like domain walls, that bend space-time in their vicinity.

    Should the Earth pass through such a defect, it would change the local gravitational field just slightly over a period of an hour or so.

    But how to detect such a change in the local field? To Roberts and co, the answer is clear. According to relativity, any change in gravity also changes the rate at which a clock ticks. That’s why clocks at GPS altitude tick at a slightly different rate from those on the surface: the weaker gravity in orbit speeds them up by more than their orbital motion slows them down.
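For GPS orbits the two relativistic effects pull in opposite directions: the satellites' motion slows their clocks, while the weaker gravity at altitude speeds them up, and the gravitational term wins. A quick estimate (standard constants, a circular orbit, and ignoring Earth's rotation) recovers the textbook figure of roughly +38 microseconds per day:

```python
import math

G_M   = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
C     = 299_792_458.0      # speed of light, m/s
R_E   = 6.371e6            # mean Earth radius, m
R_GPS = R_E + 20_200e3     # GPS orbital radius, m

v = math.sqrt(G_M / R_GPS)                # circular orbital speed, ~3.9 km/s

# fractional clock-rate shifts relative to a surface clock (simplified:
# non-rotating Earth, circular orbit)
gravitational = G_M / C**2 * (1/R_E - 1/R_GPS)   # higher clock runs faster
velocity      = -v**2 / (2 * C**2)               # moving clock runs slower

per_day = 86_400 * (gravitational + velocity)    # seconds gained per day
print(f"gravitational: +{86_400 * gravitational * 1e6:.1f} us/day")
print(f"velocity:      {86_400 * velocity * 1e6:.1f} us/day")
print(f"net:           +{per_day * 1e6:.1f} us/day")
```

A topological-defect passage would superimpose a much smaller, hour-long wiggle on top of this steady, well-understood offset, which is why the archived clock residuals are the place to look.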

    If the Earth has passed through any topological defects in the recent past, the clock data from GPS satellites would have recorded this event. So by searching through geophysicists’ archived records of GPS clock timings, it ought to be possible to see such events.

    That’s the theory. In practice, this work is a little more complicated because GPS timing signals are also influenced by other factors, such as atmospheric conditions and random noise in the clocks themselves. All of these need to be taken into account.

    But a key signature of a topological defect is that its influence should sweep through the fleet of satellites as the Earth passes through it. So any other kinds of local timing fluctuation can be ruled out.

    Roberts and co study data spanning the last 16 years, and their results make for interesting reading. These guys say they have found no sign that Earth has passed through a topological defect in that time. “We find no evidence for dark matter clumps in the form of domain walls,” they say.

    Of course, that doesn’t rule out the existence of dark matter or even that dark matter exists in this form. But it does place strong limits on how common topological defects can be and how strong their influence is.

    Until now, the limits have been set using observations of the cosmic microwave background radiation, which should reveal topological defects, albeit at low resolution. The work of Roberts and co improves these limits by five orders of magnitude.

    And better data should be available soon. The best clocks in Earth laboratories are orders of magnitude more accurate than the atomic clocks on board GPS satellites. So a network of clocks on Earth should act as an even more sensitive observatory for topological defects. These clocks are only just becoming linked together in networks, so the data from them should be available in the coming years.

    This greater sensitivity should allow physicists to look for other types of dark matter, which may take the form of solitons or Q-balls, for example.

    All this is part of a fascinating process of evolution. The technology behind the GPS system can be traced directly back to the first attempts to track the Sputnik spacecraft after the Soviets launched it in 1957. Physicists soon realized they could determine its location by measuring the radio signals it generated at different places.

    It wasn’t long before they turned this idea on its head. Given the known location of a satellite, is it possible to determine your location on Earth using the signals it broadcasts? The GPS constellation is a direct descendant of that train of thought.

    Those physicists would surely be amazed to know that the technology they developed is also now being used as a planetary-sized astrophysical observatory.

    Ref: arxiv.org/abs/1704.06844: GPS as a Dark-Matter Detector: Orders-of-Magnitude Improvement on Couplings of Clumpy Dark Matter to Atomic Clocks


  • richardmitnick 3:44 pm on February 23, 2017 Permalink | Reply
    Tags: Magnetic resonance imaging, MIT Technology Review, University of Melbourne   

    From MIT Tech Review: “This Microscope Reveals Human Biochemistry at Previously Unimaginable Scales” 

    MIT Technology Review
    M.I.T Technology Review

    February 23, 2017


    Magnetic resonance imaging is one of the miracles of modern science. It produces noninvasive 3-D images of the body using harmless magnetic fields and radio waves. And with a few additional tricks, it can also reveal details of the biochemical makeup of tissue.

    Atomic-scale MRI holds promise for new drug discovery | The Melbourne Newsroom

    That biochemical trick is called magnetic resonance spectroscopy, and it is a powerful tool for physicians and researchers studying the biochemistry of the body, including metabolic changes in tumors in the brain and in muscles.

    But this technique is not perfect. The resolution of magnetic resonance spectroscopy is limited to length scales of about 10 micrometers. And there is a world of chemical and biological activity at smaller scales that scientists simply cannot access in this way.

    So physicians and researchers would dearly love to have a magnetic resonance microscope that can study body tissue and the biochemical reactions within it at much smaller scales.

    Today, David Simpson and pals at the University of Melbourne in Australia say they have built a magnetic resonance microscope with a resolution of just 300 nanometers that can study biochemical reactions on previously unimaginable scales. Their key breakthrough is an exotic diamond sensor that creates magnetic resonance images in a similar way to a light-sensitive CCD chip in a camera.

    Magnetic resonance imaging works by placing a sample in a magnetic field so powerful that the atomic nuclei all become aligned; in other words, they all spin the same way. When these nuclei are zapped with radio waves, the nuclei become excited and then emit radio waves as they relax. By studying the pattern of re-emitted radio waves, it is possible to work out where they have come from and so build up a picture of the sample.

    The signals also reveal how the atoms are bonded to each other and the biochemical processes at work. But the resolution of this technique is limited by how closely the radio receiver can get to the sample.
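The resonance frequency at the heart of this technique is set by the Larmor relation f = γB/2π. A quick illustration for hydrogen nuclei, using the standard proton gyromagnetic ratio (the 3.2 ppm shift below is an illustrative metabolite-style offset, not a value from the article):

```python
GAMMA_BAR_H1 = 42.577   # proton gyromagnetic ratio / (2*pi), MHz per tesla

# the radio frequency that excites hydrogen nuclei scales with field strength
for b_field in (1.5, 3.0, 7.0):     # common scanner field strengths, tesla
    print(f"{b_field} T -> {GAMMA_BAR_H1 * b_field:.1f} MHz")

# spectroscopy reads out tiny shifts in this frequency caused by the chemical
# environment, quoted in parts per million (ppm)
f0_mhz = GAMMA_BAR_H1 * 3.0         # proton frequency at 3 T, MHz
shift_ppm = 3.2                     # illustrative offset from the reference
offset_hz = f0_mhz * shift_ppm      # MHz * ppm conveniently gives Hz
print(f"a {shift_ppm} ppm shift at 3 T is about {offset_hz:.0f} Hz")
```

Resolving such few-hundred-hertz offsets against a ~128 MHz carrier is what makes spectroscopy chemically informative, and why sensor proximity and sensitivity matter so much.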

    Enter Simpson and co, who have built an entirely new kind of magnetic resonance sensor out of diamond film. The secret sauce in this sensor is an array of nitrogen atoms (so-called nitrogen-vacancy centers) that have been embedded in a diamond film at a depth of about seven nanometers and about 10 nanometers apart.

    Nitrogen atoms are useful because when embedded in diamond, they can be made to fluoresce. And when in a magnetic field, the color they produce is highly sensitive to the spin of atoms and electrons nearby or, in other words, to the local biochemical environment.

    So in the new machine, Simpson and co place their sample on top of the diamond sensor in a powerful magnetic field and zap it with radio waves. Any change in the state of nearby nuclei causes the nitrogen array to fluoresce in various colors. And the array of nitrogen atoms produces a kind of image, just like a light-sensitive CCD chip. All Simpson and co do is monitor this fireworks display to see what’s going on.

    To put the new technique through its paces, Simpson and co study the behavior of hexaaqua copper(2+) complexes in aqueous solution. Hexaaqua copper is present in many enzymes, which use it to incorporate copper into metalloproteins. However, the distribution of copper during this process, and the role it plays in cell signaling, are poorly understood because they are impossible to visualize in vivo.

    Simpson and co show how this can now be done using their new technique, which they call quantum magnetic resonance microscopy. They show how their new sensor can reveal the spatial distribution of copper 2+ ions in volumes of just a few attoliters and at high resolution. “We demonstrate imaging resolution at the diffraction limit (~300 nm) with spin sensitivities in the zeptomol (10⁻²¹ mol) range,” say Simpson and co. They also show how the technique reveals the redox reactions that the ions undergo. And they do all this at room temperature.
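Those sensitivity figures are easy to sanity-check with standard constants: a zeptomole is only a few hundred molecules, an attoliter is a cube roughly 100 nanometers on a side, and a zeptomole in an attoliter works out to about a millimolar concentration, well within a biologically relevant range.

```python
AVOGADRO = 6.02214076e23             # molecules per mole

spins = 1e-21 * AVOGADRO             # one zeptomole of spins
print(f"1 zmol ~ {spins:.0f} molecules")

side_m = (1e-18 * 1e-3) ** (1 / 3)   # 1 aL = 1e-18 L = 1e-21 m^3; cube side
print(f"1 aL is a cube about {side_m * 1e9:.0f} nm on a side")

conc_mM = (1e-21 / 1e-18) * 1e3      # mol per litre, expressed in millimolar
print(f"1 zmol in 1 aL = {conc_mM:.0f} mM")
```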

    That’s impressive work that has important implications for the future study of biochemistry. “The work demonstrates that quantum sensing systems can accommodate the fluctuating Brownian environment encountered in ‘real’ chemical systems and the inherent fluctuations in the spin environment of ions undergoing ligand rearrangement,” say Simpson and co.

    That makes it a powerful new tool that could change the way we understand biological processes. Simpson and co are optimistic about its potential. “Quantum magnetic resonance microscopy is ideal for probing fundamental nanoscale biochemistry such as binding events on cell membranes and the intra‐cellular transition metal concentration in the periplasm of prokaryotic cells.”

    Ref: arxiv.org/abs/1702.04418: Quantum Magnetic Resonance Microscopy

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

  • richardmitnick 12:48 pm on January 8, 2017 Permalink | Reply
    Tags: A test that will detect all of the major cancer types, MIT Technology Review

    From MIT Tech Review: “Liquid Biopsies Are About to Get a Billion Dollar Boost”

    MIT Technology Review

    January 6, 2017
    Michael Reilly

    A billion dollars sounds like a lot of money. But when your ambitions are as big as the cancer-detection startup Grail Bio’s are, it might not be enough.

    As CEO and ex-Googler Jeff Huber puts it, Grail’s aim is to create “a test that will detect all of the major cancer types.” Already the recipient of $100 million in funding from DNA sequencing company Illumina and a series of tech luminaries, Grail believes that adding another zero to its cash balance will put its lofty goals within reach. The company announced Thursday that it plans to raise $1 billion, has “indications of interest” from investors, and will move quickly to secure the hefty cash infusion.

    Whether Grail succeeds turns on the company’s ability to dramatically expand an emerging technology known as the liquid biopsy. It works by sequencing DNA from someone’s blood and looking for tell-tale fragments that indicate the presence of cancer. Dennis Lo, a doctor in Hong Kong, was among the first to show the technique’s promise. He’d previously used it to detect fetal DNA in a mother’s bloodstream. That led to a much safer form of screening for Down’s syndrome that is now in wide use.
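The core idea behind a liquid biopsy can be sketched in a few lines. The mutation signatures and sequencing reads below are invented stand-ins, not real cancer variants; real pipelines work with aligned genomes, error models, and far subtler statistics:

```python
# Toy illustration of the liquid-biopsy idea: scan cell-free DNA reads
# from a blood sample for short sequences matching known tumour
# mutations. Signatures and reads here are hypothetical examples.

TUMOUR_SIGNATURES = {"GGTAGCT", "TTACGGA"}  # invented mutant k-mers

def count_tumour_fragments(reads):
    """Count reads containing any known tumour-associated sequence."""
    return sum(
        any(sig in read for sig in TUMOUR_SIGNATURES)
        for read in reads
    )

reads = [
    "ACCTGGTAGCTTAA",   # carries a signature
    "ACCTGGTCGCTTAA",   # wild-type: no match
    "CGTTACGGATTTAC",   # carries a signature
]
print(count_tumour_fragments(reads))  # 2
```

The hard part, and the reason Grail needs so much capital, is that tumour-derived fragments are vanishingly rare in early-stage disease, so the catalogue of signatures and the sequencing depth both have to be enormous.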

    Lo has experimented with liquid biopsy as a way to catch liver and nasopharyngeal cancers, with some encouraging results. But he urged caution in assuming the technique could be translated to all cancers.

    Grail, which was spun out of Illumina about a year ago, has launched its first trials to see whether liquid biopsies can spot cancers earlier and more reliably than other screening tests.

    For his part, Huber seems to understand that he’s got a mountain to climb. Having lost his wife to colorectal cancer, he finds Grail’s mission deeply personal. He acknowledges that detecting cancer DNA may be difficult, because the disease mutates rapidly as it advances, and varies immensely from one type to another. He says his company will rely on sequencing the DNA of tens of thousands of subjects to build a library of cancer DNA that computers can then decipher.

    Beyond the high-minded talk of turning the tide in the war against cancer, though, is a more cynical reading of the situation. As a unit within Illumina, Grail was an expensive, long-shot bet to create a new market for its gene sequencing machines. As a separate, now cash-rich company, Grail figures to become one of Illumina’s biggest customers. And venture capital will foot the bill, whether or not the experiment works.

    See the full article here.

