Tagged: MIT Technology Review

  • richardmitnick 2:42 pm on August 18, 2016 Permalink | Reply
    Tags: MIT Technology Review

    From MIT Tech Review: “New Brain-Mapping Technique Captures Every Connection Between Neurons” 

    MIT Technology Review

    August 18, 2016
    Ryan Cross

    The human brain is among the universe’s greatest remaining uncharted territories. And as with any mysterious land, the secret to understanding it begins with a good map.

    Neuroscientists have now taken a huge step toward the goal of mapping the connections between neurons in the brain using bits of genetic material to bar-code each individual brain cell. The technique, called MAP-seq, could help researchers study disorders like autism and schizophrenia in unprecedented detail.

    “We’ve got the basis for a whole new technology with a gazillion applications,” says Anthony Zador, a neuroscientist at Cold Spring Harbor Laboratory who came up with the technique.

    Current methods for mapping neuronal connections, known as the brain’s connectome, commonly rely on fluorescent proteins and microscopes to visualize cells, but they are laborious and have difficulty following the connections of many neurons at once.

    MAP-seq works by first creating a library of viruses that contain randomized RNA sequences. This mixture is then injected into the brain, and approximately one virus enters each neuron in the injection area, granting each cell a unique RNA bar code. The brain is then sliced and diced into orderly sections for processing. A DNA sequencer reads the RNA bar codes, and researchers create a connectivity matrix that displays how individual neurons connect to other regions of the brain.
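
    In practice the readout boils down to a table of sequencing read counts, one row per bar code (that is, per neuron) and one column per dissected region, from which the connectivity matrix is built. The Python sketch below shows only that final bookkeeping step; the bar codes, region names, counts, and the 5 percent threshold are all made up for illustration and do not come from the study:

      import numpy as np

      # Hypothetical read counts: rows are bar codes (neurons), columns are
      # dissected target regions.
      barcodes = ["AGGTCTTACA", "TTACGGCATG", "CCGATAGTTC"]
      regions = ["olfactory bulb", "auditory cortex", "cerebellum"]
      read_counts = np.array([
          [120,   3, 451],
          [  0, 890,  12],
          [ 77,  60,   0],
      ])

      # Normalize each neuron's row so cells with different expression levels
      # are comparable, then call a projection "present" above an arbitrary
      # 5 percent threshold.
      fractions = read_counts / read_counts.sum(axis=1, keepdims=True)
      projects_to = fractions > 0.05

      for barcode, row in zip(barcodes, projects_to):
          targets = [r for r, hit in zip(regions, row) if hit]
          print(barcode, "->", ", ".join(targets))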

    The newly published study, which appears Thursday in the journal Neuron, follows the sprawling outbound connections from 1,000 mouse neurons in a brain region called the locus coeruleus to show that the technique works. But Zador says the results actually reconcile previously conflicting findings about how those neurons connect across the brain.

    Justus Kebschull, who worked with Zador in developing MAP-seq, says the technique is getting better. “We’re now mapping out 100,000 cells at a time, in one week, in one experiment,” he says. “That was previously only possible if you put a ton of work in.”

    Both autism and schizophrenia are viewed as disorders that may arise from dysfunctional brain connectivity. There are perhaps hundreds of genetic mutations that may slightly alter the brain’s wiring as it develops. “We are looking at mouse models where something is mucked up. And now that the method is so fast, we can look at many mouse models,” Kebschull says. By comparing the brain circuitry in mice with different candidate genes for autism, researchers expect, they’ll get new insight into the condition.

    “I think it is a great method that has a lot of room to grow,” says Je Hyuk Lee, a molecular biologist at Cold Spring Harbor Laboratory, who was not part of the MAP-seq study. Although other groups have used similar bar-coding to study individual differences between cells, no one knew if the bar codes would be able to travel along the neuronal connections across the brain. “That had been conjectured but never shown, especially not at this scale,” Lee says.

    Zador says that as of now, his lab is the only one bar-coding the brain, but he hopes others will start using MAP-seq to chart the brain’s circuitry. “Because the cost of sequencing is continuing to plummet, we can envision doing this quickly and cheaply,” he said. It may not be long, then, before a complete map of the brain is ready for its first explorer to use.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 10:51 am on June 11, 2016 Permalink | Reply
    Tags: Genetic replacement therapy, MIT Technology Review, Strimvelis

    From MIT Tech Review: “Gene Therapy’s First Out-and-Out Cure Is Here” 

    MIT Technology Review

    May 6, 2016
    Antonio Regalado

    An artist’s illustration of gene therapy shows a retrovirus harboring a correct copy of a human gene. No image credit

    A treatment now pending approval in Europe will be the first commercial gene therapy to provide an outright cure for a deadly disease.

    The treatment is a landmark for gene-replacement technology, an idea that’s struggled for three decades to prove itself safe and practical.

    Called Strimvelis, and owned by drug giant GlaxoSmithKline, the treatment is for severe combined immune deficiency, a rare disease that leaves newborns with almost no defense against viruses, bacteria, or fungi and is sometimes called “bubble boy” disease after an American child whose short life inside a protective plastic shield was described in a 1976 movie.

    The treatment is different from any that’s come before because it appears to be an outright cure carried out through a genetic repair. The therapy was tested on 18 children, the first of them 15 years ago. All are still alive.

    “I would be hesitant to call it a cure, although there’s no reason to think it won’t last,” says Sven Kili, the executive who heads gene-therapy development at GSK.

    The British company licensed the treatment in 2010 from the San Raffaele Telethon Institute for Gene Therapy, in Milan, Italy, where it was developed and first tested on children.

    On April 1, European advisors recommended that Strimvelis be allowed on the market. If, as expected, GSK wins formal authorization, it can start selling the drug in 27 European countries. GSK plans to seek U.S. marketing approval next year.

    GSK is the first large drug company to seek to market a gene therapy to treat any genetic disease. If successful, the therapeutic could signal a disruptive new phase in medicine in which one-time gene fixes replace trips to the pharmacy or lifelong dependence on medication.

    “The idea that you don’t have to worry about it and can be normal is extremely exciting for people,” says Marcia Boyle, founder and president of the Immune Deficiency Foundation, whose son was born with a different immune disorder, one of more than 200 known to exist. “I am a little guarded on gene therapy because we were all excited a long time ago, and it was not as easy to fool Mother Nature as people had hoped.”

    Today, several hundred gene therapies are in development, and many aspire to be out-and-out cures for one of about 5,000 rare diseases caused by errors in a single gene.

    Children who lack correct copies of a gene called adenosine deaminase begin to get life-threatening infections days after birth. The current treatment for this immune deficiency, known as ADA-SCID, is a bone marrow transplant, which itself is sometimes fatal, or lifelong therapy with replacement enzymes that cost $5,000 a vial.

    Strimvelis uses a “repair and replace” strategy, so called because doctors first remove stem cells from a patient’s bone marrow and then soak them in viruses that transfer a correct copy of the ADA gene.

    “What we are talking about is ex vivo gene therapy—you pull out the cells, correct them in test tube, and put the cells back,” says Maria-Grazia Roncarolo, a pediatrician and scientist at Stanford University who led the original Milan experiments. “If you want to fix a disease for life, you need to put the gene in the stem cells.”

    Some companies are trying to add corrected genes using direct injections into muscles or the eye. But the repair-and-replace strategy may have the larger impact. As soon as next year, companies like Novartis and Juno Therapeutics may seek approval for cancer treatments that also use a combination of gene and cell therapy to obliterate one type of leukemia.

    Overall, investment in gene therapy is booming. The Alliance for Regenerative Medicine says that globally, in 2015, public and private companies raised $10 billion, and about 70 treatments are in late-stage testing.

    GSK has never sold a product so drastically different from a bottle of pills. And because ADA-SCID is one of the rarest diseases on Earth, Strimvelis won’t be a blockbuster. GSK estimates there are only about 14 cases a year in Europe, and 12 in the U.S.

    Instead, the British company hopes to master gene-therapy technology, including virus manufacturing. “If we can first make products that change lives, then we can develop them into things that affect more people,” says Kili. “We believe gene therapy is an area of important future growth; we don’t want to rush or cut corners.”

    Markets will closely scrutinize how much GSK charges for Strimvelis. Kili says a final decision hasn’t been made. Another gene therapy, called Glybera, debuted with a $1 million price tag but is already considered a commercial flop. The dilemma is how to bill for a high-tech drug that people take only once.

    Kili says GSK’s price won’t be anywhere close to a million dollars, though it will be enough to meet a company policy of getting a 14 percent return on every dollar spent on R&D.

    The connection between immune deficiency and gene therapy isn’t new. In fact, the first attempt to correct genes in a living person occurred in 1990, also in a patient with ADA-SCID.

    By 2000, teams in London and France had cured some children of a closely related immune deficiency, X-linked SCID, the bubble boy disease. But some of those children developed leukemia after the viruses dropped their genetic payloads into the wrong part of the genome.

    In the U.S., the Food and Drug Administration quickly canceled 27 trials over safety concerns. “It was a major step back,” says Roncarolo, and probably a more serious red flag even than the death of a volunteer named Jesse Gelsinger in a U.S. trial in 1999, which also drew attention to gene therapy’s risks.

    The San Raffaele Telethon Institute for Gene Therapy presented its own results in ADA-SCID, which also affects girls, in 2002 in the journal Science. Like the French, they’d also apparently cured patients, and because of differences in their approach, they didn’t run the same cancer risk.

    GSK says it is moving toward commercializing several other gene therapies for rare diseases developed by the Italian team, including treatments for metachromatic leukodystrophy, a rare but rapidly fatal birth defect, and for beta thalassemia.

    Kili says the general idea is to leapfrog from ultra-rare diseases to less rare ones, like beta thalassemia, hemophilia, and sickle cell disease. However, he doubts the technology will be used to treat common conditions such as arthritis or heart disease anytime soon. Those conditions are complex and aren’t caused by a defect in just one gene.

    “Honestly, as we stand at the moment, I don’t think gene therapy will address all the ills or ailments of humanity. We can address [single-gene] disease,” he says. “We are building a hammer that is not that big.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 8:19 am on June 10, 2016 Permalink | Reply
    Tags: A Power Plant in Iceland Deals with Carbon Dioxide by Turning It into Rock, MIT Technology Review

    From MIT Tech Review: “A Power Plant in Iceland Deals with Carbon Dioxide by Turning It into Rock” 

    MIT Technology Review

    June 9, 2016
    Ryan Cross

    Photograph by Juerg Matter

    The world has a carbon dioxide problem. And while there are lots of ideas on how to curtail the nearly 40 billion tons of the gas that humanity spews into the atmosphere annually, one has just gotten a boost: burying it.

    Since 2012, Reykjavík Energy’s CarbFix project in Iceland has been injecting carbon dioxide underground in a way that converts it into rock so that it can’t escape. This kind of carbon sequestration has been tried before, but as researchers working on the project report today in the journal Science, the process of mineralizing the carbon dioxide happens far more quickly than expected, confirming previous reports and brightening the prospects for scaling up this technology.

    Iceland’s volcanic landscape is replete with basalt. Injecting carbon dioxide and water deep underground allows the mixture to react with calcium, magnesium, and iron in the basalt, turning it into carbonate minerals like limestone.
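
    In simplified terms, the mineralization happens in two steps: the injected carbon dioxide dissolves in water to form carbonic acid, which then reacts with divalent metal ions leached from the basalt. The reactions below show calcium; magnesium and iron behave analogously, forming magnesite and siderite. This is standard carbonation chemistry offered for illustration, not a scheme taken from the Science paper:

      CO2 + H2O → H2CO3 → H+ + HCO3−
      Ca2+ + HCO3− → CaCO3 (calcite) + H+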

    Project leader Juerg Matter stands by the injection well during the CarbFix project’s initial injection. Photograph by Sigurdur Gislason

    Conventional methods for storing carbon dioxide underground pressurize and heat it to form a supercritical fluid, giving it the properties of both a liquid and a gas. While making the carbon dioxide easier to inject into the ground—usually in an old oil or gas reservoir—this carries a higher risk that it could escape back into the atmosphere through cracks in the rock.

    CarbFix takes carbon dioxide from the Hellisheidi geothermal power plant, the largest in the world, which uses volcanically heated water to power turbines. The process produces 40,000 tons of carbon dioxide a year, as well as hydrogen sulfide, both of which are naturally present in the water.

    The CarbFix pilot injection site in March 2011. Photograph by Martin Stute

    The new study shows that more than 95 percent of the injected material turned to rock in less than two years. “No one actually expected it to be this quick,” says Edda Aradóttir, CarbFix’s project manager. The project is already storing 5,000 tons underground per year, making it the largest of its kind. New equipment being installed this summer aims to double the rate of storage.

    Aradóttir says CarbFix spends $30 per ton to capture and inject the carbon dioxide, versus $65 to $100 per ton for the conventional method. A lot of that savings comes from not having to purify the carbon dioxide; it and the hydrogen sulfide are simply mixed with additional water and injected underground.

    CarbFix team members handle the rock core recovered from drilling at the CarbFix pilot injection site in October 2014. Photograph by Juerg Matter

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 12:13 pm on June 9, 2016 Permalink | Reply
    Tags: MIT Technology Review

    From MIT Tech Review: “Proof That Quantum Computers Will Change Everything” 

    MIT Technology Review

    First Demonstration of 10-Photon Quantum Entanglement

    June 9, 2016
    Emerging Technology from the arXiv

    The ability to entangle 10 photons should allow physicists to prove, once and for all, that quantum computers really can do things classical computers cannot.

    Entanglement is the strange phenomenon in which quantum particles become so deeply linked that they share the same existence. Once rare, entangling particles has become routine in labs all over the world.

    Quantum approach to big data. MIT

    Physicists have learned how to create entanglement, transfer it from one particle to another, and even distil it. Indeed, entanglement has become a resource in itself and a crucial one for everything from cryptography and teleportation to computing and simulation.

    But a significant problem remains. To carry out ever more complex and powerful experiments, physicists need to produce entanglement on ever-larger scales by entangling more particles at the same time.

    The current numbers are paltry, though. Photons are the quantum workhorses in most labs and the record for the number of entangled photons is a mere eight, produced at a rate of about nine events per hour.

    Using the same techniques to create a 10-photon count rate would result in only 170 per year, too few even to measure easily. So the prospects of improvement have seemed remote.

    Which is why the work of Xi-Lin Wang and pals at the University of Science and Technology of China in Hefei is impressive. Today, they announce that they’ve produced 10-photon entanglement for the first time, and they’ve done it at a count rate that is three orders of magnitude higher than anything possible until now.

    The biggest bottleneck in entangling photons is the way they are produced. This involves a process called spontaneous parametric down conversion, in which one energetic photon is converted into two photons of lower energy inside a crystal of beta-barium borate. These daughter photons are naturally entangled.

    Experimental setup for generating ten-photon polarization-entangled GHZ states, from the paper

    By zapping the crystal continuously with a laser beam, it is possible to create a stream of entangled photon pairs. However, the rate of down conversion is tiny, just one photon per trillion. So collecting the entangled pairs efficiently is hugely important.

    That’s no easy task, not least because the photons come out of the crystal in slightly different directions, neither of which can be easily predicted. Physicists collect the photons from the two points where they are most likely to appear, but most of the entangled photons are lost.

    Xi-Lin and co have tackled this problem by reducing the uncertainty in the photon directions. Indeed, they have been able to shape the beams of entangled photons so that they form two separate circular beams, which can be more easily collected.

    In this way, the team has generated entangled photon pairs at the rate of about 10 million per watt of laser power. This is brighter than previous entanglement generators by a factor of about four. It is this improvement that makes 10-photon entanglement possible.
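
    That factor of four matters because it compounds: a 10-photon state is built from five photon pairs, and the tenfold coincidence rate scales roughly as the product of the five pair rates. Using the article’s own figures as a back-of-envelope check:

      4^5 ≈ 1,024 ≈ 10^3,

    which matches the “three orders of magnitude” improvement quoted above and is enough to turn a projected ~170 events per year into a rate measurable within a single experiment.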

    Their method is to collect five successively generated pairs of entangled photons and pass them into an optical network of four beam splitters. The team then introduces time delays that ensure the photons arrive at the beam splitters simultaneously and so become entangled.

    This creates the 10-photon entangled state, albeit at a rate of about four per hour, which is low but measurable for the first time. “We demonstrate, for the first time, genuine and distillable entanglement of 10 single photons,” say Xi-Lin and co.

    That’s impressive work that immediately opens the prospect of a new generation of experiments. The most exciting of these is a technique called boson sampling that physicists hope will prove that quantum computers really are capable of things classical computers are not.

    That’s important because nobody has built a quantum computer more powerful than a pocket calculator (the controversial D-Wave results aside). Neither are they likely to in the near future. So boson sampling is quantum physicists’ greatest hope that will allow them to show off the mind-boggling power of quantum computation for the first time.

    Other things also become possible, such as the quantum teleportation of three degrees of freedom in a single photon and multi-photon experiments over very long distances.

    But it is the possibility of boson sampling that will send a frisson through the quantum physics community.

    Ref: arxiv.org/abs/1605.08547: Experimental Ten-Photon Entanglement

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 8:54 am on June 6, 2016 Permalink | Reply
    Tags: Go Inside an Industrial Plant That Sucks Carbon Dioxide Straight Out of the Air, MIT Technology Review

    From MIT Tech Review: “Go Inside an Industrial Plant That Sucks Carbon Dioxide Straight Out of the Air” 

    MIT Technology Review

    June 6, 2016
    Peter Fairley

    Carbon dioxide emissions must decrease to nearly zero by 2040 if global warming by the end of this century is to be held to 2 °C. But we may well miss that target. A pilot plant started up last fall at Squamish, British Columbia, is testing a backup plan: sucking carbon dioxide directly out of the air.

    Capturing ambient carbon dioxide is a tall order because, for all the trouble it causes, the greenhouse gas makes up just 0.04 percent of the air we breathe. The Squamish plant can capture one ton of carbon dioxide a day. Significantly reducing atmospheric carbon dioxide levels would require thousands of far larger facilities, each sucking millions of tons of carbon per year out of the air.

    The plant is the brainchild of Calgary-based Carbon Engineering and its founder, Harvard University physicist David Keith. While some scientists have estimated that direct air capture would cost $400 to $1,000 per ton of carbon dioxide, Keith projects that large plants could do it for about $100 per ton.

    “We’ve taken existing pieces of industrial equipment and thought about new chemistries to run through them,” says Adrian Corless, Carbon Engineering’s CEO. The company captures carbon dioxide in a refashioned cooling tower flowing with an alkali solution that reacts with acidic carbon dioxide. That yields dissolved carbon molecules that are then converted to pellets in equipment created to extract minerals in water treatment plants. And the plant can turn those carbonate solids into pure carbon dioxide gas for sale by heating them in a modified cement kiln.

    Carbon Engineering CEO Adrian Corless

    In May the company closed on $8 million of new financing in Canadian dollars ($6.2 million in U.S. dollars) from investors including Bill Gates. Keith also hopes to start winning over skeptics. “Most people in the energy expert space think that air capture is not particularly credible,” he says. “There won’t be incentives and funding in a serious way for these technologies unless people believe that they actually work.”

    Carbon dioxide is captured within the plant’s gas-liquid contactor, which is essentially a repurposed cooling tower. An alkaline solution in the contactor reacts with acidic carbon dioxide in air to enrich the capture solution with potassium carbonate.

    The contactor contains 80 cubic meters of plastic packing whose three-dimensional honeycomb structure offers 16,800 square meters of surface area. The setup removes 75 to 80 percent of the carbon dioxide in the air.

    Above top: The capture fluid, now rich with carbon dioxide from the air, circulates to a 13-meter-tall reactor. Above bottom: Calcium hydroxide is added to the capture fluid just before it enters the reactor, causing two products to be created inside. One is solid calcium carbonate containing the captured atmospheric carbon. The second, potassium hydroxide, flows back to the air contactor to capture more carbon dioxide.

    As fluid moves up through the reactor, growing pellets of calcium carbonate spread out in a gradient, with the smallest pellets at the top. Pellets can be removed via these sample ports and analyzed in order to optimize the process.

    The heaviest pellets settle at the bottom of the reactor and are periodically removed, washed to remove fine crystals and capture fluid, and dried. The finished product is solid grains of calcium carbonate that resemble a fine couscous.

    Dried pellets are fed into the calciner, in which a 900 °C inferno of natural gas burning in pure oxygen roasts a rolling mass of calcium oxide. The calcium carbonate pellets spontaneously break down, producing more calcium oxide and releasing carbon dioxide gas.
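
    Taken together, the steps described above form a closed chemical loop in which the potassium and calcium compounds are recycled and only carbon dioxide leaves the system. A simplified version of that loop (the standard hydroxide and calcium caustic-recovery cycle; the plant’s exact species and operating conditions may differ):

      Air contactor:   CO2 + 2 KOH → K2CO3 + H2O
      Pellet reactor:  K2CO3 + Ca(OH)2 → CaCO3 + 2 KOH
      Calciner:        CaCO3 → CaO + CO2 (at roughly 900 °C, releasing a pure CO2 stream)
      Slaking:         CaO + H2O → Ca(OH)2 (regenerates the calcium hydroxide; implied rather than pictured)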

    Next up at Squamish: turning captured carbon dioxide (now vented back to the air) into a low-carbon transportation fuel. By reacting carbon dioxide with hydrogen, Carbon Engineering plans to synthesize a fuel with less than one-third the carbon content of conventional gasoline. Corless estimates the fuels will cost $4 to $6 per gallon, but he expects to fetch a premium in places such as California and the European Union, where mandates require fuel suppliers to reduce their carbon content annually. Ultimately, says Corless, fuel from air capture may prove crucial to break the fossil-fuel dependence everywhere.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 5:45 pm on April 26, 2016 Permalink | Reply
    Tags: "A Space Mission to the Gravitational Focus of the Sun", , , MIT Technology Review   

    From MIT Tech Review: “A Space Mission to the Gravitational Focus of the Sun” 

    MIT Technology Review

    April 26, 2016
    by Emerging Technology from the arXiv

    The search for an Earth-like planet orbiting another star is one of astronomy’s greatest challenges. It’s a task that appears close to fruition. Since astronomers spotted the first exoplanet in 1988, they have found more than 2,000 others.

    Most of these planets are huge, because bigger objects are easier to spot. But as sensing techniques and technologies improve, astronomers are finding planets that match Earth’s vital statistics ever more closely.

    They have even begun to use a ranking system called the Earth Similarity Index to quantify how similar an exoplanet is to the mother planet. The exoplanet that currently ranks most highly is Kepler-438b, which orbits in the habitable zone of a red dwarf in the constellation of Lyra some 470 light years from here.

    Kepler-438b has an Earth Similarity Index of 0.88. By comparison, Mars has an ESI of 0.797, so it’s more Earth-like than our nearest neighbor. That’s exciting but it is inevitable that astronomers will find planets with even higher indices in the near future.

    And that raises an interesting question: how much can we ever know about these planets, given their size and distance from us? After all, the limited size of orbiting telescopes places severe restrictions on how much light and information we can gather from an Earth analogue.

    But there is another option—the gravitational field of the sun can focus light. Place a telescope at the focal point of this giant lens and it should become possible to study a distant object in unprecedented detail. But how good would such a lens be; what would it reveal that we couldn’t see with our own telescopes?

    Today we get an answer to these questions thanks to the work of Geoffrey Landis at NASA’s John Glenn Research Center in Cleveland. Landis has analyzed the resolving power of the solar lens and worked out just how good it could be.

    The basic physics is straightforward and has been worked out in some detail by astronomers in the past. General relativity predicts that light must bend around any massive object. The effect is tiny, however, and only observable with objects of truly enormous mass.

    Despite its size, the sun only bends light by a tiny amount. Consequently, the focal point of our solar lens is at least 550 astronomical units away. That’s beyond the orbit of Pluto and the Kuiper Belt, which extends a mere 50 AU.
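
    That 550 AU figure follows directly from the relativistic deflection angle. A light ray grazing the solar limb is bent by about 1.75 arcseconds, or 4GM_sun/(c^2 R_sun), so grazing rays converge at a distance of roughly

      d ≈ R_sun / θ = c^2 R_sun^2 / (4 G M_sun) ≈ 8.2 × 10^13 m ≈ 550 AU.

    Rays passing farther from the limb are bent less and come to a focus farther out, which is why the usable focal region extends outward from 550 AU rather than ending there. (This back-of-envelope calculation uses standard textbook values for illustration rather than numbers from the paper.)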

    Nevertheless, it is a tempting stepping stone given that there is little of interest between the Kuiper Belt and the next nearest star, Alpha Centauri, which is 280,000 AU distant. “There is thus a powerful incentive to find some plausible objective in visiting the gravitational focus, as a potential intermediate step toward a future interstellar mission,” says Landis.

    Kuiper Belt. Minor Planet Center

    Alpha, Beta, and Proxima Centauri. Image: Skatebiker, 27 February 2012

    But there are significant challenges in using the sun as a gravitational lens. The first is related to pointing and focal length. The idea is to place a spacecraft on the opposite side of the sun from the exoplanet, but it cannot sit exactly at the focal point where the light from the exoplanet converges.

    That’s because any image would be drowned out by light from the sun, which would still be the brightest object in the sky. Instead, the spacecraft would sit beyond the focal point where the light from the exoplanet would form into an Einstein ring around the sun. It is this ring that the mission would have to sample.

    Einstein ring. NASA/ESA Hubble

    But it is not just the sun that can drown out the image. The solar corona, the aura of plasma that surrounds the sun, is also a problem, and this extends much further. To ensure that the Einstein ring is larger than the corona and not obscured by it, the mission would have to sit even further, at a distance of more than 2,000 AU, says Landis. That’s much further than the 550 AU that previous analyses have suggested.

    It is a simple matter to show that this mission could only have a single objective. To point at a different object just 1 degree away, the telescope would have to move at least 10 AU around the sun, equivalent to the distance from Earth to Saturn. “A significant difference of the solar gravitational lens from a conventional telescope is that the gravitational lens telescope is not in any practical sense pointable,” says Landis.

    But given a specific target, the focal power of the sun produces a hugely magnified view. To demonstrate its potential, Landis uses the hypothetical example of an exoplanet orbiting a star some 35 light years away. If this planet were the same size as the Earth, the image at the focal plane of the sun would be 12.5 kilometers across.

    So the mission could only ever see a small fraction of the planet’s surface. Indeed, a telescope with a one-meter detector would image a one-kilometer-square area on the surface of the planet—that’s smaller than New York’s Central Park.
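
    Those numbers are consistent with simple imaging geometry: the image is scaled down by the ratio of the telescope’s distance from the sun to the planet’s distance from it. As a rough check, back-calculated from the figures quoted here rather than taken independently from the paper:

      planet distance ≈ 35 light years ≈ 3.3 × 10^14 km
      telescope distance ≈ 2,200 AU ≈ 3.3 × 10^11 km
      scale factor ≈ 1/1,000

    so an Earth-sized disk about 12,700 kilometers across shrinks to an image roughly 12.5 kilometers wide, and one meter at the detector corresponds to about one kilometer on the planet’s surface.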

    Pointing a telescope at an area so small and distant is tricky. There can be no “finder scope” on such a telescope because the target would be invisible except when using the gravity lens. So the exoplanet’s position will have to be known with high precision.

    Even then, pointing it will not be trivial. “Finding a planet of diameter ~10^4 km at a distance of 10^14 km requires a pointing knowledge and pointing accuracy of 0.1 nanoradians,” says Landis. State-of-the-art pointing accuracy is today about 10 nanoradians.

    But that’s just the start. The exoplanet will be moving as it orbits its star. Landis analyses what would happen if the exoplanet has the same orbital velocity as the Earth, 30 km/sec. In that case, a one-kilometer section of the planet will traverse a one-meter detector in just 33 milliseconds and the entire planet will slip past in 42 seconds.

    Preventing blur by moving the telescope to track the image will be hard. Landis says that the spacecraft will need to change its velocity by 30 meters per second to keep up and that over the course of a year it would follow an ellipse with a semi major axis of about 150,000 kilometers. It’s not clear what kind of propulsion system would be capable of this.

    The alternative, of course, is to use image processing techniques to remove the blur, which is increasingly doable with today’s technology.

    Another major problem is filtering out the light from the sun, not to mention the exoplanet’s parent star, which will be orders of magnitude brighter than the target. The telescope will also have to minimize interference from other sources such as zodiacal light. Much effort has been put into this for the current generation of planet-hunting telescopes. Nevertheless, Landis says, this is not a trivial problem.

    Given all these problems, how much better would the image from a gravitational lens be compared with an unlensed image? Landis’s estimate is that the lens increases the intensity of light from the exoplanet by a factor of 100,000.

    That’s a significant advantage. But it can only be realized if the exoplanet light can be well separated from the light from other sources such as the sun, the corona, the parent star, and so on. And this is a big unknown.

    The utility of the mission depends on this. “Given all the difficulties, is it worth traveling out to beyond 600 AU to merely gain a factor of 100,000? Is this enough?” asks Landis.

    That’s a question that astronomers, funding agencies, and the public at large will have to consider in some detail. Landis makes no suggestion that such a mission should be undertaken now or is even possible or affordable. But his analysis has certainly raised the stakes.

    Going further, it seems hard to overstate the significance of finding an Earth analogue that has the potential to support life. The idea of mapping areas on this planet that are just one kilometer in size will be powerful motivation.

    On Earth, this kind of image would reveal islands, rivers, parks, Great Walls, freeways, cities, and so on. Perhaps a spacecraft sitting at the gravitational focus of a distant star is revealing these things right now to a spellbound alien population. Just imagine.

    Ref: arxiv.org/abs/1604.06351: Mission to the Gravitational Focus of the Sun: A Critical Analysis

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 12:52 pm on April 7, 2016 Permalink | Reply
    Tags: MIT Technology Review

    From MIT Tech Review: “Moore’s Law’s Ultraviolet Savior Is Finally Ready” 

    MIT Technology Review

    April 7, 2016
    Katherine Bourzac

    It is easy to take for granted the advancements in our mobile phones, wearable electronics, and other gadgets. But advances in computing rely on processes that the semiconductor industry cannot take for granted. Moore’s Law, which says that computing power will double every two years, is already slowing (see “Intel Puts the Brakes on Moore’s Law”).

    Now a key tool the tech industry hopes will offset that deceleration—one that private companies, academia, and governments around the world have invested billions of dollars and decades developing—is finally being tested in factories operated by Samsung, Intel, and other companies. This technology is called extreme-ultraviolet (EUV) lithography, and industry leaders say it could be used in high-volume chip manufacturing as early as 2018 (see “The Moore’s Law Moon Shot”).

    Lithography works somewhat like old-fashioned film photography: light is projected through a patterned mask onto a surface coated with light-sensitive chemicals called photoresists. The smaller the wavelength of light, the finer the patterns it’s possible to make. The industry has pushed the existing technology, which uses light that’s 193 nanometers in wavelength, to its limits. To keep up progress in the latest generation of chips, Intel and other companies had to use multiple patterning steps for each layer in a chip. Each of these steps—and the necessary masks—adds time, complexity, and expense. Using shorter-wavelength EUV light would bring some relief.
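
    The wavelength dependence can be made precise with the Rayleigh resolution criterion used throughout lithography (a textbook relation, not something specific to this article): the smallest printable feature is approximately

      CD ≈ k1 × λ / NA,

    where λ is the wavelength, NA is the numerical aperture of the projection optics, and k1 is a process-dependent factor with a practical floor of about 0.25 for a single exposure. Dropping λ from 193 nanometers to EUV’s 13.5 nanometers cuts the printable feature size by roughly a factor of 14 at the same NA and k1, which is what would let a single EUV exposure replace several multi-patterning steps.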

    “I never expected they would detect gravity waves before EUV went into production,” said Kenneth Goldberg, the deputy director of the Center for X-Ray Optics at the Lawrence Berkeley National Laboratory, at a lithography conference this spring in San Jose, California. Indeed, making EUV lithography work has been an expensive, international, interdisciplinary physics project.

    In 2011, Intel added to that, investing $4 billion in ASML, a Dutch chip-making equipment company. That investment seems to be paying off. ASML recently announced that it has overcome the biggest technological hurdle: it hasn’t been practical to switch to the shorter wavelength EUV light because the light sources were far too dim. A dim light source means it takes longer to expose the photoresist—it’s akin to nighttime photography, which requires longer exposure times. And time is money. Until this fall, companies had not reported any throughput numbers at all for EUV.

    For ASML, making the light source brighter involved advances in plasma and laser physics, as well as a deeper understanding of the materials involved. A laser is used to heat up a tiny droplet of tin and turn it into plasma. As the tin cools, it emits EUV light. One hang-up has been that only 1 percent of the energy provided by that first laser pulse ended up being turned into UV light. By adding a pre-pulse step, ASML has made the conversion five times more efficient. The first pulse shapes the tin into a pancake that is better at absorbing the energy from the second pulse.

    This boosts the wattage of the light source to something viable—from 40 watts last year to 200 watts this year. With brighter light, the manufacturing speed doubles, from 400 wafers a day to 800. That’s still slower than the status quo technology, which can pattern 3,000 wafers a day. But the status quo technology will slow down in the coming years—it will take more patterning steps and more expensive masks to make ever finer features on future chips.

    We’ve heard this before, says lithography expert and longtime EUV skeptic Chris Mack. The technology has been two years from high-volume manufacturing for a decade, he says. “I’m surprised we didn’t give up on EUV a long time ago, but we haven’t because we don’t have alternatives,” he says (see “Intel Chips Will Have to Sacrifice Speed Gains for Energy Savings”).

    Mack notes that companies other than ASML are more vague in their public statements about the timing of EUV. Representatives of Taiwanese chip-making giant TSMC have hinted that the company will bring on the technology in 2020, says Mack. Intel has been less specific. Janice Golda, who works on lithography in the Technology and Manufacturing Group at Intel, says there have been significant strides with EUV over the past year, but she declines to give a specific date for Intel to bring it into production.

    But even skeptics like Mack are feeling more optimistic today. It’s significant that ASML finally has machines out to its customers for test runs. “We’ll see quicker progress now,” he says.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 9:00 am on March 25, 2016 Permalink | Reply
    Tags: MIT Technology Review

    From MIT Tech Review: “Intel Puts the Brakes on Moore’s Law” 

    MIT Technology Review

    March 25, 2016
    Tom Simonite

    Chip maker Intel has signaled a slowing of Moore’s Law, a technological phenomenon that has played a role in just about every major advance in engineering and technology for decades.

    Since the 1970s, Intel has released chips that fit twice as many transistors into the same space roughly every two years, aiming to follow an exponential curve named after Gordon Moore, one of the company’s cofounders. That continual shrinking has helped make computers more powerful, compact, and energy-efficient. It has helped bring us smartphones, powerful Internet services, and breakthroughs in fields such as artificial intelligence and genetics. And Moore’s Law has become shorthand for the idea that anything involving computing gets more capable over time.
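
    Written as a formula, the cadence Intel has aimed for is exponential (an idealization; actual density gains and timing vary from node to node):

      N(t) ≈ N0 × 2^(t / 2 years),

    so a decade of on-schedule doublings multiplies transistor density by about 2^5 = 32.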

    But Intel disclosed in a regulatory filing last month that it is slowing the pace with which it launches new chip-making technology. The gap between successive generations of chips with new, smaller transistors will widen. With the transistors in Intel’s latest chips already as small as 14 nanometers, it is becoming more difficult to shrink them further in a way that’s cost-effective for production.

    Intel’s strategy shift is not a complete surprise. It already pushed back the debut of its first chips with 10-nanometer transistors from the end of this year to sometime in 2017. But it is notable that the company has now admitted that wasn’t a one-off, and that it can’t keep up the pace it used to. That means Moore’s Law will slow down, too.

    That doesn’t necessarily mean that our devices are about to stop improving, or that ideas such as driverless cars will stall from lack of processing power. Intel says it will deliver extra performance upgrades between generations of transistor technology by making improvements to the way chips are designed. And the company’s chips are essentially irrelevant to mobile devices, a market dominated by competitors that are generally a few years behind in terms of shrinking transistors and adopting new manufacturing technologies. It is also arguable that for many important new use cases for computing, such as wearable devices or medical implants, chips are already powerful enough and power consumption is more important.

    But raw computing power still matters. Putting more of it behind machine-learning algorithms has been crucial to recent breakthroughs in artificial intelligence, for example. And Intel is likely to have to deliver more bad news about the future of chips and Moore’s Law before too long.

    The company’s chief of manufacturing said in February that Intel needs to switch away from silicon transistors in about four years. “The new technology will be fundamentally different,” he said, before admitting that Intel doesn’t yet have a successor lined up. There are two leading candidates—technologies known as spintronics and tunneling transistors—but they may not offer big increases in computing power. And both are far from being ready for use in making processors in large volumes.

    [If one examines the details of many supercomputers, one sees that graphics processing units (GPUs) are becoming much more important than central processing units (CPUs), which are based upon the transistor developments ruled by Moore’s Law.]

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 8:35 am on March 25, 2016 Permalink | Reply
    Tags: Biomedicine, MIT Technology Review

    From MIT Tech Review: “Genome Discovery Holds Key to Designer Organisms” 

    MIT Technology Review

    A cluster of synthetic cells with the fewest genes needed to grow and divide. In cultures, these so-called JCVI-syn3.0 cells form a variety of structures.

    March 24, 2016
    Karen Weintraub

    For more than 20 years, J. Craig Venter has been trying to make a cell with the fewest possible genes in the hope that the stripped-down cell would tell us something about the necessities of life.

    In a paper published today in Science, Venter and his team announced that they’ve made a big step toward that goal—and found some surprises along the way.

    The parts list of basic life is one-third longer than scientists had thought, said Venter, who is known for winning the race to map the human genome. And it depends much more on context than they had realized.

    To get their synthetic cell to replicate and grow fast enough to use in the lab took 473 genes, 149 of which have an unclear function.

    Venter, founder, chairman, and CEO of the J. Craig Venter Institute, which led the research, said he started his hunt for genes assuming he’d be able to pinpoint the single or few genes responsible for this or that trait. Instead, he said at a Wednesday news conference, he’s learned that functions, diseases, and basic existence are dependent on the interplay of many genes.

    “Life is much more like a symphony orchestra than a piccolo player.”

    Most of the applications for this synthetic cell are years or decades off, but it is an important scientific advance.

    “This is really useful for giving you an insight to what’s really the minimal parts list it takes to keep an organism going,” said Jef Boeke, director of the Institute for Systems Genetics at New York University’s Langone Medical Center. “There’s tremendous value in terms of understanding the basic wiring of a cell.”

    The synthetic cell, dubbed JCVI-syn3.0, also has potential applications for advancing medicine, nutrition, agriculture, biofuels, and biochemicals, said Dan Gibson, vice president of DNA Technology for Synthetic Genomics, a company started by Venter to commercialize genetic advances, which was also involved in the new work.

    “Our long-term vision is to have the ability to design and build synthetic organisms on demand that perform specific functions that are programmed into the cellular genome,” Gibson wrote in a follow-up e-mail. Synthetic cells with a minimal parts list “would be devoting maximal energy to their purpose—they would simply grow and divide and make the product that was programmed into the cell.”

    When asked for specific examples of applications, Venter mentioned synthetic antibiotics, and an ongoing collaboration between Synthetic Genomics and United Therapeutics to grow transplantable organs in pigs. Humans cannot use pig hearts, lungs, or livers because of the risk of rejection and diseases, but the companies are trying to engineer changes into the pig genome to make that possible.

    Harvard University geneticist George Church prefers to edit functions into existing genomes, rather than build up from the bottom. Church said JCVI-syn3.0 is a significant academic achievement, but he doesn’t see much practical use for it in the short-term.

    “I don’t want to be impolite,” Church said. “I think it’s a lovely thing they did.”

    As a scientific feat, Church said he was more impressed with the group’s earlier work done more than five years ago, which showed that the team could synthesize a much larger genome that is much closer to the complexity needed for real-world applications.

    Venter said the work shows how far we still have to go to understand the genomes of even the simplest creatures.

    “The fact that this has taken a highly dedicated, extremely competent team with a Nobel laureate, three National Academy of Science members, and some brilliant junior scientists this long to get this far tells us a lot about the fundamentals of life and says the next phases are not going to be trivial,” he said.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 9:16 am on February 19, 2016 Permalink | Reply
    Tags: MIT Technology Review, Repairing sight in the human eye

    From MIT Tech Review: “In First Human Test of Optogenetics, Doctors Aim to Restore Sight to the Blind” 

    MIT Technology Review

    February 19, 2016
    Katherine Bourzac

    If all goes according to plan, sometime next month a surgeon in Texas will use a needle to inject viruses laden with DNA from a light-sensitive algae into the eye of a legally blind person in a bet that it could let the patient see again, if only in blurry black-and-white.

    The study, sponsored by a startup called RetroSense Therapeutics, of Ann Arbor, Michigan, is expected to be the first human test of optogenetics, a technology developed in neuroscience labs that uses a combination of gene therapy and light to precisely control nerve cells.

    The trial, to be carried out by doctors at the Retina Foundation of the Southwest, will involve as many as 15 patients with retinitis pigmentosa, a degenerative disease in which the specialized light-sensitive photoreceptor cells in the eye die, slowly causing blindness. The aim of the treatment is to engineer the DNA of different cells in the retina, called ganglion cells, so that they can respond to light instead, firing off signals to the brain.

    The Texas study will be followed closely by neuroscientists who hope to eventually use optogenetics inside the human brain to treat Parkinson’s or severe mental illness. “This is going to be a gold mine of information about doing optogenetics studies in humans,” says Antonello Bonci, a neuroscientist who is scientific director of the intramural research program at the National Institute on Drug Abuse in Baltimore.

    Patients who have retinitis pigmentosa lose peripheral and night vision before eventually becoming blind. Candidates for the RetroSense study won’t be able to see much more than a hand moving in front of their face. RetroSense CEO Sean Ainsworth says he hopes that after the treatment patients will “see tables and chairs” or maybe read large letters.

    Optogenetics was developed a decade ago in neuroscience labs as a way to precisely control the activity of nerve cells. It works by adding DNA instructions for a light-sensitive protein, channelrhodopsin, that algae use to sense sunlight and move toward it. Added to a nerve, it causes the cell to fire when exposed to a specific wavelength of light.

    The technology is already helping scientists make rapid progress in understanding what brain cells underlie movement, motivation, pain, and many other basic brain functions in animals. In one experiment, Stanford University researchers led by Karl Deisseroth, one of the inventors of optogenetics, found they could switch the sensation of fear on and off in mice by shooting light through a fiber-optic cable at specific cells in their brains.

    RetroSense was founded in 2009 to commercialize research carried out by Zhuo-Hua Pan, a Wayne State University vision expert who realized that the eye might be the easiest place to use optogenetics. Unlike the brain, the eye is transparent and sensitive to light, and it’s much easier to treat with gene therapy. No extra hardware or fiber-optic cables are needed, since light shines directly onto the retina.

    The eye has two kinds of photoreceptor cells. Cones, named for their shape, are responsible for color vision. Rods respond to light at night. Both react to incoming photons by generating an electrical signal that is passed through a succession of nerve cells to the optic nerve and then to the brain.

    To overcome the loss of photoreceptors, the strategy created by Pan and adopted by RetroSense works by injecting viruses laden with algae DNA into the center of the eye. Their target is the topmost layer of cells in the retina, called ganglions. Once they start making the light-sensitive protein, the ganglion cells should fire in response to light.

    Pan expects the treatment to generate at least 100,000 light-sensing cells in the retina. That could translate to substantial vision. So far, the only commercial technology to restore limited sight to blind people is an electrical implant called the Argus II that transmits video from a camera to a sheet of 60 electrodes stitched inside the retina, but it provides only a few pixels of visual information at a time.

    The algae protein has some limitations. One is that it responds only to the blue component of natural light. As a result, RetroSense expects patients to experience monochromatic vision. Perhaps the brain will process this as black and white, says Ainsworth. Patients might perceive an object that doesn’t reflect any blue light at all as being black.

    Speculation about what people will or won’t see—and what that subjective experience will be like—stems from results of studies on blind mice. Jens Duebel, who leads a group studying optogenetic vision restoration at the Institut de la Vision, in Paris, says that after treatment blind mice will move their heads to follow an image and also avoid a bright light when held in a dark box, just as healthy mice do.

    Because the algae protein isn’t as sensitive to light as a normal retina, Duebel thinks patients might see in outdoor light but not very well indoors. Duebel is associated with GenSight Biologics of Paris, a company that developed a pair of goggle-mounted microprojectors it thinks could overcome that problem. The goggles will convert a video feed into wavelengths of light that a genetically altered retina can respond to. The French company remains a few years away from starting a clinical trial of its technology, Duebel says.

    Other treatments using optogenetics are under development. A California company, Circuit Therapeutics, is developing an optogenetic treatment for chronic pain. So far, in experiments on mice, shutting off pain signals has required implanting an optical fiber into the spinal cord. Circuit is also being funded by the Michael J. Fox Foundation for Parkinson’s Research, which wants to determine whether it’s possible to control Parkinson’s tremors using a light source inside the brain. Until now, this has been attempted with drugs or implanted electrodes.

    Bonci says that before optogenetics can be used therapeutically in the brain, researchers will need more information about which cells to target. “But that’s five years away, not 20 years away,” he says.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     