Tagged: MIT News

  • richardmitnick 6:23 am on March 17, 2015
    Tags: MIT News

    From MIT: “A second minor planet may possess Saturn-like rings” 


    MIT News

    March 17, 2015
    Jennifer Chu | MIT News Office

    Image courtesy of the European Southern Observatory

    There are only five bodies in our solar system that are known to bear rings. The most obvious is the planet Saturn; to a lesser extent, rings of gas and dust also encircle Jupiter, Uranus, and Neptune. The fifth member of this haloed group is Chariklo, one of a class of minor planets called centaurs: small, rocky bodies that possess qualities of both asteroids and comets.

    Scientists only recently detected Chariklo’s ring system — a surprising finding, as it had been thought that centaurs are relatively dormant. Now scientists at MIT and elsewhere have detected a possible ring system around a second centaur, Chiron.

    In November 2011, the group observed a stellar occultation in which Chiron passed in front of a bright star, briefly blocking its light. The researchers analyzed the star’s light emissions, and the momentary shadow created by Chiron, and identified optical features that suggest the centaur may possess a circulating disk of debris. The team believes the features may signify a ring system, a circular shell of gas and dust, or symmetric jets of material shooting out from the centaur’s surface.

    “It’s interesting, because Chiron is a centaur — part of that middle section of the solar system, between Jupiter and Pluto, where we originally weren’t thinking things would be active, but it’s turning out things are quite active,” says Amanda Bosh, a lecturer in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

    Bosh and her colleagues at MIT — Jessica Ruprecht, Michael Person, and Amanda Gulbis — have published their results in the journal Icarus.

    Catching a shadow

    Chiron, discovered in 1977, was the first planetary body categorized as a centaur, after the mythological Greek creature — a hybrid of man and beast. Like their mythological counterparts, centaurs are hybrids, embodying traits of both asteroids and comets. Today, scientists estimate there are more than 44,000 centaurs in the solar system, concentrated mainly in a band between the orbits of Jupiter and Pluto.

    While most centaurs are thought to be dormant, scientists have seen glimmers of activity from Chiron. Starting in the late 1980s, astronomers observed patterns of brightening from the centaur, as well as activity similar to that of a streaking comet.

    In 1993 and 1994, James Elliot, then a professor of planetary astronomy and physics at MIT, observed a stellar occultation of Chiron and made the first estimates of its size. Elliot also observed features in the optical data that looked like jets of water and dust spewing from the centaur’s surface.

    Now MIT researchers — some of them former members of Elliot’s group — have obtained more precise observations of Chiron, using two large telescopes in Hawaii: NASA’s Infrared Telescope Facility, on Mauna Kea, and the Las Cumbres Observatory Global Telescope Network at Haleakala.

    NASA’s Infrared Telescope Facility, on Mauna Kea

    Las Cumbres Observatory Global Telescope Network at Haleakala

    In 2010, the team started to chart the orbits of Chiron and nearby stars in order to pinpoint exactly when the centaur might pass across a star bright enough to detect. The researchers determined that such a stellar occultation would occur on Nov. 29, 2011, and reserved time on the two large telescopes in hopes of catching Chiron’s shadow.

    “There’s an aspect of serendipity to these observations,” Bosh says. “We need a certain amount of luck, waiting for Chiron to pass in front of a star that is bright enough. Chiron itself is small enough that the event is very short; if you blink, you might miss it.”

    The team observed the stellar occultation remotely, from MIT’s Building 54. The entire event lasted just a few minutes, and the telescopes recorded the fading light as Chiron cast its shadow over the telescopes.

    Rings around a theory

    The group analyzed the resulting light, and detected something unexpected. A simple body, with no surrounding material, would create a straightforward pattern, blocking the star’s light entirely. But the researchers observed symmetrical, sharp features near the start and end of the stellar occultation — a sign that material such as dust might be blocking a fraction of the starlight.

    The researchers observed two such features, each about 300 kilometers from the center of the centaur. Judging from the optical data, the features are 3 and 7 kilometers wide, respectively. The features are similar to what Elliot observed in the 1990s.
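A sense of why these detections demand such precise timing comes from the numbers above. The sketch below converts the reported feature widths and distances into occultation timescales, assuming a sky-plane shadow velocity of 20 km/s; that velocity is an illustrative, typical value for outer-solar-system occultations, not a figure from the paper.

```python
# Rough occultation timing for the Chiron features described above.
# The shadow velocity is an assumed round value, not from the paper.
shadow_velocity_km_s = 20.0          # assumed sky-plane shadow speed

feature_widths_km = [3.0, 7.0]       # widths reported in the article
feature_distance_km = 300.0          # distance from Chiron's center

for width in feature_widths_km:
    dip_duration_s = width / shadow_velocity_km_s
    print(f"A {width:.0f} km wide feature blocks starlight for ~{dip_duration_s:.2f} s")

# Lead time between a feature dip and the main-body occultation,
# for a chord passing through the center:
lead_time_s = feature_distance_km / shadow_velocity_km_s
print(f"Features ~{feature_distance_km:.0f} km out precede the main event by ~{lead_time_s:.0f} s")
```

Sub-second dips like these are why a momentary lapse in recording, Bosh's "if you blink, you might miss it," really can lose the signal.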

    In light of these new observations, the researchers say that Chiron may still possess symmetrical jets of gas and dust, as Elliot first proposed. However, other interpretations may be equally valid, including the “intriguing possibility,” Bosh says, of a shell or ring of gas and dust.

    Ruprecht, who is a researcher at MIT’s Lincoln Laboratory, says it is possible to imagine a scenario in which centaurs may form rings: For example, when a body breaks up, the resulting debris can be captured gravitationally around another body, such as Chiron. Rings can also be leftover material from the formation of Chiron itself.

    “Another possibility involves the history of Chiron’s distance from the sun,” Ruprecht says. “Centaurs may have started further out in the solar system and, through gravitational interactions with giant planets, have had their orbits perturbed closer in to the sun. The frozen material that would have been stable out past Pluto is becoming less stable closer in, and can turn into gases that spray dust and material off the surface of a body.”

    An independent group has since combined the MIT group’s occultation data with other light data, and has concluded that the features around Chiron most likely represent a ring system. However, Ruprecht says that researchers will have to observe more stellar occultations of Chiron to truly determine which interpretation — rings, shell, or jets — is the correct one.

    “If we want to make a strong case for rings around Chiron, we’ll need observations by multiple observers, distributed over a few hundred kilometers, so that we can map the ring geometry,” Ruprecht says. “But that alone doesn’t tell us if the rings are a temporary feature of Chiron, or a more permanent one. There’s a lot of work that needs to be done.”

    Nevertheless, Bosh says the possibility of a second ringed centaur in the solar system is an enticing one.

    “Until Chariklo’s rings were found, it was commonly believed that these smaller bodies don’t have ring systems,” Bosh says. “If Chiron has a ring system, it will show it’s more common than previously thought.”

    Matthew Knight, an astronomer at the Lowell Observatory in Flagstaff, Arizona, says the possibility that Chiron possesses a ring system “makes the solar system feel a bit more intimate.”

    “We have a pretty good feel for what most of the inner solar system is like from spacecraft missions, but the small, icy worlds of the outer solar system are still mysterious,” says Knight, who was not involved in the research. “At least to me, being able to picture a centaur having a ring around it makes it seem more tangible.”

    This research was funded in part by NASA and the National Research Foundation of South Africa.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 12:09 pm on March 16, 2015
    Tags: MIT News

    From MIT: “Quantum sensor’s advantages survive entanglement breakdown” 


    MIT News

    March 9, 2015
    Larry Hardesty | MIT News Office

    In the researchers’ new system, a returning beam of light is mixed with a locally stored beam, and the correlation of their phase, or period of oscillation, helps remove noise caused by interactions with the environment. Illustration: Jose-Luis Olivares/MIT

    Preserving the fragile quantum property known as entanglement isn’t necessary to reap benefits.

    The extraordinary promise of quantum information processing — solving problems that classical computers can’t, perfectly secure communication — depends on a phenomenon called “entanglement,” in which the physical states of different quantum particles become interrelated. But entanglement is very fragile, and the difficulty of preserving it is a major obstacle to developing practical quantum information systems.

    In a series of papers since 2008, members of the Optical and Quantum Communications Group at MIT’s Research Laboratory of Electronics have argued that optical systems that use entangled light can outperform classical optical systems — even when the entanglement breaks down.

    Two years ago, they showed that systems that begin with entangled light could offer much more efficient means of securing optical communications. And now, in a paper appearing in Physical Review Letters, they demonstrate that entanglement can also improve the performance of optical sensors, even when it doesn’t survive light’s interaction with the environment.

    “That is something that has been missing in the understanding that a lot of people have in this field,” says senior research scientist Franco Wong, one of the paper’s co-authors and, together with Jeffrey Shapiro, the Julius A. Stratton Professor of Electrical Engineering, co-director of the Optical and Quantum Communications Group. “They feel that if unavoidable loss and noise make the light being measured look completely classical, then there’s no benefit to starting out with something quantum. Because how can it help? And what this experiment shows is that yes, it can still help.”

    Phased in

    Entanglement means that the physical state of one particle constrains the possible states of another. Electrons, for instance, have a property called spin, which describes their magnetic orientation. If two electrons are orbiting an atom’s nucleus at the same distance, they must have opposite spins. This spin entanglement can persist even if the electrons leave the atom’s orbit, but interactions with the environment break it down quickly.

    In the MIT researchers’ system, two beams of light are entangled, and one of them is stored locally — racing through an optical fiber — while the other is projected into the environment. When light from the projected beam — the “probe” — is reflected back, it carries information about the objects it has encountered. But this light is also corrupted by the environmental influences that engineers call “noise.” Recombining it with the locally stored beam helps suppress the noise, recovering the information.

    The local beam is useful for noise suppression because its phase is correlated with that of the probe. If you think of light as a wave, with regular crests and troughs, two beams are in phase if their crests and troughs coincide. If the crests of one are aligned with the troughs of the other, their phases are anti-correlated.

    But light can also be thought of as consisting of particles, or photons. And at the particle level, phase is a murkier concept.

    “Classically, you can prepare beams that are completely opposite in phase, but this is only a valid concept on average,” says Zheshen Zhang, a postdoc in the Optical and Quantum Communications Group and first author on the new paper. “On average, they’re opposite in phase, but quantum mechanics does not allow you to precisely measure the phase of each individual photon.”

    Improving the odds

    Instead, quantum mechanics interprets phase statistically. Given particular measurements of two photons, from two separate beams of light, there’s some probability that the phases of the beams are correlated. The more photons you measure, the greater your certainty that the beams are either correlated or not. With entangled beams, that certainty increases much more rapidly than it does with classical beams.
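The classical half of this statement, that the scatter of a correlation estimate shrinks roughly as one over the square root of the number of measurements, can be illustrated with a toy simulation: estimate a fixed underlying correlation from N noisy paired measurements and watch the spread of the estimates fall as N grows. All parameters below (correlation strength, noise level) are invented for illustration, and the quantum advantage itself is not modeled here.

```python
# Toy, purely classical illustration: the spread of a correlation estimate
# shrinks roughly like 1/sqrt(N) as more photon pairs are measured.
# Entangled beams start with a stronger underlying correlation, so they
# reach a given confidence with fewer photons (not modeled here).
import random
import statistics

random.seed(42)

def estimate_correlation(n_photons, strength=0.5, noise=1.0):
    """Estimate the correlation between two beams from n noisy paired samples."""
    pairs = []
    for _ in range(n_photons):
        common = random.gauss(0, 1)                     # shared phase component
        a = strength * common + random.gauss(0, noise)  # beam A measurement
        b = strength * common + random.gauss(0, noise)  # beam B measurement
        pairs.append((a, b))
    ma = statistics.fmean(p[0] for p in pairs)
    mb = statistics.fmean(p[1] for p in pairs)
    cov = statistics.fmean((p[0] - ma) * (p[1] - mb) for p in pairs)
    va = statistics.fmean((p[0] - ma) ** 2 for p in pairs)
    vb = statistics.fmean((p[1] - mb) ** 2 for p in pairs)
    return cov / (va * vb) ** 0.5

for n in (100, 10_000):
    estimates = [estimate_correlation(n) for _ in range(50)]
    print(f"N={n:>6}: spread of the estimate ~ {statistics.stdev(estimates):.3f}")
```

With 100 times more photons the spread drops by about a factor of ten, which is why longer integrations sharpen the correlated-versus-uncorrelated verdict.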

    When a probe beam interacts with the environment, the noise it accumulates also increases the uncertainty of the ensuing phase measurements. But that’s as true of classical beams as it is of entangled beams. Because entangled beams start out with stronger correlations, even when noise causes them to fall back within classical limits, they still fare better than classical beams do under the same circumstances.

    “Going out to the target and reflecting and then coming back from the target attenuates the correlation between the probe and the reference beam by the same factor, regardless of whether you started out at the quantum limit or started out at the classical limit,” Shapiro says. “If you started with the quantum case that’s so many times bigger than the classical case, that relative advantage stays the same, even as both beams become classical due to the loss and the noise.”

    In experiments that compared optical systems that used entangled light and classical light, the researchers found that the entangled-light systems increased the signal-to-noise ratio — a measure of how much information can be recaptured from the reflected probe — by 20 percent. That accorded very well with their theoretical predictions.

    But the theory also predicts that improvements in the quality of the optical equipment used in the experiment could double or perhaps even quadruple the signal-to-noise ratio. Since detection error declines exponentially with the signal-to-noise ratio, that could translate to a million-fold increase in sensitivity.
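A back-of-the-envelope calculation shows how a modest SNR gain can compound into such a large sensitivity factor. If detection error falls off as exp(-k·SNR), quadrupling the SNR divides the error by exp(3·k·SNR). The baseline exponent below is an assumed round value chosen to illustrate the million-fold figure, not a number from the paper.

```python
# Back-of-the-envelope check: with error ~ exp(-k*SNR), quadrupling the SNR
# multiplies the exponent by four, so the error shrinks by exp(3*k*SNR).
# The baseline exponent k*SNR = 4.6 is an assumed illustrative value.
import math

baseline_exponent = 4.6                    # assumed k * SNR for the current setup
error_now = math.exp(-baseline_exponent)
error_4x = math.exp(-4 * baseline_exponent)

improvement = error_now / error_4x         # equals exp(3 * baseline_exponent)
print(f"Error-probability improvement from 4x SNR: ~{improvement:.2e}")
```

Under this assumption exp(3 × 4.6) ≈ 1e6, which is the sense in which a fourfold SNR gain can translate into a roughly million-fold sensitivity increase.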

    “This is a breakthrough,” says Stefano Pirandola, an associate professor of computer science at the University of York in England. “One of the main technical challenges was the experimental realization of a practical receiver for quantum illumination. Shapiro and Wong experimentally implemented a quantum receiver, which is not optimal but is still able to prove the quantum illumination advantage. In particular, they were able to overcome the major problem associated with the loss in the optical storage of the idler beam.”

    “This research can potentially lead to the development of a quantum LIDAR which is able to spot almost-invisible objects in a very noisy background,” he adds. “The working mechanism of quantum illumination could in fact be exploited at short-distances as well, for instance to develop non-invasive techniques of quantum sensing with potential applications in biomedicine.”

    See the full article here.


     
  • richardmitnick 4:43 pm on March 5, 2015
    Tags: MIT News

    From MIT: “Why isn’t the universe as bright as it should be?” 


    MIT News

    March 4, 2015
    Jennifer Chu | MIT News Office

    This Hubble Space Telescope image of galaxy NGC 1275 reveals the fine, thread-like filamentary structures in the gas surrounding the galaxy. The red filaments are composed of cool gas being suspended by a magnetic field, and are surrounded by the 100-million-degree Fahrenheit gas in the center of the Perseus galaxy cluster. The filaments are dramatic markers of the feedback process through which energy is transferred from the central massive black hole to the surrounding gas. Courtesy of NASA (edited by Jose-Luis Olivares/MIT)

    A handful of new stars are born each year in the Milky Way, while many more blink on across the universe. But astronomers have observed that galaxies should be churning out millions more stars, based on the amount of interstellar gas available.

    Now researchers from MIT, Columbia University, and Michigan State University have pieced together a theory describing how clusters of galaxies may regulate star formation. They describe their framework this week in the journal Nature.

    When intracluster gas cools rapidly, it condenses, then collapses to form new stars. Scientists have long thought that something must be keeping the gas from cooling enough to generate more stars — but exactly what has remained a mystery.

    For some galaxy clusters, the researchers say, the intracluster gas may simply be too hot — on the order of hundreds of millions of degrees Celsius. Even if one region experiences some cooling, the intensity of the surrounding heat would keep that region from cooling further — an effect known as conduction.

    “It would be like putting an ice cube in a boiling pot of water — the average temperature is pretty much still boiling,” says Michael McDonald, a Hubble Fellow in MIT’s Kavli Institute for Astrophysics and Space Research. “At super-high temperatures, conduction smooths out the temperature distribution so you don’t get any of these cold clouds that should form stars.”

    For so-called “cool core” galaxy clusters, the gas near the center may be cool enough to form some stars. However, a portion of this cooled gas may rain down into a central black hole, which then spews out hot material that serves to reheat the surroundings, preventing many stars from forming — an effect the team terms “precipitation-driven feedback.”

    “Some stars will form, but before it gets too out of hand, the black hole will heat everything back up — it’s like a thermostat for the cluster,” McDonald says. “The combination of conduction and precipitation-driven feedback provides a simple, clear picture of how star formation is governed in galaxy clusters.”

    Crossing a galactic threshold

    Throughout the universe, there exist two main classes of galaxy clusters: cool core clusters — those that are rapidly cooling and forming stars — and non-cool core clusters — those that have not had sufficient time to cool.

    The Coma cluster, a non-cool core cluster, is filled with gas at a scorching 100 million degrees Celsius.

    A Sloan Digital Sky Survey [SDSS]/Spitzer Space Telescope mosaic of the Coma Cluster in long-wavelength infrared (red), short-wavelength infrared (green), and visible light. The many faint green smudges are dwarf galaxies in the cluster. Credit: NASA/JPL-Caltech/GSFC/SDSS

    Sloan Digital Sky Survey (SDSS) telescope

    NASA’s Spitzer Space Telescope

    To form any stars, this gas would have to cool for several billion years. In contrast, the nearby Perseus cluster is a cool core cluster whose intracluster gas is a relatively mild several million degrees Celsius. New stars occasionally emerge from the cooling of this gas in the Perseus cluster, though not as many as scientists would predict.

    Chandra X-ray Observatory observations of the central regions of the Perseus galaxy cluster. Image is 284 arcsec across. RA 03h 19m 47.60s Dec +41° 30′ 37.00″ in Perseus. Observation dates: 13 pointings between August 8, 2002 and October 20, 2004. Color code: Energy (Red 0.3-1.2 keV, Green 1.2-2 keV, Blue 2-7 keV). Instrument: ACIS.

    NASA’s Chandra X-ray Observatory

    “The amount of fuel for star formation outpaces the amount of stars 10 times, so these clusters should be really star-rich,” McDonald says. “You really need some mechanism to prevent gas from cooling, otherwise the universe would have 10 times as many stars.”

    McDonald and his colleagues worked out a theoretical framework that relies on two anti-cooling mechanisms.

    The group calculated the behavior of intracluster gas based on a galaxy cluster’s radius, mass, density, and temperature. The researchers found that there is a critical temperature threshold below which the cooling of gas accelerates significantly, causing gas to cool rapidly enough to form stars.
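The basic quantity behind such a threshold is the radiative cooling time of the gas, roughly its thermal energy divided by its radiative loss rate. The sketch below uses round, assumed values for the gas density and the cooling function (which in reality varies with temperature), so the outputs are order-of-magnitude illustrations rather than the paper's calculations.

```python
# Order-of-magnitude cooling-time estimate for intracluster gas:
# thermal energy density divided by radiative loss rate.
# Density and cooling function are assumed round values, not the paper's.
K_B = 1.38e-16            # Boltzmann constant, erg/K
LAMBDA = 1e-23            # assumed cooling function, erg cm^3 / s
SECONDS_PER_GYR = 3.15e16

def cooling_time_gyr(temperature_k, n_e=0.01):
    """Cooling time in Gyr for electron density n_e (cm^-3) at the given temperature."""
    thermal_energy = 3.0 * n_e * K_B * temperature_k   # erg/cm^3, electrons + ions (approx)
    loss_rate = n_e * n_e * LAMBDA                     # erg/cm^3/s, taking n_H ~ n_e
    return thermal_energy / loss_rate / SECONDS_PER_GYR

for t in (1e7, 1e8):
    print(f"T = {t:.0e} K -> cooling time ~ {cooling_time_gyr(t):.1f} Gyr")
```

Even with these crude inputs, gas at a hundred million kelvins takes longer than the age of the universe to cool, while gas ten times cooler radiates away its heat in a couple of billion years, which is the qualitative divide the framework formalizes.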

    According to the group’s theory, two different mechanisms regulate star formation, depending on whether a galaxy cluster is above or below the temperature threshold. For clusters that are significantly above the threshold, conduction puts a damper on star formation: The surrounding hot gas overwhelms any pockets of cold gas that may form, keeping everything in the cluster at high temperatures.

    “For these hotter clusters, they’re stuck in this hot state, and will never cool and form stars,” McDonald says. “Once you get into this very high-temperature regime, cooling is really inefficient, and they’re stuck there forever.”

    For gas at temperatures closer to the lower threshold, it’s much easier to cool to form stars. However, in these clusters, precipitation-driven feedback starts to kick in to regulate star formation: While cooling gas can quickly condense into clouds of droplets that can form stars, these droplets can also rain down into a central black hole — in which case the black hole may emit hot jets of material back into the cluster, heating the surrounding gas back up to prevent further stars from forming.

    “In the Perseus cluster, we see these jets acting on hot gas, with all these bubbles and ripples and shockwaves,” McDonald says. “Now we have a good sense of what triggered those jets, which was precipitating gas falling onto the black hole.”

    On track

    McDonald and his colleagues compared their theoretical framework to observations of distant galaxy clusters, and found that their theory matched the observed differences between clusters. The team collected data from the Chandra X-ray Observatory and the South Pole Telescope [SPT] — an observatory in Antarctica that searches for far-off massive galaxy clusters.

    South Pole Telescope (SPT)

    The researchers compared their theoretical framework with the gas cooling times of every known galaxy cluster, and found that clusters filtered into two populations — very slowly cooling clusters, and clusters that are cooling rapidly, closer to the rate predicted by the group as a critical threshold.

    By using the theoretical framework, McDonald says researchers may be able to predict the evolution of galaxy clusters, and the stars they produce.

    “We’ve built a track that clusters follow,” McDonald says. “The nice, simple thing about this framework is that you’re stuck in one of two modes, for a very long time, until something very catastrophic bumps you out, like a head-on collision with another cluster.”

    The researchers hope to look deeper into the theory to see whether the mechanisms regulating star formation in clusters also apply to individual galaxies. Preliminary evidence, he says, suggests that is the case.

    “If we can use all this information to understand why or why not stars form around us, then we’ve made a big step forward,” McDonald says.

    “[These results] look very promising,” says Paul Nulsen, an astronomer at the Harvard-Smithsonian Center for Astrophysics who was not involved in this research. “More work will be needed to show conclusively that precipitation is the main source of the gas that powers feedback. Other processes in the feedback cycle also need to be understood. For example, there is still no consensus on how the gas falling into a massive black hole produces energetic jets, or how they inhibit cooling in the remaining gas. This is not the end of the story, but it is an important insight into a problem that has proved a lot more difficult than anyone ever anticipated.”

    This research was funded in part by the National Science Foundation and NASA.

    See the full article here.


     
  • richardmitnick 6:45 pm on March 4, 2015
    Tags: MIT News

    From MIT: “New technique allows analysis of clouds around exoplanets” 


    MIT News

    Analysis of data from the Kepler space telescope has shown that roughly half of the dayside of the exoplanet Kepler-7b is covered by a large cloud mass. Statistical comparison of more than 1,000 atmospheric models shows that these clouds are most likely made of enstatite, a common Earth mineral that is in vapor form at the extreme temperatures on Kepler-7b. These models varied the altitude, condensation, particle size, and chemical composition of the clouds to find the reflectivity and color properties that best match the observed signal from the exoplanet.

    Courtesy of NASA (edited by Jose-Luis Olivares/MIT)

    March 3, 2015
    Helen Knight | MIT News

    Meteorologists sometimes struggle to accurately predict the weather here on Earth, but now we can find out how cloudy it is on planets outside our solar system, thanks to researchers at MIT.

    In a paper to be published in the Astrophysical Journal, researchers in the Department of Earth, Atmospheric, and Planetary Sciences (EAPS) at MIT describe a technique that analyzes data from NASA’s Kepler space observatory to determine the types of clouds on planets that orbit other stars, known as exoplanets.

    NASA’s Kepler space telescope

    The team, led by Kerri Cahoy, an assistant professor of aeronautics and astronautics at MIT, has already used the method to determine the properties of clouds on the exoplanet Kepler-7b. The planet is known as a “hot Jupiter,” as temperatures in its atmosphere hover at around 1,700 kelvins.

    NASA’s Kepler spacecraft was designed to search for Earth-like planets orbiting other stars. It was pointed at a fixed patch of space, constantly monitoring the brightness of 145,000 stars. An orbiting exoplanet crossing in front of one of these stars causes a temporary dimming of this brightness, allowing researchers to detect its presence.
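The size of that dimming follows from simple geometry: the fractional drop in starlight equals the ratio of the planet's disk area to the star's, (R_planet / R_star)². A short sketch with round illustrative radii:

```python
# Transit depth from geometry: fractional dimming = (R_planet / R_star)**2.
# Radii below are round illustrative values for a Sun-like star.
R_SUN_KM = 696_000
R_JUPITER_KM = 71_492
R_EARTH_KM = 6_371

def transit_depth(r_planet_km, r_star_km=R_SUN_KM):
    """Fractional drop in brightness as the planet crosses the star's disk."""
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter-size planet: {transit_depth(R_JUPITER_KM):.4%} dip")
print(f"Earth-size planet:   {transit_depth(R_EARTH_KM):.4%} dip")
```

A Jupiter-size planet dims a Sun-like star by about one percent, while an Earth-size planet produces a dip roughly a hundred times shallower, which is why Kepler's photometry had to be so precise.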

    Researchers have previously shown that by studying the variations in the amount of light coming from these star systems as a planet transits, or crosses in front of or behind them, they can detect the presence of clouds in that planet’s atmosphere. That is because particles within the clouds scatter different wavelengths of light.

    Modeling cloud formation

    To find out if this data could be used to determine the composition of these clouds, the MIT researchers studied the light signal from Kepler-7b. They used models of the temperature and pressure of the planet’s atmosphere to determine how different types of clouds would form within it, says lead author Matthew Webber, a graduate student in Cahoy’s group at MIT.

    “We then used those cloud models to determine how light would reflect off the atmosphere of the planet [for each type of cloud], and tried to match these possibilities to the actual observations from the Kepler mission itself,” Webber says. “So we ran a large set of models, to see which models fit best statistically to the observations.”
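In outline, that model-selection step is a least-squares comparison over a grid of candidate cloud models. The sketch below uses invented toy curves in place of real model light curves and Kepler data, purely to show the shape of the procedure.

```python
# Minimal sketch of the model-selection step: score each candidate cloud
# model against the observed signal and keep the best least-squares fit.
# Both the "observation" and the model grid are invented toy numbers.
def chi_squared(observed, model):
    """Sum of squared residuals between an observed signal and a model curve."""
    return sum((o - m) ** 2 for o, m in zip(observed, model))

# Toy observed reflectivity signal (invented)
observed = [0.10, 0.30, 0.55, 0.30, 0.10]

# Toy model grid: cloud-model label -> predicted signal (invented)
models = {
    "low-altitude, large particles":  [0.05, 0.20, 0.40, 0.20, 0.05],
    "high-altitude, small particles": [0.12, 0.28, 0.52, 0.28, 0.12],
    "no clouds":                      [0.00, 0.05, 0.10, 0.05, 0.00],
}

best = min(models, key=lambda name: chi_squared(observed, models[name]))
print(f"Best-fitting model: {best}")
```

The real analysis varied altitude, condensation, particle size, and composition over more than a thousand models, but the logic is the same: whichever model's predicted reflectivity best matches the observation wins.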

    By working backward in this way, they were able to match the Kepler spacecraft data to a type of cloud made out of vaporized silicates and magnesium. The extremely high temperatures in the Kepler-7b atmosphere mean that some minerals that commonly exist as rocks on Earth’s surface instead exist as vapors high up in the planet’s atmosphere. These mineral vapors form small cloud particles as they cool and condense.

    Kepler-7b is a tidally locked planet, meaning it always shows the same face to its star — just as the moon does to Earth. As a result, around half of the planet’s day side — that which constantly faces the star — is covered by these magnesium silicate clouds, the team found.

    “We are really doing nothing more complicated than putting a telescope into space and staring at a star with a camera,” Cahoy says. “Then we can use what we know about the universe, in terms of temperatures and pressures, how things mix, how they stratify in an atmosphere, to try to figure out what mix of things would be causing the observations that we’re seeing from these very basic instruments,” she says.

    A clue on exoplanet atmospheres

    Understanding the properties of the clouds on Kepler-7b, such as their mineral composition and average particle size, tells us a lot about the underlying physical nature of the planet’s atmosphere, says team member Nikole Lewis, a postdoc in EAPS. What’s more, the method could be used to study the properties of clouds on different types of planet, Lewis says: “It’s one of the few methods out there that can help you determine if a planet even has an atmosphere, for example.”

    A planet’s cloud coverage and composition also has a significant impact on how much of the energy from its star it will reflect, which in turn affects its climate and ultimately its habitability, Lewis says. “So right now we are looking at these big gas-giant planets because they give us a stronger signal,” she says. “But the same methodology could be applied to smaller planets, to help us determine if a planet is habitable or not.”

    The researchers hope to use the method to analyze data from NASA’s follow-up to the Kepler mission, known as K2, which began studying different patches of space last June. They also hope to use it on data from MIT’s planned Transiting Exoplanet Survey Satellite (TESS) mission, says Cahoy.

    NASA’s TESS

    “TESS is the follow-up to Kepler, led by principal investigator George Ricker, a senior research scientist in the MIT Kavli Institute for Astrophysics and Space Research. It will essentially be taking similar measurements to Kepler, but of different types of stars,” Cahoy says. “Kepler was tasked with staring at one group of stars, but there are a lot of stars, and TESS is going to be sampling the brightest stars across the whole sky,” she says.

    This paper is the first to take circulation models including clouds and compare them with the observed distribution of clouds on Kepler-7b, says Heather Knutson, an assistant professor of planetary science at Caltech who was not involved in the research.

    “Their models indicate that the clouds on this planet are most likely made from liquid rock,” Knutson says. “This may sound exotic, but this planet is a roasting hot gas-giant planet orbiting very close to its host star, and we should expect that it might look quite different than our own Jupiter.”

    See the full article here.


     
  • richardmitnick 3:17 pm on February 20, 2015
    Tags: MIT News, Solar shockwaves

    From MIT: “For the first time, spacecraft catch a solar shockwave in the act” 


    MIT News

    February 19, 2015
    Jennifer Chu | MIT News Office

    Solar storm found to produce “ultrarelativistic, killer electrons” in 60 seconds.

    Earth’s magnetosphere is depicted with the high-energy particles of the Van Allen radiation belts (shown in red) and various processes responsible for accelerating these particles to relativistic energies indicated. The effects of an interplanetary shock penetrate deep into this system, energizing electrons to ultra-relativistic energies in a matter of seconds. Courtesy of NASA

    On Oct. 8, 2013, an explosion on the sun’s surface sent a supersonic blast wave of solar wind out into space. This shockwave tore past Mercury and Venus, blitzing by the moon before streaming toward Earth. The shockwave struck a massive blow to the Earth’s magnetic field, setting off a magnetized sound pulse around the planet.

    NASA’s Van Allen Probes, twin spacecraft orbiting within the radiation belts deep inside the Earth’s magnetic field, captured the effects of the solar shockwave just before and after it struck.

    NASA’s Van Allen Probes

    Now scientists at MIT’s Haystack Observatory, the University of Colorado, and elsewhere have analyzed the probes’ data, and observed a sudden and dramatic effect in the shockwave’s aftermath: The resulting magnetosonic pulse, lasting just 60 seconds, reverberated through the Earth’s radiation belts, accelerating certain particles to ultrahigh energies.

    Haystack Observatory

    “These are very lightweight particles, but they are ultrarelativistic, killer electrons — electrons that can go right through a satellite,” says John Foster, associate director of MIT’s Haystack Observatory. “These particles are accelerated, and their number goes up by a factor of 10, in just one minute. We were able to see this entire process taking place, and it’s exciting: We see something that, in terms of the radiation belt, is really quick.”

    The findings represent the first time the effects of a solar shockwave on Earth’s radiation belts have been observed in detail from beginning to end. Foster and his colleagues have published their results in the Journal of Geophysical Research.

    Catching a shockwave in the act

    Since August 2012, the Van Allen Probes have been orbiting within the Van Allen radiation belts. The probes’ mission is to help characterize the extreme environment within the radiation belts, so as to design more resilient spacecraft and satellites.

    One question the mission seeks to answer is how the radiation belts give rise to ultrarelativistic electrons — particles that streak around the Earth at 1,000 kilometers per second, circling the planet in just five minutes. These high-speed particles can bombard satellites and spacecraft, causing irreparable damage to onboard electronics.

    The two Van Allen probes maintain the same orbit around the Earth, with one probe following an hour behind the other. On Oct. 8, 2013, the first probe was in just the right position, facing the sun, to observe the radiation belts just before the shockwave struck the Earth’s magnetic field. The second probe, catching up to the same position an hour later, recorded the shockwave’s aftermath.

    Dealing a “sledgehammer blow”

    Foster and his colleagues analyzed the probes’ data, and laid out the following sequence of events: As the solar shockwave made impact, according to Foster, it struck “a sledgehammer blow” to the protective barrier of the Earth’s magnetic field. But instead of breaking through this barrier, the shockwave effectively bounced away, generating a wave in the opposite direction, in the form of a magnetosonic pulse — a powerful, magnetized sound wave that propagated to the far side of the Earth within a matter of minutes.

    In that time, the researchers observed that the magnetosonic pulse swept up certain lower-energy particles. The electric field within the pulse accelerated these particles to energies of 3 to 4 million electronvolts, creating 10 times the number of ultrarelativistic electrons that previously existed.
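For a sense of scale, the "ultrarelativistic" label can be checked with a short relativistic calculation. The 3-to-4-million-electronvolt figure comes from the article; the script itself is only an illustration, using the standard electron rest energy of 0.511 MeV:

```python
import math

M_E_MEV = 0.511  # electron rest energy in MeV

def electron_speed_fraction(kinetic_mev):
    """Return v/c for an electron with the given kinetic energy (MeV)."""
    gamma = 1 + kinetic_mev / M_E_MEV  # total energy / rest energy
    return math.sqrt(1 - 1 / gamma**2)

for ke in (3.0, 4.0):
    print(f"{ke} MeV electron: v = {electron_speed_fraction(ke):.4f} c")
```

At these energies the electrons move at more than 98 percent of the speed of light, which is why the article calls them ultrarelativistic.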

    Taking a closer look at the data, the researchers were able to identify the mechanism by which certain particles in the radiation belts were accelerated. As it turns out, if particles’ velocities as they circle the Earth match that of the magnetosonic pulse, they are deemed “drift resonant,” and are more likely to gain energy from the pulse as it speeds through the radiation belts. The longer a particle interacts with the pulse, the more it is accelerated, giving rise to an extremely high-energy particle.

    Foster says solar shockwaves can impact Earth’s radiation belts a couple of times each month. The event in 2013 was a relatively minor one.

    “This was a relatively small shock. We know they can be much, much bigger,” Foster says. “Interactions between solar activity and Earth’s magnetosphere can create the radiation belt in a number of ways, some of which can take months, others days. The shock process takes seconds to minutes. This could be the tip of the iceberg in how we understand radiation-belt physics.”

    Barry Mauk, a project scientist at Johns Hopkins University’s Applied Physics Laboratory, views the group’s findings as “the most comprehensive analysis of shock-induced acceleration within Earth’s space environment ever achieved.”

    “Significant shock-induced acceleration of Earth’s radiation belts occur only occasionally, but these events are important because they have the potential of suddenly generating the most intense and energetic electrons, and therefore the most dangerous conditions for astronauts and satellites,” says Mauk, who did not contribute to the study. “Earth’s space environment serves as a wonderful laboratory for studying the nature of shock acceleration that is occurring elsewhere in the solar system and universe.”


     
  • richardmitnick 9:04 am on February 19, 2015 Permalink | Reply
Tags: MIT News

    From MIT: “New nanogel for drug delivery” 


    MIT News

    February 19, 2015
    Anne Trafton | MIT News Office


    Self-healing gel can be injected into the body and act as a long-term drug depot.

    Scientists are interested in using gels to deliver drugs because they can be molded into specific shapes and designed to release their payload over a specified time period. However, current versions aren’t always practical because they must be implanted surgically.

    To help overcome that obstacle, MIT chemical engineers have designed a new type of self-healing hydrogel that could be injected through a syringe. Such gels, which can carry one or two drugs at a time, could be useful for treating cancer, macular degeneration, or heart disease, among other diseases, the researchers say.

    The new gel consists of a mesh network made of two components: nanoparticles made of polymers entwined within strands of another polymer, such as cellulose.

    “Now you have a gel that can change shape when you apply stress to it, and then, importantly, it can re-heal when you relax those forces. That allows you to squeeze it through a syringe or a needle and get it into the body without surgery,” says Mark Tibbitt, a postdoc at MIT’s Koch Institute for Integrative Cancer Research and one of the lead authors of a paper describing the gel in Nature Communications on Feb. 19.

    Koch Institute postdoc Eric Appel is also a lead author of the paper, and the paper’s senior author is Robert Langer, the David H. Koch Institute Professor at MIT. Other authors are postdoc Matthew Webber, undergraduate Bradley Mattix, and postdoc Omid Veiseh.

    Heal thyself

    Scientists have previously constructed hydrogels for biomedical uses by forming irreversible chemical linkages between polymers. These gels, used to make soft contact lenses, among other applications, are tough and sturdy, but once they are formed their shape cannot easily be altered.

    The MIT team set out to create a gel that could survive strong mechanical forces, known as shear forces, and then reform itself. Other researchers have created such gels by engineering proteins that self-assemble into hydrogels, but this approach requires complex biochemical processes. The MIT team wanted to design something simpler.

    “We’re working with really simple materials,” Tibbitt says. “They don’t require any advanced chemical functionalization.”

    The MIT approach relies on a combination of two readily available components. One is a type of nanoparticle formed of PEG-PLA copolymers, first developed in Langer’s lab decades ago and now commonly used to package and deliver drugs. To form a hydrogel, the researchers mixed these particles with a polymer — in this case, cellulose.

    Each polymer chain forms weak bonds with many nanoparticles, producing a loosely woven lattice of polymers and nanoparticles. Because each attachment point is fairly weak, the bonds break apart under mechanical stress, such as when injected through a syringe. When the shear forces are over, the polymers and nanoparticles form new attachments with different partners, healing the gel.

    Using two components to form the gel also gives the researchers the opportunity to deliver two different drugs at the same time. PEG-PLA nanoparticles have an inner core that is ideally suited to carry hydrophobic small-molecule drugs, which include many chemotherapy drugs. Meanwhile, the polymers, which exist in a watery solution, can carry hydrophilic molecules such as proteins, including antibodies and growth factors.

    Long-term drug delivery

    In this study, the researchers showed that the gels survived injection under the skin of mice and successfully released two drugs, one hydrophobic and one hydrophilic, over several days.

    This type of gel offers an important advantage over injecting a liquid solution of drug-delivery nanoparticles: While a solution will immediately disperse throughout the body, the gel stays in place after injection, allowing the drug to be targeted to a specific tissue. Furthermore, the properties of each gel component can be tuned so the drugs they carry are released at different rates, allowing them to be tailored for different uses.

    The researchers are now looking into using the gel to deliver anti-angiogenesis drugs to treat macular degeneration. Currently, patients receive these drugs, which cut off the growth of blood vessels that interfere with sight, as an injection into the eye once a month. The MIT team envisions that the new gel could be programmed to deliver these drugs over several months, reducing the frequency of injections.

    Another potential application for the gels is delivering drugs, such as growth factors, that could help repair damaged heart tissue after a heart attack. The researchers are also pursuing the possibility of using this gel to deliver cancer drugs to kill tumor cells that get left behind after surgery. In that case, the gel would be loaded with a chemical that lures cancer cells toward the gel, as well as a chemotherapy drug that would kill them. This could help eliminate the residual cancer cells that often form new tumors following surgery.

    “Removing the tumor leaves behind a cavity that you could fill with our material, which would provide some therapeutic benefit over the long term in recruiting and killing those cells,” Appel says. “We can tailor the materials to provide us with the drug-release profile that makes it the most effective at actually recruiting the cells.”

    The research was funded by the Wellcome Trust, the Misrock Foundation, the Department of Defense, and the National Institutes of Health.


     
  • richardmitnick 9:22 pm on February 9, 2015 Permalink | Reply
Tags: MIT News

    From MIT: “Engineered insulin could offer better diabetes control” 


    MIT News

    February 9, 2015
    Anne Trafton | MIT News Office


    For patients with diabetes, insulin is critical to maintaining good health and normal blood-sugar levels. However, it’s not an ideal solution because it can be difficult for patients to determine exactly how much insulin they need to prevent their blood sugar from swinging too high or too low.

    MIT engineers hope to improve treatment for diabetes patients with a new type of engineered insulin. In tests in mice, the researchers showed that their modified insulin can circulate in the bloodstream for at least 10 hours, and that it responds rapidly to changes in blood-sugar levels. This could eliminate the need for patients to repeatedly monitor their blood sugar levels and inject insulin throughout the day.

    “The real challenge is getting the right amount of insulin available when you need it, because if you have too little insulin your blood sugar goes up, and if you have too much, it can go dangerously low,” says Daniel Anderson, the Samuel A. Goldblith Associate Professor in MIT’s Department of Chemical Engineering, and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science. “Currently available insulins act independent of the sugar levels in the patient.”

    Anderson and Robert Langer, the David H. Koch Institute Professor at MIT, are the senior authors of a paper describing the engineered insulin in this week’s Proceedings of the National Academy of Sciences. The paper’s lead authors are Hung-Chieh (Danny) Chou, former postdoc Matthew Webber, and postdoc Benjamin Tang. Other authors are technical assistants Amy Lin and Lavanya Thapa, David Deng, Jonathan Truong, and Abel Cortinas.

    Glucose-responsive insulin

    Patients with Type I diabetes lack insulin, which is normally produced by the pancreas and regulates metabolism by stimulating muscle and fat tissue to absorb glucose from the bloodstream. Insulin injections, which form the backbone of treatment for diabetes patients, can be deployed in different ways. Some people take a modified form called long-acting insulin, which stays in the bloodstream for up to 24 hours, to ensure there is always some present when needed. Other patients calculate how much they should inject based on how many calories they consume or how much sugar is present in their blood.
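The dose arithmetic described above is commonly expressed with an insulin-to-carbohydrate ratio plus a correction (sensitivity) factor. The sketch below uses invented example numbers purely for illustration; it is not medical guidance, and real dosing is individualized:

```python
def bolus_units(carbs_g, bg_mgdl, target_mgdl, carb_ratio, sensitivity):
    """Estimate an insulin bolus: carbs covered plus a correction toward target.

    carb_ratio  -- grams of carbohydrate covered by 1 unit of insulin
    sensitivity -- mg/dL drop in blood glucose per 1 unit of insulin
    All parameter values used here are hypothetical.
    """
    meal_dose = carbs_g / carb_ratio
    correction = max(0.0, (bg_mgdl - target_mgdl) / sensitivity)
    return meal_dose + correction

# Hypothetical patient: 60 g of carbs, glucose at 180 mg/dL, target 120 mg/dL
print(bolus_units(60, 180, 120, carb_ratio=10, sensitivity=50))  # → 7.2
```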

    The MIT team set out to create a new form of insulin that would not only circulate for a long time, but would be activated only when needed — that is, when blood-sugar levels are too high. This would prevent patients’ blood-sugar levels from becoming dangerously low, a condition known as hypoglycemia that can lead to shock and even death.

    To create this glucose-responsive insulin, the researchers first added a hydrophobic molecule called an aliphatic domain, which is a long chain of fatty molecules dangling from the insulin molecule. This helps the insulin circulate in the bloodstream longer, although the researchers do not yet know exactly why that is. One theory is that the fatty tail may bind to albumin, a protein found in the bloodstream, sequestering the insulin and preventing it from latching onto sugar molecules.

    The researchers also attached a chemical group called PBA, which can reversibly bind to glucose. When blood-glucose levels are high, the sugar binds to insulin and activates it, allowing the insulin to stimulate cells to absorb the excess sugar.
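Reversible binding of this kind is usually summarized by a dissociation constant: the fraction of binding sites occupied rises with glucose concentration. The constant below is a made-up placeholder, not a measured value for PBA, so the sketch shows only the qualitative behavior:

```python
def fraction_bound(glucose_mM, kd_mM):
    """Equilibrium fraction of sites occupied, for simple 1:1 reversible binding."""
    return glucose_mM / (kd_mM + glucose_mM)

KD = 10.0  # hypothetical dissociation constant, in mM
for g in (5.0, 10.0, 20.0):  # low, mid, and high glucose concentrations
    print(f"{g:5} mM glucose -> {fraction_bound(g, KD):.2f} of sites bound")
```

The point is simply that occupancy, and hence activation, tracks glucose: more sugar in the blood means more bound, active insulin.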

    The research team created four variants of the engineered molecule, each of which contained a PBA molecule with a different chemical modification, such as an atom of fluorine or nitrogen. They then tested these variants, along with regular insulin and long-acting insulin, in mice engineered to have an insulin deficiency.

    To compare each type of insulin, the researchers measured how the mice’s blood-sugar levels responded to surges of glucose every few hours for 10 hours. They found that the engineered insulin containing PBA with fluorine worked the best: Mice that received that form of insulin showed the fastest response to blood-glucose spikes.

    “The modified insulin was able to give more appropriate control of blood sugar than the unmodified insulin or the long-acting insulin,” Anderson says.

    The new molecule represents a significant conceptual advance that could help scientists realize the decades-old goal of better controlling diabetes with a glucose-responsive insulin, says Michael Weiss, a professor of biochemistry and medicine at Case Western Reserve University.

    “It would be a breathtaking advance in diabetes treatment if the Anderson/Langer technology could accomplish the translation of this idea into a routine treatment of diabetes,” says Weiss, who was not part of the research team.

    New alternative

    Giving this type of insulin once a day instead of long-acting insulin could offer patients a better alternative that reduces their blood-sugar swings, which can cause health problems when they continue for years and decades, Anderson says. The researchers now plan to test this type of insulin in other animal models and are also working on tweaking the chemical composition of the insulin to make it even more responsive to blood-glucose levels.

    “We’re continuing to think about how we might further tune this to give improved performance so it’s even safer and more efficacious,” Anderson says.

    The research was funded by the Leona M. and Harry B. Helmsley Charitable Trust, the Tayebati Family Foundation, the National Institutes of Health, and the Juvenile Diabetes Research Foundation.


     
  • richardmitnick 10:40 am on January 27, 2015 Permalink | Reply
Tags: MIT News

    From MIT: “Biology, driven by data” 


    MIT News

    January 27, 2015
    Anne Trafton | MIT News Office

    Ernest Fraenkel (No image credit)

    Cells are incredibly complicated machines with thousands of interacting parts — and disruptions to any of those interactions can cause disease.

    Tracing those connections to seek the root cause of disease is a daunting task, but it is one that MIT biological engineer Ernest Fraenkel relishes. His lab takes a systematic approach to the problem: By comparing datasets that include thousands of events inside healthy and diseased cells, they can try to figure out what has gone awry in cells that are not functioning properly.

    “The central challenge of this field is how you take all those different kinds of data to get a coherent picture of what’s going on in a cell, what is wrong in a diseased cell, and how you might fix it,” says Fraenkel, an associate professor of biological engineering.

    This type of computational modeling of biological interactions, known as systems biology, can help to reveal possible new drug targets that might not emerge through more traditional biological studies. Using this approach, Fraenkel has deciphered some key interactions that underlie Huntington’s disease as well as glioblastoma, an incurable type of brain cancer.

    Science without borders

    As a high-school student in New York City, Fraenkel had broad interests, and participated in a special program where physics, chemistry, and biology were taught together. The program’s teacher, a Columbia University student, suggested that Fraenkel do some summer research at a lab at Columbia. The lab was run by Cyrus Levinthal, a physicist who had previously taught one of the first biophysics classes at MIT.

    “He had this cool lab where they were doing image analysis of neurons, and modeling proteins, and doing experiments. I just thought it was fantastic. That’s when I decided I wanted to go into science,” Fraenkel recalls.

    He enjoyed the lab so much that he dropped out of high school and started working there full time, while also taking a few classes at Columbia. After earning a high-school equivalency degree, Fraenkel went to Harvard University to study chemistry and physics, then earned his PhD in biology from MIT. As in high school, he was drawn to all of the sciences, and enjoyed pursuing knowledge from all angles, ignoring the traditional boundaries between fields.

    “My early experience was that they were all deeply connected,” Fraenkel says.

    As a graduate student, he studied structural biology, which uses tools such as X-ray crystallography to understand biological molecules. “What drew me to the field was really the fact that it was very data-rich in a way that biology, at the time, was not,” Fraenkel says.

    However, that was about to change: While Fraenkel was doing a postdoctoral fellowship in structural biology at Harvard, new techniques — such as genome sequencing and measurement of RNA levels inside cells — were generating huge amounts of information. Helping to crunch those numbers seemed an enticing prospect.

    “As I was finishing up my postdoc I was realizing more and more that I wanted to study biology at a more general level,” Fraenkel says. “I really wanted to find out whether there was a more systematic way of trying to understand biology.”

    After leaving Harvard, he became a Whitehead Fellow, allowing him to set up his own lab at the Whitehead Institute and pursue his new interest in systems biology. From there, he joined MIT’s Department of Biological Engineering, which had just been formed.

    Network analysis

    Now, Fraenkel’s lab analyzes vast amounts of data, including not only genomic data but also measurements of proteins and other molecules found in cells. For each set of cells, healthy or diseased, he tries to devise models that could explain what is producing the data. “One way to think about it is a map of a city where these proteins or genes are lighting up different things, and you have to figure out what the wiring is underneath that’s got them talking to each other,” he says.

    To do that, his team uses algorithms they have developed themselves or adapted from network analysis strategies used to analyze the Internet. In the biological networks that Fraenkel studies, connections form between nodes representing a protein, gene, or other small molecule. Nodes that differ between diseased and healthy cells light up in a different color. Ideally, just a few such nodes would light up, but this is usually not the case, Fraenkel says. Instead, you end up with a wiring diagram with color all over the place.

    “We lovingly call those things ‘hairballs,’” he says. “You get these giant hairball diagrams which really haven’t made the problem any easier — in fact, they’ve made it harder. So our algorithms go into that hairball and try to figure out which piece of it is most relevant to the disease, by weighing the probability of different kinds of events being disease-relevant.”

    Those algorithms filter out the irrelevant information, or noise, and zoom in on the pieces of the network that seem to be the most likely to be related to the disease in question. Then, the researchers do experiments in living cells or animals to test the models generated by the algorithms.

    Using this approach, Fraenkel has developed model networks for Huntington’s disease and glioblastoma. Such studies have revealed interactions that might never have been otherwise identified: For example, blocking estrogen can help prevent the growth of glioblastoma cells.

    “The fundamental thing we’re trying to do is take an unbiased view of the biology,” Fraenkel says. “We’re going to look everywhere. We’ll let the data tell us which processes are important and which ones are not.”


     
  • richardmitnick 2:29 pm on January 23, 2015 Permalink | Reply
Tags: MIT News

    From MIT: “Particles accelerate without a push” 


    MIT News

    January 20, 2015
    David L. Chandler | MIT News Office

    This image shows the spatial distribution of charge for an accelerating wave packet, representing an electron, as calculated by this team’s approach. Brightest colors represent the highest charge levels. The self-acceleration of a particle predicted by this work is indistinguishable from acceleration that would be produced by a conventional electromagnetic field. Courtesy of the researchers.

    New analysis shows a way to self-propel subatomic particles, extend the lifetime of unstable isotopes.

    Some physical principles have been considered immutable since the time of Sir Isaac Newton: Light always travels in straight lines. No physical object can change its speed unless some outside force acts on it.

    Not so fast, says a new generation of physicists: While the underlying physical laws haven’t changed, new ways of “tricking” those laws to permit seemingly impossible actions have begun to appear. For example, work that began in 2007 proved that under special conditions, light could be made to move along a curved trajectory — a finding that is already beginning to find some practical applications.

    Now, in a new variation on the methods used to bend light, physicists at MIT and Israel’s Technion have found that subatomic particles can be induced to speed up all by themselves, almost to the speed of light, without the application of any external forces. The same underlying principle could also be used to extend the lifetime of some unstable isotopes, perhaps opening up new avenues of research in basic particle physics.

    The findings, based on a theoretical analysis, were published in the journal Nature Physics by MIT postdoc Ido Kaminer and four colleagues at the Technion.

    The new findings are based on a novel set of solutions to the Dirac equation, which describes the relativistic behavior of fundamental particles, such as electrons, in terms of a wave structure. (In quantum mechanics, waves and particles are considered two aspects of the same physical phenomenon.) By manipulating the wave structure, the team found, it should be possible to cause electrons to behave in unusual and counterintuitive ways.
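For reference, the Dirac equation in its standard covariant form is

```latex
\left( i\hbar \gamma^{\mu} \partial_{\mu} - mc \right) \psi = 0
```

where \(\psi\) is the particle’s four-component wave function, \(m\) its rest mass, and \(\gamma^{\mu}\) the Dirac matrices; the wave packets discussed here are particular shape-preserving solutions of this equation.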

    Unexpected behavior

    This manipulation of waves could be accomplished using specially engineered phase masks — similar to those used to create holograms, but at a much smaller scale. Once created, the particles “self-accelerate,” the researchers say, in a way that is indistinguishable from how they would behave if propelled by an electromagnetic field.

    “The electron is gaining speed, getting faster and faster,” Kaminer says. “It looks impossible. You don’t expect physics to allow this to happen.”

    It turns out that this self-acceleration does not actually violate any physical laws — such as the conservation of momentum — because at the same time the particle is accelerating, it is also spreading out spatially in the opposite direction.

    “The electron’s wave packet is not just accelerating, it’s also expanding,” Kaminer says, “so there is some part of it that compensates. It’s referred to as the tail of the wave packet, and it will go backward, so the total momentum will be conserved. There is another part of the wave packet that is paying the price for the main part’s acceleration.”

    It turns out, according to further analysis, that this self-acceleration produces effects that are associated with relativity theory: It is a variation on the dilation of time and contraction of space, effects predicted by Albert Einstein to take place when objects move close to the speed of light. An example of this is Einstein’s famous twin paradox, in which a twin who travels at high speed in a rocket ages more slowly than another twin who remains on Earth.

    Extending lifetimes

    In this case, the time dilation could be applied to subatomic particles that naturally decay and have very short lifetimes — causing these particles to last much longer than they ordinarily would.

    This could make it easier to study such particles by causing them to stay around longer, Kaminer suggests. “Maybe you could measure effects in particle physics that you couldn’t do otherwise,” he says.

    In addition, it might induce differences in the behavior of those particles that might reveal new, unexpected aspects of physics. “You could get different properties — not just for electrons, but for other particles as well,” Kaminer says.

    Now that these effects have been predicted based on theoretical calculations, Kaminer says it should be possible to demonstrate the phenomenon in laboratory experiments. He is beginning work with MIT physics professor Marin Soljačić on the design of such experiments.

    The experiments would make use of an electron microscope fitted with a specially designed phase mask that would produce 1,000 times higher resolution than those used for holography. “It’s the most exact way known today to affect the field of the electron,” Kaminer says.

    While this is such early-stage work that it’s hard to predict what practical applications it might eventually have, Kaminer says this unusual way of accelerating electrons might prove to have practical uses, such as for medical imaging.

    “Research on self-accelerating and shape-preserving beams became very active in recent years, with demonstration of different types of optical, plasmonic, and electron beams, and study of their propagation in different media,” says Ady Arie, a professor of electrical engineering at Tel Aviv University who was not involved in this research. “The authors derive shape-preserving solutions for the Dirac equation that describe the wave propagation of relativistic particles, which were not taken into account in most of the previous works.”

    Arie adds, “Perhaps the most interesting result is the use of these particles to demonstrate the analog of the famous twin paradox of special relativity: The authors show that time dilation occurs between a self-accelerating particle that propagates along a curved trajectory and its ‘twin’ particle that remains at rest.”

    In addition to Kaminer, who was the paper’s lead author, the research team included Jonathan Nemirovsky, Michael Rechtsman, Rivka Bekenstein, and Mordecai Segev, all of the Technion. The work was supported by the Israeli Center of Research Excellence, the U.S.-Israel Binational Science Foundation, and a Marie Curie grant from the European Commission.


     
  • richardmitnick 1:50 pm on January 19, 2015 Permalink | Reply
Tags: MIT News

    From MIT: “New fibers can deliver many simultaneous stimuli” 


    MIT News

    January 19, 2015
    David L. Chandler | MIT News Office

    Christina Tringides, a senior at MIT and member of the research team, holds a sample of the multifunction fiber produced using the group’s new methodology. Photo: Melanie Gonick/MIT


    MIT researchers discuss their novel implantable device that can deliver optical signals and drugs to the brain, without harming the brain tissue. Video: Melanie Gonick/MIT

    The human brain’s complexity makes it extremely challenging to study — not only because of its sheer size, but also because of the variety of signaling methods it uses simultaneously. Conventional neural probes are designed to record a single type of signaling, limiting the information that can be derived from the brain at any point in time. Now researchers at MIT may have found a way to change that.

    By producing complex multimodal fibers that could be less than the width of a hair, they have created a system that could deliver optical signals and drugs directly into the brain, along with simultaneous electrical readout to continuously monitor the effects of the various inputs. The new technology is described in a paper appearing in the journal Nature Biotechnology, written by MIT’s Polina Anikeeva and 10 others. An earlier paper by the team described the use of similar technology in spinal cord research.

    In addition to transmitting different kinds of signals, the new fibers are made of polymers that closely resemble the characteristics of neural tissues, Anikeeva says, allowing them to stay in the body much longer without harming the delicate tissues around them.

    “We’re building neural interfaces that will interact with tissues in a more organic way than devices that have been used previously,” says Anikeeva, an assistant professor of materials science and engineering. To do that, her team made use of novel fiber-fabrication technology pioneered by MIT professor of materials science (and paper co-author) Yoel Fink and his team, for use in photonics and other applications.

    Flexible fiber-based probes

    The result, Anikeeva explains, is the fabrication of polymer fibers “that are soft and flexible and look more like natural nerves.” Devices currently used for neural recording and stimulation, she says, are made of metals, semiconductors, and glass, and can damage nearby tissues during ordinary movement.

    “It’s a big problem in neural prosthetics,” Anikeeva says. “They are so stiff, so sharp — when you take a step and the brain moves with respect to the device, you end up scrambling the tissue.”

    The key to the technology is making a larger-scale version, called a preform, of the desired arrangement of channels within the fiber: optical waveguides to carry light, hollow tubes to carry drugs, and conductive electrodes to carry electrical signals. These polymer templates, which can have dimensions on the scale of inches, are then heated until they become soft, and drawn into a thin fiber, while retaining the exact arrangement of features within them.

    A single draw of the fiber reduces the cross-section of the material 200-fold, and the process can be repeated, making the fibers thinner each time and approaching nanometer scale. During this process, Anikeeva says, “Features that used to be inches across are now microns.”
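    The scaling described above can be illustrated with a quick back-of-the-envelope sketch. The 200-fold per-draw reduction is the only figure taken from the article; the starting feature size and number of draws below are hypothetical, chosen to show how inch-scale features shrink toward the micron and sub-micron range:

    ```python
    # Illustrative sketch of geometric feature scaling across repeated thermal
    # draws. The 200-fold per-draw reduction comes from the article; the
    # initial feature size and draw count are hypothetical examples.

    def feature_size_after_draws(initial_m: float, draws: int,
                                 reduction: float = 200.0) -> float:
        """Linear feature size (in meters) after `draws` successive draws,
        each shrinking the cross-section `reduction`-fold."""
        return initial_m / reduction ** draws

    initial = 0.0254  # a hypothetical 1-inch feature in the preform, in meters

    one_draw = feature_size_after_draws(initial, 1)   # ~127 micrometers
    two_draws = feature_size_after_draws(initial, 2)  # ~635 nanometers

    print(f"after 1 draw: {one_draw * 1e6:.0f} um")
    print(f"after 2 draws: {two_draws * 1e9:.0f} nm")
    ```

    Each draw divides linear dimensions by the same factor, so feature size falls geometrically with the number of draws, consistent with features going from inches to microns and, with repetition, approaching the nanometer scale.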

    Combining the different channels in a single fiber, she adds, could enable precision mapping of neural activity, and ultimately treatment of neurological disorders, that would not be possible with single-function neural probes. For example, light could be transmitted through the optical channels to enable optogenetic neural stimulation, the effects of which could then be monitored with embedded electrodes. At the same time, one or more drugs could be injected into the brain through the hollow channels, while electrical signals in the neurons are recorded to determine, in real time, exactly what effect the drugs are having.

    Customizable toolkit for neural engineering

    The system can be tailored for a specific research or therapeutic application by creating the exact combination of channels needed for that task. “You can have a really broad palette of devices,” Anikeeva says.

    While a single preform a few inches long can produce hundreds of feet of fiber, the materials must be carefully selected so they all soften at the same temperature. The fibers could ultimately be used for precision mapping of the responses of different regions of the brain or spinal cord, Anikeeva says, and may also lead to long-lasting devices for treatment of conditions such as Parkinson’s disease.

    John Rogers, a professor of materials science and engineering and of chemistry at the University of Illinois at Urbana-Champaign who was not involved in this research, says, “These authors describe a fascinating, diverse collection of multifunctional fibers, tailored for insertion into the brain where they can stimulate and record neural behaviors through electrical, optical, and fluidic means. The results significantly expand the toolkit of techniques that will be essential to our development of a basic understanding of brain function.”

    In addition to Anikeeva and Fink, the work was carried out by Andres Canales, Xiaoting Jia, Ulrich Froriep, Ryan Koppes, Christina Tringides, Jennifer Selvidge, Chi Lu, Chong Hou, and Lei Wei, all of MIT. The work was supported by the National Science Foundation, the Center for Materials Science and Engineering, the Center for Sensorimotor Neural Engineering, the McGovern Institute for Brain Research, the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, and the Simons Foundation.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

     