Tagged: MIT News

  • richardmitnick 3:17 pm on February 20, 2015 Permalink | Reply
    Tags: MIT News, Solar shockwaves

    From MIT: “For the first time, spacecraft catch a solar shockwave in the act” 

    MIT News

    February 19, 2015
    Jennifer Chu | MIT News Office

    Solar storm found to produce “ultrarelativistic, killer electrons” in 60 seconds.

    Earth’s magnetosphere is depicted with the high-energy particles of the Van Allen radiation belts (shown in red) and various processes responsible for accelerating these particles to relativistic energies indicated. The effects of an interplanetary shock penetrate deep into this system, energizing electrons to ultra-relativistic energies in a matter of seconds. Courtesy of NASA

    On Oct. 8, 2013, an explosion on the sun’s surface sent a supersonic blast wave of solar wind out into space. This shockwave tore past Mercury and Venus, blitzing by the moon before streaming toward Earth. The shockwave struck a massive blow to the Earth’s magnetic field, setting off a magnetized sound pulse around the planet.

    NASA’s Van Allen Probes, twin spacecraft orbiting within the radiation belts deep inside the Earth’s magnetic field, captured the effects of the solar shockwave just before and after it struck.

    NASA Van Allen Probes

    Now scientists at MIT’s Haystack Observatory, the University of Colorado, and elsewhere have analyzed the probes’ data, and observed a sudden and dramatic effect in the shockwave’s aftermath: The resulting magnetosonic pulse, lasting just 60 seconds, reverberated through the Earth’s radiation belts, accelerating certain particles to ultrahigh energies.

    Haystack Observatory

    “These are very lightweight particles, but they are ultrarelativistic, killer electrons — electrons that can go right through a satellite,” says John Foster, associate director of MIT’s Haystack Observatory. “These particles are accelerated, and their number goes up by a factor of 10, in just one minute. We were able to see this entire process taking place, and it’s exciting: We see something that, in terms of the radiation belt, is really quick.”

    The findings represent the first time the effects of a solar shockwave on Earth’s radiation belts have been observed in detail from beginning to end. Foster and his colleagues have published their results in the Journal of Geophysical Research.

    Catching a shockwave in the act

    Since August 2012, the Van Allen Probes have been orbiting within the Van Allen radiation belts. The probes’ mission is to help characterize the extreme environment within the radiation belts, so as to design more resilient spacecraft and satellites.

    One question the mission seeks to answer is how the radiation belts give rise to ultrarelativistic electrons — particles that streak around the Earth at 1,000 kilometers per second, circling the planet in just five minutes. These high-speed particles can bombard satellites and spacecraft, causing irreparable damage to onboard electronics.
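A quick back-of-the-envelope check (the arithmetic here is ours, not the article's) shows the two figures quoted above are mutually consistent: a particle drifting at 1,000 kilometers per second that laps the planet in five minutes traces a circle of roughly 7.5 Earth radii, squarely in the outer radiation belt.

```python
import math

EARTH_RADIUS_KM = 6371.0

# Figures quoted above: drift speed ~1,000 km/s, one lap in ~5 minutes.
speed_km_s = 1_000.0
period_s = 5 * 60

circumference_km = speed_km_s * period_s           # ~300,000 km per lap
orbit_radius_km = circumference_km / (2 * math.pi)
radius_in_earth_radii = orbit_radius_km / EARTH_RADIUS_KM

print(f"orbit radius = {orbit_radius_km:,.0f} km = {radius_in_earth_radii:.1f} Earth radii")
```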

    The two Van Allen probes maintain the same orbit around the Earth, with one probe following an hour behind the other. On Oct. 8, 2013, the first probe was in just the right position, facing the sun, to observe the radiation belts just before the shockwave struck the Earth’s magnetic field. The second probe, catching up to the same position an hour later, recorded the shockwave’s aftermath.

    Dealing a “sledgehammer blow”

    Foster and his colleagues analyzed the probes’ data, and laid out the following sequence of events: As the solar shockwave made impact, according to Foster, it struck “a sledgehammer blow” to the protective barrier of the Earth’s magnetic field. But instead of breaking through this barrier, the shockwave effectively bounced away, generating a wave in the opposite direction, in the form of a magnetosonic pulse — a powerful, magnetized sound wave that propagated to the far side of the Earth within a matter of minutes.

    In that time, the researchers observed that the magnetosonic pulse swept up certain lower-energy particles. The electric field within the pulse accelerated these particles to energies of 3 to 4 million electronvolts, creating 10 times the number of ultrarelativistic electrons that previously existed.

    Taking a closer look at the data, the researchers were able to identify the mechanism by which certain particles in the radiation belts were accelerated. As it turns out, if particles’ velocities as they circle the Earth match that of the magnetosonic pulse, they are deemed “drift resonant,” and are more likely to gain energy from the pulse as it speeds through the radiation belts. The longer a particle interacts with the pulse, the more it is accelerated, giving rise to an extremely high-energy particle.
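The drift-resonance idea can be illustrated with a toy calculation (a generic sketch of the mechanism, not the paper's model): a particle whose drift frequency matches the pulse's sees an essentially steady accelerating field and gains energy continuously, while a mismatched particle sees a field that averages out to almost nothing.

```python
import math

def energy_gain(omega_drift, omega_pulse=1.0, field=1.0, t_max=100.0, dt=0.01):
    """Toy model: the accelerating field a drifting particle sees oscillates
    at the difference frequency; on resonance that field is constant."""
    gain, t = 0.0, 0.0
    while t < t_max:
        gain += field * math.cos((omega_pulse - omega_drift) * t) * dt
        t += dt
    return gain

resonant = energy_gain(omega_drift=1.0)   # matched speeds: gain grows steadily
detuned  = energy_gain(omega_drift=1.3)   # mismatched: gain mostly cancels out
print(resonant, detuned)
```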

    Foster says solar shockwaves can impact Earth’s radiation belts a couple of times each month. The event in 2013 was a relatively minor one.

    “This was a relatively small shock. We know they can be much, much bigger,” Foster says. “Interactions between solar activity and Earth’s magnetosphere can create the radiation belt in a number of ways, some of which can take months, others days. The shock process takes seconds to minutes. This could be the tip of the iceberg in how we understand radiation-belt physics.”

    Barry Mauk, a project scientist at Johns Hopkins University’s Applied Physics Laboratory, views the group’s findings as “the most comprehensive analysis of shock-induced acceleration within Earth’s space environment ever achieved.”

    “Significant shock-induced acceleration of Earth’s radiation belts occur only occasionally, but these events are important because they have the potential of suddenly generating the most intense and energetic electrons, and therefore the most dangerous conditions for astronauts and satellites,” says Mauk, who did not contribute to the study. “Earth’s space environment serves as a wonderful laboratory for studying the nature of shock acceleration that is occurring elsewhere in the solar system and universe.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    STEM Education Coalition

  • richardmitnick 9:04 am on February 19, 2015 Permalink | Reply
    Tags: MIT News

    From MIT: “New nanogel for drug delivery” 

    MIT News

    February 19, 2015
    Anne Trafton | MIT News Office


    Self-healing gel can be injected into the body and act as a long-term drug depot.

    Scientists are interested in using gels to deliver drugs because they can be molded into specific shapes and designed to release their payload over a specified time period. However, current versions aren’t always practical because they must be implanted surgically.

    To help overcome that obstacle, MIT chemical engineers have designed a new type of self-healing hydrogel that could be injected through a syringe. Such gels, which can carry one or two drugs at a time, could be useful for treating cancer, macular degeneration, or heart disease, among other diseases, the researchers say.

    The new gel consists of a mesh network made of two components: nanoparticles made of polymers entwined within strands of another polymer, such as cellulose.

    “Now you have a gel that can change shape when you apply stress to it, and then, importantly, it can re-heal when you relax those forces. That allows you to squeeze it through a syringe or a needle and get it into the body without surgery,” says Mark Tibbitt, a postdoc at MIT’s Koch Institute for Integrative Cancer Research and one of the lead authors of a paper describing the gel in Nature Communications on Feb. 19.

    Koch Institute postdoc Eric Appel is also a lead author of the paper, and the paper’s senior author is Robert Langer, the David H. Koch Institute Professor at MIT. Other authors are postdoc Matthew Webber, undergraduate Bradley Mattix, and postdoc Omid Veiseh.

    Heal thyself

    Scientists have previously constructed hydrogels for biomedical uses by forming irreversible chemical linkages between polymers. These gels, used to make soft contact lenses, among other applications, are tough and sturdy, but once they are formed their shape cannot easily be altered.

    The MIT team set out to create a gel that could survive strong mechanical forces, known as shear forces, and then reform itself. Other researchers have created such gels by engineering proteins that self-assemble into hydrogels, but this approach requires complex biochemical processes. The MIT team wanted to design something simpler.

    “We’re working with really simple materials,” Tibbitt says. “They don’t require any advanced chemical functionalization.”

    The MIT approach relies on a combination of two readily available components. One is a type of nanoparticle formed of PEG-PLA copolymers, first developed in Langer’s lab decades ago and now commonly used to package and deliver drugs. To form a hydrogel, the researchers mixed these particles with a polymer — in this case, cellulose.

    Each polymer chain forms weak bonds with many nanoparticles, producing a loosely woven lattice of polymers and nanoparticles. Because each attachment point is fairly weak, the bonds break apart under mechanical stress, such as when injected through a syringe. When the shear forces are over, the polymers and nanoparticles form new attachments with different partners, healing the gel.

    Using two components to form the gel also gives the researchers the opportunity to deliver two different drugs at the same time. PEG-PLA nanoparticles have an inner core that is ideally suited to carry hydrophobic small-molecule drugs, which include many chemotherapy drugs. Meanwhile, the polymers, which exist in a watery solution, can carry hydrophilic molecules such as proteins, including antibodies and growth factors.

    Long-term drug delivery

    In this study, the researchers showed that the gels survived injection under the skin of mice and successfully released two drugs, one hydrophobic and one hydrophilic, over several days.

    This type of gel offers an important advantage over injecting a liquid solution of drug-delivery nanoparticles: While a solution will immediately disperse throughout the body, the gel stays in place after injection, allowing the drug to be targeted to a specific tissue. Furthermore, the properties of each gel component can be tuned so the drugs they carry are released at different rates, allowing them to be tailored for different uses.

    The researchers are now looking into using the gel to deliver anti-angiogenesis drugs to treat macular degeneration. Currently, patients receive these drugs, which cut off the growth of blood vessels that interfere with sight, as an injection into the eye once a month. The MIT team envisions that the new gel could be programmed to deliver these drugs over several months, reducing the frequency of injections.

    Another potential application for the gels is delivering drugs, such as growth factors, that could help repair damaged heart tissue after a heart attack. The researchers are also pursuing the possibility of using this gel to deliver cancer drugs to kill tumor cells that get left behind after surgery. In that case, the gel would be loaded with a chemical that lures cancer cells toward the gel, as well as a chemotherapy drug that would kill them. This could help eliminate the residual cancer cells that often form new tumors following surgery.

    “Removing the tumor leaves behind a cavity that you could fill with our material, which would provide some therapeutic benefit over the long term in recruiting and killing those cells,” Appel says. “We can tailor the materials to provide us with the drug-release profile that makes it the most effective at actually recruiting the cells.”

    The research was funded by the Wellcome Trust, the Misrock Foundation, the Department of Defense, and the National Institutes of Health.

    See the full article here.


  • richardmitnick 9:22 pm on February 9, 2015 Permalink | Reply
    Tags: MIT News

    From MIT: “Engineered insulin could offer better diabetes control” 

    MIT News

    February 9, 2015
    Anne Trafton | MIT News Office


    For patients with diabetes, insulin is critical to maintaining good health and normal blood-sugar levels. However, it’s not an ideal solution because it can be difficult for patients to determine exactly how much insulin they need to prevent their blood sugar from swinging too high or too low.

    MIT engineers hope to improve treatment for diabetes patients with a new type of engineered insulin. In tests in mice, the researchers showed that their modified insulin can circulate in the bloodstream for at least 10 hours, and that it responds rapidly to changes in blood-sugar levels. This could eliminate the need for patients to repeatedly monitor their blood sugar levels and inject insulin throughout the day.

    “The real challenge is getting the right amount of insulin available when you need it, because if you have too little insulin your blood sugar goes up, and if you have too much, it can go dangerously low,” says Daniel Anderson, the Samuel A. Goldblith Associate Professor in MIT’s Department of Chemical Engineering, and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science. “Currently available insulins act independent of the sugar levels in the patient.”

    Anderson and Robert Langer, the David H. Koch Institute Professor at MIT, are the senior authors of a paper describing the engineered insulin in this week’s Proceedings of the National Academy of Sciences. The paper’s lead authors are Hung-Chieh (Danny) Chou, former postdoc Matthew Webber, and postdoc Benjamin Tang. Other authors are technical assistants Amy Lin and Lavanya Thapa, David Deng, Jonathan Truong, and Abel Cortinas.

    Glucose-responsive insulin

    Patients with Type I diabetes lack insulin, which is normally produced by the pancreas and regulates metabolism by stimulating muscle and fat tissue to absorb glucose from the bloodstream. Insulin injections, which form the backbone of treatment for diabetes patients, can be deployed in different ways. Some people take a modified form called long-acting insulin, which stays in the bloodstream for up to 24 hours, to ensure there is always some present when needed. Other patients calculate how much they should inject based on how many calories they consume or how much sugar is present in their blood.

    The MIT team set out to create a new form of insulin that would not only circulate for a long time, but would be activated only when needed — that is, when blood-sugar levels are too high. This would prevent patients’ blood-sugar levels from becoming dangerously low, a condition known as hypoglycemia that can lead to shock and even death.

    To create this glucose-responsive insulin, the researchers first added a hydrophobic molecule called an aliphatic domain, which is a long chain of fatty molecules dangling from the insulin molecule. This helps the insulin circulate in the bloodstream longer, although the researchers do not yet know exactly why that is. One theory is that the fatty tail may bind to albumin, a protein found in the bloodstream, sequestering the insulin and preventing it from latching onto sugar molecules.

    The researchers also attached a chemical group called PBA, which can reversibly bind to glucose. When blood-glucose levels are high, the sugar binds to insulin and activates it, allowing the insulin to stimulate cells to absorb the excess sugar.
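The reversible PBA-glucose binding described above behaves like a standard one-site binding equilibrium, in which the fraction of occupied sites rises with glucose concentration. The sketch below uses an illustrative dissociation constant (Kd), not a value reported by the researchers.

```python
def fraction_bound(glucose_mM, kd_mM=10.0):
    """One-site reversible-binding isotherm: f = [G] / (Kd + [G]).
    The Kd here is purely illustrative, not a measured value."""
    return glucose_mM / (kd_mM + glucose_mM)

for g in (4.0, 8.0, 20.0):  # roughly normal, high, and very high blood glucose
    print(f"{g:5.1f} mM glucose -> {fraction_bound(g):.0%} of PBA sites occupied")
```

The key property is the reversibility: as glucose falls back toward normal, sugar unbinds and the insulin's activity drops with it.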

    The research team created four variants of the engineered molecule, each of which contained a PBA molecule with a different chemical modification, such as an atom of fluorine or nitrogen. They then tested these variants, along with regular insulin and long-acting insulin, in mice engineered to have an insulin deficiency.

    To compare each type of insulin, the researchers measured how the mice’s blood-sugar levels responded to surges of glucose every few hours for 10 hours. They found that the engineered insulin containing PBA with fluorine worked the best: Mice that received that form of insulin showed the fastest response to blood-glucose spikes.

    “The modified insulin was able to give more appropriate control of blood sugar than the unmodified insulin or the long-acting insulin,” Anderson says.

    The new molecule represents a significant conceptual advance that could help scientists realize the decades-old goal of better controlling diabetes with a glucose-responsive insulin, says Michael Weiss, a professor of biochemistry and medicine at Case Western Reserve University.

    “It would be a breathtaking advance in diabetes treatment if the Anderson/Langer technology could accomplish the translation of this idea into a routine treatment of diabetes,” says Weiss, who was not part of the research team.

    New alternative

    Giving this type of insulin once a day instead of long-acting insulin could offer patients a better alternative that reduces their blood-sugar swings, which can cause health problems when they continue for years and decades, Anderson says. The researchers now plan to test this type of insulin in other animal models and are also working on tweaking the chemical composition of the insulin to make it even more responsive to blood-glucose levels.

    “We’re continuing to think about how we might further tune this to give improved performance so it’s even safer and more efficacious,” Anderson says.

    The research was funded by the Leona M. and Harry B. Helmsley Charitable Trust, the Tayebati Family Foundation, the National Institutes of Health, and the Juvenile Diabetes Research Foundation.

    See the full article here.


  • richardmitnick 10:40 am on January 27, 2015 Permalink | Reply
    Tags: MIT News

    From MIT: “Biology, driven by data” 

    MIT News

    January 27, 2015
    Anne Trafton | MIT News Office

    Ernest Fraenkel

    Cells are incredibly complicated machines with thousands of interacting parts — and disruptions to any of those interactions can cause disease.

    Tracing those connections to seek the root cause of disease is a daunting task, but it is one that MIT biological engineer Ernest Fraenkel relishes. His lab takes a systematic approach to the problem: By comparing datasets that include thousands of events inside healthy and diseased cells, they can try to figure out what has gone awry in cells that are not functioning properly.

    “The central challenge of this field is how you take all those different kinds of data to get a coherent picture of what’s going on in a cell, what is wrong in a diseased cell, and how you might fix it,” says Fraenkel, an associate professor of biological engineering.

    This type of computational modeling of biological interactions, known as systems biology, can help to reveal possible new drug targets that might not emerge through more traditional biological studies. Using this approach, Fraenkel has deciphered some key interactions that underlie Huntington’s disease as well as glioblastoma, an incurable type of brain cancer.

    Science without borders

    As a high-school student in New York City, Fraenkel had broad interests, and participated in a special program where physics, chemistry, and biology were taught together. The program’s teacher, a Columbia University student, suggested that Fraenkel do some summer research at a lab at Columbia. The lab was run by Cyrus Levinthal, a physicist who had previously taught one of the first biophysics classes at MIT.

    “He had this cool lab where they were doing image analysis of neurons, and modeling proteins, and doing experiments. I just thought it was fantastic. That’s when I decided I wanted to go into science,” Fraenkel recalls.

    He enjoyed the lab so much that he dropped out of high school and started working there full time, while also taking a few classes at Columbia. After earning a high-school equivalency degree, Fraenkel went to Harvard University to study chemistry and physics, then earned his PhD in biology from MIT. As in high school, he was drawn to all of the sciences, and enjoyed pursuing knowledge from all angles, ignoring the traditional boundaries between fields.

    “My early experience was that they were all deeply connected,” Fraenkel says.

    As a graduate student, he studied structural biology, which uses tools such as X-ray crystallography to understand biological molecules. “What drew me to the field was really the fact that it was very data-rich in a way that biology, at the time, was not,” Fraenkel says.

    However, that was about to change: While Fraenkel was doing a postdoctoral fellowship in structural biology at Harvard, new techniques — such as genome sequencing and measurement of RNA levels inside cells — were generating huge amounts of information. Helping to crunch those numbers seemed an enticing prospect.

    “As I was finishing up my postdoc I was realizing more and more that I wanted to study biology at a more general level,” Fraenkel says. “I really wanted to find out whether there was a more systematic way of trying to understand biology.”

    After leaving Harvard, he became a Whitehead Fellow, allowing him to set up his own lab at the Whitehead Institute and pursue his new interest in systems biology. From there, he joined MIT’s Department of Biological Engineering, which had just been formed.

    Network analysis

    Now, Fraenkel’s lab analyzes vast amounts of data, including not only genomic data but also measurements of proteins and other molecules found in cells. For each set of cells, healthy or diseased, he tries to devise models that could explain what is producing the data. “One way to think about it is a map of a city where these proteins or genes are lighting up different things, and you have to figure out what the wiring is underneath that’s got them talking to each other,” he says.

    To do that, his team uses algorithms they have developed themselves or adapted from network analysis strategies used to analyze the Internet. In the biological networks that Fraenkel studies, connections form between nodes representing a protein, gene, or other small molecule. Nodes that differ between diseased and healthy cells light up in a different color. Ideally, just a few such nodes would light up, but this is usually not the case, Fraenkel says. Instead, you end up with a wiring diagram with color all over the place.

    “We lovingly call those things ‘hairballs,’” he says. “You get these giant hairball diagrams which really haven’t made the problem any easier — in fact, they’ve made it harder. So our algorithms go into that hairball and try to figure out which piece of it is most relevant to the disease, by weighing the probability of different kinds of events being disease-relevant.”

    Those algorithms filter out the irrelevant information, or noise, and zoom in on the pieces of the network that seem to be the most likely to be related to the disease in question. Then, the researchers do experiments in living cells or animals to test the models generated by the algorithms.
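A minimal sketch of this filter-the-hairball idea (a generic illustration, not Fraenkel's actual algorithm): keep the nodes whose disease scores pass a threshold, plus any nodes that lie on paths connecting them, and discard the rest of the tangle.

```python
from collections import deque

# Hypothetical toy interactome: adjacency list plus per-node disease scores.
edges = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B", "E"],
    "E": ["D"], "F": ["G"], "G": ["F"],
}
score = {"A": 0.9, "B": 0.1, "C": 0.05, "D": 0.8, "E": 0.2, "F": 0.7, "G": 0.1}

def relevant_subnetwork(edges, score, threshold=0.5):
    """Keep high-scoring 'hit' nodes plus any node on a path linking two hits,
    found here with a breadth-first search from each hit."""
    hits = {n for n, s in score.items() if s >= threshold}
    keep = set(hits)
    for start in hits:
        # BFS recording parents so we can walk back along connecting paths.
        parent, queue = {start: None}, deque([start])
        while queue:
            node = queue.popleft()
            for nbr in edges[node]:
                if nbr not in parent:
                    parent[nbr] = node
                    queue.append(nbr)
        for other in hits - {start}:
            node = other
            while node is not None and node in parent:
                keep.add(node)
                node = parent[node]
    return keep

print(sorted(relevant_subnetwork(edges, score)))
```

Here the low-scoring nodes C, E, and G drop out, while B survives only because it connects two hits, which is the kind of intermediate player such analyses aim to surface.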

    Using this approach, Fraenkel has developed model networks for Huntington’s disease and glioblastoma. Such studies have revealed interactions that might never have been otherwise identified: For example, blocking estrogen can help prevent the growth of glioblastoma cells.

    “The fundamental thing we’re trying to do is take an unbiased view of the biology,” Fraenkel says. “We’re going to look everywhere. We’ll let the data tell us which processes are important and which ones are not.”

    See the full article here.


  • richardmitnick 2:29 pm on January 23, 2015 Permalink | Reply
    Tags: MIT News

    From MIT: “Particles accelerate without a push” 

    MIT News

    January 20, 2015
    David L. Chandler | MIT News Office

    This image shows the spatial distribution of charge for an accelerating wave packet, representing an electron, as calculated by this team’s approach. Brightest colors represent the highest charge levels. The self-acceleration of a particle predicted by this work is indistinguishable from acceleration that would be produced by a conventional electromagnetic field. Courtesy of the researchers.

    New analysis shows a way to self-propel subatomic particles, extend the lifetime of unstable isotopes.

    Some physical principles have been considered immutable since the time of Sir Isaac Newton: Light always travels in straight lines. No physical object can change its speed unless some outside force acts on it.

    Not so fast, says a new generation of physicists: While the underlying physical laws haven’t changed, new ways of “tricking” those laws to permit seemingly impossible actions have begun to appear. For example, work that began in 2007 proved that under special conditions, light could be made to move along a curved trajectory — a finding that is already beginning to find some practical applications.

    Now, in a new variation on the methods used to bend light, physicists at MIT and Israel’s Technion have found that subatomic particles can be induced to speed up all by themselves, almost to the speed of light, without the application of any external forces. The same underlying principle could also be used to extend the lifetime of some unstable isotopes, perhaps opening up new avenues of research in basic particle physics.

    The findings, based on a theoretical analysis, were published in the journal Nature Physics by MIT postdoc Ido Kaminer and four colleagues at the Technion.

    The new findings are based on a novel set of solutions to a basic quantum-physics relation called the Dirac equation, which describes the relativistic behavior of fundamental particles, such as electrons, in terms of a wave structure. (In quantum mechanics, waves and particles are considered to be two aspects of the same physical phenomenon.) By manipulating the wave structure, the team found, it should be possible to cause electrons to behave in unusual and counterintuitive ways.
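For reference, the free Dirac equation the analysis builds on can be written compactly (in natural units, with ħ = c = 1) as:

```latex
% Free Dirac equation in natural units ($\hbar = c = 1$):
% $\gamma^{\mu}$ are the 4x4 Dirac matrices, $\psi$ a four-component spinor,
% and $m$ the particle's rest mass.
i\gamma^{\mu}\partial_{\mu}\psi - m\psi = 0
```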

    Unexpected behavior

    This manipulation of waves could be accomplished using specially engineered phase masks — similar to those used to create holograms, but at a much smaller scale. Once created, the particles “self-accelerate,” the researchers say, in a way that is indistinguishable from how they would behave if propelled by an electromagnetic field.

    “The electron is gaining speed, getting faster and faster,” Kaminer says. “It looks impossible. You don’t expect physics to allow this to happen.”

    It turns out that this self-acceleration does not actually violate any physical laws — such as the conservation of momentum — because at the same time the particle is accelerating, it is also spreading out spatially in the opposite direction.

    “The electron’s wave packet is not just accelerating, it’s also expanding,” Kaminer says, “so there is some part of it that compensates. It’s referred to as the tail of the wave packet, and it will go backward, so the total momentum will be conserved. There is another part of the wave packet that is paying the price for the main part’s acceleration.”

    It turns out, according to further analysis, that this self-acceleration produces effects that are associated with relativity theory: It is a variation on the dilation of time and contraction of space, effects predicted by Albert Einstein to take place when objects move close to the speed of light. An example of this is Einstein’s famous twin paradox, in which a twin who travels at high speed in a rocket ages more slowly than another twin who remains on Earth.

    Extending lifetimes

    In this case, the time dilation could be applied to subatomic particles that naturally decay and have very short lifetimes — causing these particles to last much longer than they ordinarily would.
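The size of such an effect follows the standard special-relativistic relation, where the lab-frame lifetime is the rest-frame lifetime multiplied by the Lorentz factor. The numbers below are illustrative only (the 2.2-microsecond muon lifetime is a textbook example, not a figure from the paper).

```python
import math

def dilated_lifetime(proper_lifetime_s, v_over_c):
    """Lab-frame lifetime tau' = gamma * tau for a particle moving at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - v_over_c**2)
    return gamma * proper_lifetime_s

# Illustrative: a 2.2-microsecond rest-frame lifetime (that of a muon)
# stretched by motion at 99% of the speed of light.
tau0 = 2.2e-6
print(f"{dilated_lifetime(tau0, 0.99):.2e} s")  # roughly 7x longer than at rest
```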

    This could make it easier to study such particles by causing them to stay around longer, Kaminer suggests. “Maybe you could measure effects in particle physics that you couldn’t do otherwise,” he says.

    In addition, it might induce differences in the behavior of those particles that might reveal new, unexpected aspects of physics. “You could get different properties — not just for electrons, but for other particles as well,” Kaminer says.

    Now that these effects have been predicted based on theoretical calculations, Kaminer says it should be possible to demonstrate the phenomenon in laboratory experiments. He is beginning work with MIT physics professor Marin Soljačić on the design of such experiments.

    The experiments would make use of an electron microscope fitted with a specially designed phase mask that would produce 1,000 times higher resolution than those used for holography. “It’s the most exact way known today to affect the field of the electron,” Kaminer says.

    While this is such early-stage work that it’s hard to predict what practical applications it might eventually have, Kaminer says this unusual way of accelerating electrons might prove to have practical uses, such as for medical imaging.

    “Research on self-accelerating and shape-preserving beams became very active in recent years, with demonstration of different types of optical, plasmonic, and electron beams, and study of their propagation in different media,” says Ady Arie, a professor of electrical engineering at Tel Aviv University who was not involved in this research. “The authors derive shape-preserving solutions for the Dirac equation that describe the wave propagation of relativistic particles, which were not taken into account in most of the previous works.”

    Arie adds, “Perhaps the most interesting result is the use of these particles to demonstrate the analog of the famous twin paradox of special relativity: The authors show that time dilation occurs between a self-accelerating particle that propagates along a curved trajectory and its ‘twin’ particle that remains at rest.”

    In addition to Kaminer, who was the paper’s lead author, the research team included Jonathan Nemirovsky, Michael Rechtsman, Rivka Bekenstein, and Mordecai Segev, all of the Technion. The work was supported by the Israeli Center of Research Excellence, the U.S.-Israel Binational Science Foundation, and a Marie Curie grant from the European Commission.

    See the full article here.


  • richardmitnick 1:50 pm on January 19, 2015 Permalink | Reply
    Tags: MIT News

    From MIT: “New fibers can deliver many simultaneous stimuli” 

    MIT News

    January 19, 2015
    David L. Chandler | MIT News Office

    Christina Tringides, a senior at MIT and member of the research team, holds a sample of the multifunction fiber produced using the group’s new methodology. Photo: Melanie Gonick/MIT

    MIT researchers discuss their novel implantable device that can deliver optical signals and drugs to the brain, without harming the brain tissue. Video: Melanie Gonick/MIT

    The human brain’s complexity makes it extremely challenging to study — not only because of its sheer size, but also because of the variety of signaling methods it uses simultaneously. Conventional neural probes are designed to record a single type of signaling, limiting the information that can be derived from the brain at any point in time. Now researchers at MIT may have found a way to change that.

    By producing complex multimodal fibers that could be less than the width of a hair, they have created a system that could deliver optical signals and drugs directly into the brain, along with simultaneous electrical readout to continuously monitor the effects of the various inputs. The new technology is described in a paper appearing in the journal Nature Biotechnology, written by MIT’s Polina Anikeeva and 10 others. An earlier paper by the team described the use of similar technology in spinal cord research.

    In addition to transmitting different kinds of signals, the new fibers are made of polymers that closely resemble the characteristics of neural tissues, Anikeeva says, allowing them to stay in the body much longer without harming the delicate tissues around them.

    “We’re building neural interfaces that will interact with tissues in a more organic way than devices that have been used previously,” says Anikeeva, an assistant professor of materials science and engineering. To do that, her team made use of novel fiber-fabrication technology pioneered by MIT professor of materials science (and paper co-author) Yoel Fink and his team, for use in photonics and other applications.

    Flexible fiber-based probes

    The result, Anikeeva explains, is the fabrication of polymer fibers “that are soft and flexible and look more like natural nerves.” Devices currently used for neural recording and stimulation, she says, are made of metals, semiconductors, and glass, and can damage nearby tissues during ordinary movement.

    “It’s a big problem in neural prosthetics,” Anikeeva says. “They are so stiff, so sharp — when you take a step and the brain moves with respect to the device, you end up scrambling the tissue.”

    The key to the technology is making a larger-scale version, called a preform, of the desired arrangement of channels within the fiber: optical waveguides to carry light, hollow tubes to carry drugs, and conductive electrodes to carry electrical signals. These polymer templates, which can have dimensions on the scale of inches, are then heated until they become soft, and drawn into a thin fiber, while retaining the exact arrangement of features within them.

    A single draw of the fiber reduces the cross-section of the material 200-fold, and the process can be repeated, making the fibers thinner each time and approaching nanometer scale. During this process, Anikeeva says, “Features that used to be inches across are now microns.”
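As a quick back-of-the-envelope sketch (our own arithmetic, not from the paper), and assuming the quoted 200-fold reduction applies to each linear cross-sectional dimension, repeated draws shrink an inch-scale preform feature to the micron scale:

```python
# Back-of-the-envelope sketch of repeated fiber drawing (not from the paper).
# Assumption: the 200-fold cross-section reduction applies to linear feature
# widths, so each draw divides every feature dimension by 200.
INCH_IN_MICRONS = 25_400

def feature_size_after_draws(initial_microns, draws, reduction=200):
    """Feature width in microns after `draws` successive fiber draws."""
    return initial_microns / reduction ** draws

# A 1-inch feature in the preform:
one_draw = feature_size_after_draws(INCH_IN_MICRONS, 1)   # 127 microns
two_draws = feature_size_after_draws(INCH_IN_MICRONS, 2)  # 0.635 microns
print(f"after one draw: {one_draw:.0f} um, after two: {two_draws * 1000:.0f} nm")
```

Two draws under this assumption take an inch-scale feature below a micron, consistent with Anikeeva’s “inches to microns” description.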

    Combining the different channels in a single fiber, she adds, could enable precision mapping of neural activity, and ultimately treatment of neurological disorders, that would not be possible with single-function neural probes. For example, light could be transmitted through the optical channels to enable optogenetic neural stimulation, the effects of which could then be monitored with embedded electrodes. At the same time, one or more drugs could be injected into the brain through the hollow channels, while electrical signals in the neurons are recorded to determine, in real time, exactly what effect the drugs are having.

    Customizable toolkit for neural engineering

    The system can be tailored for a specific research or therapeutic application by creating the exact combination of channels needed for that task. “You can have a really broad palette of devices,” Anikeeva says.

    While a single preform a few inches long can produce hundreds of feet of fiber, the materials must be carefully selected so they all soften at the same temperature. The fibers could ultimately be used for precision mapping of the responses of different regions of the brain or spinal cord, Anikeeva says, and ultimately may also lead to long-lasting devices for treatment of conditions such as Parkinson’s disease.

    John Rogers, a professor of materials science and engineering and of chemistry at the University of Illinois at Urbana-Champaign who was not involved in this research, says, “These authors describe a fascinating, diverse collection of multifunctional fibers, tailored for insertion into the brain where they can stimulate and record neural behaviors through electrical, optical, and fluidic means. The results significantly expand the toolkit of techniques that will be essential to our development of a basic understanding of brain function.”

    In addition to Anikeeva and Fink, the work was carried out by Andres Canales, Xiaoting Jia, Ulrich Froriep, Ryan Koppes, Christina Tringides, Jennifer Selvidge, Chi Lu, Chong Hou, and Lei Wei, all of MIT. The work was supported by the National Science Foundation, the Center for Materials Science and Engineering, the Center for Sensorimotor Neural Engineering, the McGovern Institute for Brain Research, the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, and the Simons Foundation.

    See the full article here.


  • richardmitnick 5:33 am on January 16, 2015 Permalink | Reply
    Tags: MIT News

    From MIT: “MIT team enlarges brain samples, making them easier to image” 

    MIT News

    January 15, 2015
    Anne Trafton | MIT News Office

    New technique enables nanoscale-resolution microscopy of large biological specimens.


    Beginning with the invention of the first microscope in the late 1500s, scientists have been trying to peer into preserved cells and tissues with ever-greater magnification. The latest generation of so-called “super-resolution” microscopes can see inside cells with resolution better than 250 nanometers.

    A team of researchers from MIT has now taken a novel approach to gaining such high-resolution images: Instead of making their microscopes more powerful, they have discovered a method that enlarges tissue samples by embedding them in a polymer that swells when water is added. This allows specimens to be physically magnified, and then imaged at a much higher resolution.

    This technique, which uses inexpensive, commercially available chemicals and microscopes commonly found in research labs, should give many more scientists access to super-resolution imaging, the researchers say.

    “Instead of acquiring a new microscope to take images with nanoscale resolution, you can take the images on a regular microscope. You physically make the sample bigger, rather than trying to magnify the rays of light that are emitted by the sample,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT.

    Boyden is the senior author of a paper describing the new method in the Jan. 15 online edition of Science. Lead authors of the paper are graduate students Fei Chen and Paul Tillberg.

    Physical magnification

    Most microscopes work by using lenses to focus light emitted from a sample into a magnified image. However, this approach has a fundamental limit known as the diffraction limit, which means that it can’t be used to visualize objects much smaller than the wavelength of the light being used. For example, if you are using blue-green light with a wavelength of 500 nanometers, you can’t see anything smaller than 250 nanometers.
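The half-wavelength rule of thumb in the paragraph above can be written as a one-liner; the `numerical_aperture` parameter is our own generalization (the Abbe limit of wavelength divided by twice the numerical aperture), not something the article discusses:

```python
# The rule of thumb stated above: the smallest resolvable feature is roughly
# half the illumination wavelength (the Abbe diffraction limit with NA ~ 1).
def diffraction_limit_nm(wavelength_nm, numerical_aperture=1.0):
    """Approximate smallest resolvable feature, in nanometers."""
    return wavelength_nm / (2 * numerical_aperture)

print(diffraction_limit_nm(500))  # blue-green light -> 250.0 nm
```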

    “Unfortunately, in biology that’s right where things get interesting,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research. Protein complexes, molecules that transport payloads in and out of cells, and other cellular activities are all organized at the nanoscale.

    Scientists have come up with some “really clever tricks” to overcome this limitation, Boyden says. However, these super-resolution techniques work best with small, thin samples, and take a long time to image large samples. “If you want to map the brain, or understand how cancer cells are organized in a metastasizing tumor, or how immune cells are configured in an autoimmune attack, you have to look at a large piece of tissue with nanoscale precision,” he says.

    To achieve this, the MIT team focused its attention on the sample rather than the microscope. Their idea was to make specimens easier to image at high resolution by embedding them in an expandable polymer gel made of polyacrylate, a very absorbent material commonly found in diapers.

    Before enlarging the tissue, the researchers first label the cell components or proteins that they want to examine, using an antibody that binds to the chosen targets. This antibody is linked to a fluorescent dye, as well as a chemical anchor that can attach the dye to the polyacrylate chain.

    Once the tissue is labeled, the researchers add the precursor to the polyacrylate gel and heat it to form the gel. They then digest the proteins that hold the specimen together, allowing it to expand uniformly. The specimen is then washed in salt-free water to induce a 100-fold expansion in volume. Even though the proteins have been broken apart, the original location of each fluorescent label stays the same relative to the overall structure of the tissue because it is anchored to the polyacrylate gel.

    “What you’re left with is a three-dimensional, fluorescent cast of the original material. And the cast itself is swollen, unimpeded by the original biological structure,” Tillberg says.

    The MIT team imaged this “cast” with commercially available confocal microscopes, commonly used for fluorescent imaging but usually limited to a resolution of hundreds of nanometers. With their enlarged samples, the researchers achieved resolution down to 70 nanometers. “The expansion microscopy process … should be compatible with many existing microscope designs and systems already in laboratories,” Chen adds.
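A rough consistency check (our own arithmetic, with an assumed ~300-nanometer confocal limit that is not from the paper): uniform 100-fold volume growth stretches each linear dimension by the cube root of 100, which brings a conventional confocal microscope close to the reported 70-nanometer figure:

```python
# Sketch: effective resolution after isotropic expansion. Assumptions: the
# 100-fold volume growth is uniform in all directions, and a conventional
# confocal microscope resolves ~300 nm (our figure, not from the paper).
linear_factor = 100 ** (1 / 3)    # each dimension grows ~4.64x
confocal_limit_nm = 300           # assumed conventional confocal resolution
effective_nm = confocal_limit_nm / linear_factor
print(f"linear expansion: {linear_factor:.2f}x, "
      f"effective resolution: {effective_nm:.0f} nm")
```

The result, roughly 65 nanometers, lands in the neighborhood of the 70 nanometers the team achieved.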

    Large tissue samples

    Using this technique, the MIT team was able to image a section of brain tissue 500 by 200 by 100 microns with a standard confocal microscope. Imaging such large samples would not be feasible with other super-resolution techniques, which require minutes to image a tissue slice only 1 micron thick and are limited in their ability to image large samples by optical scattering and other aberrations.

    “The exciting part is that this approach can acquire data at the same high speed per pixel as conventional microscopy, contrary to most other methods that beat the diffraction limit for microscopy, which can be 1,000 times slower per pixel,” says George Church, a professor of genetics at Harvard Medical School who was not part of the research team.

    “The other methods currently have better resolution, but are harder to use, or slower,” Tillberg says. “The benefits of our method are the ease of use and, more importantly, compatibility with large volumes, which is challenging with existing technologies.”

    The researchers envision that this technology could be very useful to scientists trying to image brain cells and map how they connect to each other across large regions.

    “There are lots of biological questions where you have to understand a large structure,” Boyden says. “Especially for the brain, you have to be able to image a large volume of tissue, but also to see where all the nanoscale components are.”

    While Boyden’s team is focused on the brain, other possible applications for this technique include studying tumor metastasis and angiogenesis (growth of blood vessels to nourish a tumor), or visualizing how immune cells attack specific organs during autoimmune disease.

    The research was funded by the National Institutes of Health, the New York Stem Cell Foundation, Jeremy and Joyce Wertheimer, the National Science Foundation, and the Fannie and John Hertz Foundation.

    See the full article here.


  • richardmitnick 4:34 pm on January 14, 2015 Permalink | Reply
    Tags: MIT News

    From MIT: “A twist on planetary origins” 

    MIT News

    January 14, 2015
    Jennifer Chu | MIT News Office

    Meteorites that have crashed to Earth have long been regarded as relics of the early solar system. These craggy chunks of metal and rock are studded with chondrules — tiny, glassy, spherical grains that were once molten droplets. Scientists have thought that chondrules represent early kernels of terrestrial planets: As the solar system started to coalesce, these molten droplets collided with bits of gas and dust to form larger planetary precursors.

    However, researchers at MIT and Purdue University have now found that chondrules may have played less of a fundamental role. Based on computer simulations, the group concludes that chondrules were not building blocks, but rather byproducts of a violent and messy planetary process.

    Chondrules in the chondrite Grassland. A millimeter scale is shown.

    The team found that bodies as large as the moon likely existed well before chondrules came on the scene. In fact, the researchers found that chondrules were most likely created by the collision of such moon-sized planetary embryos: These bodies smashed together with such violent force that they melted a fraction of their material, and shot a molten plume out into the solar nebula. Residual droplets would eventually cool to form chondrules, which in turn attached to larger bodies — some of which would eventually impact Earth, to be preserved as meteorites.

    An artist’s rendering of a protoplanetary impact. Early in the impact, molten jetted material is ejected at a high velocity and breaks up to form chondrules, the millimeter-scale, formerly molten droplets found in most meteorites. These droplets cool and solidify over hours to days. Image: NASA/California Institute of Technology

    Brandon Johnson, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, says the findings revise one of the earliest chapters of the solar system.

    “This tells us that meteorites aren’t actually representative of the material that formed planets — they’re these smaller fractions of material that are the byproduct of planet formation,” Johnson says. “But it also tells us the early solar system was more violent than we expected: You had these massive sprays of molten material getting ejected out from these really big impacts. It’s an extreme process.”

    Johnson and his colleagues, including Maria Zuber, the E.A. Griswold Professor of Geophysics and MIT’s vice president for research, have published their results this week in the journal Nature.

    High-velocity molten rock

    To get a better sense of the role of chondrules in a fledgling solar system, the researchers first simulated collisions between protoplanets — rocky bodies between the size of an asteroid and the moon. The team modeled all the different types of impacts that might occur in an early solar system, including their location, timing, size, and velocity. They found that bodies the size of the moon formed relatively quickly, within the first 10,000 years, before chondrules were thought to have appeared.

    A surviving protoplanet, 4 Vesta.

    Johnson then used another model to determine the type of collision that could melt and eject molten material. From these simulations, he determined that a collision at a velocity of 2.5 kilometers per second would be forceful enough to produce a plume of melt that is ejected out into space — a phenomenon known as impact jetting.

    “Once the two bodies collide, a very small amount of material is shocked up to high temperature, to the point where it can melt,” Johnson says. “Then this really hot material shoots out from the collision point.”
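A hedged back-of-the-envelope check (our own numbers for silicate heat capacity and latent heat, not values from the Nature paper) suggests why a 2.5-kilometer-per-second impact can melt at least a fraction of the colliding material:

```python
# Rough energy check, with assumed material properties (not from the paper):
# does a 2.5 km/s collision carry enough energy per kilogram to melt rock?
v = 2500.0           # impact velocity, m/s (from the article)
cp = 1000.0          # silicate specific heat, J/(kg*K) (assumed)
dT = 1800.0 - 300.0  # heating from ambient to roughly melting, K (assumed)
latent = 4.0e5       # latent heat of fusion for silicate, J/kg (assumed)

specific_ke = 0.5 * v**2           # ~3.1e6 J/kg of kinetic energy available
melt_energy = cp * dT + latent     # ~1.9e6 J/kg needed to heat and melt
print(specific_ke > melt_energy)   # True
```

The kinetic energy per kilogram exceeds the melting requirement, but since only the material shocked at the contact point absorbs that energy efficiently, it is plausible that just a small fraction of the colliding bodies melts and jets outward, as Johnson describes.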

    The team then estimated the number of impact-jetting collisions that likely occurred in a solar system’s first 5 million years — the period of time during which it’s believed that chondrules first appeared. From these results, Johnson and his team found that such collisions would have produced enough chondrules in the asteroid belt region to explain the number that have been detected in meteorites today.

    Falling into place

    To go a step further, the researchers ran a third simulation to calculate chondrules’ cooling rate. Previous experiments in the lab have shown that chondrules cool down at a rate of 10 to 1,000 kelvins per hour — a rate that would produce the texture of chondrules seen in meteorites. Johnson and his colleagues used a radiative transfer model to simulate the impact conditions required to produce such a cooling rate. They found that bodies colliding at 2.5 kilometers per second would indeed produce molten droplets that, ejected into space, would cool at 10 to 1,000 kelvins per hour.

    “Then I had this ‘Eureka!’ moment where I realized that jetting during these really big impacts could possibly explain the formation of chondrules,” Johnson says. “It all fell into place.”

    Going forward, Johnson plans to look into the effects of other types of impacts. The group has so far modeled vertical impacts — bodies colliding straight-on. Johnson predicts that oblique impacts, or collisions occurring at an angle, may be even more efficient at producing molten plumes of chondrules. He also hopes to explore what happens to chondrules once they are launched into the solar nebula.

    “Chondrules were long viewed as planetary building blocks,” Zuber notes. “It’s ironic that they now appear to be the remnants of early protoplanetary collisions.”

    Fred Ciesla, an associate professor of planetary science at the University of Chicago, says the findings may reclassify chondrites, a class of meteorites that are thought to be examples of the original material from which planets formed.

    “This would be a major shift in how people think about our solar system,” says Ciesla, who did not contribute to the research. “If this finding is correct, then it would suggest that chondrites are not good analogs for the building blocks of the Earth and other planets. Meteorites as a whole are still important clues about what processes occurred during the formation of the solar system, but which ones are the best analogs for what the planets were made out of would change.”

    This research was funded in part by NASA.

    See the full article here.


  • richardmitnick 10:37 am on January 13, 2015 Permalink | Reply
    Tags: Devices, MIT News

    From MIT: “Watching how cells interact” 

    MIT News

    January 13, 2015
    Anne Trafton | MIT News Office

    The immune system is a complex network of many different cells working together to defend against invaders. Successfully fighting off an infection depends on the interactions between these cells.

    MIT researchers have designed a microfluidic device that allows them to precisely trap pairs of cells (one red, one green) and observe how they interact over time. Image: Burak Dura

    A new device developed by MIT engineers offers a much more detailed picture of that cellular communication. Using this device, which captures pairs of cells and collects data on each as they interact with each other, the researchers have already learned more about how T cells — major players in the immune response — become activated during infection.

    The device is based on microfluidic technology developed by Joel Voldman, an MIT professor of electrical engineering and computer science (EECS), in 2009. His team used that earlier version to fuse adult cells with embryonic stem cells, allowing the researchers to observe the genetic reprogramming that occurred in these hybrids.

    MIT researchers use this microchip to trap and fuse pairs of cells. Image: Alison Skelley

    After that study, immunologists contacted Voldman wondering if the device could be adapted to study immune cells. “A lot of what occurs in the immune system is cells talking to other cells by coming in contact with them,” says Voldman, one of the senior authors of a paper describing the new device in the Jan. 13 issue of Nature Communications.

    Voldman and Burak Dura, the paper’s lead author and a graduate student in EECS, spent several years re-engineering the device to get it to work with immune cells, which are much smaller than the cells analyzed in 2009. Hidde Ploegh, an MIT professor of biology and member of the Whitehead Institute for Biomedical Research, is also a senior author of the paper.

    Controlled contact

    Until now, the most common way to measure interactions between two types of cells was to mix the cells together in a test tube and observe them. However, this approach has limited usefulness because there is no guarantee that each cell is interacting with only one other cell.

    “All that uncontrollability makes it hard to interpret the results you get,” Voldman says.

    In contrast, Voldman’s device allows for complete control over cell pairings. The device consists of a chip with cell-trapping cups that are strategically arranged to capture and pair up cells. First, type A cells are flowed across the chip in one direction and captured in single-cell traps. Then, the flow of liquid is reversed, drawing the A cells into larger traps located opposite the single-cell traps. When each A cell is in a large trap, B cells are flowed in, and each one joins an A cell in the large traps.

    This technique allows the researchers to follow hundreds of cell pairs over time and monitor what is happening in each cell, which has not been possible previously. It also allows them to precisely control the timing of cell interactions.

    “We know the exact contact time, and we can keep them in contact as long as they are within the cups,” Dura says. “This allows us to not only measure the single cell parameters but also do measurements of the two cells together and correlate the responses with one another.”

    In the new version of the device, the researchers added high-resolution imaging, allowing them to see when cells’ calcium levels fluctuate and when they turn on a type of protein signaling known as phosphorylation.

    “This is a very elegant way of doing these experiments,” says Hang Lu, a professor of chemical and biomolecular engineering at the Georgia Institute of Technology who was not involved in the research. “It’s very well-controlled and you know exactly where to look for the cells, and that makes imaging them extremely efficient and high-throughput.”

    Launching an immune response

    In the Nature Communications paper, Dura worked with Stephanie Dougan, a former postdoc at the Whitehead Institute, to study the interaction between T cells and B cells, which is key to launching an immune response. When B cells encounter viruses or bacteria, they absorb them and display pieces of viral or bacterial proteins (known as antigens) on their cell surfaces. When these B cells encounter T cells with receptors that recognize the antigen, the T cells become activated, provoking them to release cytokines — inflammatory chemicals that control the immune response — or to seek out and destroy infected cells.

    Although all of the T cells in this study had identical T cell receptors, the MIT team found that they did not all respond the same way after encountering B cells carrying identical antigens on their surfaces.

    Using calcium imaging to measure T cell activation, the researchers found that the initial activation level depends on how much of the antigen is presented. At high levels, most of the cells respond the same way. However, at lower antigen levels, the T cell responses vary greatly. These differences also correlated to differences in T cell cytokine production.

    In future studies, the researchers hope to further trace how T cells go through the decision-making process that determines their eventual fates. They also plan to investigate other types of interactions — for example, how immune cells called natural killer cells recognize and destroy cancer cells.

    The research was funded by the Singapore-MIT Alliance, the AACR-Pancreatic Cancer Action Network, Janssen Pharmaceuticals, and the Frank Quick Faculty Research Innovation Fellowship.

    See the full article here.


  • richardmitnick 7:23 am on January 9, 2015 Permalink | Reply
    Tags: MIT News

    From MIT: “Toward quantum chips” 

    MIT News

    January 9, 2015
    Larry Hardesty | MIT News Office

    A team of researchers has built an array of light detectors sensitive enough to register the arrival of individual light particles, or photons, and mounted them on a silicon optical chip. Such arrays are crucial components of devices that use photons to perform quantum computations.

    One of the researchers’ new photon detectors, deposited athwart a light channel — or “waveguide” (horizontal black band) — on a silicon optical chip.
    Image courtesy of Nature Communications

    Single-photon detectors are notoriously temperamental: Of 100 deposited on a chip using standard manufacturing techniques, only a handful will generally work. In a paper appearing today in Nature Communications, the researchers at MIT and elsewhere describe a procedure for fabricating and testing the detectors separately and then transferring those that work to an optical chip built using standard manufacturing processes.

    In addition to yielding much denser and larger arrays, the approach also increases the detectors’ sensitivity. In experiments, the researchers found that their detectors were up to 100 times more likely to accurately register the arrival of a single photon than those found in earlier arrays.

    “You make both parts — the detectors and the photonic chip — through their best fabrication process, which is dedicated, and then bring them together,” explains Faraz Najafi, a graduate student in electrical engineering and computer science at MIT and first author on the new paper.

    Thinking small

    According to quantum mechanics, tiny physical particles are, counterintuitively, able to inhabit mutually exclusive states at the same time. A computational element made from such a particle — known as a quantum bit, or qubit — could thus represent zero and one simultaneously. If multiple qubits are “entangled,” meaning that their quantum states depend on each other, then a single quantum computation is, in some sense, like performing many computations in parallel.
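To make the superposition-and-entanglement idea concrete, here is a toy simulation in plain Python (our illustration, not anything from the paper): a Hadamard gate followed by a CNOT turns two ordinary qubits into a Bell pair whose states depend on each other:

```python
# Toy illustration (plain Python, no quantum library): entangling two qubits.
# A two-qubit state is 4 complex amplitudes for |00>, |01>, |10>, |11>.
import math

state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

# Hadamard on the first qubit: |0> -> (|0> + |1>)/sqrt(2), a superposition.
h = 1 / math.sqrt(2)
state = [h * (state[0] + state[2]), h * (state[1] + state[3]),
         h * (state[0] - state[2]), h * (state[1] - state[3])]

# CNOT (first qubit controls the second): swaps the |10> and |11> amplitudes.
state[2], state[3] = state[3], state[2]

# Result: the Bell state (|00> + |11>)/sqrt(2). Neither qubit alone has a
# definite value, yet measuring both always gives matching outcomes.
print([round(a, 3) for a in state])  # [0.707, 0.0, 0.0, 0.707]
```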

    With most particles, entanglement is difficult to maintain, but it’s relatively easy with photons. For that reason, optical systems are a promising approach to quantum computation. But any quantum computer — say, one whose qubits are laser-trapped ions or nitrogen atoms embedded in diamond — would still benefit from using entangled photons to move quantum information around.

    “Because ultimately one will want to make such optical processors with maybe tens or hundreds of photonic qubits, it becomes unwieldy to do this using traditional optical components,” says Dirk Englund, the Jamieson Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT and corresponding author on the new paper. “It’s not only unwieldy but probably impossible, because if you tried to build it on a large optical table, simply the random motion of the table would cause noise on these optical states. So there’s been an effort to miniaturize these optical circuits onto photonic integrated circuits.”

    The project was a collaboration between Englund’s group and the Quantum Nanostructures and Nanofabrication Group, which is led by Karl Berggren, an associate professor of electrical engineering and computer science, and of which Najafi is a member. The MIT researchers were also joined by colleagues at IBM and NASA’s Jet Propulsion Laboratory.


    The researchers’ process begins with a silicon optical chip made using conventional manufacturing techniques. On a separate silicon chip, they grow a thin, flexible film of silicon nitride, upon which they deposit the superconductor niobium nitride in a pattern useful for photon detection. At both ends of the resulting detector, they deposit gold electrodes.

    Then, to one end of the silicon nitride film, they attach a small droplet of polydimethylsiloxane, a type of silicone. They then press a tungsten probe, typically used to measure voltages in experimental chips, against the silicone.

    “It’s almost like Silly Putty,” Englund says. “You put it down, it spreads out and makes high surface-contact area, and when you pick it up quickly, it will maintain that large surface area. And then it relaxes back so that it comes back to one point. It’s like if you try to pick up a coin with your finger. You press on it and pick it up quickly, and shortly after, it will fall off.”

    With the tungsten probe, the researchers peel the film off its substrate and attach it to the optical chip.

    In previous arrays, the detectors registered only 0.2 percent of the single photons directed at them. Even on-chip detectors deposited individually have historically topped out at about 2 percent. But the detectors on the researchers’ new chip got as high as 20 percent. That’s still a long way from the 90 percent or more required for a practical quantum circuit, but it’s a big step in the right direction.

    “This work is a technical tour de force,” says Robert Hadfield, a professor of photonics at the University of Glasgow who was not involved in the research. “There is potential for scale-up to large circuits requiring hundreds of detectors using commercial pick-and-place technology.”

    See the full article here.

