Tagged: MIT News

  • richardmitnick 9:02 am on October 17, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Nanoparticles get a magnetic handle” 


    MIT News

    October 9, 2014
    David L. Chandler | MIT News Office

    A long-sought goal of creating particles that can emit a colorful fluorescent glow in a biological environment, and that could be precisely manipulated into position within living cells, has been achieved by a team of researchers at MIT and several other institutions. The finding is reported this week in the journal Nature Communications.

    Elemental mapping of the location of iron atoms (blue) in the magnetic nanoparticles and cadmium (red) in the fluorescent quantum dots provides a clear visualization of the way the two kinds of particles naturally separate themselves into a core-and-shell structure. Image courtesy of the researchers

    The new technology could make it possible to track the position of the nanoparticles as they move within the body or inside a cell. At the same time, the nanoparticles could be manipulated precisely by applying a magnetic field to pull them along. And finally, the particles could have a coating of a bioreactive substance that could seek out and bind with particular molecules within the body, such as markers for tumor cells or other disease agents.

    “It’s been a dream of mine for many years to have a nanomaterial that incorporates both fluorescence and magnetism in a single compact object,” says Moungi Bawendi, the Lester Wolfe Professor of Chemistry at MIT and senior author of the new paper. While other groups have achieved some combination of these two properties, Bawendi says that he “was never very satisfied” with results previously achieved by his own team or others.

    For one thing, he says, such particles have been too large to make practical probes of living tissue: “They’ve tended to have a lot of wasted volume,” Bawendi says. “Compactness is critical for biological and a lot of other applications.”

    In addition, previous efforts were unable to produce particles of uniform and predictable size, which could also be an essential property for diagnostic or therapeutic applications.

    Moreover, Bawendi says, “We wanted to be able to manipulate these structures inside the cells with magnetic fields, but also know exactly what it is we’re moving.” All of these goals are achieved by the new nanoparticles, which can be identified with great precision by the wavelength of their fluorescent emissions.

    The new method produces the combination of desired properties “in as small a package as possible,” Bawendi says — which could help pave the way for particles with other useful properties, such as the ability to bind with a specific type of bioreceptor, or another molecule of interest.

    In the technique developed by Bawendi’s team, led by lead author and postdoc Ou Chen, the nanoparticles crystallize such that they self-assemble in exactly the way that leads to the most useful outcome: The magnetic particles cluster at the center, while fluorescent particles form a uniform coating around them. That puts the fluorescent molecules in the most visible location for allowing the nanoparticles to be tracked optically through a microscope.

    “These are beautiful structures, they’re so clean,” Bawendi says. That uniformity arises, in part, because the starting materials, fluorescent nanoparticles that Bawendi and his group have been perfecting for years, are themselves perfectly uniform in size. “You have to use very uniform material to produce such a uniform construction,” Chen says.

    Initially, at least, the particles might be used to probe basic biological functions within cells, Bawendi suggests. As the work continues, later experiments may add additional materials to the particles’ coating so that they interact in specific ways with molecules or structures within the cell, either for diagnosis or treatment.

    The ability to manipulate the particles with electromagnets is key to using them in biological research, Bawendi explains: The tiny particles could otherwise get lost in the jumble of molecules circulating within a cell. “Without a magnetic ‘handle,’ it’s like a needle in a haystack,” he says. “But with the magnetism, you can find it easily.”

    A silica coating on the particles allows additional molecules to attach, causing the particles to bind with specific structures within the cell. “Silica makes it completely flexible; it’s a well-developed material that can bind to almost anything,” Bawendi says.

    For example, the coating could have a molecule that binds to a specific type of tumor cell; then, “You could use them to enhance the contrast of an MRI, so you could see the spatial macroscopic outlines of a tumor,” he says.

    The next step for the team is to test the new nanoparticles in a variety of biological settings. “We’ve made the material,” Chen says. “Now we’ve got to use it, and we’re working with a number of groups around the world for a variety of applications.”

    Christopher Murray, a professor of chemistry and materials science and engineering at the University of Pennsylvania who was not connected with this research, says, “This work exemplifies the power of using nanocrystals as building blocks for multiscale and multifunctional structures. We often use the term ‘artificial atoms’ in the community to describe how we are exploiting a new periodic table of fundamental building blocks to design materials, and this is a very elegant example.”

    The study included researchers at MIT; Massachusetts General Hospital; Institut Curie in Paris; the Heinrich-Pette Institute and the Bernhard-Nocht Institute for Tropical Medicine in Hamburg, Germany; Children’s Hospital Boston; and Cornell University. The work was supported by the National Institutes of Health, the Army Research Office through MIT’s Institute for Soldier Nanotechnologies, and the Department of Energy.

    See the full article, with video, here.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell
  • richardmitnick 8:20 am on October 17, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Superconducting circuits, simplified” 


    MIT News

    October 17, 2014
    Larry Hardesty | MIT News Office

    Computer chips with superconducting circuits — circuits with zero electrical resistance — would be 50 to 100 times as energy-efficient as today’s chips, an attractive trait given the increasing power consumption of the massive data centers that power the Internet’s most popular sites.

    Shown here is a square-centimeter chip containing the nTron adder, which performed the first computation using the researchers’ new superconducting circuit. Photo: Adam N. McCaughan

    Superconducting chips also promise greater processing power: Superconducting circuits that use so-called Josephson junctions have been clocked at 770 gigahertz, or 500 times the speed of the chip in the iPhone 6.

    But Josephson-junction chips are big and hard to make; most problematic of all, they use such minute currents that the results of their computations are difficult to detect. For the most part, they’ve been relegated to a few custom-engineered signal-detection applications.

    In the latest issue of the journal Nano Letters, MIT researchers present a new circuit design that could make simple superconducting devices much cheaper to manufacture. And while the circuits’ speed probably wouldn’t top that of today’s chips, they could solve the problem of reading out the results of calculations performed with Josephson junctions.

    The MIT researchers — Adam McCaughan, a graduate student in electrical engineering, and his advisor, professor of electrical engineering and computer science Karl Berggren — call their device the nanocryotron, after the cryotron, an experimental computing circuit developed in the 1950s by MIT professor Dudley Buck. The cryotron was briefly the object of a great deal of interest — and federal funding — as the possible basis for a new generation of computers, but it was eclipsed by the integrated circuit.

    “The superconducting-electronics community has seen a lot of devices come and go, without any real-world application,” McCaughan says. “But in our paper, we have already applied our device to applications that will be highly relevant to future work in superconducting computing and quantum communications.”

    Superconducting circuits are used in light detectors that can register the arrival of a single light particle, or photon; that’s one of the applications in which the researchers tested the nanocryotron. McCaughan also wired together several of the circuits to produce a fundamental digital-arithmetic component called a half-adder.
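The half-adder mentioned above is a standard digital building block with well-known logic. A minimal sketch of that logic in Python (the truth table only, not the superconducting circuit itself):

```python
# A half-adder takes two input bits and produces a sum bit and a carry bit.
# Sum is exclusive-or (XOR); carry is AND. McCaughan realized this function
# by wiring several nanocryotron switches together.

def half_adder(a: int, b: int) -> tuple:
    """Return (sum, carry) for two input bits."""
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```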

    Resistance is futile

    Superconductors have no electrical resistance, meaning that electrons can travel through them completely unimpeded. Even the best standard conductors — like the copper wires in phone lines or conventional computer chips — have some resistance; overcoming it requires operational voltages much higher than those that can induce current in a superconductor. Once electrons start moving through an ordinary conductor, they still collide occasionally with its atoms, releasing energy as heat.

    Superconductors are ordinary materials cooled to extremely low temperatures, which damps the vibrations of their atoms, letting electrons zip past without collision. Berggren’s lab focuses on superconducting circuits made from niobium nitride, which has the relatively high operating temperature of 16 Kelvin, or minus 257 degrees Celsius. That’s achievable with liquid helium, which, in a superconducting chip, would probably circulate through a system of pipes inside an insulated housing, like Freon in a refrigerator.

    A liquid-helium cooling system would of course increase the power consumption of a superconducting chip. But given that the starting point is about 1 percent of the energy required by a conventional chip, the savings could still be enormous. Moreover, superconducting computation would let data centers dispense with the cooling systems they currently use to keep their banks of servers from overheating.
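The arithmetic behind that claim is easy to check. In the sketch below, the 1 percent logic-power figure comes from the article's stated 50-to-100-fold efficiency gain; the tenfold cooling-overhead multiplier is a hypothetical assumption for illustration, not a number from the article.

```python
# Back-of-the-envelope check: even with heavy cooling overhead, a chip
# whose logic uses ~1% of conventional power still saves most of the energy.

conventional_power = 1.0                 # normalized conventional-chip power
logic_power = conventional_power / 100   # superconducting logic: ~1%
cooling_overhead = 10.0                  # ASSUMED: cooling costs 10x the logic power

total = logic_power * (1 + cooling_overhead)
print(f"Superconducting total: {total:.2f} (vs 1.00 conventional)")
print(f"Net savings: {(1 - total) * 100:.0f}%")  # 89% under these assumptions
```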

    Cheap superconducting circuits could also make it much more cost-effective to build single-photon detectors, an essential component of any information system that exploits the computational speedups promised by quantum computing.

    Engineered to a T

    The nanocryotron — or nTron — consists of a single layer of niobium nitride deposited on an insulator in a pattern that looks roughly like a capital “T.” But where the base of the T joins the crossbar, it tapers to only about one-tenth its width. Electrons sailing unimpeded through the base of the T are suddenly crushed together, producing heat, which radiates out into the crossbar and destroys the niobium nitride’s superconductivity.

    A current applied to the base of the T can thus turn off a current flowing through the crossbar. That makes the circuit a switch, the basic component of a digital computer.
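That switching behavior can be captured in a minimal behavioral model, based only on the article's description: a gate current at the base heats the constriction and suppresses superconductivity in the crossbar. The threshold value below is a placeholder, not a measured device parameter.

```python
# Behavioral sketch of the nTron as a switch (illustrative only).

CRITICAL_GATE_CURRENT = 1.0  # arbitrary units; hypothetical threshold

def ntron_channel_conducts(gate_current: float) -> bool:
    """The crossbar channel superconducts (conducts freely) unless the
    gate current drives the constriction into the resistive state."""
    return gate_current < CRITICAL_GATE_CURRENT

print(ntron_channel_conducts(0.0))  # no gate current: channel on
print(ntron_channel_conducts(2.0))  # gate current applied: channel off
```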

    After the current in the base is turned off, the current in the crossbar will resume only after the junction cools back down. Since the superconductor is cooled by liquid helium, that doesn’t take long. But the circuits are unlikely to top the 1 gigahertz typical of today’s chips. Still, they could be useful for some lower-end applications where speed isn’t as important as energy efficiency.

    Their most promising application, however, could be in making calculations performed by Josephson junctions accessible to the outside world. Josephson junctions use tiny currents that until now have required sensitive lab equipment to detect. They’re not strong enough to move data to a local memory chip, let alone to send a visual signal to a computer monitor.

    In experiments, McCaughan demonstrated that currents even smaller than those found in Josephson-junction devices were adequate to switch the nTron from a conductive to a nonconductive state. And while the current in the base of the T can be small, the current passing through the crossbar could be much larger — large enough to carry information to other devices on a computer motherboard.

    “I think this is a great device,” says Oleg Mukhanov, chief technology officer of Hypres, a superconducting-electronics company whose products rely on Josephson junctions. “We are currently looking very seriously at the nTron for use in memory.”

    “There are several attractions of this device,” Mukhanov says. “First, it’s very compact, because after all, it’s a nanowire. One of the problems with Josephson junctions is that they are big. If you compare them with CMOS transistors, they’re just physically bigger. The second is that Josephson junctions are two-terminal devices. Semiconductor transistors are three-terminal, and that’s a big advantage. Similarly, nTrons are three-terminal devices.”

    “As far as memory is concerned,” Mukhanov adds, “one of the features that also attracts us is that we plan to integrate it with magnetoresistive spintronic devices, mRAM, magnetic random-access memories, at room temperature. And one of the features of these devices is that they are high-impedance. They are in the kilo-ohms range, and if you look at Josephson junctions, they are just a few ohms. So there is a big mismatch, which makes it very difficult from an electrical-engineering standpoint to match these two devices. NTrons are nanowire devices, so they’re high-impedance, too. They’re naturally compatible with the magnetoresistive elements.”

    McCaughan and Berggren’s research was funded by the National Science Foundation and by the Director of National Intelligence’s Intelligence Advanced Research Projects Activity.

    See the full article here.

  • richardmitnick 6:33 pm on October 13, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Solid nanoparticles can deform like a liquid” 


    MIT News

    October 12, 2014
    David L. Chandler | MIT News Office

    Unexpected finding shows tiny particles keep their internal crystal structure while flexing like droplets.

    A surprising phenomenon has been found in metal nanoparticles: They appear, from the outside, to be liquid droplets, wobbling and readily changing shape, while their interiors retain a perfectly stable crystal configuration.

    Image: Yan Liang

    The research team behind the finding, led by MIT professor Ju Li, says the work could have important implications for the design of components in nanotechnology, such as metal contacts for molecular electronic circuits.

    The results, published in the journal Nature Materials, come from a combination of laboratory analysis and computer modeling by an international team that included researchers in China and Japan, as well as at the University of Pittsburgh and MIT.

    The experiments were conducted at room temperature, with particles of pure silver less than 10 nanometers across — less than one-thousandth of the width of a human hair. But the results should apply to many different metals, says Li, senior author of the paper and the BEA Professor of Nuclear Science and Engineering.

    Silver has a relatively high melting point — 962 degrees Celsius, or 1763 degrees Fahrenheit — so observation of any liquidlike behavior in its nanoparticles was “quite unexpected,” Li says. Hints of the new phenomenon had been seen in earlier work with tin, which has a much lower melting point, he says.
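The unit conversion quoted above is straightforward to verify:

```python
# Check the article's Celsius-to-Fahrenheit conversion for silver's melting point.

def c_to_f(celsius: float) -> float:
    return celsius * 9 / 5 + 32

print(c_to_f(962))  # 1763.6, matching the quoted 1763 degrees Fahrenheit
```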

    The use of nanoparticles in applications ranging from electronics to pharmaceuticals is a lively area of research; generally, Li says, these researchers “want to form shapes, and they want these shapes to be stable, in many cases over a period of years.” So the discovery of these deformations reveals a potentially serious barrier to many such applications: For example, if gold or silver nanoligaments are used in electronic circuits, these deformations could quickly cause electrical connections to fail.

    Only skin deep

    The researchers’ detailed imaging with a transmission electron microscope and atomistic modeling revealed that while the exterior of the metal nanoparticles appears to move like a liquid, only the outermost layers — one or two atoms thick — actually move at any given time. As these outer layers of atoms move across the surface and redeposit elsewhere, they give the impression of much greater movement — but inside each particle, the atoms stay perfectly lined up, like bricks in a wall.

    “The interior is crystalline, so the only mobile atoms are the first one or two monolayers,” Li says. “Everywhere except the first two layers is crystalline.”
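A rough estimate makes the claim concrete: even though only one or two monolayers are mobile, those layers hold a sizeable share of a 10-nanometer particle's atoms. The sketch assumes a spherical particle and a monolayer thickness of about 0.25 nanometers, a typical value for metals but not a figure from the article.

```python
# Fraction of atoms in the two mobile outer monolayers of a 10 nm particle,
# assuming a sphere and ~0.25 nm per atomic layer (both assumptions).

radius_nm = 5.0    # 10 nm diameter particle
layer_nm = 0.25    # assumed monolayer thickness
shell = 2 * layer_nm

inner = (radius_nm - shell) ** 3
fraction_mobile = 1 - inner / radius_nm ** 3
print(f"~{fraction_mobile:.0%} of atoms sit in the two outer layers")  # ~27%
```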

    By contrast, if the droplets were to melt to a liquid state, the orderliness of the crystal structure would be eliminated entirely — like a wall tumbling into a heap of bricks.

    Technically, the particles’ deformation is pseudoelastic, meaning that the material returns to its original shape after the stresses are removed — like a squeezed rubber ball — as opposed to plasticity, as in a deformable lump of clay that retains a new shape.

    The phenomenon of plasticity by interfacial diffusion was first proposed by Robert L. Coble, a professor of ceramic engineering at MIT, and is known as “Coble creep.” “What we saw is aptly called Coble pseudoelasticity,” Li says.

    Now that the phenomenon has been understood, researchers working on nanocircuits or other nanodevices can quite easily compensate for it, Li says. If the nanoparticles are protected by even a vanishingly thin layer of oxide, the liquidlike behavior is almost completely eliminated, making stable circuits possible.

    Possible benefits

    On the other hand, for some applications this phenomenon might be useful: In circuits where electrical contacts need to withstand rotational reconfiguration, for example, particles designed to maximize this effect could prove valuable, made from noble metals or kept in a reducing atmosphere so that an oxide layer cannot stably form, Li says.

    The new finding flies in the face of expectations — in part, because of a well-understood relationship, in most materials, in which mechanical strength increases as size is reduced.

    “In general, the smaller the size, the higher the strength,” Li says, but “at very small sizes, a material component can get very much weaker. The transition from ‘smaller is stronger’ to ‘smaller is much weaker’ can be very sharp.”

    That crossover, he says, takes place at about 10 nanometers at room temperature — a size that microchip manufacturers are approaching as circuits shrink. When this threshold is reached, Li says, it causes “a very precipitous drop” in a nanocomponent’s strength.

    The findings could also help explain a number of anomalous results seen in other research on small particles, Li says.

    “The … work reported in this paper is first-class,” says Horacio Espinosa, a professor of manufacturing and entrepreneurship at Northwestern University who was not involved in this research. “These are very difficult experiments, which revealed for the first time shape recovery of silver nanocrystals in the absence of dislocation. … Li’s interpretation of the experiments using atomistic modeling illustrates recent progress in comparing experiments and simulations as it relates to spatial and time scales. This has implications to many aspects of mechanics of materials, so I expect this work to be highly cited.”

    The research team included Jun Sun, Longbing He, Tao Xu, Hengchang Bi, and Litao Sun, all of Southeast University in Nanjing, China; Yu-Chieh Lo of MIT and Kyoto University; Ze Zhang of Zhejiang University; and Scott Mao of the University of Pittsburgh. It was supported by the National Basic Research Program of China; the National Natural Science Foundation of China; the Chinese Ministry of Education; the National Science Foundation of Jiangsu Province, China; and the U.S. National Science Foundation.

    See the full article here.

  • richardmitnick 7:04 pm on September 29, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Modeling shockwaves through the brain” 


    MIT News

    September 29, 2014
    Jennifer Chu | MIT News Office

    New scaling law helps estimate humans’ risk of blast-induced traumatic brain injury.

    Since the start of the military conflicts in Iraq and Afghanistan, more than 300,000 soldiers have returned to the United States with traumatic brain injury (TBI) caused by exposure to bomb blasts — and in particular, exposure to improvised explosive devices, or IEDs. Symptoms of traumatic brain injury can range from the mild, such as lingering headaches and nausea, to more severe impairments in memory and cognition.

    Jose-Luis Olivares/MIT

    Since 2007, the U.S. Department of Defense has recognized the critical importance and complexity of this problem, and has made significant investments in traumatic brain injury research. Nevertheless, there remain many gaps in scientists’ understanding of the effects of blasts on the human brain; most new knowledge has come from experiments with animals.

    MIT researchers have developed a model of the human head for use in simulations to predict the risk for blast-induced traumatic brain injury. Relevant tissue structures include the skull (green), brain (red), and flesh (blue). Courtesy of the researchers

    Now MIT researchers have developed a scaling law that predicts a human’s risk of brain injury, based on previous studies of blasts’ effects on animal brains. The method may help the military develop more protective helmets, as well as aid clinicians in diagnosing traumatic brain injury — often referred to as the “invisible wounds” of battle.

    “We’re really focusing on mild traumatic brain injury, where we know the least, but the problem is the largest,” says Raul Radovitzky, a professor of aeronautics and astronautics and associate director of the MIT Institute for Soldier Nanotechnologies (ISN). “It often remains undetected. And there’s wide consensus that this is clearly a big issue.”

    While previous scaling laws predicted that humans’ brains would be more resilient to blasts than animals’, Radovitzky’s team found the opposite: that in fact, humans are much more vulnerable, as they have thinner skulls to protect much larger brains.

    A group of ISN researchers led by Aurélie Jean, a postdoc in Radovitzky’s group, developed simulations of human, pig, and rat heads, and exposed each to blasts of different intensities. Their simulations predicted the effects of the blasts’ shockwaves as they propagated through the skulls and brains of each species. Based on the resulting differences in intracranial pressure, the team developed an equation, or scaling law, to estimate the risk of brain injury for each species.

    “The great thing about doing this on the computer is that it allows you to reduce and possibly eventually eliminate animal experiments,” Radovitzky says.

    The MIT team and co-author James Q. Zheng, chief scientist at the U.S. Army’s soldier protection and individual equipment program, detail their results this week in the Proceedings of the National Academy of Sciences.

    Air (through the) head

    A blast wave is the shockwave, or wall of compressed air, that rushes outward from the epicenter of an explosion. Aside from the physical fallout of shrapnel and other chemical elements, the blast wave alone can cause severe injuries to the lungs and brain. In the brain, a shockwave can slam through soft tissue, with potentially devastating effects.

    In 2010, Radovitzky’s group, working in concert with the Defense and Veterans Brain Injury Center, a part of the U.S. military health system, developed a highly sophisticated, image-based computational model of the human head that illustrates the ways in which pressurized air moves through its soft tissues. With this model, the researchers showed how the energy from a blast wave can easily reach the brain through openings such as the eyes and sinuses — and also how covering the face with a mask can prevent such injuries. Since then, the team has developed similar models for pigs and rats, capturing the mechanical response of brain tissue to shockwaves.

    In their current work, the researchers calculated the vulnerability of each species to brain injury by establishing a mathematical relationship between properties of the skull, brain, and surrounding flesh, and the propagation of incoming shockwaves. The group considered each brain structure’s volume, density, and celerity — how fast stress waves propagate through a tissue. They then simulated the brain’s response to blasts of different intensities.
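The paper's actual scaling law is not reproduced here, but the tissue properties listed above combine into a standard wave-physics quantity: acoustic impedance, density times celerity, which governs how much of a pressure wave's energy crosses an interface between two media. The sketch below uses rough textbook-style values for air and soft tissue, not figures from the study.

```python
# Illustrative only: impedance mismatch between air and soft tissue.
# Z = density (kg/m^3) * wave speed (m/s); the intensity transmission
# coefficient gives the fraction of incident wave energy that crosses
# the interface.

def intensity_transmission(z1: float, z2: float) -> float:
    """Fraction of incident wave energy transmitted from medium 1 to 2."""
    return 4 * z1 * z2 / (z1 + z2) ** 2

z_air = 1.2 * 340            # assumed: air density * speed of sound
z_soft_tissue = 1000 * 1500  # assumed: approximate soft-tissue values

# Only ~0.1% of energy transmits directly; most reflects, which is part of
# why fleshy structures shield the brain and why openings like the eyes
# and sinuses matter so much.
print(f"{intensity_transmission(z_air, z_soft_tissue):.4f}")
```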

    “What the simulation allows you to do is take what happens outside, which is the same across species, and look at how strong was the effect of the blast inside the brain,” Jean says.

    In general, they found that an animal’s skull and other fleshy structures act as a shield, blunting the effects of a blast wave: The thicker these structures are, the less vulnerable an animal is to injury. Compared with the more prominent skulls of rats and pigs, a human’s thinner skull increases the risk for traumatic brain injury.

    Shifting the problem

    This finding runs counter to previous theories, which held that an animal’s vulnerability to blasts depends on its overall mass, but which ignored the role of protective physical structures. According to these theories, humans, being more massive than pigs or rats, would be better protected against blast waves.

    Radovitzky says this reasoning stems from studies of “blast lung” — blast-induced injuries such as tearing, hemorrhaging, and swelling of the lungs, where it was found that mass matters: The larger an animal is, the more resilient it may be to lung damage. Informed by such studies, the military has since developed bulletproof vests that have dramatically decreased the number of blast-induced lung injuries in recent years.

    “There have essentially been no reported cases of blast lung in the last 10 years in Iraq or Afghanistan,” Radovitzky notes. “Now we’ve shifted that problem to traumatic brain injury.”

    In collaboration with Army colleagues, Radovitzky and his group are performing basic research to help the Army develop helmets that better protect soldiers. To this end, the team is extending the simulation approach they used for blast to other types of threats.

    His group is also collaborating with audiologists at Massachusetts General Hospital, where victims of the Boston Marathon bombing are being treated for ruptured eardrums.

    “They have an exact map of where each victim was, relative to the blast,” Radovitzky says. “In principle, we could simulate the event, find out the level of exposure of each of those victims, put it in our scaling law, and we could estimate their risk of developing a traumatic brain injury that may not be detected in an MRI.”

    Joe Rosen, a professor of surgery at Dartmouth Medical School, sees the group’s scaling law as a promising window into identifying a long-sought mechanism for blast-induced traumatic brain injury.

    “Eighty percent of the injuries coming off the battlefield are blast-induced, and mild TBIs may not have any evidence of injury, but they end up the rest of their lives impaired,” says Rosen, who was not involved in the research. “Maybe we can realize they’re getting doses of these blasts, and that a cumulative dose is what causes [TBI], and before that point, we can pull them off the field. I think this work will be important, because it puts a stake in the ground so we can start making some progress.”

    This work was supported by the U.S. Army through ISN.

    See the full article here.

  • richardmitnick 8:54 pm on September 28, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Biologists find an early sign of cancer” 


    MIT News

    September 28, 2014
    Anne Trafton | MIT News Office

    Patients show boost in certain amino acids years before diagnosis of pancreatic cancer.

    Years before they show any other signs of disease, pancreatic cancer patients have very high levels of certain amino acids in their bloodstream, according to a new study from MIT, Dana-Farber Cancer Institute, and the Broad Institute.

    Christine Daniloff/MIT

    This finding, which suggests that muscle tissue is broken down in the disease’s earliest stages, could offer new insights into developing early diagnostics for pancreatic cancer, which kills about 40,000 Americans every year and is usually not caught until it is too late to treat.

    The study, which appears today in the journal Nature Medicine, is based on an analysis of blood samples from 1,500 people participating in long-term health studies. The researchers compared samples from people who were eventually diagnosed with pancreatic cancer and samples from those who were not. The findings were dramatic: People with a surge in amino acids known as branched chain amino acids were far more likely to be diagnosed with pancreatic cancer within one to 10 years.
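The study design described above, comparing stored blood samples from participants who later developed pancreatic cancer against those who did not, supports a relative-risk calculation. The counts below are entirely hypothetical (the article reports an association but none of these numbers); the sketch only shows the shape of the comparison.

```python
# Relative risk of later diagnosis among participants with elevated
# branched-chain amino acids vs those without. All counts are made up
# for illustration.

def relative_risk(cases_exposed: int, n_exposed: int,
                  cases_unexposed: int, n_unexposed: int) -> float:
    """Ratio of incidence in the exposed group to the unexposed group."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

rr = relative_risk(cases_exposed=30, n_exposed=750,
                   cases_unexposed=15, n_unexposed=750)
print(f"Relative risk: {rr:.1f}")  # 2.0 with these hypothetical counts
```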

    “Pancreatic cancer, even at its very earliest stages, causes breakdown of body protein and deregulated metabolism. What that means for the tumor, and what that means for the health of the patient — those are long-term questions still to be answered,” says Matthew Vander Heiden, an associate professor of biology, a member of MIT’s Koch Institute for Integrative Cancer Research, and one of the paper’s senior authors.

    The paper’s other senior author is Brian Wolpin, an assistant professor of medical oncology at Dana-Farber. Wolpin, a clinical epidemiologist, assembled the patient sample from several large public-health studies. All patients had their blood drawn when they began participating in the studies and subsequently filled out annual health questionnaires.

    Working with researchers at the Broad Institute, the team analyzed blood samples for more than 100 different metabolites — molecules, such as proteins and sugars, produced as the byproducts of metabolic processes.

    “What we found was that this really interesting signature fell out as predicting pancreatic cancer diagnosis, which was elevation in these three branched chain amino acids: leucine, isoleucine, and valine,” Vander Heiden says. These are among the 20 amino acids — the building blocks for proteins — normally found in the human body.

    Some of the patients in the study were diagnosed with pancreatic cancer just one year after their blood samples were taken, while others were diagnosed two, five, or even 10 years later.

    “We found that higher levels of branched chain amino acids were present in people who went on to develop pancreatic cancer compared to those who did not develop the disease,” Wolpin says. “These findings led us to hypothesize that the increase in branched chain amino acids is due to the presence of an early pancreatic tumor.”

    Early protein breakdown

    Vander Heiden’s lab tested this hypothesis by studying mice that are genetically programmed to develop pancreatic cancer. “Using those mouse models, we found that we could perfectly recapitulate these exact metabolic changes during the earliest stages of cancer,” Vander Heiden says. “What happens is, as people or mice develop pancreatic cancer, at the very earliest stages, it causes the body to enter this altered metabolic state where it starts breaking down protein in distant tissues.”

    “This is a finding of fundamental importance in the biology of pancreatic cancer,” says David Tuveson, a professor at the Cancer Center at Cold Spring Harbor Laboratory who was not involved in the work. “It really opens a window of possibility for labs to try to determine the mechanism of this metabolic breakdown.”

    The researchers are now investigating why this protein breakdown, which has not been seen in other types of cancer, occurs in the early stages of pancreatic cancer. They suspect that pancreatic tumors may be trying to feed their own appetite for amino acids that they need to build cancerous cells. The researchers are also exploring possible links between this early protein breakdown and the wasting disease known as cachexia, which often occurs in the late stages of pancreatic cancer.

    Also to be answered is the question of whether this signature could be used for early detection. The findings need to be validated with more data, and it may be difficult to develop a reliable diagnostic based on this signature alone, Vander Heiden says. However, he believes that studying this metabolic dysfunction further may reveal additional markers, such as misregulated hormones, that could be combined to generate a more accurate test.

    The findings may also allow scientists to pursue new treatments that would work by targeting tumor metabolism and cutting off a tumor’s nutrient supply, Vander Heiden says.

    MIT’s contribution to this research was funded by the Lustgarten Foundation, the National Institutes of Health, the Burroughs Wellcome Fund, and the Damon Runyon Cancer Research Foundation.

    See the full article here.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 2:23 pm on September 25, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “How to make stronger, ‘greener’ cement” 


    MIT News

    September 25, 2014
    David L. Chandler | MIT News Office

    Concrete is the world’s most-used construction material, and a leading contributor to global warming, producing as much as one-tenth of industry-generated greenhouse-gas emissions. Now a new study suggests a way in which those emissions could be reduced by more than half — and the result would be a stronger, more durable material.


    The findings come from the most detailed molecular analysis yet of the complex structure of concrete, which is a mixture of sand, gravel, water, and cement. Cement is made by cooking calcium-rich material, usually limestone, with silica-rich material — typically clay — at temperatures of 1,500 degrees Celsius, yielding a hard mass called “clinker.” This is then ground up into a powder. The decarbonation of limestone, and the heating of cement, are responsible for most of the material’s greenhouse-gas output.
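    The carbon cost of that decarbonation step follows directly from the stoichiometry of CaCO3 → CaO + CO2. A back-of-envelope calculation, using standard molar masses and counting only the chemistry (no kiln fuel):

    ```python
    # Why clinker production is carbon-intensive even before any fuel is
    # burned: the decarbonation step CaCO3 -> CaO + CO2 releases CO2 by
    # stoichiometry alone. Molar masses in g/mol.
    M_CACO3 = 100.09
    M_CAO = 56.08
    M_CO2 = 44.01

    # Mass of CO2 released per kilogram of CaO (lime) produced:
    co2_per_kg_cao = M_CO2 / M_CAO
    print(f"{co2_per_kg_cao:.2f} kg CO2 per kg CaO from decarbonation alone")
    ```

    Roughly three-quarters of a kilogram of CO2 per kilogram of lime, before the emissions from heating the kiln are even counted — which is why reducing the calcium content of the mix matters so much.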

    The new analysis suggests that reducing the ratio of calcium to silicate would not only cut those emissions, but would actually produce better, stronger concrete. These findings are described in the journal Nature Communications by MIT senior research scientist Roland Pellenq; professors Krystyn Van Vliet, Franz-Josef Ulm, Sidney Yip, and Markus Buehler; and eight co-authors at MIT and at CNRS in Marseille, France.

    “Cement is the most-used material on the planet,” Pellenq says, noting that its present usage is estimated to be three times that of steel. “There’s no other solution to sheltering mankind in a durable way — turning liquid into stone in 10 hours, easily, at room temperature. That’s the magic of cement.”

    In conventional cements, Pellenq explains, the calcium-to-silica ratio ranges anywhere from about 1.2 to 2.2, with 1.7 accepted as the standard. But the resulting molecular structures have never been compared in detail. Pellenq and his colleagues built a database of all these chemical formulations, finding that the optimum mixture was not the one typically used today, but rather a ratio of about 1.5.

    As the ratio varies, he says, the molecular structure of the hardened material progresses from a tightly ordered crystalline structure to a disordered glassy structure. They found the ratio of 1.5 parts calcium for every one part silica to be “a magical ratio,” Pellenq says, because at that point the material can achieve “two times the resistance of normal cement, in mechanical resistance to fracture, with some molecular-scale design.”
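    As an illustration of the kind of screen such a database enables, the toy sketch below scans candidate calcium-to-silica ratios across the conventional range and picks the best one. The merit function here is a made-up placeholder that peaks at 1.5; it stands in for the team's molecular-scale simulations, which are not reproduced here:

    ```python
    # Toy sketch of a database scan over cement formulations. The merit
    # function is hypothetical (a parabola peaking at 1.5) and only
    # illustrates the search procedure, not the actual physics.
    import numpy as np

    ratios = np.arange(1.2, 2.21, 0.05)   # conventional Ca:Si range, ~1.2 to 2.2
    merit = -(ratios - 1.5) ** 2          # placeholder figure of merit

    best = ratios[np.argmax(merit)]
    print(f"best Ca:Si ratio in scan: {best:.2f}")
    ```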

    The findings, Pellenq adds, were “validated against a large body of experimental data.” Since emissions related to concrete production are estimated to represent 5 to 10 percent of industrial greenhouse-gas emissions, he says, “any reduction in calcium content in the cement mix will have an impact on the CO2.” In fact, he says, the reduction in carbon emissions could be as much as 60 percent.

    In addition to the overall improvement in mechanical strength, Pellenq says, because the material would be more glassy and less crystalline, there would be “no residual stresses in the material, so it would be more fracture-resistant.”

    The work is the culmination of five years of research by a collaborative team from MIT and CNRS, where Pellenq is research director. The two institutions have a joint laboratory at MIT called the Multi-Scale Materials Science for Energy and Environment, run by Pellenq and Ulm, who is director of MIT’s Concrete Sustainability Hub, and hosted by the MIT Energy Initiative.

    Because of its improved resistance to mechanical stress, Pellenq says the revised formulation could be of particular interest to the oil and gas industries, where cement around well casings is crucial to preventing leakage and blowouts. “More resistant cement certainly is something they would consider,” Pellenq says.

    So far, the work has remained at the molecular level of analysis, he says. “Next, we have to make sure these nanoscale properties translate to the mesoscale” — that is, to the engineering scale of applications for infrastructure, housing, and other uses.

    Zdeněk Bažant, a professor of civil and environmental engineering, mechanical engineering, and materials science and engineering at Northwestern University who was not involved in this research, says, “Roland Pellenq, with his group at MIT, is doing cutting-edge research, clarifying the nanostructure and properties of cement hydrates.”

    The Concrete Sustainability Hub is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

    See the full article here.


     
  • richardmitnick 8:14 pm on September 21, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Magnetic fields make the excitons go ’round”


    MIT News

    September 21, 2014
    David L. Chandler | MIT News Office

    A major limitation in the performance of solar cells happens within the photovoltaic material itself: When photons strike the molecules of a solar cell, they transfer their energy, producing quasi-particles called excitons — an energized state of molecules. That energized state can hop from one molecule to the next until it’s transferred to electrons in a wire, which can light up a bulb or turn a motor.


    But as the excitons hop through the material, they are prone to getting stuck in minuscule defects, or traps — causing them to release their energy as wasted light.

    Now a team of researchers at MIT and Harvard University has found a way of rendering excitons immune to these traps, possibly improving photovoltaic devices’ efficiency. The work is described in a paper in the journal Nature Materials.

    Their approach is based on recent research on exotic electronic states known as topological insulators, in which the bulk of a material is an electrical insulator — that is, it does not allow electrons to move freely — while its surface is a good conductor.

    The MIT-Harvard team used this underlying principle, called topological protection, but applied it to excitons instead of electrons, explains lead author Joel Yuen, a postdoc in MIT’s Center for Excitonics, part of the Research Laboratory of Electronics. Topological protection, he says, “has been a very popular idea in the physics and materials communities in the last few years,” and has been successfully applied to both electronic and photonic materials.

    Moving on the surface

    Topological excitons would move only at the surface of a material, Yuen explains, with the direction of their motion determined by the direction of an applied magnetic field. In that respect, their behavior is similar to that of topological electrons or photons.

    In its theoretical analysis, the team studied the behavior of excitons in an organic material, a porphyrin thin film, and determined that their motion through the material would be immune to the kind of defects that tend to trap excitons in conventional solar cells.

    The choice of porphyrin for this analysis was based on the fact that it is a well-known and widely studied family of materials, says co-author Semion Saikin, a postdoc at Harvard and an affiliate of the Center for Excitonics. The next step, he says, will be to extend the analysis to other kinds of materials.

    Structure of porphine, the simplest porphyrin

    While the work so far has been theoretical, experimentalists are eager to pursue the concept. Ultimately, this approach could lead to novel circuits that are similar to electronic devices but based on controlling the flow of excitons rather than electrons, Yuen says. “If there are ever excitonic circuits,” he says, “this could be the mechanism” that governs their functioning. But the likely first application of the work would be in creating solar cells that are less vulnerable to the trapping of excitons.

    Eric Bittner, a professor of chemistry at the University of Houston who was not associated with this work, says, “The work is interesting on both the fundamental and practical levels. On the fundamental side, it is intriguing that one may be able to create excitonic materials with topological properties. This opens a new avenue for both theoretical and experimental work. … On the practical side, the interesting properties of these materials and the fact that we’re talking about pretty simple starting components — porphyrin thin films — makes them novel materials for new devices.”

    The work received support from the U.S. Department of Energy and the Defense Threat Reduction Agency. Norman Yao, a graduate student at Harvard, was also a co-author.

    See the full article here.


     
  • richardmitnick 7:53 pm on September 21, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “New formulation leads to improved liquid battery”


    MIT News

    September 21, 2014
    David L. Chandler | MIT News Office

    Cheaper, longer-lasting materials could enable batteries that make wind and solar energy more competitive.


    Researchers at MIT have improved a proposed liquid battery system that could enable renewable energy sources to compete with conventional power plants.

    Donald Sadoway and colleagues have already started a company to produce electrical-grid-scale liquid batteries, whose layers of molten material automatically separate due to their differing densities. But the new formula — published in the journal Nature by Sadoway, former postdocs Kangli Wang and Kai Jiang, and seven others — substitutes different metals for the molten layers used in a battery previously developed by the team.

    Sadoway, the John F. Elliott Professor of Materials Chemistry, says the new formula allows the battery to work at a temperature more than 200 degrees Celsius lower than the previous formulation. In addition to the lower operating temperature, which should simplify the battery’s design and extend its working life, the new formulation will be less expensive to make, he says.

    The battery uses two layers of molten metal, separated by a layer of molten salt that acts as the battery’s electrolyte (the layer that charged particles pass through as the battery is charged or discharged). Because each of the three materials has a different density, they naturally separate into layers, like oil floating on water.

    The original system, using magnesium for one of the battery’s electrodes and antimony for the other, required an operating temperature of 700 degrees Celsius. But with the new formulation, in which one electrode is made of lithium and the other of a mixture of lead and antimony, the battery can operate at temperatures of 450 to 500 degrees Celsius.

    Extensive testing has shown that even after 10 years of daily charging and discharging, the system should retain about 85 percent of its initial efficiency — a key factor in making such a technology an attractive investment for electric utilities.
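    That retention figure implies a remarkably small loss per cycle. A rough estimate, assuming the fade compounds uniformly over roughly 3,650 daily charge-discharge cycles:

    ```python
    # Back-of-envelope: if the system retains ~85 percent of its initial
    # efficiency after ~10 years of daily cycling, the implied average
    # loss per cycle is tiny, assuming uniform geometric fade.
    cycles = 10 * 365
    retention_total = 0.85
    per_cycle = retention_total ** (1 / cycles)   # retention factor per cycle
    loss_ppm = (1 - per_cycle) * 1e6

    print(f"retention per cycle: {per_cycle:.6f} (~{loss_ppm:.0f} ppm loss per cycle)")
    ```

    A few dozen parts per million of loss per cycle — the kind of durability that makes the economics interesting to utilities.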

    Currently, the only widely used system for utility-scale storage of electricity is pumped hydro, in which water is pumped uphill to a storage reservoir when excess power is available, and then flows back down through a turbine to generate power when it is needed. Such systems can be used to match the intermittent production of power from irregular sources, such as wind and solar power, with variations in demand. Because of inevitable losses from the friction in pumps and turbines, such systems return about 70 percent of the power that is put into them (which is called the “round-trip efficiency”).
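    Round-trip efficiency, as used here, is simply the ratio of energy recovered to energy put in. A minimal illustration with made-up numbers matching the quoted 70 percent figure for pumped hydro:

    ```python
    # Round-trip efficiency: fraction of stored energy that comes back out.
    # The energy figures are illustrative, not from any real plant.
    energy_in_mwh = 100.0    # energy used to pump water uphill
    energy_out_mwh = 70.0    # energy recovered through the turbine

    round_trip = energy_out_mwh / energy_in_mwh
    print(f"round-trip efficiency: {round_trip:.0%}")  # prints 70%
    ```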

    Sadoway says his team’s new liquid-battery system can already deliver the same 70 percent efficiency, and with further refinements may be able to do better. And unlike pumped hydro systems — which are only feasible in locations with sufficient water and an available hillside — the liquid batteries could be built virtually anywhere, and at virtually any size. “The fact that we don’t need a mountain, and we don’t need lots of water, could give us a decisive advantage,” Sadoway says.

    The biggest surprise for the researchers was that the antimony-lead electrode performed so well. They found that while antimony could produce a high operating voltage, and lead gave a low melting point, a mixture of the two combined both advantages, with a voltage as high as antimony alone, and a melting point between that of the two constituents — contrary to expectations that lowering the melting point would come at the expense of also reducing the voltage.

    “We hoped [the characteristics of the two metals] would be nonlinear,” Sadoway says — that is, that the operating voltage would not end up halfway between that of the two individual metals. “They proved to be [nonlinear], but beyond our imagination. There was no decline in the voltage. That was a stunner for us.”

    Not only did that provide significantly improved materials for the group’s battery system, but it opens up whole new avenues of research, Sadoway says. Going forward, the team will continue to search for other combinations of metals that might provide even lower-temperature, lower-cost, and higher-performance systems. “Now we understand that liquid metals bond in ways that we didn’t understand before,” he says.

    With this fortuitous finding, Sadoway says, “Nature tapped us on the shoulder and said, ‘You know, there’s a better way!’” And because there has been little commercial interest in exploring the properties and potential uses of liquid metals and alloys of the type that are most attractive as electrodes for liquid metal batteries, he says, “I think there’s still room for major discoveries in this field.”

    Robert Metcalfe, professor of innovation at the University of Texas at Austin, who was not involved in this work, says, “The Internet gave us cheap and clean connectivity using many kinds of digital storage. Similarly, we will solve cheap and clean energy with many kinds of storage. Energy storage will absorb the increasing randomness of energy supply and demand, shaving peaks, increasing availability, improving efficiency, lowering costs.”

    Metcalfe adds that Sadoway’s approach to storage using liquid metals “is very promising.”

    The research was supported by the U.S. Department of Energy’s Advanced Research Projects Agency-Energy and by French energy company Total.

    See the full article here.


     
  • richardmitnick 12:15 pm on September 19, 2014 Permalink | Reply
    Tags: MIT News, Mechanical Engineering

    From MIT: “Fingertip sensor gives robot unprecedented dexterity”


    MIT News

    September 19, 2014
    Larry Hardesty | MIT News Office

    Researchers at MIT and Northeastern University have equipped a robot with a novel tactile sensor that lets it grasp a USB cable draped freely over a hook and insert it into a USB port.

    Armed with the GelSight sensor, a robot can grasp a freely hanging USB cable and plug it into a USB port. Photo: Melanie Gonick/MIT

    The sensor is an adaptation of a technology called GelSight, which was developed by the lab of Edward Adelson, the John and Dorothy Wilson Professor of Vision Science at MIT, and first described in 2009. The new sensor isn’t as sensitive as the original GelSight sensor, which could resolve details on the micrometer scale. But it’s smaller — small enough to fit on a robot’s gripper — and its processing algorithm is faster, so it can give the robot feedback in real time.


    Industrial robots are capable of remarkable precision when the objects they’re manipulating are perfectly positioned in advance. But according to Robert Platt, an assistant professor of computer science at Northeastern and the research team’s robotics expert, for a robot taking its bearings as it goes, this type of fine-grained manipulation is unprecedented.

    “People have been trying to do this for a long time,” Platt says, “and they haven’t succeeded because the sensors they’re using aren’t accurate enough and don’t have enough information to localize the pose of the object that they’re holding.”

    The researchers presented their results at the International Conference on Intelligent Robots and Systems this week. The MIT team — which consists of Adelson; first author Rui Li, a PhD student; Wenzhen Yuan, a master’s student; and Mandayam Srinivasan, a senior research scientist in the Department of Mechanical Engineering — designed and built the sensor. Platt’s team at Northeastern, which included Andreas ten Pas and Nathan Roscup, developed the robotic controller and conducted the experiments.

    Synesthesia

    Whereas most tactile sensors use mechanical measurements to gauge mechanical forces, GelSight uses optics and computer-vision algorithms.

    “I got interested in touch because I had children,” Adelson says. “I expected to be fascinated by watching how they used their visual systems, but I was actually more fascinated by how they used their fingers. But since I’m a vision guy, the most sensible thing, if you wanted to look at the signals coming into the finger, was to figure out a way to transform the mechanical, tactile signal into a visual signal — because if it’s an image, I know what to do with it.”

    A GelSight sensor — both the original and the new, robot-mounted version — consists of a slab of transparent, synthetic rubber coated on one side with a metallic paint. The rubber conforms to any object it’s pressed against, and the metallic paint evens out the light-reflective properties of diverse materials, making it much easier to make precise optical measurements.

    In the new device, the gel is mounted in a cubic plastic housing, with just the paint-covered face exposed. The four walls of the cube adjacent to the sensor face are translucent, and each conducts a different color of light — red, green, blue, or white — emitted by light-emitting diodes at the opposite end of the cube. When the gel is deformed, light bounces off of the metallic paint and is captured by a camera mounted on the same cube face as the diodes.

    From the different intensities of the different-colored light, the algorithms developed by Adelson’s team can infer the three-dimensional structure of ridges or depressions of the surface against which the sensor is pressed.
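    That optics-to-geometry step can be sketched in the spirit of photometric stereo: under a Lambertian reflectance model, the intensity seen under each light is the dot product of that light's direction with the surface normal, so the normal at each point can be recovered by a least-squares solve. The sketch below is a simplified illustration of the general principle, with hypothetical light geometry, not the team's actual algorithm:

    ```python
    # Photometric-stereo-style recovery of a surface normal from the
    # intensities measured under differently positioned lights.
    # Lambertian model: intensity = L @ n for light-direction matrix L.
    # Light geometry and the test normal are hypothetical.
    import numpy as np

    # Unit directions of three colored lights (made-up geometry).
    L = np.array([
        [1.0, 0.0, 1.0],
        [0.0, 1.0, 1.0],
        [-1.0, -1.0, 1.0],
    ])
    L /= np.linalg.norm(L, axis=1, keepdims=True)

    true_normal = np.array([0.1, -0.2, 1.0])
    true_normal /= np.linalg.norm(true_normal)

    intensities = L @ true_normal          # simulated per-light measurements

    # Recover the normal from the measured intensities by least squares.
    n, *_ = np.linalg.lstsq(L, intensities, rcond=None)
    n /= np.linalg.norm(n)
    print("recovered normal:", np.round(n, 3))
    ```

    Repeating this solve at every pixel of the camera image yields a field of surface normals, which can then be integrated into the three-dimensional relief of whatever the gel is pressed against.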

    In the researchers’ tests, even the lower-resolution, robot-mounted version of the GelSight sensor proved about 100 times more sensitive than a human finger.

    Plug ‘n play

    In Platt’s experiments, a Baxter robot from MIT spinout Rethink Robotics was equipped with a two-pincer gripper, one of whose pincers had a GelSight sensor on its tip. Using conventional computer-vision algorithms, the robot identified the dangling USB plug and attempted to grasp it. It then determined the position of the USB plug relative to its gripper from an embossed USB symbol. Although there was a 3-millimeter variation, in each of two dimensions, in where the robot grasped the plug, it was still able to insert it into a USB port that tolerated only about a millimeter’s error.

    “Having a fast optical sensor to do this kind of touch sensing is a novel idea,” says Daniel Lee, a professor of electrical and systems engineering at the University of Pennsylvania and director of the GRASP robotics lab, “and I think the way that they’re doing it with such low-cost components — using just basically colored LEDs and a standard camera — is quite interesting.”

    How GelSight fares against other approaches to tactile sensing will depend on “the application domain and what the price points are,” Lee says. “What Rui’s device has going for it is that it has very good spatial resolution. It’s able to see heights on the level of tens of microns. Compared to other devices in the domain that use things like barometers, the spatial resolution is very good.”

    “As roboticists, we are always looking for new sensors,” Lee adds. “This is a promising prototype. It could be developed into a practical device.”

    See the full article, with video, here.


     
  • richardmitnick 11:39 am on September 16, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Neuroscientists identify key role of language gene”


    MIT News

    September 15, 2014
    Anne Trafton | MIT News Office

    Neuroscientists have found that a gene mutation that arose more than half a million years ago may be key to humans’ unique ability to produce and understand speech.


    Researchers from MIT and several European universities have shown that the human version of a gene called Foxp2 makes it easier to transform new experiences into routine procedures. When they engineered mice to express humanized Foxp2, the mice learned to run a maze much more quickly than normal mice.

    The findings suggest that Foxp2 may help humans with a key component of learning language — transforming experiences, such as hearing the word “glass” when we are shown a glass of water, into a nearly automatic association of that word with objects that look and function like glasses, says Ann Graybiel, an MIT Institute Professor, member of MIT’s McGovern Institute for Brain Research, and a senior author of the study.

    “This really is an important brick in the wall saying that the form of the gene that allowed us to speak may have something to do with a special kind of learning, which takes us from having to make conscious associations in order to act to a nearly automatic-pilot way of acting based on the cues around us,” Graybiel says.

    Wolfgang Enard, a professor of anthropology and human genetics at Ludwig-Maximilians University in Germany, is also a senior author of the study, which appears in the Proceedings of the National Academy of Sciences this week. The paper’s lead authors are Christiane Schreiweis, a former visiting graduate student at MIT, and Ulrich Bornschein of the Max Planck Institute for Evolutionary Anthropology in Germany.

    All animal species communicate with each other, but humans have a unique ability to generate and comprehend language. Foxp2 is one of several genes that scientists believe may have contributed to the development of these linguistic skills. The gene was first identified in a group of family members who had severe difficulties in speaking and understanding speech, and who were found to carry a mutated version of the Foxp2 gene.

    In 2009, Svante Pääbo, director of the Max Planck Institute for Evolutionary Anthropology, and his team engineered mice to express the human form of the Foxp2 gene, which encodes a protein that differs from the mouse version by only two amino acids. His team found that these mice had longer dendrites — the slender extensions that neurons use to communicate with each other — in the striatum, a part of the brain implicated in habit formation. They were also better at forming new synapses, or connections between neurons.

    Pääbo, who is also an author of the new PNAS paper, and Enard enlisted Graybiel, an expert in the striatum, to help study the behavioral effects of replacing Foxp2. They found that the mice with humanized Foxp2 were better at learning to run a T-shaped maze, in which the mice must decide whether to turn left or right at a T-shaped junction, based on the texture of the maze floor, to earn a food reward.

    The first phase of this type of learning requires using declarative memory, or memory for events and places. Over time, these memory cues become embedded as habits and are encoded through procedural memory — the type of memory necessary for routine tasks, such as driving to work every day or hitting a tennis forehand after thousands of practice strokes.

    Using another type of maze called a cross-maze, Schreiweis and her MIT colleagues were able to test the mice’s ability in each type of memory alone, as well as the interaction of the two types. They found that the mice with humanized Foxp2 performed the same as normal mice when just one type of memory was needed, but their performance was superior when the learning task required them to convert declarative memories into habitual routines. The key finding was therefore that the humanized Foxp2 gene makes it easier to turn mindful actions into behavioral routines.

    The protein produced by Foxp2 is a transcription factor, meaning that it turns other genes on and off. In this study, the researchers found that Foxp2 appears to turn on genes involved in the regulation of synaptic connections between neurons. They also found enhanced dopamine activity in a part of the striatum that is involved in forming procedures. In addition, the neurons of some striatal regions could be turned off for longer periods in response to prolonged activation — a phenomenon known as long-term depression, which is necessary for learning new tasks and forming memories.

    Together, these changes help to “tune” the brain differently to adapt it to speech and language acquisition, the researchers believe. They are now further investigating how Foxp2 may interact with other genes to produce its effects on learning and language.

    This study “provides new ways to think about the evolution of Foxp2 function in the brain,” says Genevieve Konopka, an assistant professor of neuroscience at the University of Texas Southwestern Medical Center who was not involved in the research. “It suggests that human Foxp2 facilitates learning that has been conducive for the emergence of speech and language in humans. The observed differences in dopamine levels and long-term depression in a region-specific manner are also striking and begin to provide mechanistic details of how the molecular evolution of one gene might lead to alterations in behavior.”

    The research was funded by the Nancy Lurie Marks Family Foundation, the Simons Foundation Autism Research Initiative, the National Institutes of Health, the Wellcome Trust, the Fondation pour la Recherche Médicale, and the Max Planck Society.

    See the full article here.


     