Tagged: MIT News

  • richardmitnick 4:28 pm on November 20, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Controlling a material with voltage” 


    MIT News

    November 20, 2014
    David L. Chandler | MIT News Office

    Technique could let a small electrical signal change materials’ electrical, thermal, and optical characteristics.

    A new way of switching the magnetic properties of a material using just a small applied voltage, developed by researchers at MIT and collaborators elsewhere, could signal the beginning of a new family of materials with a variety of switchable properties, the researchers say.

    This diagram shows the principle behind using voltage to change material properties. In this sandwich of materials, applying a voltage results in movement of ions — electrically charged atoms — from the middle, functional layer of material into the target layer. This modifies some of the properties — magnetic, thermal, or optical — of the target material, and the changes remain after the voltage is removed. Diagram courtesy of the researchers; edited by Jose-Luis Olivares/MIT

    The technique could ultimately be used to control properties other than magnetism, including reflectivity or thermal conductivity, they say. The first application of the new finding is likely to be a new kind of memory chip that requires no power to maintain data once it’s written, drastically lowering its overall power needs. This could be especially useful for mobile devices, where battery life is often a major limitation.

    The findings were published this week in the journal Nature Materials by MIT doctoral student Uwe Bauer, associate professor Geoffrey Beach, and six other co-authors.

    Beach, the Class of ’58 Associate Professor of Materials Science and Engineering, says the work is the culmination of Bauer’s PhD thesis research on voltage-programmable materials. The work could lead to a new kind of nonvolatile, ultralow-power memory chip, Beach says.

    The concept of using an electrical signal to control a magnetic memory element is the subject of much research by chip manufacturers, Beach says. But the MIT-based team has made important strides in making the technique practical, he says.

    The structure of these devices is similar to that of a capacitor, Beach explains, with two thin layers of conductive material separated by an insulating layer. The insulating layer is so thin that under certain conditions, electrons can tunnel right through it.

    But unlike in a capacitor, the conductive layers in these low-power chips are magnetized. In the new device, one conductive layer has fixed magnetization, but the other can be toggled between two magnetic orientations by applying a voltage to it. When the magnetic orientations are aligned, it is easier for electrons to tunnel from one layer to the other; when they have opposite orientations, the device is more insulating. These states can be used to represent “zero” and “one.”
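
    One way to picture the read-out side of such a cell is as two resistance states selected by the relative magnetization of the layers. The snippet below is a minimal sketch of that idea; the resistance values, threshold, and bit assignment are illustrative placeholders, not device parameters from the paper.

```python
# Illustrative model of reading a magnetic tunnel junction memory cell.
# Resistance values and the 0/1 assignment are hypothetical placeholders,
# not device parameters from the paper.

R_PARALLEL_OHMS = 1_000      # aligned layers: electrons tunnel easily
R_ANTIPARALLEL_OHMS = 3_000  # opposed layers: junction is more insulating

def read_bit(resistance_ohms, threshold_ohms=2_000):
    """Map a measured junction resistance onto a stored bit (low R -> 1)."""
    return 1 if resistance_ohms < threshold_ohms else 0

print(read_bit(R_PARALLEL_OHMS))      # 1
print(read_bit(R_ANTIPARALLEL_OHMS))  # 0
```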

    The work at MIT shows that it takes just a small voltage to flip the state of the device — which then retains its new state even after power is switched off. Conventional memory devices require a continuous source of power to maintain their state.

    The MIT team was able to design a system in which voltage changes the magnetic properties 100 times more powerfully than other groups have been able to achieve; this strong change in magnetism makes possible the long-term stability of the new memory cells.

    They achieved this by using an insulating layer made of an oxide material in which the applied voltage can rearrange the locations of the oxygen ions. They showed that the properties of the magnetic layer could be changed dramatically by moving the oxygen ions back and forth near the interface.

    The team is now working to ramp up the speed at which these changes can be made to the memory elements. They have already reached switching rates of a megahertz (millions of times per second), but a fully competitive memory module will require a further increase on the order of a hundredfold to a thousandfold, they say.

    The team also found that the magnetic properties could be changed using a pulse of laser light that heats the oxide layer, helping the oxygen ions to move more easily. The laser beam used to alter the state of the material can scan across its surface, making changes as it goes.

    The same techniques could be used to alter other properties of materials, Beach explains, such as reflectivity or thermal conductivity. Such properties can ordinarily be changed only through mechanical or chemical processing. “All these properties could come under electrical control, to be switched on and off, and even ‘written’ using a beam of light,” Beach says. This ability to make such changes on the fly essentially produces “an Etch-a-Sketch for material properties,” he says.

    The new findings “started as a fluke,” Beach says: Bauer was experimenting with the layered material, expecting to see standard temporary capacitive effects from an applied voltage. “But he turned off the voltage and it stayed that way,” with a reversed magnetic state, Beach says, leading to further investigation.

    “I think this will have broad applications,” Beach says, adding that it uses methods and materials that are already standard in microchip manufacturing.

    In addition to Bauer and Beach, the team included Lide Yao and Sebastiaan van Dijken of Aalto University in Finland and, at MIT, graduate students Aik Jun Tan, Parnika Agrawal, and Satoru Emori and professor of ceramics and electronic materials Harry Tuller. The work was supported by the National Science Foundation and Samsung.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 4:12 pm on November 20, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “New 2-D quantum materials for nanoelectronics” 


    MIT News

    November 20, 2014
    David L. Chandler | MIT News Office

    MIT team provides theoretical roadmap to making 2-D electronics with novel properties.

    Researchers at MIT say they have carried out a theoretical analysis showing that a family of two-dimensional materials exhibits exotic quantum properties that may enable a new type of nanoscale electronics.

    These materials are predicted to show a phenomenon called the quantum spin Hall (QSH) effect, and belong to a class of materials known as transition metal dichalcogenides, with layers a few atoms thick. The findings are detailed in a paper appearing this week in the journal Science, co-authored by MIT postdocs Xiaofeng Qian and Junwei Liu; assistant professor of physics Liang Fu; and Ju Li, a professor of nuclear science and engineering and materials science and engineering.

    This diagram illustrates the concept behind the MIT team’s vision of a new kind of electronic device based on 2-D materials. The 2-D material is at the middle of a layered “sandwich,” with layers of another material, boron nitride, at top and bottom (shown in gray). When an electric field is applied to the material, by way of the rectangular areas at top, it switches the quantum state of the middle layer (yellow areas). The boundaries of these “switched” regions act as perfect quantum wires, potentially leading to new electronic devices with low losses. Illustration: Yan Liang

    QSH materials have the unusual property of being electrical insulators in the bulk of the material, yet highly conductive on their edges. This could potentially make them a suitable material for new kinds of quantum electronic devices, many researchers believe.

    But only two materials with QSH properties have been synthesized, and potential applications of these materials have been hampered by two serious drawbacks: Their bandgap, a property essential for making transistors and other electronic devices, is too small, giving a low signal-to-noise ratio; and they lack the ability to switch rapidly on and off. Now the MIT researchers say they have found ways to potentially circumvent both obstacles using 2-D materials that have been explored for other purposes.

    Existing QSH materials only work at very low temperatures and under difficult conditions, Fu says, adding that “the materials we predicted to exhibit this effect are widely accessible. … The effects could be observed at relatively high temperatures.”

    “What is discovered here is a true 2-D material that has this [QSH] characteristic,” Li says. “The edges are like perfect quantum wires.”

    The MIT researchers say this could lead to new kinds of low-power quantum electronics, as well as spintronics devices — a kind of electronics in which the spin of electrons, rather than their electrical charge, is used to carry information.

    Graphene, a two-dimensional, one-atom-thick form of carbon with unusual electrical and mechanical properties, has been the subject of much research, which has led to further research on similar 2-D materials. But until now, few researchers have examined these materials for possible QSH effects, the MIT team says. “Two-dimensional materials are a very active field for a lot of potential applications,” Qian says — and this team’s theoretical work now shows that at least six such materials do share these QSH properties.

    Graphene is an atomic-scale honeycomb lattice made of carbon atoms.

    The MIT researchers studied materials known as transition metal dichalcogenides, a family of compounds made from the transition metals molybdenum or tungsten and the nonmetals tellurium, selenium, or sulfur. These compounds naturally form thin sheets, just atoms thick, that can spontaneously develop a dimerization pattern in their crystal structure. It is this lattice dimerization that produces the effects studied by the MIT team.

    While the new work is theoretical, the team produced a design for a new kind of transistor based on the calculated effects. Called a topological field-effect transistor, or TFET, the design is based on a single layer of the 2-D material sandwiched by two layers of 2-D boron nitride. The researchers say such devices could be produced at very high density on a chip and have very low losses, allowing high-efficiency operation.

    By applying an electric field to the material, the QSH state can be switched on and off, making possible a host of electronic and spintronic devices, they say.

    In addition, this is one of the most promising known materials for possible use in quantum computers, the researchers say. Quantum computing is usually susceptible to disruption — technically, a loss of coherence — from even very small perturbations. But, Li says, topological quantum computers “cannot lose coherence from small perturbations. It’s a big advantage for quantum information processing.”

    Because so much research is already under way on these 2-D materials for other purposes, methods of making them efficiently may be developed by other groups and could then be applied to the creation of new QSH electronic devices, Qian says.

    Nai Phuan Ong, a professor of physics at Princeton University who was not connected to this work, says, “Although some of the ideas have been mentioned before, the present system seems especially promising. This exciting result will bridge two very active subfields of condensed matter physics, topological insulators and dichalcogenides.”

    The research was supported by the National Science Foundation, the U.S. Department of Energy, and the STC Center for Integrated Quantum Materials. Qian and Liu contributed equally to the work.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 5:46 pm on November 14, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Pulling together the early solar system” 


    MIT News

    November 13, 2014
    Jennifer Chu | MIT News Office

    Infant planetary systems are usually nothing more than swirling disks of gas and dust. Over the course of a few million years, this gas gets sucked into the center of the disk to build a star, while the remaining dust accumulates into larger and larger chunks — the building blocks for terrestrial planets.


    Astronomers have observed this protoplanetary disk evolution throughout our galaxy — a process that our own solar system underwent early in its history. However, the mechanism by which planetary disks evolve at such a rapid rate has eluded scientists for decades.

    Now researchers at MIT, Cambridge University, and elsewhere have provided the first experimental evidence that our solar system’s protoplanetary disk was shaped by an intense magnetic field that drove a massive amount of gas into the sun within just a few million years. The same magnetic field may have propelled dust grains along collision courses, eventually smashing them together to form the initial seeds of terrestrial planets.

    Magnified image of the section of the Semarkona meteorite used in this study. Chondrules are millimeter sized, light-colored objects. Copyright: MIT Paleomagnetism Laboratory

    The team analyzed a meteorite known as Semarkona — a space rock that crashed in northern India in 1940, and which is considered one of the most pristine known relics of the early solar system. In their experiments, the researchers painstakingly extracted individual pellets, or chondrules, from a small sample of the meteorite, and measured the magnetic orientations of each grain to determine that, indeed, the meteorite had remained unaltered since its formation in the early solar nebula.

    The researchers then measured the magnetic strength of each grain, and calculated the original magnetic field in which those grains were created. Based on their calculations, the group determined that the early solar system harbored a magnetic field as strong as 5 to 54 microteslas — up to 100,000 times stronger than what exists in interstellar space today. Such a magnetic field would be strong enough to drive gas toward the sun at an extremely fast rate.
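
    For a sense of scale, that field can be checked against the present-day interstellar field. Assuming an interstellar value of roughly 0.5 nanotesla (a commonly quoted order of magnitude, not a figure from the article), the arithmetic behind the “up to 100,000 times stronger” comparison works out as follows.

```python
# Back-of-the-envelope check of the field-strength comparison.
# The present-day interstellar value (~0.5 nanotesla) is an assumed
# order-of-magnitude reference, not a number taken from the study.

nebular_field_T = 54e-6        # upper estimate from the chondrules: 54 microtesla
interstellar_field_T = 0.5e-9  # assumed typical interstellar field: ~0.5 nanotesla

print(f"ratio: {nebular_field_T / interstellar_field_T:,.0f}")  # ~108,000
```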

    “Explaining the rapid timescale in which these disks evolve — in only a few million years — has always been a big mystery,” says Roger Fu, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “It turns out that this magnetic field is strong enough to affect the motion of gas at a large scale, in a very significant way.”

    Fu and his colleagues, including Ben Weiss, a professor of planetary sciences at MIT, publish their results today in the journal Science.

    High fidelity

    More than 99 percent of the mass in a primordial protoplanetary disk is composed of gas, leaving less than 1 percent as solid particles — the dusty seeds of terrestrial planets. Observations of far-off young star systems have revealed that such massive amounts of gas are accreted, or absorbed, into the central star within just a few million years. However, theoretical models have been unable to identify a mechanism to explain such a rapid accretion rate.

    “The idea that the disk gets depleted within just 3 million years is fundamental to understanding how planets form,” Fu says. “But theoretically, that’s difficult to do, and people have had to invoke all these intricate mechanisms to make that happen.”

    There are theoretical models that incorporate magnetic fields as a mechanism for disk evolution, but until now, there has been no observational data to support the theories.

    Fu points out that researchers have been searching since the 1960s — “with little success” — for evidence of early magnetic fields in meteorite samples. That’s because, for the most part, the meteorites studied had been altered in some form or other.

    “Most of these meteorites … were heated, or had water coursing through them, so the chances of any one meteorite retaining a recording of the most primordial magnetic field in the nebula was almost zero,” Fu says.

    He and his colleagues chose to analyze the Semarkona meteorite because of its reputation as a pristine sample from the early solar system.

    “This thing has the unusual advantage of being unaltered, but also happens to be a really excellent magnetic recording device,” Weiss says. “When it formed, it formed the right kind of metal. Many things, even though pristine, didn’t form the right magnetic recording properties. So this thing is really high-fidelity.”

    From millimeter- to kilometer-sized planets

    To determine whether the meteorite was indeed unchanged since its formation, the group identified and extracted a handful of millimeter-sized grains, or chondrules, from a small sample of the meteorite, and then measured their individual magnetic orientations.

    As the meteorite likely formed from the accumulation of individual grains that tumbled onto the meteorite parent body during its assembly, their collective magnetic directions should be random if they have not been remagnetized since they were free-floating in space. If, however, the meteorite underwent heating at some point after its assembly, the individual magnetic orientations would have been wiped clean, replaced by a uniform orientation.
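
    The logic of that test can be illustrated with a simple statistic: the length of the average (resultant) of the grains’ magnetization directions, which is near zero for randomly oriented grains and near one for grains remagnetized along a single direction. The sketch below is only a schematic of the reasoning, not the team’s actual paleomagnetic analysis.

```python
# Toy illustration of the random-versus-uniform orientation test.
# Schematic only; not the analysis performed in the study.
import numpy as np

rng = np.random.default_rng(0)

def mean_resultant_length(unit_vectors):
    """~1 if directions are aligned, ~0 if they are random."""
    return float(np.linalg.norm(unit_vectors.mean(axis=0)))

def random_directions(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n_grains = 50
pristine = random_directions(n_grains)                  # independent, pre-accretion directions
remagnetized = np.tile([0.0, 0.0, 1.0], (n_grains, 1))  # wiped and reset to one direction

print(mean_resultant_length(pristine))      # small (~0.1 for 50 grains)
print(mean_resultant_length(remagnetized))  # 1.0
```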

    The researchers found that each grain they analyzed bore a unique magnetic orientation — proof that the meteorite was indeed pristine.

    “There’s no other alternative but to say this recording is coming from an original nebular field,” Fu says.

    The group then calculated the strength of the original magnetic field, based on the magnetic strength of each chondrule. Their result could support one of two theories of early planetary disk formation: magnetorotational instability, the theory that a turbulent configuration of magnetic fields drove gas toward the sun, or magnetocentrifugal wind, the idea that gas accreted onto the sun via a more orderly, hourglass-shaped pattern of magnetic fields.

    The group’s data also supports two theories of very early planet formation, while ruling out a third.

    “A persistent challenge for understanding how planets form is how to go from micron-sized dust to kilometer-sized planets in only a few million years,” Fu says. “How chondrules formed was probably instrumental to how planets formed.”

    Now, based on the group’s results, Fu says it’s likely that chondrules formed either as molten droplets resulting from the collisions of 10- to 1,000-kilometer rocky bodies, or through the spontaneous compression of surrounding gas, which melted dust particles together.

    It’s unlikely that chondrules formed via electric currents, or X-wind — flash-heating events that occur close to the sun. According to theoretical models, such events can only take place within magnetic fields stronger than 100 microteslas — far greater than what Fu and his colleagues measured.

    “Until now, we were missing data,” Fu says. “Now there is a data point. And to understand fully the implications of what 50 microteslas can do in a gas, there’s a lot more theoretical work to be done.”

    Jerome Gattacceca, research director at the European Centre for Research and Teaching in Environmental Geosciences (CEREGE), says the solar system would have looked very different today if it had not been exposed to magnetic fields.

    “Without this kind of mechanism, all the matter in the solar system would have ended up in the sun, and we would not be here to discuss it,” says Gattacceca, who was not involved in the research. “There has to be a mechanism to prevent that. Several models exist, and this paper provides a viable mechanism, based on the existence of a significant magnetic field, to form the solar system as we know it.”

    This work was funded in part by NASA and the National Science Foundation.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 10:13 am on November 7, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Nanoscale work yields big results” 


    MIT News

    November 6, 2014
    Leda Zimmerman | MIT Spectrum

    An avid amateur astronomer during her childhood in Vukovar, Croatia, Silvija Gradečak, associate professor in materials science and engineering, was not content observing the physical world only from a distance: “I discovered what I really liked about science were experiments, and having the ability to make something with my hands,” she says.

    Silvija Gradečak’s nanoscale work creates big-scale results that could transform energy production, storage, and lighting. Photo illustration: Len Rubenstein

    Today, handling the smallest elements in nature, Gradečak is generating large-scale results that may transform energy production, storage, and lighting. Her enthusiasm for both basic and applied research will help to power MIT.nano, the Institute’s $350 million nanoscale laboratory now under construction. Gradečak looks forward to working “with people from different backgrounds, advanced nanofabrication tools, and the seamless integration of the technologies needed to work on these problems.”

    At the Swiss Federal Institute of Technology, where Gradečak pursued her doctorate, an electron microscope revealed a new terrain ripe for exploration and manipulation. “I saw individual atoms for the first time, and came to realize that having the ability to arrange them on the nanoscale is a powerful tool,” she says. “There were so many new problems available to work on. All kinds of possibilities emerge when you have the capability to develop materials with unique structure and properties not found in nature.”

    Teasing out these properties becomes possible when examining materials at the nanoscale (a nanometer is one-billionth of a meter, and nanoscale materials run one to 100 nanometers in size). During graduate school, Gradečak zeroed in on gallium nitride, GaN, a synthetic compound used by the semiconductor industry that turned out to feature some extraordinary optical properties: If the composition of GaN is altered at the nanoscale, the compound can produce light ranging from the ultraviolet to the infrared.
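
    The connection between composition and emission color follows from the bandgap-to-wavelength relation, roughly λ (nm) ≈ 1240 / E_gap (eV). Using typical literature bandgaps for the nitride family (standard reference values, not figures from the article), pure GaN emits in the ultraviolet, while indium-rich alloys reach through the visible toward the infrared.

```python
# Bandgap-to-wavelength conversion using lambda (nm) ~= 1240 / E_gap (eV).
# Bandgap values are typical literature figures, not numbers from the article.

def emission_wavelength_nm(bandgap_eV):
    return 1240.0 / bandgap_eV

for material, gap_eV in [("GaN", 3.4), ("InGaN (illustrative mid-composition)", 2.0), ("InN", 0.7)]:
    print(f"{material}: ~{emission_wavelength_nm(gap_eV):.0f} nm")

# GaN: ~365 nm (ultraviolet); InGaN: ~620 nm (visible); InN: ~1770 nm (infrared)
```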

    As a young researcher investigating nanoscale defects in GaN that changed the compound’s behavior, Gradečak “opened up a new world,” she says. “We all have to find a niche, our passion, and learning that I could design materials, tune their properties and emissions — this ability was amazing to me.”

    Gradečak was especially fascinated by the wealth of potential optical and electrical applications for these nanoscale materials. GaN and similar semiconducting compounds are capable not just of emitting light at a range of wavelengths, but of conducting electricity and heat more efficiently, too.

    Gradečak set about harnessing the power of nanoscale compounds. She developed a unique repertoire of laboratory methods that involve manipulating compounds in their vapor phase in a growth chamber. Inside, atoms take root on substrates in particular configurations based on Gradečak’s desired outcomes.

    In one venture, Gradečak created nanowires, slender, solid fibers composed of nanoscale semiconductor materials that can be grown on varied surfaces such as silicon or flexible polymers. Of infinitesimal diameter, these nanowires are essentially one-dimensional objects, and because they can be millions of times longer than they are wide, they are ideally suited for transmitting energy in the form of electricity, heat, and light.

    One signature application to emerge from this nanowire research is a new and different kind of light-emitting diode (LED). Gradečak’s device more closely approximates sunlight’s red and green wavelengths than current LED technologies. In addition, instead of utilizing expensive materials such as sapphire as a growth medium, as is the typical practice of current manufacturers, Gradečak’s nanowire-based LEDs can be grown on abundant, inexpensive substrates, including flexible plastics. Her invention may prove much more economical for home and industry consumers.

    Another key development from Gradečak’s lab is a solar cell made from zinc oxide nanowires embedded with tiny quantum dots — nanocrystals made from a semiconductor material that are so small they are effectively zero-dimensional. While the device does not yet convert solar energy to electricity as efficiently as today’s silicon-based solar cells, Gradečak notes, “Our devices are transparent and flexible, and in just a few years, we’ve improved efficiency of our cell by two orders of magnitude; this is an amazing accomplishment.”

    See the full article here.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 3:54 pm on November 4, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Doctoral students seek quantum control in Paola Cappellaro’s Quantum Engineering Group” 


    MIT News

    November 4, 2014
    Peter Dunn | Nuclear Science & Engineering

    MIT’s Quantum Engineering Group (QEG) has a challenging but potentially world-changing mission: to harness the quantum properties of matter for use in information technology, metrology, defense, healthcare, and many other fields.

    The interdisciplinary laboratory, headed by Department of Nuclear Science and Engineering (NSE) Associate Professor Paola Cappellaro, is working theoretically and experimentally towards achieving quantum control — the ability to control and utilize the behavior of nanoscale particles, which obey the laws of quantum mechanics rather than classical physics. A close-knit group of student researchers plays a central role, developing basic enabling tools that could yield dividends for decades to come.

    “In one sense we’re doing fundamental research, but it’s really engineering — we’re seeking practical applications,” explains Cappellaro. The QEG is based in MIT’s Research Lab of Electronics, which brings together faculty and students from many departments (including NSE, physics, materials science, and electrical engineering and computer science) and other quantum-oriented programs at the Institute.

    “Everyone’s working from different directions, but the common goal is to get control of quantum systems and be able to exploit them to build quantum devices,” says Cappellaro. “The common theme is using nitrogen-vacancy (NV) centers in diamonds as a promising experimental platform for this goal.” NV centers, a type of crystal defect, allow access to nearby electrons and their intrinsic angular momentum, or spin.

    Graduate students Alexandre Cooper-Roy (left), Masashi Hirose (center), and Ashok Ajoy work to harness the quantum properties of matter in MIT’s Quantum Engineering Group. Photo: Susan Young

    Applications could include leveraging spin to create quantum bits (qubits) for information processing and storage, which would provide exponential increases in computing power, and using NV-containing diamonds as ultra-responsive sensors, able to map molecular structures or monitor nano-scale magnetic fields like those created by brain functions.

    There are many hurdles. One is that spin cannot be completely isolated, and is subjected to deleterious noise. This can lead to the decay of quantum superposition, or decoherence, which represents one of the major challenges to quantum control.

    Doctoral student Masashi Hirose hopes to help solve the decoherence problem by learning how to use the electronic spin of NV centers to control the spin of nearby atomic nuclei (nuclear spin). “The nuclear spin is much harder to control directly than the electronic spin, but it’s more resistant to noise and can stay in a superposition state for milliseconds rather than microseconds, a 1,000 times increase,” explains Hirose. “The nuclear spin can act as an auxiliary qubit to protect the electronic spin from decoherence.”
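
    To see why that matters, consider a generic exponential model of coherence decay, exp(-t/T2) (an idealization for illustration, not the group’s own noise model), using the microsecond and millisecond timescales from the quote.

```python
# Why longer coherence matters, assuming a simple exponential decay model
# exp(-t / T2). The T2 values echo the microsecond-versus-millisecond
# comparison in the quote; the 10-microsecond evolution time is arbitrary.
from math import exp

t_us = 10.0              # example evolution time: 10 microseconds
T2_electron_us = 1.0     # electronic spin coherence, ~microseconds
T2_nuclear_us = 1000.0   # nuclear spin coherence, ~milliseconds

print(f"electron: {exp(-t_us / T2_electron_us):.5f}")  # ~0.00005, coherence essentially gone
print(f"nuclear:  {exp(-t_us / T2_nuclear_us):.3f}")   # ~0.990, coherence nearly intact
```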

    Thus, the nuclear spin is a good candidate for memory functions in quantum computing, while computation could be handled with the electronic spin. Development of a control theory for interactions between the two is a research priority for Hirose, who joined Cappellaro’s lab group in 2009 after undergraduate work in Japan.

    “The subject matter is a dream for me, and the QEG environment is very cooperative, everyone shares their problems,” he says. “I meet with Paola at least every couple of days, and she always welcomes new ideas, and helps figure out next steps.”

    Fellow doctoral student Alexandre Cooper-Roy is also focused on leveraging the NV center electronic spin, and extending today’s rudimentary control abilities to create quantum resources, like multiple-qubit computing structures or sensors.

    “NV centers are interesting because we can use them to control and read out the electronic spins associated with other impurities in the diamond lattice,” says Cooper-Roy. “Now we’d like to be able to put a few of these objects together so that we can do things that aren’t currently accessible, but the dynamics become really complicated.”

    Cooper-Roy, who has studied in Japan and France as well as his native Canada, notes that the QEG’s research approach emphasizes work at room temperature using relatively simple equipment, which makes it possible for individuals to manage projects.

    “A student can come here and build an experiment from scratch; it’s an easy test bed for exploration of quantum mechanics,” he says. “It’s great for training and for seeking practical applications. MIT is a great experience, because there’s an engineering approach — people really focus on optimizing the process. The final product is very important.”

    Meanwhile, another PhD candidate, Ashok Ajoy, is attacking the problem of using quantum sensor technology as a sub-molecular microscope. “We’d like to be able to determine the structure of biomolecules, like proteins,” he explains. “The structure and the function are closely tied, and this would enable, say, the design of antibiotics, or the ability to block or amplify what a molecule does.”

    The current goal is to achieve resolution of a few angstroms, which requires a synergistic combination of theory and experimentation. “In our lab we do both; I like it that way,” he says, adding that Cappellaro’s personal mentorship is an important enabler.

    Long term, there is the potential to achieve sensing at resolution of a single spin: “That’s the holy grail; no one has been able to measure single external nuclear spins and map out their positions,” says Ajoy, who did his undergraduate studies in India. “With these sensors that we and our community have we’re on the cusp; it’s an exciting threshold, a stepping stone to something that could be very useful going forward in all sorts of applications.”

    That broad applicability has attracted funding from blue-chip sources, like the National Science Foundation and the Department of Defense. “All the work is fundamentally enabling,” says Cappellaro. “We develop the tools we use in order to create better devices, but also seek to understand the physics so we can come up with smarter ways to use them.”

    See the full article here.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 9:02 am on October 17, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Nanoparticles get a magnetic handle” 


    MIT News

    October 9, 2014
    David L. Chandler | MIT News Office

    A long-sought goal of creating particles that can emit a colorful fluorescent glow in a biological environment, and that could be precisely manipulated into position within living cells, has been achieved by a team of researchers at MIT and several other institutions. The finding is reported this week in the journal Nature Communications.

    Elemental mapping of the location of iron atoms (blue) in the magnetic nanoparticles and cadmium (red) in the fluorescent quantum dots provides a clear visualization of the way the two kinds of particles naturally separate themselves into a core-and-shell structure. Image courtesy of the researchers

    The new technology could make it possible to track the position of the nanoparticles as they move within the body or inside a cell. At the same time, the nanoparticles could be manipulated precisely by applying a magnetic field to pull them along. And finally, the particles could have a coating of a bioreactive substance that could seek out and bind with particular molecules within the body, such as markers for tumor cells or other disease agents.

    “It’s been a dream of mine for many years to have a nanomaterial that incorporates both fluorescence and magnetism in a single compact object,” says Moungi Bawendi, the Lester Wolfe Professor of Chemistry at MIT and senior author of the new paper. While other groups have achieved some combination of these two properties, Bawendi says that he “was never very satisfied” with results previously achieved by his own team or others.

    For one thing, he says, such particles have been too large to make practical probes of living tissue: “They’ve tended to have a lot of wasted volume,” Bawendi says. “Compactness is critical for biological and a lot of other applications.”

    In addition, previous efforts were unable to produce particles of uniform and predictable size, which could also be an essential property for diagnostic or therapeutic applications.

    Moreover, Bawendi says, “We wanted to be able to manipulate these structures inside the cells with magnetic fields, but also know exactly what it is we’re moving.” All of these goals are achieved by the new nanoparticles, which can be identified with great precision by the wavelength of their fluorescent emissions.

    The new method produces the combination of desired properties “in as small a package as possible,” Bawendi says — which could help pave the way for particles with other useful properties, such as the ability to bind with a specific type of bioreceptor, or another molecule of interest.

    In the technique developed by Bawendi’s team, led by lead author and postdoc Ou Chen, the nanoparticles crystallize such that they self-assemble in exactly the way that leads to the most useful outcome: The magnetic particles cluster at the center, while fluorescent particles form a uniform coating around them. That puts the fluorescent molecules in the most visible location for allowing the nanoparticles to be tracked optically through a microscope.

    “These are beautiful structures, they’re so clean,” Bawendi says. That uniformity arises, in part, because the starting materials, fluorescent nanoparticles that Bawendi and his group have been perfecting for years, are themselves perfectly uniform in size. “You have to use very uniform material to produce such a uniform construction,” Chen says.

    Initially, at least, the particles might be used to probe basic biological functions within cells, Bawendi suggests. As the work continues, later experiments may add additional materials to the particles’ coating so that they interact in specific ways with molecules or structures within the cell, either for diagnosis or treatment.

    The ability to manipulate the particles with electromagnets is key to using them in biological research, Bawendi explains: The tiny particles could otherwise get lost in the jumble of molecules circulating within a cell. “Without a magnetic ‘handle,’ it’s like a needle in a haystack,” he says. “But with the magnetism, you can find it easily.”

    A silica coating on the particles allows additional molecules to attach, causing the particles to bind with specific structures within the cell. “Silica makes it completely flexible; it’s a well developed material that can bind to almost anything,” Bawendi says.

    For example, the coating could have a molecule that binds to a specific type of tumor cells; then, “You could use them to enhance the contrast of an MRI, so you could see the spatial macroscopic outlines of a tumor,” he says.

    The next step for the team is to test the new nanoparticles in a variety of biological settings. “We’ve made the material,” Chen says. “Now we’ve got to use it, and we’re working with a number of groups around the world for a variety of applications.”

    Christopher Murray, a professor of chemistry and materials science and engineering at the University of Pennsylvania who was not connected with this research, says, “This work exemplifies the power of using nanocrystals as building blocks for multiscale and multifunctional structures. We often use the term ‘artificial atoms’ in the community to describe how we are exploiting a new periodic table of fundamental building blocks to design materials, and this is a very elegant example.”

    The study included researchers at MIT; Massachusetts General Hospital; Institut Curie in Paris; the Heinrich-Pette Institute and the Bernhard-Nocht Institute for Tropical Medicine in Hamburg, Germany; Children’s Hospital Boston; and Cornell University. The work was supported by the National Institutes of Health, the Army Research Office through MIT’s Institute for Soldier Nanotechnologies, and the Department of Energy.

    See the full article, with video, here.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 8:20 am on October 17, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Superconducting circuits, simplified” 


    MIT News

    October 17, 2014
    Larry Hardesty | MIT News Office

    Computer chips with superconducting circuits — circuits with zero electrical resistance — would be 50 to 100 times as energy-efficient as today’s chips, an attractive trait given the increasing power consumption of the massive data centers that power the Internet’s most popular sites.

    Shown here is a square-centimeter chip containing the nTron adder, which performed the first computation using the researchers’ new superconducting circuit. Photo: Adam N. McCaughan

    Superconducting chips also promise greater processing power: Superconducting circuits that use so-called Josephson junctions have been clocked at 770 gigahertz, or 500 times the speed of the chip in the iPhone 6.
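
    As a quick sanity check on that comparison, take the iPhone 6’s A8 processor to run at roughly 1.4 GHz (an assumed reference value, not one given in the article):

```python
# Quick check of the "500 times the speed" comparison.
# The iPhone 6 clock rate (~1.4 GHz for the A8 chip) is an assumed
# reference value, not a figure given in the article.

josephson_clock_GHz = 770.0
iphone6_clock_GHz = 1.4

print(f"{josephson_clock_GHz / iphone6_clock_GHz:.0f}x")  # ~550x, i.e. roughly 500 times
```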

    But Josephson-junction chips are big and hard to make; most problematic of all, they use such minute currents that the results of their computations are difficult to detect. For the most part, they’ve been relegated to a few custom-engineered signal-detection applications.

    In the latest issue of the journal Nano Letters, MIT researchers present a new circuit design that could make simple superconducting devices much cheaper to manufacture. And while the circuits’ speed probably wouldn’t top that of today’s chips, they could solve the problem of reading out the results of calculations performed with Josephson junctions.

    The MIT researchers — Adam McCaughan, a graduate student in electrical engineering, and his advisor, professor of electrical engineering and computer science Karl Berggren — call their device the nanocryotron, after the cryotron, an experimental computing circuit developed in the 1950s by MIT professor Dudley Buck. The cryotron was briefly the object of a great deal of interest — and federal funding — as the possible basis for a new generation of computers, but it was eclipsed by the integrated circuit.

    “The superconducting-electronics community has seen a lot of devices come and go, without any real-world application,” McCaughan says. “But in our paper, we have already applied our device to applications that will be highly relevant to future work in superconducting computing and quantum communications.”

    Superconducting circuits are used in light detectors that can register the arrival of a single light particle, or photon; that’s one of the applications in which the researchers tested the nanocryotron. McCaughan also wired together several of the circuits to produce a fundamental digital-arithmetic component called a half-adder.
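
    A half-adder is a standard two-bit building block, whatever the underlying device technology: the sum bit is the exclusive-OR of the inputs and the carry bit is their AND. The sketch below reproduces that logic at the gate level; it is not the nTron circuit topology itself.

```python
# Half-adder logic: sum = A XOR B, carry = A AND B.
# Generic gate-level description, not the nTron circuit from the paper.

def half_adder(a, b):
    return a ^ b, a & b   # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"A={a} B={b} -> sum={s} carry={c}")
# 0+0=00, 0+1=01, 1+0=01, 1+1=10 (carry set)
```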

    Resistance is futile

    Superconductors have no electrical resistance, meaning that electrons can travel through them completely unimpeded. Even the best standard conductors — like the copper wires in phone lines or conventional computer chips — have some resistance; overcoming it requires operational voltages much higher than those that can induce current in a superconductor. Once electrons start moving through an ordinary conductor, they still collide occasionally with its atoms, releasing energy as heat.

    Superconductors are ordinary materials cooled to extremely low temperatures, which damps the vibrations of their atoms, letting electrons zip past without collision. Berggren’s lab focuses on superconducting circuits made from niobium nitride, which has the relatively high operating temperature of 16 Kelvin, or minus 257 degrees Celsius. That’s achievable with liquid helium, which, in a superconducting chip, would probably circulate through a system of pipes inside an insulated housing, like Freon in a refrigerator.

    A liquid-helium cooling system would of course increase the power consumption of a superconducting chip. But given that the starting point is about 1 percent of the energy required by a conventional chip, the savings could still be enormous. Moreover, superconducting computation would let data centers dispense with the cooling systems they currently use to keep their banks of servers from overheating.

    Cheap superconducting circuits could also make it much more cost-effective to build single-photon detectors, an essential component of any information system that exploits the computational speedups promised by quantum computing.

    Engineered to a T

    The nanocryotron — or nTron — consists of a single layer of niobium nitride deposited on an insulator in a pattern that looks roughly like a capital “T.” But where the base of the T joins the crossbar, it tapers to only about one-tenth its width. Electrons sailing unimpeded through the base of the T are suddenly crushed together, producing heat, which radiates out into the crossbar and destroys the niobium nitride’s superconductivity.

    A current applied to the base of the T can thus turn off a current flowing through the crossbar. That makes the circuit a switch, the basic component of a digital computer.
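
    In the abstract, this behavior is a current-controlled switch: while the gate (base) current stays below some critical value, the channel (crossbar) remains superconducting; above it, the constriction goes normal and the channel current is choked off. The threshold in the toy model below is an arbitrary illustrative number, not a measured device parameter.

```python
# Caricature of the nTron as a current-controlled switch.
# The gate threshold is an arbitrary illustrative value, not a measured one.

GATE_THRESHOLD_uA = 1.0  # hypothetical switching current at the constriction

def channel_conducts(gate_current_uA):
    """True while the crossbar stays superconducting (gate drive below threshold)."""
    return gate_current_uA < GATE_THRESHOLD_uA

print(channel_conducts(0.0))   # True: no gate drive, channel superconducting
print(channel_conducts(2.5))   # False: constriction driven normal, channel blocked
```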

    After the current in the base is turned off, the current in the crossbar will resume only after the junction cools back down. Since the superconductor is cooled by liquid helium, that doesn’t take long. But the circuits are unlikely to top the 1 gigahertz typical of today’s chips. Still, they could be useful for some lower-end applications where speed isn’t as important as energy efficiency.

    Their most promising application, however, could be in making calculations performed by Josephson junctions accessible to the outside world. Josephson junctions use tiny currents that until now have required sensitive lab equipment to detect. They’re not strong enough to move data to a local memory chip, let alone to send a visual signal to a computer monitor.

    In experiments, McCaughan demonstrated that currents even smaller than those found in Josephson-junction devices were adequate to switch the nTron from a conductive to a nonconductive state. And while the current in the base of the T can be small, the current passing through the crossbar could be much larger — large enough to carry information to other devices on a computer motherboard.

    “I think this is a great device,” says Oleg Mukhanov, chief technology officer of Hypres, a superconducting-electronics company whose products rely on Josephson junctions. “We are currently looking very seriously at the nTron for use in memory.”

    “There are several attractions of this device,” Mukhanov says. “First, it’s very compact, because after all, it’s a nanowire. One of the problems with Josephson junctions is that they are big. If you compare them with CMOS transistors, they’re just physically bigger. The second is that Josephson junctions are two-terminal devices. Semiconductor transistors are three-terminal, and that’s a big advantage. Similarly, nTrons are three-terminal devices.”

    “As far as memory is concerned,” Mukhanov adds, “one of the features that also attracts us is that we plan to integrate it with magnetoresistive spintronic devices, mRAM, magnetic random-access memories, at room temperature. And one of the features of these devices is that they are high-impedance. They are in the kilo-ohms range, and if you look at Josephson junctions, they are just a few ohms. So there is a big mismatch, which makes it very difficult from an electrical-engineering standpoint to match these two devices. NTrons are nanowire devices, so they’re high-impedance, too. They’re naturally compatible with the magnetoresistive elements.”

    McCaughan and Berggren’s research was funded by the National Science Foundation and by the Director of National Intelligence’s Intelligence Advanced Research Projects Activity.

    See the full article here.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 6:33 pm on October 13, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Solid nanoparticles can deform like a liquid” 


    MIT News

    October 12, 2014
    David L. Chandler | MIT News Office

    Unexpected finding shows tiny particles keep their internal crystal structure while flexing like droplets.

    A surprising phenomenon has been found in metal nanoparticles: They appear, from the outside, to be liquid droplets, wobbling and readily changing shape, while their interiors retain a perfectly stable crystal configuration.

    Image: Yan Liang

    The research team behind the finding, led by MIT professor Ju Li, says the work could have important implications for the design of components in nanotechnology, such as metal contacts for molecular electronic circuits.

    The results, published in the journal Nature Materials, come from a combination of laboratory analysis and computer modeling, by an international team that included researchers in China, Japan, and Pittsburgh, as well as at MIT.

    The experiments were conducted at room temperature, with particles of pure silver less than 10 nanometers across — less than one-thousandth of the width of a human hair. But the results should apply to many different metals, says Li, senior author of the paper and the BEA Professor of Nuclear Science and Engineering.

    Silver has a relatively high melting point — 962 degrees Celsius, or 1763 degrees Fahrenheit — so observation of any liquidlike behavior in its nanoparticles was “quite unexpected,” Li says. Hints of the new phenomenon had been seen in earlier work with tin, which has a much lower melting point, he says.

    The use of nanoparticles in applications ranging from electronics to pharmaceuticals is a lively area of research; generally, Li says, these researchers “want to form shapes, and they want these shapes to be stable, in many cases over a period of years.” So the discovery of these deformations reveals a potentially serious barrier to many such applications: For example, if gold or silver nanoligaments are used in electronic circuits, these deformations could quickly cause electrical connections to fail.

    Only skin deep

    The researchers’ detailed imaging with a transmission electron microscope and atomistic modeling revealed that while the exterior of the metal nanoparticles appears to move like a liquid, only the outermost layers — one or two atoms thick — actually move at any given time. As these outer layers of atoms move across the surface and redeposit elsewhere, they give the impression of much greater movement — but inside each particle, the atoms stay perfectly lined up, like bricks in a wall.

    “The interior is crystalline, so the only mobile atoms are the first one or two monolayers,” Li says. “Everywhere except the first two layers is crystalline.”

    By contrast, if the droplets were to melt to a liquid state, the orderliness of the crystal structure would be eliminated entirely — like a wall tumbling into a heap of bricks.

    Technically, the particles’ deformation is pseudoelastic, meaning that the material returns to its original shape after the stresses are removed — like a squeezed rubber ball — as opposed to plasticity, as in a deformable lump of clay that retains a new shape.

    The phenomenon of plasticity by interfacial diffusion was first proposed by Robert L. Coble, a professor of ceramic engineering at MIT, and is known as “Coble creep.” “What we saw is aptly called Coble pseudoelasticity,” Li says.

    Now that the phenomenon has been understood, researchers working on nanocircuits or other nanodevices can quite easily compensate for it, Li says. If the nanoparticles are protected by even a vanishingly thin layer of oxide, the liquidlike behavior is almost completely eliminated, making stable circuits possible.

    Possible benefits

    On the other hand, for some applications this phenomenon might be useful: For example, in circuits where electrical contacts need to withstand rotational reconfiguration, particles designed to maximize this effect might prove useful, using noble metals or a reducing atmosphere, where the formation of an oxide layer is destabilized, Li says.

    The new finding flies in the face of expectations — in part, because of a well-understood relationship, in most materials, in which mechanical strength increases as size is reduced.

    “In general, the smaller the size, the higher the strength,” Li says, but “at very small sizes, a material component can get very much weaker. The transition from ‘smaller is stronger’ to ‘smaller is much weaker’ can be very sharp.”

    That crossover, he says, takes place at about 10 nanometers at room temperature — a size that microchip manufacturers are approaching as circuits shrink. When this threshold is reached, Li says, it causes “a very precipitous drop” in a nanocomponent’s strength.

    The findings could also help explain a number of anomalous results seen in other research on small particles, Li says.

    “The … work reported in this paper is first-class,” says Horacio Espinosa, a professor of manufacturing and entrepreneurship at Northwestern University who was not involved in this research. “These are very difficult experiments, which revealed for the first time shape recovery of silver nanocrystals in the absence of dislocation. … Li’s interpretation of the experiments using atomistic modeling illustrates recent progress in comparing experiments and simulations as it relates to spatial and time scales. This has implications to many aspects of mechanics of materials, so I expect this work to be highly cited.”

    The research team included Jun Sun, Longbing He, Tao Xu, Hengchang Bi, and Litao Sun, all of Southeast University in Nanjing, China; Yu-Chieh Lo of MIT and Kyoto University; Ze Zhang of Zhejiang University; and Scott Mao of the University of Pittsburgh. It was supported by the National Basic Research Program of China; the National Natural Science Foundation of China; the Chinese Ministry of Education; the National Science Foundation of Jiangsu Province, China; and the U.S. National Science Foundation.

    See the full article here.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 7:04 pm on September 29, 2014 Permalink | Reply
    Tags: MIT News

    From MIT: “Modeling shockwaves through the brain” 


    MIT News

    September 29, 2014
    Jennifer Chu | MIT News Office

    New scaling law helps estimate humans’ risk of blast-induced traumatic brain injury.

    Since the start of the military conflicts in Iraq and Afghanistan, more than 300,000 soldiers have returned to the United States with traumatic brain injury (TBI) caused by exposure to bomb blasts — and in particular, exposure to improvised explosive devices, or IEDs. Symptoms of traumatic brain injury can range from the mild, such as lingering headaches and nausea, to more severe impairments in memory and cognition.

    Jose-Luis Olivares/MIT

    Since 2007, the U.S. Department of Defense has recognized the critical importance and complexity of this problem, and has made significant investments in traumatic brain injury research. Nevertheless, there remain many gaps in scientists’ understanding of the effects of blasts on the human brain; most new knowledge has come from experiments with animals.

    MIT researchers have developed a model of the human head for use in simulations to predict the risk for blast-induced traumatic brain injury. Relevant tissue structures include the skull (green), brain (red), and flesh (blue). Courtesy of the researchers

    Now MIT researchers have developed a scaling law that predicts a human’s risk of brain injury, based on previous studies of blasts’ effects on animal brains. The method may help the military develop more protective helmets, as well as aid clinicians in diagnosing traumatic brain injury — often referred to as the “invisible wounds” of battle.

    “We’re really focusing on mild traumatic brain injury, where we know the least, but the problem is the largest,” says Raul Radovitzky, a professor of aeronautics and astronautics and associate director of the MIT Institute for Soldier Nanotechnologies (ISN). “It often remains undetected. And there’s wide consensus that this is clearly a big issue.”

    While previous scaling laws predicted that humans’ brains would be more resilient to blasts than animals’, Radovitzky’s team found the opposite: that in fact, humans are much more vulnerable, as they have thinner skulls to protect much larger brains.

    A group of ISN researchers led by Aurélie Jean, a postdoc in Radovitzky’s group, developed simulations of human, pig, and rat heads, and exposed each to blasts of different intensities. Their simulations predicted the effects of the blasts’ shockwaves as they propagated through the skulls and brains of each species. Based on the resulting differences in intracranial pressure, the team developed an equation, or scaling law, to estimate the risk of brain injury for each species.

    “The great thing about doing this on the computer is that it allows you to reduce and possibly eventually eliminate animal experiments,” Radovitzky says.

    The MIT team and co-author James Q. Zheng, chief scientist at the U.S. Army’s soldier protection and individual equipment program, detail their results this week in the Proceedings of the National Academy of Sciences.

    Air (through the) head

    A blast wave is the shockwave, or wall of compressed air, that rushes outward from the epicenter of an explosion. Aside from the physical impact of shrapnel and other debris, the blast wave alone can cause severe injuries to the lungs and brain. In the brain, a shockwave can slam through soft tissue, with potentially devastating effects.

    In 2010, Radovitzky’s group, working in concert with the Defense and Veterans Brain Injury Center, a part of the U.S. military health system, developed a highly sophisticated, image-based computational model of the human head that illustrates the ways in which pressurized air moves through its soft tissues. With this model, the researchers showed how the energy from a blast wave can easily reach the brain through openings such as the eyes and sinuses — and also how covering the face with a mask can prevent such injuries. Since then, the team has developed similar models for pigs and rats, capturing the mechanical response of brain tissue to shockwaves.

    In their current work, the researchers calculated the vulnerability of each species to brain injury by establishing a mathematical relationship between properties of the skull, brain, and surrounding flesh, and the propagation of incoming shockwaves. The group considered each brain structure’s volume, density, and celerity — how fast stress waves propagate through a tissue. They then simulated the brain’s response to blasts of different intensities.
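
    The “celerity” entering those comparisons is simply the speed at which a stress wave travels through a tissue, which for a fluid-like material scales as the square root of bulk modulus over density. A rough worked example with generic soft-tissue values (illustrative figures, not parameters from the MIT model):

```python
# Stress-wave speed c = sqrt(K / rho) for a fluid-like material.
# Generic soft-tissue values for illustration only, not the model's parameters.
from math import sqrt

bulk_modulus_Pa = 2.2e9  # roughly water-like soft tissue: 2.2 GPa
density_kg_m3 = 1000.0   # roughly 1000 kg per cubic meter

celerity_m_s = sqrt(bulk_modulus_Pa / density_kg_m3)
print(f"{celerity_m_s:.0f} m/s")  # ~1480 m/s, close to the speed of sound in water
```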

    “What the simulation allows you to do is take what happens outside, which is the same across species, and look at how strong was the effect of the blast inside the brain,” Jean says.

    In general, they found that an animal’s skull and other fleshy structures act as a shield, blunting the effects of a blast wave: The thicker these structures are, the less vulnerable an animal is to injury. Compared with the more prominent skulls of rats and pigs, a human’s thinner skull increases the risk for traumatic brain injury.

    Shifting the problem

    This finding runs counter to previous theories, which held that an animal’s vulnerability to blasts depends on its overall mass, but which ignored the role of protective physical structures. According to these theories, humans, being more massive than pigs or rats, would be better protected against blast waves.

    Radovitzky says this reasoning stems from studies of “blast lung” — blast-induced injuries such as tearing, hemorrhaging, and swelling of the lungs, where it was found that mass matters: The larger an animal is, the more resilient it may be to lung damage. Informed by such studies, the military has since developed bulletproof vests that have dramatically decreased the number of blast-induced lung injuries in recent years.

    “There have essentially been no reported cases of blast lung in the last 10 years in Iraq or Afghanistan,” Radovitzky notes. “Now we’ve shifted that problem to traumatic brain injury.”

    In collaboration with Army colleagues, Radovitzky and his group are performing basic research to help the Army develop helmets that better protect soldiers. To this end, the team is extending the simulation approach they used for blast to other types of threats.

    His group is also collaborating with audiologists at Massachusetts General Hospital, where victims of the Boston Marathon bombing are being treated for ruptured eardrums.

    “They have an exact map of where each victim was, relative to the blast,” Radovitzky says. “In principle, we could simulate the event, find out the level of exposure of each of those victims, put it in our scaling law, and we could estimate their risk of developing a traumatic brain injury that may not be detected in an MRI.”

    Joe Rosen, a professor of surgery at Dartmouth Medical School, sees the group’s scaling law as a promising window into identifying a long-sought mechanism for blast-induced traumatic brain injury.

    “Eighty percent of the injuries coming off the battlefield are blast-induced, and mild TBIs may not have any evidence of injury, but they end up the rest of their lives impaired,” says Rosen, who was not involved in the research. “Maybe we can realize they’re getting doses of these blasts, and that a cumulative dose is what causes [TBI], and before that point, we can pull them off the field. I think this work will be important, because it puts a stake in the ground so we can start making some progress.”

    This work was supported by the U.S. Army through ISN.

    See the full article here.

  • richardmitnick 8:54 pm on September 28, 2014 Permalink | Reply
    Tags: , , , , MIT News   

    From MIT: “Biologists find an early sign of cancer” 


    MIT News

    September 28, 2014
    Anne Trafton | MIT News Office

    Patients show boost in certain amino acids years before diagnosis of pancreatic cancer.

    Years before they show any other signs of disease, pancreatic cancer patients have very high levels of certain amino acids in their bloodstream, according to a new study from MIT, Dana-Farber Cancer Institute, and the Broad Institute.

    Christine Daniloff/MIT

    This finding, which suggests that muscle tissue is broken down in the disease’s earliest stages, could offer new insights into developing early diagnostics for pancreatic cancer, which kills about 40,000 Americans every year and is usually not caught until it is too late to treat.

    The study, which appears today in the journal Nature Medicine, is based on an analysis of blood samples from 1,500 people participating in long-term health studies. The researchers compared samples from people who were eventually diagnosed with pancreatic cancer and samples from those who were not. The findings were dramatic: People with a surge in amino acids known as branched chain amino acids were far more likely to be diagnosed with pancreatic cancer within one to 10 years.
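
    At its core, this is a prospective case-control comparison: pre-diagnostic blood measurements in people who later developed pancreatic cancer versus measurements in people who did not. As a minimal sketch of how such a comparison is often summarized, the Python snippet below computes an odds ratio from a 2x2 table; the counts are invented for illustration, and this is not the study’s actual statistical model.

        # Illustrative sketch only: an odds-ratio summary of the kind used in
        # prospective case-control comparisons. The counts are invented and do
        # not come from the study.
        import math

        # Rows: future pancreatic-cancer cases vs. controls who stayed cancer-free.
        # Columns: branched chain amino acid levels above vs. below the cohort median.
        cases_high, cases_low = 300, 150          # hypothetical counts
        controls_high, controls_low = 450, 600    # hypothetical counts

        odds_ratio = (cases_high / cases_low) / (controls_high / controls_low)

        # Approximate 95% confidence interval on the log scale (Woolf's method).
        se_log_or = math.sqrt(1 / cases_high + 1 / cases_low
                              + 1 / controls_high + 1 / controls_low)
        low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
        high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

        print(f"odds ratio for elevated BCAAs: {odds_ratio:.2f} "
              f"(95% CI {low:.2f}-{high:.2f})")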

    “Pancreatic cancer, even at its very earliest stages, causes breakdown of body protein and deregulated metabolism. What that means for the tumor, and what that means for the health of the patient — those are long-term questions still to be answered,” says Matthew Vander Heiden, an associate professor of biology, a member of MIT’s Koch Institute for Integrative Cancer Research, and one of the paper’s senior authors.

    The paper’s other senior author is Brian Wolpin, an assistant professor of medical oncology at Dana-Farber. Wolpin, a clinical epidemiologist, assembled the patient sample from several large public-health studies. All patients had their blood drawn when they began participating in the studies and subsequently filled out annual health questionnaires.

    Working with researchers at the Broad Institute, the team analyzed blood samples for more than 100 different metabolites — small molecules, such as sugars and amino acids, produced during metabolic processes.

    “What we found was that this really interesting signature fell out as predicting pancreatic cancer diagnosis, which was elevation in these three branched chain amino acids: leucine, isoleucine, and valine,” Vander Heiden says. These are among the 20 amino acids — the building blocks for proteins — normally found in the human body.

    Some of the patients in the study were diagnosed with pancreatic cancer just one year after their blood samples were taken, while others were diagnosed two, five, or even 10 years later.

    “We found that higher levels of branched chain amino acids were present in people who went on to develop pancreatic cancer compared to those who did not develop the disease,” Wolpin says. “These findings led us to hypothesize that the increase in branched chain amino acids is due to the presence of an early pancreatic tumor.”

    Early protein breakdown

    Vander Heiden’s lab tested this hypothesis by studying mice that are genetically programmed to develop pancreatic cancer. “Using those mouse models, we found that we could perfectly recapitulate these exact metabolic changes during the earliest stages of cancer,” Vander Heiden says. “What happens is, as people or mice develop pancreatic cancer, at the very earliest stages, it causes the body to enter this altered metabolic state where it starts breaking down protein in distant tissues.”

    “This is a finding of fundamental importance in the biology of pancreatic cancer,” says David Tuveson, a professor at the Cancer Center at Cold Spring Harbor Laboratory who was not involved in the work. “It really opens a window of possibility for labs to try to determine the mechanism of this metabolic breakdown.”

    The researchers are now investigating why this protein breakdown, which has not been seen in other types of cancer, occurs in the early stages of pancreatic cancer. They suspect that pancreatic tumors may be trying to feed their own appetite for amino acids that they need to build cancerous cells. The researchers are also exploring possible links between this early protein breakdown and the wasting disease known as cachexia, which often occurs in the late stages of pancreatic cancer.

    Also to be answered is the question of whether this signature could be used for early detection. The findings need to be validated with more data, and it may be difficult to develop a reliable diagnostic based on this signature alone, Vander Heiden says. However, he believes that studying this metabolic dysfunction further may reveal additional markers, such as misregulated hormones, that could be combined to generate a more accurate test.

    The findings may also allow scientists to pursue new treatments that would work by targeting tumor metabolism and cutting off a tumor’s nutrient supply, Vander Heiden says.

    MIT’s contribution to this research was funded by the Lustgarten Foundation, the National Institutes of Health, the Burroughs Wellcome Fund, and the Damon Runyon Cancer Research Foundation.

    See the full article here.
