Tagged: Applied Research & Technology

  • richardmitnick 4:28 pm on October 16, 2014
    Tags: Applied Research & Technology

    From WCG: “Project Launch: Uncovering Genome Mysteries” 

    16 Oct 2014
    Summary
    To kick off World Community Grid’s 10th anniversary celebrations, we’re launching Uncovering Genome Mysteries to compare hundreds of millions of genes from many organisms that have never been studied before, helping scientists unearth some of the hidden superpowers of the natural world.

    From the realization that the Penicillium fungus kills germs, to the discovery of bacteria that eat oil spills and the identification of aspirin in the willow tree bark – a better understanding of the natural world has resulted in many improvements to human health, welfare, agriculture and industry.

    Diver collecting microbial samples from Australian seaweeds for Uncovering Genome Mysteries

    Our understanding of life on earth has grown enormously since the advent of genetic research. But the vast majority of life on this planet remains unstudied or unknown, because it’s microscopic, easy to overlook, and hard to study. Nevertheless, we know that tiny, diverse organisms are continually evolving in order to survive and thrive in the most extreme conditions. The study of these organisms can provide valuable insights on how to deal with some of the most pressing problems that human society faces, such as drug-resistant pathogens, pollution, and energy shortages.

    Inexpensive, rapid DNA sequencing technologies have enabled scientists to decode the genes of many organisms that previously received little attention, or were entirely unknown to science. However, making sense of all that genomic information is an enormous task. The first step is to compare unstudied genes to others that are already better understood. Similarities between genes point to similarities in function, and by making a large number of these comparisons, scientists can begin to sort out what each organism is and what it can do.

    In Uncovering Genome Mysteries, World Community Grid volunteers will run approximately 20 quadrillion comparisons to identify similarities between genes in a wide variety of organisms, including microorganisms found on seaweeds from Australian coastlines and in the Amazon River. This database of similarities will help researchers understand the diversity and capabilities that are hidden in the world all around us. For more on the project’s aims and methods, see here.
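The comparison step described above can be illustrated with a toy example. The sketch below scores two short DNA fragments by the overlap of their k-mers (length-k substrings), a deliberate simplification of the alignment-based comparisons the project actually runs on volunteers' machines; the sequences and the scoring scheme are illustrative only.

```python
def kmers(seq, k=3):
    """Return the set of all length-k substrings of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard_similarity(a, b, k=3):
    """Score two sequences, 0.0 to 1.0, by the overlap of their k-mer sets."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

# Two toy gene fragments differing by a small internal edit:
gene_a = "ATGGCGTACGTTAGC"
gene_b = "ATGGCGTACGATAGC"
print(round(jaccard_similarity(gene_a, gene_b), 2))  # 0.67
```

A high score suggests the two genes may share a function; doing this at scale, across hundreds of millions of genes, is what demands a volunteer computing grid.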

    Once published, these results should help scientists with the following goals:

    Discovering new protein functions and augmenting knowledge about biochemical processes in general
    Identifying how organisms interact with each other and the environment
    Documenting the current baseline microbial diversity, allowing a better understanding of how microorganisms change under environmental stresses, such as climate change
    Understanding and modeling complex microbial systems

    In addition, a better understanding of these organisms will likely be useful in developing new medicines, harnessing new sources of renewable energy, improving nutrition, cleaning the environment, creating green industrial processes and many other advances.

    The timing of this project launch is a perfect way to kick off celebrations of another important achievement – World Community Grid’s 10th anniversary. There’s much to celebrate and reflect upon from the past decade’s work, but it’s equally important to continue pushing forward and making new scientific discoveries. With your help – and the help of your colleagues and friends – we can continue to expand our global network of volunteers and achieve another 10 years of success. Here’s to another decade of discovery!

    To contribute to Uncovering Genome Mysteries, go to your My Projects page and make sure the box for this new project is checked.

    Please visit the following pages to learn more:

    Uncovering Genome Mysteries project overview
    Frequently Asked Questions

    See the full article here.

    “World Community Grid (WCG) brings people together from across the globe to create the largest non-profit computing grid benefiting humanity. It does this by pooling surplus computer processing power. We believe that innovation combined with visionary scientific research and large-scale volunteerism can help make the planet smarter. Our success depends on like-minded individuals – like you.”

    WCG projects run on BOINC software from UC Berkeley.

    BOINC, properly the Berkeley Open Infrastructure for Network Computing, is a leader in the fields of Distributed Computing, Grid Computing and Citizen Cyberscience.

    CAN ONE PERSON MAKE A DIFFERENCE? YOU BETCHA!!

    “Download and install secure, free software that captures your computer’s spare power when it is on, but idle. You will then be a World Community Grid volunteer. It’s that simple!” You can download the software at either WCG or BOINC.

    Please visit the project pages-

    Say No to Schistosoma

    GO Fight Against Malaria

    Drug Search for Leishmaniasis

    Computing for Clean Water

    The Clean Energy Project

    Discovering Dengue Drugs – Together

    Help Cure Muscular Dystrophy

    Help Fight Childhood Cancer

    Help Conquer Cancer

    Human Proteome Folding

    FightAIDS@Home

    World Community Grid is a social initiative of IBM Corporation

    IBM – Smarter Planet

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 4:12 pm on October 16, 2014
    Tags: Applied Research & Technology

    From Caltech: “Improving The View Through Tissues and Organs” 

    Caltech

    10/16/2014
    Kimm Fesenmaier

    This summer, several undergraduate students at Caltech had the opportunity to help optimize a promising technique that can make tissues and organs—even entire organisms—transparent for study. As part of the Summer Undergraduate Research Fellowship (SURF) program, these students worked in the lab of Assistant Professor of Biology Viviana Gradinaru, where researchers are developing so-called clearing techniques that make it possible to peer straight through normally opaque tissues rather than seeing them only as thinly sectioned slices that have been pieced back together.

    Credit: iStock

    Gradinaru’s group recently published a paper in the journal Cell describing a new approach to tissue clearing. The method they have created builds on a technique called CLARITY that Gradinaru helped develop while she was a research associate at Stanford. CLARITY allowed researchers to, for the first time, create a transparent whole-brain specimen that could then be imaged with its structural and genetic information intact.

    CLARITY was specifically developed for studying the brain. But the new approach developed in Gradinaru’s lab, which the team has dubbed PARS (perfusion-assisted agent release in situ), can also clear other organs, such as the kidney, as well as tissue samples, such as tumor biopsies. It can even be applied to entire organisms.

    Like CLARITY, PARS involves removing the light-scattering lipids in the tissue to make samples transparent without losing the structural integrity that lipids typically provide. First the sample is infused with acrylamide monomers that are then polymerized into a hydrogel that provides structural support. Next, this tissue–hydrogel hybrid is immersed in a detergent that removes the lipids. Then the sample can be stained, often with antibodies that specifically mark cells of interest, and then immersed in RIMS (refractive index matching solution) for imaging using various optical techniques such as confocal or lightsheet microscopy.

    Over the summer, Sam Wie, a junior biology major at Caltech, spent 10 weeks in the Gradinaru lab working to find a polymer that would perform better than acrylamide, which has been used in the CLARITY hydrogel. “One of the limitations of CLARITY is that when you put the hydrogel tissue into the detergent, the higher solute concentration in the tissue causes liquid to rush into the cell. That causes the sample to swell, which could potentially damage the structure of the tissue,” Wie explains. “So I tried different polymers to try to limit that swelling.”

    Wie was able to identify a polymer that produces, over a similar amount of time, about one-sixth of the swelling in the tissue.

    “The SURF experience has been very rewarding,” Wie says. “I’ve learned a lot of new techniques, and it’s really exciting to be part of, and to try to improve, CLARITY, a method that will probably change the way that we image tissues from now on.”

    At another bench in Gradinaru’s lab, sophomore bioengineering major Andy Kim spent the summer focusing on a different aspect of the PARS technique. While antibodies have been the most common markers used to tag cells of interest within cleared tissues, they are too large for some studies—for example, those that aim to image deeper parts of the brain, requiring them to cross the blood–brain barrier. Kim’s project involved identifying smaller proteins, such as nanobodies, which target and bind to specific parts of proteins in tissues.

    “While PARS is a huge improvement over CLARITY, using antibodies to stain is very expensive,” Kim says. “However, some of these nanobodies can be produced easily, so if we can get them to work, it would not only help image the interior of the brain, it would also be a lot less costly.”

    During his SURF, Kim worked with others in the lab to identify about 30 of these smaller candidate binding proteins and tested them on PARS-cleared samples.

    While Wie and Kim worked on improving the PARS technique itself, Donghun Ryu, a third SURFer in Gradinaru’s lab, investigated different methods for imaging the cleared samples. Ryu is a senior electrical engineering and computer science major at the Gwangju Institute of Science and Technology (GIST) in the Republic of Korea.

    Last summer Ryu completed a SURF as part of the Caltech–GIST Summer Undergraduate Research Exchange Program in the lab of Changhuei Yang, professor of electrical engineering, bioengineering, and medical engineering at Caltech. While completing that project, Ryu became interested in optogenetics, the use of light to control the activity of genetically targeted cells. Since optogenetics is one of Gradinaru’s specialties, Yang suggested that he try a SURF in Gradinaru’s lab.

    This summer, Ryu was able to work with both Yang and Gradinaru, investigating a technique called Talbot microscopy to see whether it would be better for imaging thick, cleared tissues than more common techniques. Ryu was able to work on the optical system in Yang’s lab while testing the samples cleared in Gradinaru’s lab.

    “It was a wonderful experience,” Ryu says. “It was special to have the opportunity to work for two labs this summer. I remember one day when I had a meeting with both Professor Yang and Professor Gradinaru; it was really amazing to get to meet with two Caltech professors.”

    Gradinaru says that the SURF projects provided a learning opportunity not only for the participating students but also for her lab. “For example,” she says, “Ryu strengthened the collaboration that we have with the Yang group for the BRAIN Initiative. And my lab members benefited from the chance to serve as mentors—to see what works and what can be improved when transferring scientific knowledge. These are very important skills in addition to the experimental know-how that they master.”

    See the full article here.

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

     
  • richardmitnick 3:51 pm on October 16, 2014
    Tags: Applied Research & Technology

    From UC Berkeley: “Earth’s magnetic field could flip within a human lifetime” 

    UC Berkeley

    October 14, 2014
    Robert Sanders

    Imagine the world waking up one morning to discover that all compasses pointed south instead of north.

    It’s not as bizarre as it sounds. Earth’s magnetic field has flipped – though not overnight – many times throughout the planet’s history. Its dipole magnetic field, like that of a bar magnet, remains about the same intensity for thousands to millions of years, but for incompletely known reasons it occasionally weakens and, presumably over a few thousand years, reverses direction.

    Left to right, Biaggio Giaccio, Gianluca Sotilli, Courtney Sprain and Sebastien Nomade sitting next to an outcrop in the Sulmona basin of the Apennine Mountains that contains the Matuyama-Brunhes magnetic reversal. A layer of volcanic ash interbedded with the lake sediments can be seen above their heads. Sotilli and Sprain are pointing to the sediment layer in which the magnetic reversal occurred. (Photo by Paul Renne)

    Now, a new study by a team of scientists from Italy, France, Columbia University and the University of California, Berkeley, demonstrates that the last magnetic reversal 786,000 years ago actually happened very quickly, in less than 100 years – roughly a human lifetime.

    “It’s amazing how rapidly we see that reversal,” said UC Berkeley graduate student Courtney Sprain. “The paleomagnetic data are very well done. This is one of the best records we have so far of what happens during a reversal and how quickly these reversals can happen.”

    Sprain and Paul Renne, director of the Berkeley Geochronology Center and a UC Berkeley professor-in-residence of earth and planetary science, are coauthors of the study, which will be published in the November issue of Geophysical Journal International and is now available online.

    Flip could affect electrical grid, cancer rates

    The discovery comes as new evidence indicates that the intensity of Earth’s magnetic field is decreasing 10 times faster than normal, leading some geophysicists to predict a reversal within a few thousand years.

    Though a magnetic reversal is a major planet-wide event driven by convection in Earth’s iron core, there are no documented catastrophes associated with past reversals, despite much searching in the geologic and biologic record. Today, however, such a reversal could potentially wreak havoc with our electrical grid, generating currents that might take it down.

    And since Earth’s magnetic field protects life from energetic particles from the sun and cosmic rays, both of which can cause genetic mutations, a weakening or temporary loss of the field before a permanent reversal could increase cancer rates. The danger to life would be even greater if flips were preceded by long periods of unstable magnetic behavior.

    “We should be thinking more about what the biologic effects would be,” Renne said.

    Dating ash deposits from windward volcanoes

    The new finding is based on measurements of the magnetic field alignment in layers of ancient lake sediments now exposed in the Sulmona basin of the Apennine Mountains east of Rome, Italy. The lake sediments are interbedded with ash layers erupted from the Roman volcanic province, a large area of volcanoes upwind of the former lake that includes periodically erupting volcanoes near Sabatini, Vesuvius and the Alban Hills.

    Leonardo Sagnotti, standing, and coauthor Giancarlo Scardia collecting a sample for paleomagnetic analysis.

    Italian researchers led by Leonardo Sagnotti of Rome’s National Institute of Geophysics and Volcanology measured the magnetic field directions frozen into the sediments as they accumulated at the bottom of the ancient lake.

    Sprain and Renne used argon-argon dating, a method widely used to determine the ages of rocks, whether they’re thousands or billions of years old, to determine the age of ash layers above and below the sediment layer recording the last reversal. These dates were confirmed by their colleague and former UC Berkeley postdoctoral fellow Sebastien Nomade of the Laboratory of Environmental and Climate Sciences in Gif-Sur-Yvette, France.
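Argon-argon dating rests on a compact age equation: the sample's age follows from the measured ratio of radiogenic 40Ar to reactor-produced 39Ar, together with an irradiation parameter J calibrated against a mineral standard of known age. The sketch below applies that standard equation; the J value and isotope ratio are invented for illustration and are not taken from the study.

```python
import math

LAMBDA_K40 = 5.543e-10  # total decay constant of 40K, per year (Steiger & Jaeger 1977)

def ar_ar_age(ratio_40_39, j_factor):
    """Age in years from the measured 40Ar*/39Ar ratio and the irradiation
    parameter J, which is calibrated against a standard of known age."""
    return math.log(1.0 + j_factor * ratio_40_39) / LAMBDA_K40

# Illustrative values only: with J = 0.0005, a ratio of 0.872 gives ~786 kyr.
print(f"{ar_ar_age(0.872, 0.0005) / 1000:.0f} kyr")  # 786 kyr
```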

    Because the lake sediments were deposited at a high and steady rate over a 10,000-year period, the team was able to interpolate the date of the layer showing the magnetic reversal, called the Matuyama-Brunhes transition, at approximately 786,000 years ago. This date is far more precise than that from previous studies, which placed the reversal between 770,000 and 795,000 years ago.
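The interpolation step can be sketched with toy numbers: under a constant sedimentation rate, the age at any depth between two dated ash layers follows linearly. The depths and bracketing ages below are hypothetical, chosen only so the arithmetic lands near the published 786,000-year date.

```python
def interpolate_age(depth, depth_top, age_top, depth_bottom, age_bottom):
    """Linearly interpolate the age at a depth between two dated ash layers,
    assuming a constant sedimentation rate between them."""
    frac = (depth - depth_top) / (depth_bottom - depth_top)
    return age_top + frac * (age_bottom - age_top)

# Hypothetical bracketing ashes: 783 kyr at 10.0 m depth, 791 kyr at 14.0 m.
# The reversal horizon sits at 11.5 m:
print(interpolate_age(11.5, 10.0, 783_000, 14.0, 791_000))  # 786000.0
```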

    “What’s incredible is that you go from reverse polarity to a field that is normal with essentially nothing in between, which means it had to have happened very quickly, probably in less than 100 years,” said Renne. “We don’t know whether the next reversal will occur as suddenly as this one did, but we also don’t know that it won’t.”

    Unstable magnetic field preceded 180-degree flip

    Whether or not the new finding spells trouble for modern civilization, it likely will help researchers understand how and why Earth’s magnetic field episodically reverses polarity, Renne said.

    The ‘north pole’ — that is, the direction of magnetic north — was reversed a million years ago. This map shows how, starting about 789,000 years ago, the north pole wandered around Antarctica for several thousand years before flipping 786,000 years ago to the orientation we know today, with the pole somewhere in the Arctic.

    The magnetic record the Italian-led team obtained shows that the sudden 180-degree flip of the field was preceded by a period of instability that spanned more than 6,000 years. The instability included two intervals of low magnetic field strength that lasted about 2,000 years each. Rapid changes in field orientations may have occurred within the first interval of low strength. The full magnetic polarity reversal – that is, the final and very rapid flip to what the field is today – happened toward the end of the most recent interval of low field strength.

    Renne is continuing his collaboration with the Italian-French team to correlate the lake record with past climate change.

    Renne and Sprain’s work at the Berkeley Geochronology Center was supported by the Ann and Gordon Getty Foundation.

    See the full article here.

    Founded in the wake of the gold rush by leaders of the newly established 31st state, the University of California’s flagship campus at Berkeley has become one of the preeminent universities in the world. Its early guiding lights, charged with providing education (both “practical” and “classical”) for the state’s people, gradually established a distinguished faculty (with 22 Nobel laureates to date), a stellar research library, and more than 350 academic programs.



     
  • richardmitnick 7:15 pm on October 14, 2014
    Tags: Applied Research & Technology

    From ORNL: “New ORNL electric vehicle technology packs more punch in smaller package” 

    Oak Ridge National Laboratory

    Oct. 14, 2014
    Media Contact: Ron Walli
    Communications
    865.576.0226

    Using 3-D printing and novel semiconductors, researchers at the Department of Energy’s Oak Ridge National Laboratory have created a power inverter that could make electric vehicles lighter, more powerful and more efficient.

    At the core of this development is wide bandgap material made of silicon carbide with qualities superior to standard semiconductor materials. Power inverters convert direct current into the alternating current that powers the vehicle. The Oak Ridge inverter achieves much higher power density with a significant reduction in weight and volume.
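The inversion itself, turning a DC bus voltage into sinusoidal AC output, is conventionally done by rapid switching. The sketch below models one leg of a sine-triangle PWM inverter, a textbook simplification rather than the ORNL silicon carbide design; all parameter values are illustrative.

```python
import math

def pwm_sample(t, v_dc, f_out=60.0, f_carrier=10_000.0, mod_index=0.9):
    """One leg of a sine-triangle PWM inverter: switch the DC bus high or
    low depending on whether a sinusoidal reference exceeds a triangle
    carrier. Returns the instantaneous output voltage, +v_dc/2 or -v_dc/2."""
    reference = mod_index * math.sin(2 * math.pi * f_out * t)
    phase = (t * f_carrier) % 1.0            # position within the carrier period
    carrier = 4.0 * abs(phase - 0.5) - 1.0   # triangle wave in [-1, 1]
    return v_dc / 2 if reference >= carrier else -v_dc / 2

# Averaged over one 10 kHz carrier period, the switched output tracks the
# 60 Hz sinusoidal reference (sampled near its crest here, at t = 4 ms):
samples = [pwm_sample(0.004 + i * 1e-8, 400.0) for i in range(10_000)]
print(round(sum(samples) / len(samples), 1))
```

The faster and hotter a semiconductor can switch, the cleaner and more compact this process becomes, which is why the wide bandgap devices described above matter.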

    “Wide bandgap technology enables devices to perform more efficiently at a greater range of temperatures than conventional semiconductor materials,” said ORNL’s Madhu Chinthavali, who led the Power Electronics and Electric Machinery Group on this project. “This is especially useful in a power inverter, which is the heart of an electric vehicle.”

    Specific advantages of wide bandgap devices include: higher inherent reliability; higher overall efficiency; higher frequency operation; higher temperature capability and tolerance; lighter weight, enabling more compact systems; and higher power density.

    Additive manufacturing helped researchers explore complex geometries, increase power densities, and reduce weight and waste while building ORNL’s 30-kilowatt prototype inverter.

    ORNL’s 30-kilowatt power inverter offers greater reliability and power in a compact package.

    “With additive manufacturing, complexity is basically free, so any shape or grouping of shapes can be imagined and modeled for performance,” Chinthavali said. “We’re very excited about where we see this research headed.”

    Using additive manufacturing, researchers optimized the inverter’s heat sink, allowing for better heat transfer throughout the unit. This construction technique allowed them to place lower-temperature components close to the high-temperature devices, further reducing the electrical losses and reducing the volume and mass of the package.

    Another key to the success is a design that incorporates several small capacitors connected in parallel to ensure better cooling and lower cost compared to fewer, larger and more expensive “brick type” capacitors.

    The research group’s first prototype, a liquid-cooled all-silicon carbide traction drive inverter, features 50 percent printed parts. Initial evaluations confirmed an efficiency of nearly 99 percent, surpassing DOE’s power electronics target and setting the stage for building an inverter using entirely additive manufacturing techniques.

    Building on the success of this prototype, researchers are working on an inverter with an even greater percentage of 3-D printed parts that’s half the size of inverters in commercially available vehicles. Chinthavali, encouraged by the team’s results, envisions an inverter with four times the power density of their prototype.

    Others involved in this work, which was to be presented today at the Second Institute of Electrical and Electronics Engineers Workshop on Wide Bandgap Power Devices and Applications in Knoxville, were Curt Ayers, Steven Campbell, Randy Wiles and Burak Ozpineci.

    Research for this project was conducted at ORNL’s National Transportation Research Center and Manufacturing Demonstration Facility, DOE user facilities, with funding from DOE’s Office of Energy Efficiency and Renewable Energy.

    See the full article here.

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 2:41 pm on October 14, 2014
    Tags: Applied Research & Technology

    From BNL: “Unstoppable Magnetoresistance” 

    Brookhaven Lab

    October 14, 2014
    Tien Nguyen

    Mazhar Ali, a fifth-year graduate student in the laboratory of Bob Cava, the Russell Wellman Moore Professor of Chemistry at Princeton University, has spent his academic career discovering new superconductors, materials coveted for their ability to let electrons flow without resistance. While testing his latest candidate, the semimetal tungsten ditelluride (WTe2), he noticed a peculiar result.

    Ali applied a magnetic field to a sample of WTe2, one way to kill superconductivity if present, and saw that its resistance doubled. Intrigued, Ali worked with Jun Xiong, a student in the laboratory of Nai Phuan Ong, the Eugene Higgins Professor of Physics at Princeton, to re-measure the material’s magnetoresistance, which is the change in resistance as a material is exposed to stronger magnetic fields.

    Mazhar Ali (left) and Steven Flynn (right), co-authors on the Nature article
    Photo credit: C. Todd Reichart

    “They have unique capabilities at Brookhaven. One is that they can measure diffraction at 10 Kelvin (-441 °F).”
    — Bob Cava, Princeton University

    “He noticed the magnetoresistance kept going up and up and up—that never happens.” said Cava. The researchers then exposed WTe2 to a 60-tesla magnetic field, close to the strongest magnetic field mankind can create, and observed a magnetoresistance of 13 million percent. The material’s magnetoresistance displayed unlimited growth, making it the only known material without a saturation point. The results were published on September 14 in the journal Nature.

    Electronic information storage is dependent on the use of magnetic fields to switch between distinct resistivity values that correlate to either a one or a zero. The larger the magnetoresistance, the smaller the magnetic field needed to change from one state to another, Ali said. Today’s devices use layered materials with so-called “giant magnetoresistance,” with changes in resistance of 20,000 to 30,000 percent when a magnetic field is applied. “Colossal magnetoresistance” is close to 100,000 percent, so for a magnetoresistance percentage in the millions, the researchers hoped to coin a new term.
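The figures quoted here follow the standard definition of magnetoresistance, MR(%) = (R(B) - R(0)) / R(0) × 100, where R(B) is the resistance in an applied field B. A minimal sketch, with illustrative resistance values:

```python
def magnetoresistance_percent(r_field, r_zero):
    """Percent change in resistance when the magnetic field is applied."""
    return (r_field - r_zero) / r_zero * 100.0

# The doubling Ali first observed is a magnetoresistance of 100 percent:
print(magnetoresistance_percent(2.0, 1.0))        # 100.0
# A 13-million-percent result means resistance grew about 130,000-fold
# (the 1-ohm base resistance is illustrative):
print(magnetoresistance_percent(130_001.0, 1.0))  # 13000000.0
```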

    Crystal Structure of WTe2. Image credit: Nature

    Their original choice was “ludicrous” magnetoresistance, which was inspired by “ludicrous speed,” the fictional form of fast-travel used in the comedy “Spaceballs.” They even included an acknowledgement to director Mel Brooks. After other lab members vetoed “ludicrous,” the researchers considered “titanic” before Nature editors ultimately steered them towards the term “large magnetoresistance.”

    Terminology aside, the fact remained that the magnetoresistance values were extraordinarily high, a phenomenon that might be understood through the structure of WTe2. To look at the structure with an electron microscope, the research team turned to Jing Tao, a researcher at Brookhaven National Laboratory.

    Jing Tao

    “Jing is a great microscopist. They have unique capabilities at Brookhaven,” Cava said. “One is that they can measure diffraction at 10 Kelvin (-441 °F). Not too many people on Earth can do that, but Jing can.”

    Electron microscopy experiments revealed the presence of tungsten dimers, paired tungsten atoms, arranged in chains responsible for the key distortion from the classic octahedral structure type. The research team proposed that WTe2 owes its lack of saturation to the nearly perfect balance of electrons and electron holes, which are empty docks for traveling electrons. Because of its structure, WTe2 only exhibits magnetoresistance when the magnetic field is applied in a certain direction. This could be very useful in scanners, where multiple WTe2 devices could be used to detect the position of magnetic fields, Ali said.

    “Aside from making devices from WTe2, the question to ask yourself as a scientist is: How can it be perfectly balanced? Is there something more profound?” Cava said.

    See the full article here.


    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 6:33 pm on October 13, 2014
    Tags: Applied Research & Technology

    From MIT: “Solid nanoparticles can deform like a liquid” 


    MIT News

    October 12, 2014
    David L. Chandler | MIT News Office

    Unexpected finding shows tiny particles keep their internal crystal structure while flexing like droplets.

    A surprising phenomenon has been found in metal nanoparticles: They appear, from the outside, to be liquid droplets, wobbling and readily changing shape, while their interiors retain a perfectly stable crystal configuration.

    Image: Yan Liang

    The research team behind the finding, led by MIT professor Ju Li, says the work could have important implications for the design of components in nanotechnology, such as metal contacts for molecular electronic circuits.

    The results, published in the journal Nature Materials, come from a combination of laboratory analysis and computer modeling, by an international team that included researchers in China, Japan, and Pittsburgh, as well as at MIT.

    The experiments were conducted at room temperature, with particles of pure silver less than 10 nanometers across — less than one-thousandth of the width of a human hair. But the results should apply to many different metals, says Li, senior author of the paper and the BEA Professor of Nuclear Science and Engineering.

    Silver has a relatively high melting point — 962 degrees Celsius, or 1763 degrees Fahrenheit — so observation of any liquidlike behavior in its nanoparticles was “quite unexpected,” Li says. Hints of the new phenomenon had been seen in earlier work with tin, which has a much lower melting point, he says.

    The use of nanoparticles in applications ranging from electronics to pharmaceuticals is a lively area of research; generally, Li says, these researchers “want to form shapes, and they want these shapes to be stable, in many cases over a period of years.” So the discovery of these deformations reveals a potentially serious barrier to many such applications: For example, if gold or silver nanoligaments are used in electronic circuits, these deformations could quickly cause electrical connections to fail.

    Only skin deep

    The researchers’ detailed imaging with a transmission electron microscope and atomistic modeling revealed that while the exterior of the metal nanoparticles appears to move like a liquid, only the outermost layers — one or two atoms thick — actually move at any given time. As these outer layers of atoms move across the surface and redeposit elsewhere, they give the impression of much greater movement — but inside each particle, the atoms stay perfectly lined up, like bricks in a wall.

    “The interior is crystalline, so the only mobile atoms are the first one or two monolayers,” Li says. “Everywhere except the first two layers is crystalline.”

    By contrast, if the droplets were to melt to a liquid state, the orderliness of the crystal structure would be eliminated entirely — like a wall tumbling into a heap of bricks.

    Technically, the particles’ deformation is pseudoelastic, meaning that the material returns to its original shape after the stresses are removed — like a squeezed rubber ball — as opposed to plasticity, as in a deformable lump of clay that retains a new shape.

    The phenomenon of plasticity by interfacial diffusion was first proposed by Robert L. Coble, a professor of ceramic engineering at MIT, and is known as “Coble creep.” “What we saw is aptly called Coble pseudoelasticity,” Li says.

    Now that the phenomenon is understood, researchers working on nanocircuits or other nanodevices can easily compensate for it, Li says. If the nanoparticles are protected by even a vanishingly thin layer of oxide, the liquidlike behavior is almost completely eliminated, making stable circuits possible.
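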

    Possible benefits

    On the other hand, the phenomenon could be put to work: in circuits whose electrical contacts must withstand rotational reconfiguration, for example, particles designed to maximize the effect could be made from noble metals or operated in a reducing atmosphere, where the formation of an oxide layer is destabilized, Li says.

    The new finding flies in the face of expectations — in part, because of a well-understood relationship, in most materials, in which mechanical strength increases as size is reduced.

    “In general, the smaller the size, the higher the strength,” Li says, but “at very small sizes, a material component can get very much weaker. The transition from ‘smaller is stronger’ to ‘smaller is much weaker’ can be very sharp.”

    That crossover, he says, takes place at about 10 nanometers at room temperature — a size that microchip manufacturers are approaching as circuits shrink. When this threshold is reached, Li says, it causes “a very precipitous drop” in a nanocomponent’s strength.
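The sharpness of that transition can be caricatured with a toy model: Hall-Petch-style strengthening (strength rising as roughly one over the square root of size) capped by a diffusion-limited branch that collapses at small sizes. The constants here are invented for illustration, not fitted to silver.

```python
def effective_strength(d_nm, k_hp=3.0, k_coble=0.1):
    """Toy 'smaller is stronger' vs 'smaller is much weaker' model:
    Hall-Petch strengthening (~1/sqrt(d)) is capped by a
    diffusion-limited branch (~d) that wins at small sizes.
    Units and constants are illustrative only."""
    hall_petch = k_hp / d_nm ** 0.5
    diffusional = k_coble * d_nm
    return min(hall_petch, diffusional)

# The crossover sits where the two branches meet:
d_cross = (3.0 / 0.1) ** (2 / 3)  # ~9.7 nm, near Li's ~10 nm figure
```

Below the crossover the controlling branch switches, so strength does not taper off gradually but drops precipitously, as Li describes.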

    The findings could also help explain a number of anomalous results seen in other research on small particles, Li says.

    “The … work reported in this paper is first-class,” says Horacio Espinosa, a professor of manufacturing and entrepreneurship at Northwestern University who was not involved in this research. “These are very difficult experiments, which revealed for the first time shape recovery of silver nanocrystals in the absence of dislocation. … Li’s interpretation of the experiments using atomistic modeling illustrates recent progress in comparing experiments and simulations as it relates to spatial and time scales. This has implications to many aspects of mechanics of materials, so I expect this work to be highly cited.”

    The research team included Jun Sun, Longbing He, Tao Xu, Hengchang Bi, and Litao Sun, all of Southeast University in Nanjing, China; Yu-Chieh Lo of MIT and Kyoto University; Ze Zhang of Zhejiang University; and Scott Mao of the University of Pittsburgh. It was supported by the National Basic Research Program of China; the National Natural Science Foundation of China; the Chinese Ministry of Education; the National Science Foundation of Jiangsu Province, China; and the U.S. National Science Foundation.

    See the full article here.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 1:55 pm on October 13, 2014 Permalink | Reply
    Tags: Applied Research & Technology, ,   

    From ORNL: “Unlocking enzyme synthesis of rare sugars to create drugs with fewer side effects” 


    Oak Ridge National Laboratory

    September 26, 2014
    Katie Bethea, 865.576.8039

    A team led by the U.S. Department of Energy’s Oak Ridge National Laboratory has unlocked the enzymatic synthesis process of rare sugars, which are useful for developing drugs with fewer side effects, using a process that is friendlier to the environment.

    In a paper published in Structure, the research team reported the pioneering use of neutron and X-ray crystallography and high performance computing to study how the enzyme D-xylose isomerase, or XI, can cause a biochemical reaction in natural sugar to produce rare sugars. Unlike drugs made from natural sugar compounds, drugs made from rare sugars do not interfere with cellular processes. As a result, rare sugars have important commercial and biomedical applications as precursors for the synthesis of different antiviral and anti-cancer drugs with fewer side effects.

    An artist’s rendering of the enzyme D-xylose isomerase as it isomerizes L-arabinose into rare sugars not found in nature. The enzyme acts as a filter by capturing and performing catalysis only on the high-energy 5S1 conformation of L-arabinose, while remaining inactive on other more abundant sugar conformations. Neutron macromolecular crystallography has unequivocally demonstrated how this high-energy conformer of L-arabinose binds in the enzyme active site and is converted to the linear intermediate form. Simulations provide evidence for the experimental results. Image credit: Genevieve Martin/ORNL

    “The goal of this study is to dramatically improve the performance of enzymes that can be used by the pharmaceutical industry to synthesize drug precursors,” said ORNL’s Andrey Kovalevsky, the lead author of the study. “We’re trying to find a new way to do enzyme design – neutron studies combined with high performance computing could be an elegant means to do that.”

    Enzymes speed up reactions in organisms, ultimately making life itself possible, and are increasingly used by industry to synthesize value-added compounds. Biotechnological syntheses are “greener” than other techniques that use heavy metal chemical catalysts and large amounts of organic solvents. However, many natural enzymes are not very well suited for industrial processes. XI, for example, is used effectively for the production of high-fructose corn syrup from starch in the food industry, but its applications in the pharmaceutical industry are limited by its performance. Researchers in the pharmaceutical industry want to engineer mutations in enzymes to improve reactions. But first, they have to understand how the enzymes work.

    “We had no idea how the enzyme, D-xylose isomerase, binds its non-physiological substrate – natural sugar L-arabinose,” said Kovalevsky. “You have to know how an enzyme binds its substrate to engineer mutations to improve binding and reaction.”

    Using X-ray and neutron crystallography combined with theoretical calculations, the team figured out how the enzyme isomerizes L-arabinose into the rare sugar L-ribulose and then epimerizes the latter into another rare sugar L-ribose. Importantly, L-ribose is the enantiomer, a mirror image, of the ubiquitous D-ribose that is a building block of DNA and RNA.

    “We found, completely unexpectedly, that the enzyme binds the substrate L-arabinose – an abundant natural sugar found in plants – in a very high energy geometry in the active site, which explained the xylose isomerase’s poor efficiency with the substrate and provided us with clues on how we can re-engineer it to improve its activity,” said Kovalevsky.

    Combining crystallographic observations and computation, the team saw the XI enzyme isomerize the sugar L-arabinose when bound to the active site. Isomerization is the process in which the sugar changes its configuration through a chemical reaction. An enzyme’s active site is the binding place where catalysis is performed on substrates or where inhibitors dock to hinder catalysis. Binding a substrate in a high energy geometry means the efficiency of catalysis would be low, something researchers would like to improve, explained Kovalevsky.

    This is the first time researchers have looked at enzymatic synthesis by combining neutrons, X-rays and high performance computing.

    “Neutron crystallography gives the location of hydrogen atoms, which is important in enzyme reactions where there’s a lot of shuffling of hydrogen around,” said Kovalevsky. “X-rays can’t see those reactions. But once you have the neutron structures and know the hydrogen positions, then your calculations and theoretical models are much more correct.”

    In the past, researchers had to infer the hydrogen atom location from chemical knowledge, which, as experience shows, may be wrong. Now, neutrons show the exact location of the hydrogen atoms so they do not have to guess.

    Calculations can be misleading if hydrogens are placed incorrectly, in many cases leading to wrong inferences about how enzymes function. Combining neutrons, calculations, and simulations gives a more thorough and complete view of how enzymes work.

    Kovalevsky said future simulations will explore the possibility of tailoring the XI active site to bind lower-energy conformations of L-arabinose to improve catalytic activity.

    This research was partially funded through a National Institutes of Health-National Institute of General Medical Sciences consortium between ORNL and DOE’s Lawrence Berkeley National Laboratory (LBNL). The work was conducted in part at the Los Alamos Neutron Science Center, a National Nuclear Security Administration user facility at DOE’s Los Alamos National Laboratory, and at the National Energy Research Scientific Computing Center, a DOE Office of Science user facility at LBNL.

    See the full article here.

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 8:13 am on October 11, 2014 Permalink | Reply
    Tags: , Applied Research & Technology, ,   

    From AAAS: “Z machine makes progress toward nuclear fusion” 

    AAAS

    10 October 2014
    Daniel Clery

    Scientists are reporting a significant advance in the quest to develop an alternative approach to nuclear fusion. Researchers at Sandia National Laboratories in Albuquerque, New Mexico, using the lab’s Z machine, a colossal electric pulse generator capable of producing currents of tens of millions of amperes, say they have detected significant numbers of neutrons—byproducts of fusion reactions—coming from the experiment. This, they say, demonstrates the viability of their approach and marks progress toward the ultimate goal of producing more energy than the fusion device takes in.

    Z machine at Sandia

    Fusion is a nuclear reaction that releases energy not by splitting heavy atomic nuclei apart—as happens in today’s nuclear power stations—but by fusing light nuclei together. The approach is appealing as an energy source because the fuel (hydrogen) is plentiful and cheap, and it doesn’t generate any pollution or long-lived nuclear waste. The problem is that atomic nuclei are positively charged and thus repel each other, so it is hard to get them close enough together to fuse. For enough reactions to take place, the hydrogen nuclei must collide at velocities of up to 1000 kilometers per second (km/s), and that requires heating them to more than 50 million degrees Celsius. At such temperatures, gas becomes plasma—nuclei and electrons knocking around separately—and containing it becomes a problem, because if it touches the side of its container it will instantly melt it.
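The ~1000 km/s figure follows from kinetic theory: at 50 million kelvin, the peak of the Maxwell-Boltzmann speed distribution for a deuteron already sits at several hundred kilometers per second, and the distribution's fast tail supplies the ~1000 km/s collisions. A quick sanity check:

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
M_DEUTERON = 3.3436e-27  # deuteron mass, kg

def most_probable_speed(T_kelvin, mass_kg=M_DEUTERON):
    """Peak of the Maxwell-Boltzmann speed distribution: sqrt(2kT/m)."""
    return math.sqrt(2 * K_B * T_kelvin / mass_kg)

# At 50 million K the typical deuteron moves at roughly 640 km/s;
# particles in the tail of the distribution exceed 1000 km/s.
v = most_probable_speed(50e6)
print(f"{v / 1e3:.0f} km/s")
```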

    Fusion scientists have been laboring for more than 60 years to find a way to contain superhot plasma and heat it till it fuses. Today, most efforts are focused on one of two approaches: Tokamak reactors, such as the international ITER fusion project in France, hold a diffuse plasma steady for seconds or minutes at a time while heating it to fusion temperature; laser fusion devices, such as the National Ignition Facility in California, take a tiny quantity of frozen hydrogen and crush it with an intense laser pulse lasting a few tens of billionths of a second to heat and compress it. Neither technique has yet reached “breakeven,” the point at which the amount of energy produced by fusion reactions exceeds that needed to heat and contain the plasma in the first place.

    ITER Tokamak

    NIF at LLNL

    Sandia’s technique is one of several that fall into the middle ground between the extremes of laser fusion and the magnetically confined fusion of tokamaks. It crushes fuel in a fast pulse, as in laser fusion, but not as fast and not to such high density. Known as magnetized liner inertial fusion (MagLIF), the approach involves putting some fusion fuel (a gas of the hydrogen isotope deuterium) inside a tiny metal can 5 millimeters across and 7.5 mm tall. Researchers then use the Z machine to pass a huge current pulse of 19 million amps, lasting just 100 nanoseconds, through the can from top to bottom. This creates a powerful magnetic field that crushes the can inward at a speed of 70 km/s.
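The crushing force comes from the azimuthal magnetic field of that current pulse. A back-of-envelope estimate from Ampere's law, taking the 19 MA pulse and the can's 2.5 mm radius (half the stated 5 mm diameter), gives a drive field of order a thousand tesla and a magnetic pressure of millions of atmospheres:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def azimuthal_field(current_A, radius_m):
    """Field at the surface of an axial current (Ampere's law)."""
    return MU_0 * current_A / (2 * math.pi * radius_m)

def magnetic_pressure(B_tesla):
    """Magnetic pressure P = B^2 / (2*mu_0), in pascals."""
    return B_tesla ** 2 / (2 * MU_0)

B = azimuthal_field(19e6, 2.5e-3)  # ~1500 T at the can surface
P = magnetic_pressure(B)           # ~1e12 Pa: millions of atmospheres
```

It is this enormous magnetic pressure on the can's outer surface that drives the 70 km/s implosion.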

    While this is happening, the researchers do two other things: They preheat the fuel with a short laser pulse, and they apply a steady magnetic field, which acts as a straitjacket to hold the fusion fuel in place. Crushing the plasma also boosts the constraining magnetic field, from about 10 tesla to 10,000 tesla. This constraining field is key, because without it there is nothing to hold the superheated plasma in place other than its own inward inertia. Once the compression stops, it would fly apart before it has time to react.
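The thousandfold field boost is what flux conservation predicts: in a highly conducting plasma the magnetic flux through the fuel column is trapped, so the axial field grows as one over the radius squared as the liner shrinks. A sketch with illustrative radii (the actual compression ratio is not stated in the article):

```python
def compressed_field(B_seed, r_initial, r_final):
    """Axial field under flux conservation: B * r^2 stays constant in a
    perfectly conducting plasma, so B grows as the column shrinks."""
    return B_seed * (r_initial / r_final) ** 2

# Illustrative: crushing a 2.5 mm fuel column ~32-fold in radius
# boosts a 10 T seed field to roughly 10,000 T.
B_final = compressed_field(10.0, 2.5e-3, 2.5e-3 / 32)
```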

    The Sandia researchers reported this week in Physical Review Letters that they had heated the plasma to about 35 million degrees Celsius and detected about 2 trillion neutrons coming from each shot. (One reaction of fusing two deuteriums produces helium-3 and a neutron.) Although the result shows that a substantial number of reactions is taking place—100 times as many as the team achieved a year ago—the group will need to produce 10,000 times as many to achieve breakeven. “It is good progress but just a beginning,” says Sandia senior scientist Mike Campbell. “We need to get more energy into the gas and increase the initial magnetic field and see if it scales in the right direction.”

    One significant aspect of the results is that the researchers also detected neutrons coming from the fusion of deuterium and tritium, another hydrogen isotope. The main reaction, deuterium with deuterium, or D-D, produces either helium-3 or tritium. Those reaction products would normally be traveling fast enough to fly out of the plasma without reacting again. But the intense constraining magnetic field forces the tritium to follow a tight helical path in which it is much more likely to collide with a deuterium and fuse again. The researchers detected 10 billion neutrons from deuterium-tritium (D-T) fusions. “To me, the most interesting data was the secondary D-T neutrons, which is very highly suggestive that the original [10 tesla] field was frozen in the plasma and reached values of [about 9000 tesla] at stagnation,” Campbell says.

    “It is great news,” says Glen Wurden, the magnetized plasma team leader at Los Alamos National Laboratory in New Mexico. He is impressed by “the fact that secondary D-T neutrons are observed … which means that at least some D-D–produced [tritium nuclei] are slowing down and reacting.” Simulations suggest that the Z machine’s maximum current of 27 million amps should be enough to reach breakeven. But the researchers are already setting their sights much higher. A hoped-for upgrade to 60 million amps, they say, would boost the power output into a “high gain” realm of 1000 times input—a giant step toward commercial viability.

    See the full article here.

    The American Association for the Advancement of Science is an international non-profit organization dedicated to advancing science for the benefit of all people.


     
  • richardmitnick 4:00 pm on October 10, 2014 Permalink | Reply
    Tags: Applied Research & Technology, , ,   

    From Caltech: “Sensors to Simplify Diabetes Management” 

    Caltech

    10/10/2014
    Jessica Stoller-Conrad

    For many patients diagnosed with diabetes, treating the disease can mean a burdensome and uncomfortable lifelong routine of monitoring blood sugar levels and injecting the insulin that their bodies don’t naturally produce. But, as part of their Summer Undergraduate Research Fellowship (SURF) projects at Caltech, several engineering students have contributed to the development of tiny biosensors that could one day eliminate the need for these manual blood sugar tests.

    From left to right: Sagar Vaidyanathan, a visiting undergraduate researcher from UCLA, and Caltech sophomore Sophia Chen. Chen spent her summer in the laboratory of Hyuck Choo, assistant professor of electrical engineering, studying new ways to power tiny health-monitoring sensors and devices.
    Credit: Lance Hayashida/Caltech Marketing and Communications

    Because certain patients with diabetes are unable to make their own insulin—a hormone that helps transfer glucose, or sugar, from the blood into muscle and other tissues—they need to monitor their blood glucose frequently, manually injecting insulin when sugar levels surge after a meal. Most glucose monitors require that patients prick their fingertips to collect a drop of blood, sometimes up to 10 times a day for the rest of their lives.

    In their SURF projects, the students, all from Caltech’s Division of Engineering and Applied Science, looked for different ways to do these same tests but painlessly and automatically.

    Senior applied physics major Mehmet Sencan has approached the problem with a tiny chip that can be implanted under the skin. The sensor, a square just 1.4 millimeters on each side, is designed to detect glucose levels from the interstitial fluid (fluid found in the spaces between cells) that is just under the skin. The glucose levels in this fluid directly relate to the blood glucose concentration.

    Sencan has been involved in optimizing the electrochemical method that the chip will use to detect glucose levels. Much like a traditional finger-stick glucose meter, the chip uses glucose oxidase, an enzyme that reacts in the presence of glucose, to create an electrical current. Higher levels of glucose result in a stronger current, allowing the device to measure glucose levels based on the charge that passes through the fluid.
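Turning a measured current into a concentration is typically done with a linear calibration against known standards. A minimal sketch of that step, with made-up numbers (real sensor currents, spans, and any nonlinearity would differ):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit: y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical calibration standards: glucose (mmol/L) vs current (nA).
standards = [2.0, 5.0, 8.0, 11.0]
currents = [40.0, 100.0, 160.0, 220.0]
slope, intercept = fit_line(currents, standards)

def glucose_from_current(i_nA):
    """Convert a measured current into an estimated concentration."""
    return slope * i_nA + intercept
```

Once calibrated, each reading from the implanted chip maps directly to a glucose estimate before being transmitted to the reader.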

    Once the glucose level is detected, the information is wirelessly transmitted via a radio wave frequency to a reader that uses the same frequency to power the device itself. Ultimately an external display will let the patient know if their levels are within range.

    Sencan, who works in the laboratory of Axel Scherer, the Bernard Neches Professor of Electrical Engineering, Applied Physics, and Physics, and who is co-mentored by postdoctoral researcher Muhammad Mujeeb-U-Rahman, started this project three years ago during his very first SURF.

    “When I started, we were just thinking about what kind of chemistry the sensor would use, and now we have a sensor that is actually designed to do that,” he says. Over the summer, he implanted the sensors in rat models, and he will continue the study over the fall and spring terms using both rat and mouse models—a first step in determining if the design is a clinically viable option.

    Junior electrical engineering major Sith Domrongkitchaiporn from the Scherer laboratory, also co-mentored by Mujeeb-U-Rahman, took a different approach to glucose detection, making tiny biosensors that are inconspicuously wearable on the surface of a contact lens. “It’s an interesting concept because instead of having to do a procedure to place something under the skin, you can use a less invasive method, placing a sensor on the eye to get the same information,” he says.

    He used the method optimized by Mehmet to determine blood glucose levels from interstitial fluid and adapted the chemistry to measure glucose in the eyes’ tears. This summer, he will be attempting to fabricate the lens itself and improve upon the process whereby radio waves are used to power the sensor and then transmit data from the sensor to an external computer.

    SURF student and sophomore electrical engineering major Jennifer Chih-Wen Lin wanted to incorporate a different kind of glucose sensor into a contact lens. “The concept—determining glucose readings from tears—is very similar to Sith’s, but the method is very different,” she says.

    Instead of determining the glucose level based on the amount of electrical current that passes through a sample, Lin, who works in the laboratory of Hyuck Choo, assistant professor of electrical engineering, worked on a sensor that detects glucose levels from the interaction between light and molecules.

    In her SURF project, she began optimizing the characterization of glucose molecules in a sample of glucose solution using a technique called Raman spectroscopy. When molecules encounter light, they vibrate differently based on their symmetry and the types of bonds that hold their atoms together. This vibrational information provides a unique fingerprint for each type of molecule, which is represented as peaks on the Raman spectrum—and the intensity of these peaks correlates to the concentration of that molecule within the sample.

    “This step is important because once I can determine the relationship between peak intensities and glucose concentrations, our sensor can just compare that known spectrum to the reading from a sample of tears to determine the amount of glucose in the sample,” she says.

    Lin’s project is in the very beginning stages, but if it is successful, it could provide a more accurate glucose measurement, and from a smaller volume of liquid, than is possible with the finger-stick method. Perhaps more importantly for patients, it can provide that measurement painlessly.

    Also in Choo’s laboratory, sophomore electrical engineering major Sophia Chen’s SURF project involves a new way to power devices like these tiny sensors and other medical implants, using the vibrations from a patient’s vocal cords. These vibrations produce the sound of our voice, and also create vibrations in the skull.

    “We’re using these devices called energy harvesters that can extract energy from vibrations at specific frequencies. When the vibrations go from the vocal folds to the skull, a structure in the energy harvester vibrates at the same frequency, generating energy—energy that can be used to power batteries or charge things,” Chen says.
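Such a harvester is essentially a mass-spring resonator tuned to match the skull's vibration frequency. A sketch of that tuning with hypothetical stiffness and proof-mass values (not the lab's actual design figures):

```python
import math

def resonant_frequency(stiffness_N_per_m, mass_kg):
    """Natural frequency of a mass-spring resonator: sqrt(k/m) / (2*pi)."""
    return math.sqrt(stiffness_N_per_m / mass_kg) / (2 * math.pi)

# Hypothetical: a 1 g proof mass with ~570 N/m stiffness resonates
# near a ~120 Hz vocal fundamental.
f = resonant_frequency(568.0, 1e-3)
```

Matching the resonance to the dominant vocal-fold frequency is what lets the small structure extract useful energy from otherwise tiny vibrations.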

    Chen’s goal is to determine the frequency of these vibrations—and if the energy that they produce is actually enough to power a tiny device. The hope is that one day these vibrations could power, or at least supplement the power of, medical devices that need to be implanted near the head and that presently run on batteries with finite lifetimes.

    Chen and the other students acknowledge that health-monitoring sensors powered by the human body might be years away from entering the clinic. However, this opportunity to apply classroom knowledge to a real-life challenge—such as diabetes treatment—is an important part of their training as tomorrow’s scientists and engineers.

    See the full article here.

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”
    Caltech buildings

     
  • richardmitnick 2:18 pm on October 10, 2014 Permalink | Reply
    Tags: Applied Research & Technology, , , ,   

    From BNL: “Researchers Pump Up Oil Accumulation in Plant Leaves” 

    Brookhaven Lab

    October 7, 2014
    Karen McNulty Walsh, (631) 344-8350 or Peter Genzer, (631) 344-3174

    Increasing the oil content of plant biomass could help fulfill the nation’s increasing demand for renewable energy feedstocks. But many of the details of how plant leaves make and break down oils have remained a mystery. Now a series of detailed genetic studies conducted at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory and published in The Plant Cell reveals previously unknown biochemical details about those metabolic pathways—including new ways to increase the accumulation of oil in leaves, an abundant source of biomass for fuel production.

    Using these methods, the scientists grew experimental Arabidopsis plants whose leaves accumulated 9 percent oil by dry weight, which represents an approximately 150-fold increase in oil content compared to wild type leaves.

    “This is an unusually high level of oil accumulation for plant vegetative tissue,” said Brookhaven Lab biochemist Changcheng Xu, who led the research team. “In crop plants, whose growth time is longer, if the rate of oil accumulation is the same we could get much higher oil content—possibly as high as 40 percent by weight,” he said.

    And when it comes to growing plants for biofuels, packing on the calories is the goal, because energy-dense oils give more “bang per bushel” than less-energy-dense leaf carbohydrates.
    Deciphering biochemical pathways

    The key to increasing oil accumulation in these studies was to unravel the details of the biochemical pathways involved in the conversion of carbon into fatty acids, the storage of fatty acids as oil, and the breakdown of oil in leaves. Prior to this research, scientists did not know that these processes were so intimately related.

    “Our method resulted in an unusually high level of oil accumulation in plant vegetative tissue.”
    — Brookhaven Lab biochemist Changcheng Xu

    “We previously thought that oil storage and oil degradation were alternative fates for newly synthesized fatty acids—the building blocks of oils,” said Brookhaven biochemist John Shanklin, a collaborator on the studies.

    To reveal the connections, Brookhaven’s Jilian Fan and other team members used a series of genetic tricks to systematically disable an alphabet soup of enzymes—molecules that mediate a cell’s chemical reactions—to see whether and how each had an effect in regulating the various biochemical conversions. They also used radiolabeled versions of fatty acids to trace their paths and learn how quickly they move through the pathway. They then used the findings to map out how the processes take place inside different subcellular structures, some of which you might recognize from high school science classes: the chloroplast, endoplasmic reticulum, storage droplets, and the peroxisome.

    Brookhaven researchers Jilian Fan, John Shanklin, and Changcheng Xu have developed a method for getting experimental plants to accumulate more leaf oil. Their strategy could have a significant impact on the production of biofuels.

    “Our goal was to test and understand all the components of the system to fully understand how fatty acids, which are produced in the chloroplasts, are broken down in the peroxisome,” Xu said.

    Key findings

    Details of the oil synthesis and breakdown pathways within plant leaf cells: Fatty acids (FA) synthesized within chloroplasts go through a series of reactions to be incorporated into lipids (TAG) within the endoplasmic reticulum (ER); lipid droplets (LD) store lipids such as oils until they are broken down to release fatty acids into the cytoplasm; the fatty acids are eventually transported into the peroxisome for oxidation. This detailed metabolic map pointed to a new way to dramatically increase the accumulation of oil in plant leaves — blocking the SDP1 enzyme that releases fatty acids from lipid droplets in plants with elevated fatty acid synthesis. If this strategy works in biofuel crops, it could dramatically increase the energy content of biomass used to make biofuels.

    The research revealed that there is no direct pathway for fatty acids to move from the chloroplasts to the peroxisome as had previously been assumed. Instead, many complex reactions occur within the endoplasmic reticulum to first convert the fatty acids through a series of intermediates into plant oils. These oils accumulate in storage droplets within the cytoplasm until another enzyme breaks them down to release the fatty acid building blocks. Yet another enzyme must transport the fatty acids into the peroxisome for the final stages of degradation via oxidation. The amount of oil that accumulates at any one time represents a balance between the pathways of synthesis and degradation.
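That balance can be caricatured as a one-line rate equation: a constant synthesis flux feeding an oil pool that is drained by first-order degradation (the SDP1 step). The rates below are invented purely to show the qualitative effect of weakening degradation:

```python
def simulate_oil(synthesis_rate, k_degradation, days, dt=0.01):
    """Euler integration of d(oil)/dt = synthesis - k * oil.
    All rates are arbitrary illustrative units, not measured values."""
    oil = 0.0
    for _ in range(int(days / dt)):
        oil += (synthesis_rate - k_degradation * oil) * dt
    return oil

wild_type = simulate_oil(synthesis_rate=1.0, k_degradation=10.0, days=5)
knockout = simulate_oil(synthesis_rate=1.0, k_degradation=0.1, days=5)
# Weakening the degradation step lets oil climb far above the
# wild-type steady state instead of being turned over immediately.
```

With fast turnover, oil plateaus at a low steady state; slash the degradation rate constant and the same synthesis flux accumulates many-fold more oil, which is the qualitative logic behind disabling SDP1.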

    Some previous attempts to increase oil accumulation in leaves have focused on disrupting the breakdown of oils by blocking the action of the enzyme that transports fatty acids into the peroxisome. The reasoning was that the accumulation of fatty acids would have a negative feedback on oil droplet breakdown. High levels of fatty acids remaining in the cytoplasm would inhibit the further breakdown of oil droplets, resulting in higher oil accumulation.

    That idea works to some extent, Xu said, but the current research shows it has negative effects on the overall health of the plants. “Plants don’t grow as well and there can be other defects,” he said.

    Based on their new understanding of the detailed biochemical steps that lead to oil breakdown, Xu and his collaborators explored another approach—namely disabling the enzyme one step back in the metabolic process, the one that breaks down oil droplets to release fatty acids.

    “If we knock out this enzyme, known as SDP1, we get a large amount of oil accumulating in the leaves,” he said, “and without substantial detrimental effects on plant growth.”

    “This research points to a new and different way to accumulate oil in leaves from that being tried in other labs,” Xu said. “In addition, the strategy differs fundamentally from other strategies that are based on adding genes, whereas our strategy is based on disabling or inactivating genes through simple mutations. This work provides a very promising platform for engineering oil production in a non-genetically modified way.”

    “This work provides another example of how research into basic biochemical mechanisms can lead to knowledge that has great promise to help solve real world problems,” concluded Shanklin.

    This research was conducted by Xu in collaboration with Jilian Fan, Chengshi Yan, and John Shanklin of Brookhaven’s Biosciences Department, and Rebecca Roston, now at the University of Nebraska, Lincoln. The work was funded by the DOE Office of Science and made use of a confocal microscope at Brookhaven Lab’s Center for Functional Nanomaterials, a DOE Office of Science user facility.

    See the full article here.

    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world.Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     