Tagged: Applied Research & Technology

  • richardmitnick 9:08 pm on March 31, 2014 Permalink | Reply
    Tags: Applied Research & Technology

    From Argonne Lab via PPPL: “Plasma Turbulence Simulations Reveal Promising Insight for Fusion Energy” 

    March 31, 2014
    By Argonne National Laboratory

    With the potential to provide clean, safe, and abundant energy, nuclear fusion has been called the “holy grail” of energy production. But harnessing energy from fusion, the process that powers the sun, has proven to be an extremely difficult challenge.

    [Image: Simulation of microturbulence in a tokamak fusion device. (Credit: Chad Jones and Kwan-Liu Ma, University of California, Davis; Stephane Ethier, Princeton Plasma Physics Laboratory)]

    Scientists have been working to accomplish efficient, self-sustaining fusion reactions for decades, and significant research and development efforts continue in several countries today.

    For one such effort, researchers from the Princeton Plasma Physics Laboratory (PPPL), a DOE collaborative national center for fusion and plasma research in New Jersey, are running large-scale simulations at the Argonne Leadership Computing Facility (ALCF) to shed light on the complex physics of fusion energy. Their most recent simulations on Mira, the ALCF’s 10-petaflops Blue Gene/Q supercomputer, revealed that turbulent losses in the plasma are not as large as previously estimated.

    [Image: Mira, the ALCF’s Blue Gene/Q supercomputer]

    Good news

    This is good news for the fusion research community as plasma turbulence presents a major obstacle to attaining an efficient fusion reactor in which light atomic nuclei fuse together and produce energy. The balance between fusion energy production and the heat losses associated with plasma turbulence can ultimately determine the size and cost of an actual reactor.

    “Understanding and possibly controlling the underlying physical processes is key to achieving the efficiency needed to ensure the practicality of future fusion reactors,” said William Tang, PPPL principal research physicist and project lead.

    Tang’s work at the ALCF is focused on advancing the development of magnetically confined fusion energy systems, especially ITER, a multi-billion dollar international burning plasma experiment supported by seven governments including the United States.

    Currently under construction in France, ITER will be the world’s largest tokamak system, a device that uses strong magnetic fields to contain the burning plasma in a doughnut-shaped vacuum vessel. In tokamaks, unavoidable variations in the plasma’s ion temperature drive microturbulence, which can significantly increase the transport rate of heat, particles, and momentum across the confining magnetic field.

    “Simulating tokamaks of ITER’s physical size could not be done with sufficient accuracy until supercomputers as powerful as Mira became available,” said Tang.

    To prepare for the architecture and scale of Mira, Tim Williams of the ALCF worked with Tang and colleagues to benchmark and optimize their Gyrokinetic Toroidal Code – Princeton (GTC-P) on the ALCF’s new supercomputer. This allowed the research team to perform the first simulations of multiscale tokamak plasmas with very high phase-space resolution and long temporal duration. They are simulating a sequence of tokamak sizes up to and beyond the scale of ITER to validate the turbulent losses for large-scale fusion energy systems.

    Decades of experiments

    Decades of experimental measurements and theoretical estimates have shown turbulent losses to increase as the size of the experiment increases; this phenomenon occurs in the so-called Bohm regime. However, when tokamaks reach a certain size, it has been predicted that there will be a turnover point into a Gyro-Bohm regime, where the losses level off and become independent of size. For ITER and other future burning plasma experiments, it is important that the systems operate in this Gyro-Bohm regime.
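
    For orientation, the standard scaling relations behind these two regime names (textbook definitions from the fusion literature, not taken from the article) can be written in terms of the ion gyroradius $\rho_i$ and the plasma minor radius $a$:

    $$\chi_{\mathrm{B}} \sim \frac{k_B T}{16\,eB}, \qquad \chi_{\mathrm{gB}} = \frac{\rho_i}{a}\,\chi_{\mathrm{B}},$$

    where B stands for Bohm and gB for gyro-Bohm. Because the ratio $\rho_i/a$ shrinks as the device grows, transport that follows the gyro-Bohm scaling stops worsening with machine size, which is why operating ITER in this regime matters.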

    The recent simulations on Mira led the PPPL researchers to discover that the magnitude of turbulent losses in the Gyro-Bohm regime is up to 50% lower than indicated by earlier simulations carried out at much lower resolution and significantly shorter duration. The team also found that the transition from the Bohm regime to the Gyro-Bohm regime is much more gradual as the plasma size increases. With a clearer picture of the shape of the transition curve, scientists can better understand the basic plasma physics involved in this phenomenon.

    “Determining how turbulent transport and associated confinement characteristics will scale to the much larger ITER-scale plasmas is of great interest to the fusion research community,” said Tang. “The results will help accelerate progress in worldwide efforts to harness the power of nuclear fusion as an alternative to fossil fuels.”

    This project has received computing time at the ALCF through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. The effort was also awarded pre-production time on Mira through the ALCF’s Early Science Program, which allowed researchers to pursue science goals while preparing their GTC-P code for Mira.

    See the full article here.

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 1:11 pm on March 31, 2014 Permalink | Reply
    Tags: Applied Research & Technology

    From Ames Lab: “Ultra-fast laser spectroscopy lights way to understanding new materials” 

    [Image: Ames Laboratory]

    Feb. 28, 2014
    Jigang Wang, Material Sciences and Engineering, 515-294-5630
    Breehan Gerleman Lucchesi, Public Affairs, 515-294-9750

    Scientists at the U.S. Department of Energy’s Ames Laboratory are revealing the mysteries of new materials using ultra-fast laser spectroscopy, similar to high-speed photography where many quick images reveal subtle movements and changes inside the materials. Seeing these dynamics is one emerging strategy to better understand how new materials work, so that we can use them to enable new energy technologies.

    Physicist Jigang Wang and his colleagues recently used ultra-fast laser spectroscopy to examine and explain the mysterious electronic properties of iron-based superconductors. Results appeared in Nature Communications this month.

    Superconductors are materials that, when cooled below a certain temperature, display zero electrical resistance, a property that could someday make possible lossless electrical distribution. Superconductors start in a “normal,” often magnetic, state and then transition to a superconducting state when they are cooled to a certain temperature.

    What is still a mystery is what goes on in materials as they transform from normal to superconducting. And this “messy middle” area of superconducting materials’ behavior holds richer information about the why and how of superconductivity than do the stable areas.

    [Image: Ames Laboratory scientists use ultra-fast laser spectroscopy to “see” tiny actions in real time in materials. Scientists apply a pulsed laser to a sample to excite the material. Some of the laser light is absorbed by the material, but the light that passes through or is reflected from the material can be used to take super-fast “snapshots” of what is going on in the material following the laser pulse.]

    “The stable states of materials aren’t quite as interesting as the crossover region when it comes to understanding materials’ mechanisms because everything is settled and there’s not a lot of action. But, in this crossover region to superconductivity, we can study the dynamics, see what goes where and when, and this information will tell us a lot about the interplay between superconductivity and magnetism,” said Wang, who is also an associate professor of physics and astronomy at Iowa State University.

    But the challenge is that in the crossover region, all the different sets of material properties that scientists examine, like magnetic order and electronic order, are coupled. In other words, when there’s a change to one set of properties, it changes all the others, so it’s really difficult to trace which individual changes and properties are dominant.

    The complexity of this coupled state has been probed in groundbreaking work by research groups at Ames Laboratory over the past five years. Paul Canfield, an Ames Laboratory scientist and expert in designing and developing iron-based superconductor materials, created and characterized the very high-quality single crystals used in this investigation. These crystals had been exceptionally well characterized by other techniques and were essentially “waiting for their close-up” under Wang’s ultra-fast spotlight.

    Wang and the team used ultra-fast laser spectroscopy to “see” the tiny actions in materials. In ultra-fast laser spectroscopy, scientists apply a pulsed laser to a materials sample to excite particles within the sample. Some of the laser light is absorbed by the material, but the light that passes through the material can be used to take super-fast “snapshots” of what is going on in the material following the laser pulse and then replayed afterward like a stop-action movie.
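
    To make the “stop-action movie” idea concrete, here is a minimal toy model (illustrative only; the parameter values and the simple rise-and-decay form are my assumptions, not the team’s analysis). The measured quantity is the small change in reflected or transmitted probe light as a function of pump-probe delay:

        import numpy as np

        def pump_probe_signal(delay_ps, amplitude=1e-3, rise_ps=0.1, decay_ps=2.0):
            """Toy differential reflectivity dR/R vs pump-probe delay:
            a fast rise when the pump arrives, then relaxation back
            toward equilibrium."""
            t = np.asarray(delay_ps, dtype=float)
            return np.where(
                t < 0.0, 0.0,
                amplitude * (1.0 - np.exp(-t / rise_ps)) * np.exp(-t / decay_ps))

        delays = np.linspace(-1.0, 10.0, 300)   # delay-stage positions, picoseconds
        trace = pump_probe_signal(delays)
        print(f"peak dR/R = {trace.max():.2e} at {delays[trace.argmax()]:.2f} ps")

    Each delay point is one “snapshot”; sweeping the delay builds up the movie of how the excited material relaxes.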

    The technique is especially well suited to understanding the crossover region of iron-arsenide-based superconductor materials because the laser excitation alters the material so that its different properties, even the most subtle evolutions among them, become distinguishable from each other in time.

    “Ultra-fast laser spectroscopy is a new experimental tool to study dynamic, emergent behavior in complex materials such as these iron-based superconductors,” said Wang. “Specifically, we answered the pressing question of whether an electronically-driven nematic order exists as an independent phase in iron-based superconductors, as these materials go from a magnetic normal state to superconducting state. The answer is yes. This is important to our overall understanding of how superconductors emerge in this type of material.”

    Aaron Patz and Tianqi Li collaborated on the laser spectroscopy work. Sheng Ran, Sergey L. Bud’ko and Paul Canfield collaborated on sample development at Ames Laboratory and Iowa State University. Rafael M. Fernandes at the University of Minnesota, Joerg Schmalian, formerly of Ames Laboratory and now at Karlsruhe Institute of Technology, and Ilias E. Perakis at the University of Crete, Greece, collaborated on the simulation work.

    Wang, Patz, Li, Ran, Bud’ko and Canfield’s work at Ames Laboratory was supported by the U.S. Department of Energy’s Office of Science (sample preparation and characterization). Wang’s work on pnictide superconductors is supported by Ames Laboratory’s Laboratory Directed Research and Development (LDRD) funding (femtosecond laser spectroscopy).

    DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit the Office of Science website at science.energy.gov/.

    See the full article here.

    Ames Laboratory is a government-owned, contractor-operated research facility of the U.S. Department of Energy that is run by Iowa State University.

    For more than 60 years, the Ames Laboratory has sought solutions to energy-related problems through the exploration of chemical, engineering, materials, mathematical and physical sciences. Established in the 1940s with the successful development of the most efficient process to produce high-quality uranium metal for atomic energy, the Lab now pursues a broad range of scientific priorities.

    Ames Laboratory shares a close working relationship with Iowa State University’s Institute for Physical Research and Technology, or IPRT, a network of scientific research centers at Iowa State University, Ames, Iowa.



    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 8:08 am on March 30, 2014 Permalink | Reply
    Tags: Applied Research & Technology

    From M.I.T.: “Researchers find that going with the flow makes bacteria stick” 

    February 23, 2014
    David L. Chandler, MIT News Office

    In a surprising new discovery, scientists show that microbes are more likely to adhere to tube walls when water is moving.


    In a surprising new finding, researchers have discovered that bacterial movement is impeded in flowing water, enhancing the likelihood that the microbes will attach to surfaces. The new work could have implications for the study of marine ecosystems, and for our understanding of how infections take hold in medical devices.

    The findings, the result of microscopic analysis of bacteria inside microfluidic devices, were made by MIT postdoc Roberto Rusconi, former MIT postdoc Jeffrey Guasto (now an assistant professor of mechanical engineering at Tufts University), and Roman Stocker, an associate professor of civil and environmental engineering at MIT. Their results are published in the journal Nature Physics.

    The study, which combined experimental observations with mathematical modeling, showed that the flow of liquid can have two significant effects on microbes: “It quenches the ability of microbes to chase food,” Stocker says, “and it helps microbes find surfaces.”

    That second finding could be particularly beneficial: Stocker says in some cases, that phenomenon could lead to new approaches to tuning flow rates to prevent fouling of surfaces by microbes — potentially averting everything from bacteria getting a toehold on medical equipment to biofilms causing drag on ship hulls.

    The effect of flowing water on bacterial swimming was “a complete surprise,” Stocker says.

    “My own earlier predictions of what would happen when microbes swim in flowing water had been: ‘Nothing too interesting,’” he adds. “It was only when Roberto and Jeff did the experiments that we found this very strong and robust phenomenon.”

    Even though most microorganisms live in flowing liquid, most studies of their behavior ignore flow, Stocker explains. The new findings show, he says, that “any study of microbes suspended in a liquid should not ignore that the motion of that liquid could have important repercussions on the microbes.”

    The novelty of this result owes partly to the divisions of academic specialties, and partly to advances in technology, Stocker says. “Microbiologists have rarely taken into account fluid flow as an ecological parameter, whereas physicists have just recently started to pay attention to microbes,” he says, adding: “The ability to directly watch microbes under the controlled flow conditions afforded by microfluidic technology — which is only about 15 years old — has made all the difference in allowing us to discover and understand this effect of flow on microbes.”

    The team found that swimming bacteria cluster in the “high shear zones” in a flow — the regions where the speed of the fluid changes most abruptly. Such high shear zones occur in most types of flows, and in many bacterial habitats. One prominent location is near the walls of tubes, where the result is a strong enhancement of the bacteria’s tendency to adhere to those walls and form biofilms.
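
    As a minimal numerical sketch of where those high-shear zones sit (assuming a simple pressure-driven Poiseuille flow in a channel; the dimensions and speed below are illustrative, not from the study):

        import numpy as np

        # Poiseuille profile u(y) = u_max * (1 - (y/h)^2) across a channel of half-width h
        h, u_max = 50e-6, 1e-3          # half-width 50 micrometers, peak speed 1 mm/s
        y = np.linspace(-h, h, 11)
        shear = np.abs(-2.0 * u_max * y / h**2)   # |du/dy|: zero at center, maximal at walls
        for yi, si in zip(y, shear):
            print(f"y = {yi * 1e6:+6.1f} um   shear rate = {si:7.2f} 1/s")

    The shear rate vanishes at the centerline and peaks at the walls, which is exactly where the study found motile bacteria accumulating and attaching.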

    But this effect varies greatly depending on the speed of the flow, opening the possibility that the rate of biofilm formation can be tweaked by increasing or decreasing flow rates.

    Guasto says the new understanding could help in the design of medical equipment to reduce such infections: Since the phenomenon peaks at particular rates of shear, he says, “Our results might suggest additional design criteria for biomedical devices, which should operate outside this range of shear rates, when possible — either faster or slower.”

    “Biofilms are found everywhere,” Rusconi says, adding that the majority of bacteria spend significant fractions of their lives adhering to surfaces. “They cause major problems in industrial settings,” such as by clogging pipes or reducing the efficiency of heat exchangers. Their adherence is also a major health issue: Bacteria concentrated in biofilms are up to 1,000 times more resistant to antibiotics than those suspended in liquid.

    The concentration of microbes in the shear zones is an effect that only happens with those that can control their movements. Nonliving particles of similar size and shape show no such effect, the team found, nor do nonmotile bacteria that are swept along passively by the water. “Without motility, bacteria are distributed everywhere and there is no preferential accumulation,” Rusconi says.

    The new findings could also be important for studies of microbial marine ecosystems, by affecting how bacteria move in search of nutrients when one accounts for the ubiquitous currents and turbulence, Stocker says. Though they only studied two types of bacteria, the researchers predict in their paper that “this phenomenon should apply very broadly to many different motile microbes.”

    In fact, the phenomenon has no inherent size limit, and could apply to a wide range of organisms, Guasto says. “There’s really nothing special about bacteria compared to many other swimming cells in this respect,” he says. “This phenomenon could easily apply to a wide range of plankton and sperm cells as well.”

    Howard A. Stone, a professor of mechanical and aerospace engineering at Princeton University, who was not involved in this research, calls this a “very interesting paper” and says “the observation of shear-induced trapping, which can impact the propensity for bacterial attachment on surfaces, is an important observation and idea, owing to the major importance of bacterial biofilms.”

    See the full article here.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 3:02 pm on March 27, 2014 Permalink | Reply
    Tags: Applied Research & Technology

    From SLAC Lab: “Science with Bling: Turning Graphite into Diamond” 

    March 27, 2014
    Manuel Gnida

    A research team led by SLAC scientists has uncovered a potential new route to produce thin diamond films for a variety of industrial applications, from cutting tools to electronic devices to electrochemical sensors.

    [Image: SLAC researchers have found a new way to transform graphite — a pure form of carbon most familiar as the lead in pencils — into a diamond-like film. (Fabricio Sousa/SLAC)]

    [Image: Four layers of transformed graphene (single sheets of graphite, with carbon atoms represented as black spheres) on a platinum surface (blue spheres). The addition of hydrogen atoms (green spheres) to the top layer has set off a domino effect that transformed this graphite-like material into a diamond-like film. The film is stabilized by bonds between the platinum substrate and the bottom-most carbon layer. (Sarp Kaya and Frank Abild-Pedersen/SUNCAT)]

    The scientists added a few layers of graphene – one-atom-thick sheets of graphite – to a metal support and exposed the topmost layer to hydrogen. To their surprise, the reaction at the surface set off a domino effect that altered the structure of all the graphene layers from graphite-like to diamond-like.

    “We provide the first experimental evidence that hydrogenation can induce such a transition in graphene,” says Sarp Kaya, a researcher at the SUNCAT Center for Interface Science and Catalysis and corresponding author of the recent study.

    From Pencil Lead to Diamond

    Graphite and diamond are two forms of the same chemical element, carbon. Yet, their properties could not be any more different. In graphite, carbon atoms are arranged in planar sheets that can easily glide against each other. This structure makes the material very soft and it can be used in products such as pencil lead.

    In diamond, on the other hand, the carbon atoms are strongly bonded in all directions; thus diamond is extremely hard. Besides mechanical strength, its extraordinary electrical, optical and chemical properties contribute to diamond’s great value for industrial applications.

    Scientists want to understand and control the structural transition between different carbon forms in order to selectively transform one into another. One way to turn graphite into diamond is by applying pressure. However, since graphite is the most stable form of carbon under normal conditions, it takes approximately 150,000 times the atmospheric pressure at the Earth’s surface to do so.
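
    For scale (my arithmetic, not stated in the article), that pressure works out to

    $$150{,}000 \times 101{,}325\ \mathrm{Pa} \approx 1.5 \times 10^{10}\ \mathrm{Pa} \approx 15\ \mathrm{GPa},$$

    on the order of the pressures deep inside the Earth where natural diamond forms.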

    Now, an alternative way that works on the nanoscale is within grasp. “Our study shows that hydrogenation of graphene could be a new route to synthesize ultrathin diamond-like films without applying pressure,” Kaya says.

    Domino Effect

    For their experiments, the researchers loaded a platinum support with up to four sheets of graphene and added hydrogen to the topmost layer. With the help of intense X-rays from SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL, Beam Line 13-2) and additional theoretical calculations performed by SUNCAT researcher Frank Abild-Pedersen, the team then determined how hydrogen impacted the layered structure.

    [Image: Inside SSRL]

    They found that hydrogen binding initiated a domino effect, with structural changes propagating from the sample’s surface through all the carbon layers underneath, turning the initial graphite-like structure of planar carbon sheets into an arrangement of carbon atoms that resembles diamond.

    The discovery was unexpected. The original goal of the experiment was to see if adding hydrogen could alter graphene’s properties in a way that would make it usable in transistors, the fundamental building blocks of electronic devices. Instead, the scientists discovered that hydrogen binding resulted in the formation of chemical bonds between graphene and the platinum substrate.

    It turns out that these bonds are crucial for the domino effect. “For this process to be stable, the platinum substrate needs to bond to the carbon layer closest to it,” Kaya explains. “Platinum’s ability to form these bonds determines the overall stability of the diamond-like film.”

    Future research will explore the full potential of hydrogenated few-layer graphene for applications in materials science. It will be particularly interesting to determine if diamond-like films can be grown on other metal substrates, using graphene of various thicknesses.

    The research team included scientists from Stanford University, the Stanford Institute for Materials & Energy Sciences (SIMES), SUNCAT and SSRL.

    See the full article here.

    [Image: SLAC Campus]
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 5:23 am on March 27, 2014 Permalink | Reply
    Tags: Applied Research & Technology

    From Brookhaven Lab: “Scientists Track 3D Nanoscale Changes in Rechargeable Battery Material During Operation” 


    March 26, 2014
    Contacts: Karen McNulty Walsh, (631) 344-8350 or Peter Genzer, (631) 344-3174

    First 3D nanoscale observations of microstructural degradation during charge-discharge cycles could point to new ways to engineer battery electrode materials for better performance.

    Scientists at the U.S. Department of Energy’s Brookhaven National Laboratory have made the first 3D observations of how the structure of a lithium-ion battery anode evolves at the nanoscale in a real battery cell as it discharges and recharges. The details of this research, described in a paper published in Angewandte Chemie, could point to new ways to engineer battery materials to increase the capacity and lifetime of rechargeable batteries.

    “For the first time, we have captured the microstructural details of an operating battery anode in 3D with nanoscale resolution.”
    — Brookhaven physicist Jun Wang

    “This work offers a direct way to look inside the electrochemical reaction of batteries at the nanoscale to better understand the mechanism of structural degradation that occurs during a battery’s charge/discharge cycles,” said Brookhaven physicist Jun Wang, who led the research. “These findings can be used to guide the engineering and processing of advanced electrode materials and improve theoretical simulations with accurate 3D parameters.”

    Chemical reactions in which lithium ions move from a negatively charged electrode to a positive one are what carry electric current from a lithium-ion battery to power devices such as laptops and cell phones. When an external current is applied—say, by plugging the device into an outlet—the reaction runs in reverse to recharge the battery.

    [Image: The top row shows how tin particles evolve in three dimensions during the first two lithiation–delithiation cycles in the model lithium-ion rechargeable battery cell. The bottom row shows “cross-sectional” images of a single tin particle during the first two cycles. Severe fracture and pulverization occur during the initial stage of cycling. The particle stays mechanically stable after the first cycle, while the electrochemical reaction proceeds reversibly.]

    Scientists have long known that repeated charging/discharging (lithiation and delithiation) introduces microstructural changes in the electrode material, particularly in some high-capacity silicon and tin-based anode materials. These microstructural changes reduce the battery’s capacity—the energy the battery can store—and its cycle life—how many times the battery can be recharged over its lifetime. Understanding in detail how and when in the process the damage occurs could point to ways to avoid or minimize it.

    “It has been very challenging to directly visualize the microstructural evolution and chemical composition distribution changes in 3D within electrodes when a real battery cell is going through charge and discharge,” said Wang.

    A team led by Vanessa Wood of ETH Zurich, working at the Swiss Light Source, recently performed in situ 3D tomography at micrometer-scale resolution during battery cell charge and discharge cycles.

    Achieving nanoscale resolution has been the ultimate goal.

    “For the first time,” said Wang, “we have captured the microstructural details of an operating battery anode in 3D with nanoscale resolution, using a new in-situ micro-battery-cell we developed for synchrotron x-ray nano-tomography—an invaluable tool for reaching this goal.” This advance provides a powerful new source of insight into microstructural degradation.
    Building a micro battery

    Developing a working micro battery cell for nanoscale x-ray 3D imaging was very challenging. Common coin-cell batteries aren’t small enough, and they block the x-ray beam as the cell is rotated.

    “The whole micro cell has to be less than one millimeter in size but with all battery components—the electrode being studied, a liquid electrolyte, and the counter electrode—supported by relatively transparent materials to allow transmission of the x-rays, and properly sealed to ensure that the cell can work normally and be stable for repeated cycling,” Wang said. The paper explains in detail how Wang’s team built a fully functioning battery cell with all three battery components contained within a quartz capillary measuring one millimeter in diameter.

    By placing the cell in the path of high-intensity x-ray beams generated at beamline X8C of Brookhaven’s National Synchrotron Light Source (NSLS), the scientists produced more than 1400 two-dimensional x-ray images of the anode material with a resolution of approximately 30 nanometers. These 2D images were later reconstructed into 3D images, much like a medical CT scan but with nanometer-scale clarity. Because the x-rays pass through the material without destroying it, the scientists were able to capture and reconstruct how the material changed over time as the cell discharged and recharged, cycle after cycle.
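
    The reconstruction step itself is standard computed tomography, just at nanometer scale. Here is a minimal sketch with scikit-image (a generic filtered-back-projection demo on a test phantom, not the team’s actual pipeline):

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon, rescale

        # Stand-in for one slice of the sample: a standard test image.
        slice_true = rescale(shepp_logan_phantom(), 0.5)

        # Simulate projections over 180 degrees; the real experiment recorded
        # more than 1,400 x-ray images as the micro cell rotated.
        angles = np.linspace(0.0, 180.0, 180, endpoint=False)
        sinogram = radon(slice_true, theta=angles)

        # Filtered back-projection recovers the slice; stacking many such
        # slices yields the full 3D volume.
        slice_reco = iradon(sinogram, theta=angles, filter_name="ramp")
        print("mean absolute error:", np.abs(slice_reco - slice_true).mean())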

    [Image: Brookhaven National Synchrotron Light Source (NSLS)]

    [Image: These images show how the surface morphology and internal microstructure of an individual tin particle change from the fresh state through the initial lithiation and delithiation cycle (charge/discharge). Most notable are the expansion in overall particle volume during lithiation, and the reduction in volume and pulverization during delithiation. The cross-sectional images reveal that delithiation is incomplete, with the core of the particle retaining lithium surrounded by a layer of pure tin.]

    Using this method, the scientists revealed that “severe microstructural changes occur during the first delithiation and subsequent second lithiation, after which the particles reach structural equilibrium with no further significant morphological changes.”

    Specifically, the particles making up the tin-based anode developed significant curvatures during the early charge/discharge cycles, leading to high stress. “We propose that this high stress led to fracture and pulverization of the anode material during the first delithiation,” Wang said. Additional concave features after the first delithiation further induced structural instability in the second lithiation, but no significant changes developed after that point.

    “After these initial two cycles, the tin anode shows a stable discharge capacity and reversibility,” Wang said.

    “Our results suggest that the substantial microstructural changes in the electrodes during the initial electrochemical cycle—called forming in the energy storage industry—are a critical factor affecting how a battery retains much of its current capacity after it is formed,” she said. “Typically a battery loses a substantial portion of its capacity during this initial forming process. Our study will improve understanding of how this happens and help us develop better controls of the forming process with the goal of improving the performance of energy storage devices.”

    [Image: Jiajun Wang, Karen Chen and Jun Wang prepare a sample for study at NSLS beamline X8C.]

    Wang pointed out that while the current study looked specifically at a battery with tin as the anode, the electrochemical cell her team developed and the x-ray nanotomography technique can be applied to studies of other anode and cathode materials. The general methodology for monitoring structural changes in three dimensions as materials operate also launches an opportunity to monitor chemical states and phase transformations in catalysts, other types of materials for energy storage, and biological molecules.

    The transmission x-ray microscope used for this study will soon move to a full-field x-ray imaging (FXI) beamline at NSLS-II, a world-class synchrotron facility now nearing completion at Brookhaven Lab. This new facility will produce x-ray beams 10,000 times brighter than those at NSLS, enabling dynamic studies of various materials as they perform their particular functions.

    Jiajun Wang and Yu-chen Karen Chen-Wiegart are research associates in Wang’s research group and performed the work together.

    This research was funded as a Laboratory Directed Research and Development project at Brookhaven Lab and by the DOE Office of Science. The transmission x-ray microscope used in this work was built with funding from the American Recovery and Reinvestment Act.

    DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 6:45 pm on March 24, 2014 Permalink | Reply
    Tags: Applied Research & Technology

    From Berkeley Lab: “New Technique for Identifying Gene-Enhancers” 



    Berkeley Lab-Led Research Team Unveils Powerful New Tool for Studying DNA Elements that Regulate Genes

    March 24, 2014
    Lynn Yarris (510) 486-5375 lcyarris@lbl.gov

    An international team led by researchers with the Lawrence Berkeley National Laboratory (Berkeley Lab) has developed a new technique for identifying gene enhancers – sequences of DNA that act to amplify the expression of a specific gene – in the genomes of humans and other mammals. Called SIF-seq, for site-specific integration fluorescence-activated cell sorting followed by sequencing, this new technique complements existing genomic tools, such as ChIP-seq (chromatin immunoprecipitation followed by sequencing), and offers some additional benefits.

    “While ChIP-seq is very powerful in that it can query an entire genome for characteristics associated with enhancer activity in a single experiment, it can fail to identify some enhancers and identify some sites as being enhancers when they really aren’t,” says Diane Dickel, a geneticist with Berkeley Lab’s Genomics Division and member of the SIF-seq development team. “SIF-seq is currently capable of testing only hundreds to a few thousand sites for enhancer activity in a single experiment, but can determine enhancer activity more accurately than ChIP-seq and is therefore a very good validation assay for assessing ChIP-seq results.”

    Dickel is the lead author of a paper in Nature Methods describing this new technique. The paper is titled “Function-based identification of mammalian enhancers using site-specific integration.” The corresponding authors are Axel Visel and Len Pennacchio, also geneticists with Berkeley Lab’s Genomics Division. (See below for a complete list of authors.)

    With the increasing awareness of the important role that gene enhancers play in normal cell development as well as in disease, there is strong scientific interest in identifying and characterizing these enhancers. This is a challenging task because an enhancer does not have to be located directly adjacent to the gene whose expression it regulates, but can instead be located hundreds of thousands of DNA base pairs away. The challenge is made even more difficult because the activity of many enhancers is restricted to specific tissues or cell types.

    [Image: Diane Dickel is the lead author of a Nature Methods paper describing a new technique for identifying gene enhancers in the genomes of humans and other mammals. (Photo by Roy Kaltschmidt)]

    “For example, brain enhancers will not typically work in heart cells, which means that you must test your enhancer sequence in the correct cell type,” Dickel says.

    Currently, enhancers can be identified through chromatin-based assays, such as ChIP-seq, which predict enhancer elements indirectly based on the enhancer’s association with specific epigenomic marks, such as transcription factors or molecular tags on DNA-associated histone proteins. Visel, Pennacchio, Dickel and their colleagues developed SIF-seq in response to the need for a higher-throughput functional enhancer assay that can be used in a wide variety of cell types and developmental contexts.

    “We’ve shown that SIF-seq can be used to identify enhancers active in cardiomyocytes, neural progenitor cells, and embryonic stem cells, and we think that it has the potential to be expanded for use in a much wider variety of cell types,” Dickel says. “This means that many more types of enhancers could potentially be tested in vitro in cell culture.”

    In SIF-seq, hundreds or thousands of DNA fragments to be tested for enhancer activity are coupled to a reporter gene and targeted into a single, reproducible site in embryonic cell genomes. Every embryonic cell will have exactly one potential enhancer-reporter. Fluorescence-activated sorting is then used to identify and retrieve from this mix only those cells that display strong reporter gene expression, which represent the cells with the most active enhancers.
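
    The logic of the assay can be mimicked with a toy simulation (every number here is hypothetical, purely to illustrate the sort-then-sequence idea, not data from the paper):

        import numpy as np

        rng = np.random.default_rng(0)
        n_fragments, cells_per_fragment = 1000, 200

        # Hypothetical ground truth: ~5% of tested fragments are active enhancers.
        is_enhancer = rng.random(n_fragments) < 0.05

        # One fragment per cell; active enhancers boost noisy reporter fluorescence.
        log_mean = np.log(np.where(is_enhancer, 8.0, 1.0))
        fluor = rng.lognormal(mean=log_mean[:, None], sigma=0.5,
                              size=(n_fragments, cells_per_fragment))

        gate = np.quantile(fluor, 0.95)        # FACS gate: keep the brightest cells
        kept = (fluor > gate).sum(axis=1)      # cells per fragment passing the gate
        print("mean kept cells, enhancers:  ", kept[is_enhancer].mean())
        print("mean kept cells, background: ", kept[~is_enhancer].mean())

    Fragments whose cells repeatedly pass the gate are sequenced far more often, which is how enhancer activity is read out.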

    “Unlike previous enhancer assays for mammals, SIF-seq includes the integration of putative enhancers into a single genomic locus,” says Visel. “Therefore, the activity of enhancers is assessed in a reproducible chromosomal context rather than from a transiently expressed plasmid. Furthermore, by making use of embryonic stem cells and in vitro differentiation, SIF-seq can be used to assess enhancer activity in a wide variety of disease-relevant cell types.”

    [Image: Berkeley Lab’s Len Pennacchio (left) and Axel Visel led the development of a new technique for identifying gene enhancers called SIF-seq, for site-specific integration fluorescence-activated cell sorting followed by sequencing. (Photo by Roy Kaltschmidt)]

    Adds Pennacchio, “The range of biologically or disease-relevant enhancers that SIF-seq can be used to identify is limited only by currently available stem cell differentiation methods. Although we did not explicitly test the activity of species-specific enhancers, such as those derived from certain classes of repetitive elements, our results strongly suggest that SIF-seq can be used to identify enhancers from other mammalian genomes where desired cell types are difficult or impossible to obtain.”

    The ability of SIF-seq to use reporter assays in mouse embryonic stem cells to identify human embryonic stem cell enhancers that are not present in the mouse genome opens the door to intriguing research possibilities, as Dickel explains.

    “Human and chimpanzee genes differ very little, so one hypothesis in evolutionary genomics holds that humans and chimpanzees are so phenotypically different because of differences in the way they regulate gene expression. It is very difficult to carry out enhancer identification through ChIP-seq that would be useful in studying this hypothesis,” she says. “However, because SIF-seq only requires DNA sequence from a mammal and can be used in a variety of cell types, it should be possible to compare the neuronal enhancers present in a large genomic region from human to the neuronal enhancers present in the orthologous chimpanzee region. This could potentially tell us interesting things about the genetic differences that differentiate human brain development from that of other primates.”

    In addition to Dickel, Pennacchio and Visel, other co-authors of the Nature Methods paper were Yiwen Zhu, Alex Nord, John Wylie, Jennifer Akiyama, Veena Afzal, Ingrid Plajzer-Frick, Aileen Kirkpatrick, Berthold Göttgens and Benoit Bruneau.

    This research was primarily supported by the National Institutes of Health.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California



    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 4:16 pm on March 21, 2014 Permalink | Reply
    Tags: Applied Research & Technology

    From Argonne APS: “A Layered Nanostructure Held Together By DNA” 

    News from Argonne National Laboratory

    March 18, 2014
    David Lindley

    Dreaming up nanostructures that have desirable optical, electronic, or magnetic properties is one thing. Figuring out how to make them is another. A new strategy uses the binding properties of complementary strands of DNA to attach nanoparticles to each other and builds up a layered thin-film nanostructure through a series of controlled steps. Investigation at the U.S. Department of Energy Office of Science’s Advanced Photon Source has revealed the precise form that the structures adopted, and points to ways of exercising still greater control over the final arrangement.


    The idea of using DNA to hold nanoparticles was devised more than 15 years ago by Chad Mirkin and his research team at Northwestern University. They attached short lengths of single-stranded DNA with a given sequence to some nanoparticles, and then attached DNA with the complementary sequence to others. When the particles were allowed to mix, the “sticky ends” of the DNA hooked up with each other, allowing for reversible aggregation and disaggregation depending on the hybridization properties of the DNA linkers.
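
    The pairing rule that makes the ends “sticky” is simple Watson-Crick complementarity. A minimal sketch (the eight-base linker below is an invented example, not a sequence from the work):

        def complement(seq: str) -> str:
            """Reverse complement of a DNA sequence."""
            pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
            return "".join(pairs[base] for base in reversed(seq))

        a_linker = "ATCGTTGA"              # hypothetical sticky end on particle A
        b_linker = complement(a_linker)    # the matching end placed on particle B
        print(b_linker)                    # -> TCAACGAT
        assert complement(b_linker) == a_linker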

    [Image: Nanoparticles linked by complementary DNA strands form a bcc superlattice when added layer-by-layer to a DNA-coated substrate. When the substrate DNA is all one type, the superlattice forms at a different orientation (top row) than if the substrate has both DNA linkers (bottom row). GISAXS scattering patterns (right) and scanning electron micrographs (inset) reveal the superlattice structure.]

    Recently, this DNA “smart glue” has been utilized to assemble nanoparticles into ordered arrangements resembling atomic crystal lattices, but on a larger scale. To date, nanoparticle superlattices have been synthesized in well over 100 crystal forms, including some that have never been observed in nature.

    However, these superlattices are typically polycrystalline, and the size, number, and orientation of the crystals within them are generally unpredictable. To be useful as metamaterials, photonic crystals, and the like, single superlattices with consistent size and fixed orientation are needed.

    Northwestern researchers and a colleague at Argonne National Laboratory have devised a variation on the DNA-linking procedure that allows a greater degree of control.

    The basic elements of the superlattice were gold nanoparticles, each 10 nanometers across. These particles were made in two distinct varieties, one adorned with approximately 60 DNA strands of a certain sequence, while the other carried the complementary sequence.

    The researchers built up thin-film superlattices on a silicon substrate that was also coated with DNA strands. In one set of experiments, the substrate DNA was all of one sequence – call it the “B” sequence – and it was first dipped into a suspension of nanoparticles with the complementary “A” sequence.

    When the A and B ends connected, the nanoparticles formed a single layer on the substrate. Then the process was repeated with a suspension of the B-type nanoparticles, to form a second layer. The whole cycle was repeated, as many as four more times, to create a multilayer nanoparticle superlattice in the form of a thin film.

    Grazing incidence small-angle x-ray scattering (GISAXS) studies carried out at the X-ray Science Division 12-ID-B beamline at the Argonne Advanced Photon Source revealed the symmetry and orientation of the superlattices as they formed. Even after just three half-cycles, the team found that the nanoparticles had arranged themselves into a well-defined, body-centered cubic (bcc) structure, which was maintained as more layers were added.

    In a second series of experiments, the researchers seeded the substrate with a mix of both the A and B types of DNA strand. Successive exposure to the two nanoparticle types produced the same bcc superlattice, but with a different vertical orientation. That is, in the first case, the substrate lay on a plane through the lattice containing only one type of nanoparticle, while in the second case, the plane contained an alternating pattern of both types (see the figure).

    To get orderly superlattice growth, the researchers had to conduct the process at the right temperature. Too cold, and the nanoparticles would stick to the substrate in an irregular fashion, and remain stuck. Too hot, and the DNA linkages would not hold together.

    But in a temperature range of a couple of degrees on either side of about 40° C (just below the temperature at which the DNA sticky ends detach from each other), the nanoparticles were able to continuously link and unlink from each other. Over a period of about an hour per half-cycle, they settled into the bcc superlattice, the most thermodynamically stable arrangement.
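
    That working temperature is set by the melting behavior of the short sticky ends. A crude estimate for short oligos uses the Wallace rule (a textbook rule of thumb; the sequence is the same invented example as above, and real linker designs are more involved):

        def wallace_tm(seq: str) -> float:
            """Wallace rule: Tm in Celsius for short oligos (2 deg per A/T, 4 per G/C)."""
            return 2.0 * sum(seq.count(b) for b in "AT") + \
                   4.0 * sum(seq.count(b) for b in "GC")

        print(wallace_tm("ATCGTTGA"))   # -> 22.0

    Tuning the length and GC content of the sticky ends moves this melting point, and the assembly is run a degree or two below it so the particles can keep linking and unlinking.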

    GISAXS also revealed that although the substrate forced superlattices into specific vertical alignments, it allowed the nanoparticle crystals to form in any horizontal orientation. The researchers are now exploring the possibility that by patterning the substrate in a suitable way, they can control the orientation of the crystals in both dimensions, increasing the practical value of the technique.

    See the full article here.

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    The Advanced Photon Source at Argonne National Laboratory is one of five national synchrotron radiation light sources supported by the U.S. Department of Energy’s Office of Science to carry out applied and basic research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels, provide the foundations for new energy technologies, and support DOE missions in energy, environment, and national security.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 3:47 pm on March 21, 2014 Permalink | Reply
    Tags: Applied Research & Technology

    From Brookhaven Lab: “Understanding the Initiation of Protein Synthesis in Mammals” 


    March 18, 2014
    Chelsea Whyte

    Protein synthesis, the process by which cells generate new proteins, is the most important cellular function, requiring more than 70 percent of the total energy of a cell. The initiation of this process is the most regulated and most critical component, but it is still the least understood.

    [Image: Messenger RNA (in red) latches closed around a pre-initiation complex, and attaches to transfer RNA (in green), beginning a process of protein synthesis specific to eukaryotes — animals, plants, and fungi.]

    Research by Ivan Lomakin and Thomas Steitz of Yale University has unlocked the genetic scanning mechanism that begins this crucial piece of cell machinery.

    They determined the structures of three complexes of the ribosome, a complex molecular machine that links together amino acids to form proteins according to an order specified by messenger RNA. These three structures represent distinct steps in protein translation in mammals – the recruitment and scanning of mRNA, the selection of initiator tRNA, and the joining of large and small ribosomal subunits.

    “Any small defect or disruption in the protein synthesis process can cause abnormalities or disease,” said Lomakin. “Understanding this process is critical for understanding how human life comes to be, and how some over-expressions or abnormalities in the initiation of protein synthesis may be connected to cancer or Alzheimer’s or other diseases.”

    Using x-ray crystallography on ribosomal subunits purified from rabbit cells, they were able to determine the positions and roles of the different pieces of cellular machinery, a bit like creating a playbook for a football game. They found that the tRNA and mRNA compete for position at the P site – one of three key sites on a ribosome – where short chains of amino acids are linked to form proteins.

    “Now, we have a low resolution structure, so we can’t yet talk about atomic details of the mechanism,” said Lomakin. “The next important step is to get higher resolution images. And we can change organisms to see if they behave differently, so we’re working on the structure of human ribosome initiation complexes, too.”

    For this higher resolution, Lomakin and his collaborators will use the National Synchrotron Light Source II, a new state-of-the-art light source that will begin early science at Brookhaven National Laboratory in 2014. “Our hope is to be able to look at very weak diffraction to get higher resolution structures of these important cellular mechanisms.”

    [Image: NSLS-II at Brookhaven Lab]

    See the full article here.

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 6:33 pm on March 20, 2014 Permalink | Reply
    Tags: Applied Research & Technology

    From Oak Ridge via PPPL: “The Bleeding ‘Edge’ of Fusion Research” 

    March 20, 2014

    Few problems have vexed physicists like fusion, the process by which stars fuel themselves and by which researchers on Earth hope to create the energy source of the future.

    By heating the hydrogen isotopes tritium and deuterium to more than five times the temperature of the Sun’s core, scientists create a reaction that could eventually produce electricity. Turns out, however, that confining the engine of a star to a manmade vessel and using it to produce energy is tricky business.

    Big problems, such as this one, require big solutions. Luckily, few solutions are bigger than Titan, the Department of Energy’s flagship Cray XK7 supercomputer managed by the Oak Ridge Leadership Computing Facility.

    [Image: Titan]

    [Image: Inside Titan]

    Titan allows advanced scientific applications to reach unprecedented speeds, enabling scientific breakthroughs faster than ever with only a marginal increase in power consumption. This unique marriage of number-crunching hardware enables Titan, located at Oak Ridge National Laboratory (ORNL), to reach a peak performance of 27 petaflops to claim the title of the world’s fastest computer dedicated solely to scientific research.

    [Image: PPPL fusion code]

    And fusion is at the head of the research pack. In fact, a team led by Princeton Plasma Physics Laboratory’s (PPPL’s) C.S. Chang increased the performance of its XGC1 fusion code fourfold over its previous CPU-only incarnation by using Titan’s GPUs and CPUs together, the payoff of a six-month performance engineering period during which the team tuned the code to best take advantage of Titan’s revolutionary hybrid architecture.

    “In nature, there are two types of physics,” said Chang. The first is equilibrium, in which changes happen in a “closed” world toward a static state, making the calculations comparatively simple. “This science has been established for a couple hundred years,” he said. Unfortunately, plasma physics falls in the second category, in which a system has inputs and outputs that constantly drive the system to a nonequilibrium state, which Chang refers to as an “open” world.

    Most magnetic fusion research is centered on a tokamak, a donut-shaped vessel that shows the most promise for magnetically confining the extremely hot and fragile plasma. Because the plasma is constantly coming into contact with the vessel wall and losing mass and energy, which in turn introduces neutral particles back into the plasma, equilibrium physics generally doesn’t apply at the edge, and simulating the environment is difficult using conventional computational fluid dynamics.

    [Image: The Tokamak Fusion Test Reactor (TFTR) at Princeton Plasma Physics Laboratory. Image credit: Princeton.]

    Another major reason the simulations are so complex is their multiscale nature. The distance scales involved range from millimeters (what’s going on among the gyrating particles and turbulence eddies inside the plasma itself) to meters (looking at the entire vessel that contains the plasma). The time scales introduce even more complexity, as researchers want to see how the edge plasma evolves from microseconds in particle motions and turbulence fluctuations to milliseconds and seconds in its full evolution. Furthermore, these two scales are coupled. “The simulation scale has to be very large, but still has to include the small-scale details,” said Chang.

    And few machines are as capable of delivering in that regard as is Titan. “The bigger the computer, the higher the fidelity,” he said, simply because researchers can incorporate more physics, and few problems require more physics than simulating a fusion plasma.

    On the hunt for blobs

    Studying the plasma edge is critical to understanding the plasma as a whole. “What happens at the edge is what determines the steady fusion performance at the core,” said Chang. But when it comes to studying the edge, “the effort hasn’t been very successful because of its complexity,” he added.

    Chang’s team is shedding light on a long-known and little-understood phenomenon known as “blobby” turbulence in which formations of strong plasma density fluctuations or clumps flow together and move around large amounts of edge plasma, greatly affecting edge and core performance in the DIII-D tokamak at General Atomics in San Diego, CA. DIII-D-based simulations are considered a critical stepping-stone for the full-scale, first principles simulation of the ITER plasma edge. ITER is a tokamak reactor to be built in France to test the science feasibility of fusion energy.

    [Image: ITER]

    The phenomenon was discovered more than 10 years ago, and is one of the “most important things in understanding edge physics,” said Chang, adding that people have tried to model it using fluids (i.e., equilibrium physics quantities). However, because the plasma inhabits an open world, it requires first-principles, ab-initio simulations. Now, for the first time, researchers have verified the existence and modeled the behavior of these blobs using a gyrokinetic code (or one that uses the most fundamental plasma kinetic equations, with analytic treatment of the fast gyrating particle motions) and the DIII-D geometry.

    This same first-principles approach also revealed the divertor heat load footprint. The divertor will extract heat and helium ash from the plasma, acting as a vacuum system and ensuring that the plasma remains stable and the reaction ongoing.

    These discoveries were made possible because the team’s XGC1 code exhibited highly efficient weak and strong scalability on Titan’s hybrid architecture up to the full size of the machine. Collaborating with Ed D’Azevedo, supported by the OLCF and by the DOE Scientific Discovery through Advanced Computing (SciDAC) Center for Edge Physics Simulation (EPSi), along with Pat Worley (ORNL), Jianying Lang (PPPL), and Seung-Hoe Ku (PPPL), also supported by EPSi, the team optimized XGC1 for Titan’s GPUs at the maximum number of nodes, boosting performance fourfold over the previous CPU-only code. This performance increase has enormous implications for predicting fusion energy efficiency in ITER.

    Full-scale simulations

    “We can now use both the CPUs and GPUs efficiently in full-scale production simulations of the tokamak plasma,” said Chang.

    Furthermore, added Chang, Titan is beginning to let the researchers model physics, such as electron-scale turbulence, that was out of reach altogether as little as a year ago. Jaguar, Titan’s CPU-only predecessor, was fine for ion-scale edge turbulence, but because electrons are so much lighter and faster than ions, the computing requirement for them is roughly 60 times greater, and Jaguar fell seriously short when it came to calculating electron-scale turbulence. While Titan is still not quite powerful enough to model electrons as accurately as Chang would like, the team has developed a technique that lets it simulate electron physics approximately 10 times faster than on Jaguar.
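    That factor of roughly 60 follows directly from the ion-to-electron mass ratio. Taking deuterium fuel as an illustrative case, the electron thermal speed exceeds the ion thermal speed by

        \sqrt{\frac{m_D}{m_e}} \approx \sqrt{3670} \approx 60,

    so resolving electron motion demands time steps about 60 times smaller to cover the same physical duration.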

    And they are just getting started. The researchers plan eventually to simulate the full plasma volume with electron-scale turbulence to understand how these newly modeled blobs affect the fusion core, because whatever happens at the edge determines conditions in the core. “We think this blob phenomenon will be a key to understanding the core,” said Chang, adding, “All of these are critical physics elements that must be understood to raise the confidence level of successful ITER operation. These phenomena have been observed experimentally for a long time, but have not been understood theoretically at a predictable confidence level.”

    Given that the team can currently use all of Titan’s more than 18,000 nodes, a better understanding of fusion is certainly in the works. A better understanding of blobby turbulence and its effects on plasma performance is a significant step toward that goal, proving yet again that few tools are more critical than simulation if mankind is to use the engines of stars to solve its most pressing dilemma: clean, abundant energy.

    See the full article here.

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University.


  • richardmitnick 12:53 pm on March 20, 2014 Permalink | Reply
    Tags: Applied Research & Technology

    From SLAC Lab: “Scientists Discover Potential Way to Make Graphene Superconducting” 

    March 20, 2014
    Press Office Contact:
    Andy Freeberg, afreeberg@slac.stanford.edu, (650) 926-4359

    Scientist Contact:
    Shuolong Yang, syang2@stanford.edu, (650) 725-0440

    Scientists at the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have discovered a potential way to make graphene – a single layer of carbon atoms with great promise for future electronics – superconducting, a state in which it would carry electricity with 100 percent efficiency.


    Researchers used a beam of intense ultraviolet light to look deep into the electronic structure of a material made of alternating layers of graphene and calcium. While it’s been known for nearly a decade that this combined material is superconducting, the new study offers the first compelling evidence that the graphene layers are instrumental in this process, a discovery that could transform the engineering of materials for nanoscale electronic devices.
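    The probe behind that statement is photoemission: an ultraviolet photon ejects an electron from the material, and the electron’s kinetic energy and emission angle encode where it sat in the band structure. In standard photoemission kinematics (included here for context),

        E_{\text{kin}} = h\nu - \phi - |E_B|,
        \qquad
        k_{\parallel} = \frac{\sqrt{2 m_e E_{\text{kin}}}}{\hbar} \, \sin\theta,

    where h\nu is the photon energy, \phi the work function, E_B the binding energy, and \theta the emission angle. Mapping intensity versus binding energy and momentum reconstructs the electronic structure band by band, which is how the graphene-derived and calcium-derived contributions can be told apart.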

    “Our work points to a pathway to make graphene superconducting – something the scientific community has dreamed about for a long time, but failed to achieve,” said Shuolong Yang, a graduate student at the Stanford Institute of Materials and Energy Sciences (SIMES) who led the research at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL).

    ssrl
    Stanford University / SLAC professor Zhi-Xun Shen with a spectrometer at Stanford Synchrotron Radiation Lightsource (SSRL) Beamline 5-4.

    The researchers saw how electrons scatter back and forth between graphene and calcium, interact with natural vibrations in the material’s atomic structure and pair up to conduct electricity without resistance. They reported their findings March 20 in Nature Communications.

    Graphite Meets Calcium

    Graphene, a single layer of carbon atoms arranged in a honeycomb pattern, is the thinnest and strongest known material and a great conductor of electricity, among other remarkable properties. Scientists hope to eventually use it to make very fast transistors, sensors and even transparent electrodes.

    The classic way to make graphene is by peeling atomically thin sheets from a block of graphite, a form of pure carbon that’s familiar as the lead in pencils. But scientists can also isolate these carbon sheets by chemically interweaving graphite with crystals of pure calcium. The result, known as calcium intercalated graphite or CaC6, consists of alternating one-atom-thick layers of graphene and calcium.

    The discovery that CaC6 is superconducting set off a wave of excitement: Did this mean graphene could add superconductivity to its list of accomplishments? But in nearly a decade of trying, researchers were unable to tell whether CaC6’s superconductivity came from the calcium layer, the graphene layer or both.

    Observing Superconducting Electrons

    For this study, samples of CaC6 were made at University College London and brought to SSRL for analysis.

    “These are extremely difficult experiments,” said Patrick Kirchmann, a staff scientist at SLAC and SIMES. But the purity of the sample combined with the high quality of the ultraviolet light beam allowed them to see deep into the material and distinguish what the electrons in each layer were doing, he said, revealing details of their behavior that had not been seen before.

    “With this technique, we can show for the first time how the electrons living on the graphene planes actually superconduct,” said SIMES graduate student Jonathan Sobota, who carried out the experiments with Yang. “The calcium layer also makes crucial contributions. Finally we think we understand the superconducting mechanism in this material.”

    Although applications of superconducting graphene are speculative and far in the future, the scientists said, they could include ultra-high-frequency analog transistors, nanoscale sensors, electromechanical devices, and quantum computing devices.

    The research team was supervised by Zhi-Xun Shen, a professor at SLAC and Stanford and SLAC’s advisor for science and technology, and included other researchers from SLAC, Stanford, Lawrence Berkeley National Laboratory and University College London. The work was supported by the DOE’s Office of Science, the Engineering and Physical Sciences Research Council of the UK and the Stanford Graduate Fellowship program.

    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, Calif., SLAC is operated by Stanford University for the U.S. Department of Energy’s Office of Science.

    The Stanford Institute for Materials and Energy Sciences (SIMES) is a joint institute of SLAC National Accelerator Laboratory and Stanford University. SIMES studies the nature, properties and synthesis of complex and novel materials in the effort to create clean, renewable energy technologies. For more information, visit simes.slac.stanford.edu.

    SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) is a third-generation light source producing extremely bright X-rays for basic and applied science. A DOE national user facility, SSRL attracts and supports scientists from around the world who use its state-of-the-art capabilities to make discoveries that benefit society. For more information, visit ssrl.slac.stanford.edu.

    DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.

    See the full article here.

    SLAC Campus

