Tagged: Applied Research & Technology

  • richardmitnick 7:53 am on May 22, 2015
    Tags: Applied Research & Technology

    From AAAS: “The new shape of fusion” 

    21 May 2015
    Daniel Clery

    A plasma glows inside MAST, a spherical tokamak.

    ITER, the international fusion reactor being built in France, will stand 10 stories tall, weigh three times as much as the Eiffel Tower, and cost its seven international partners $18 billion or more. The result of decades of planning, ITER will not produce fusion energy until 2027 at the earliest. And it will be decades before an ITER-like plant pumps electricity into the grid. Surely there is a quicker and cheaper route to fusion energy.

    Fusion enthusiasts have a slew of schemes for achieving the starlike temperatures or crushing pressures needed to get hydrogen nuclei to come together in an energy-spawning union. Some, such as lasers, are mainstream; others are unorthodox. Yet the doughnut-shaped vessels called tokamaks, designed to cage a superheated plasma using magnetic fields, remain the leading fusion strategy and are the basis of ITER. Even among tokamaks, however, a nimbler alternative has emerged: the spherical tokamak.

    Imagine the doughnut shape of a conventional tokamak plumped up into a shape more like a cored apple. That simple change, say the idea’s advocates, could open the way to a fusion power plant that would match ITER’s promise, without the massive scale. “The aim is to make tokamaks smaller, cheaper, and faster—to reduce the eventual cost of electricity,” says Ian Chapman, head of tokamak science at the Culham Centre for Fusion Energy in Abingdon, U.K.


    Culham is one of two labs about to give these portly tokamaks a major test. The world’s two front-rank machines—the National Spherical Torus Experiment (NSTX) at the Princeton Plasma Physics Laboratory in New Jersey and the Mega Amp Spherical Tokamak (MAST) in Culham—are both being upgraded with stronger magnets and more powerful heating systems. Soon they will switch on and heat hydrogen to temperatures much closer to those needed for generating fusion energy. If they perform well, then the next major tokamak to be built—a machine that would run in parallel with ITER and test technology for commercial reactors—will likely be a spherical tokamak.

    A small company spun off from Culham is even making a long-shot bet that it can have a spherical tokamak reactor capable of generating more energy than it consumes—one of ITER’s goals—up and running within the decade. If it succeeds, spherical tokamaks could change the shape of fusion’s future. “It’s going to be exciting,” says Howard Wilson, director of the York Plasma Institute at the University of York in the United Kingdom. “Spherical tokamaks are the new kids on the block. But there are still important questions we’re trying to get to the bottom of.”

    TOKAMAKS ARE AN INGENIOUS WAY to cage one of the most unruly substances humans have ever grappled with: plasma hot enough to sustain fusion. To get nuclei to slam together and fuse, fusion reactors must reach temperatures 10 times hotter than the core of the sun, about 150 million degrees Celsius. The result is a tenuous ionized gas that would vaporize any material it touches—and yet must be held in place long enough for fusion to generate useful amounts of energy.

    Tokamaks attempt this seemingly impossible task using magnets, which can hold and manipulate plasma because it is made of charged particles. A complex set of electromagnets encircles the doughnut-shaped vessel, some horizontal and some vertical, while one tightly wound coil of wire, called a solenoid, runs down the doughnut hole. Their combined magnetic field squeezes the plasma toward the center of the tube and drives it around the ring while also twisting it in a slow corkscrew motion.

    But plasma is not easy to master. Confining it is like trying to squeeze a balloon with your hands: It likes to bulge out between your fingers. The hotter a plasma gets, the more the magnetically confined gas bulges and wriggles and tries to escape. Much of the past 60 years of fusion research has focused on how to control plasma.

    Generating and maintaining enough heat for fusion has been another challenge. Friction generated as the plasma surges around the tokamak supplies some of the heat, but modern tokamaks also beam in microwaves and high-energy particles. As fast as the heat is supplied, it bleeds away, as the hottest, fastest-moving particles in the turbulent plasma swirl away from the hot core toward the cooler edge. “Any confinement system is going to be slightly leaky and will lose particles,” Wilson says.

    Studies of tokamaks of different sizes and configurations have always pointed to the same message: To contain a plasma and keep it hot, bigger is better. In a bigger volume, hot particles have to travel farther to escape. Today’s biggest tokamak, the 8-meter-wide Joint European Torus (JET) at Culham, set a record for fusion energy in 1997, generating 16 megawatts for a few seconds. (That was still slightly less than the heating power pumped into the plasma.) For most of the fusion community, ITER is the logical next step. It is expected to be the first machine to achieve energy gain—more fusion energy out than heating power in.
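
    “Energy gain” has a standard shorthand worth spelling out (a gloss not in the original article): the ratio of fusion power produced to heating power injected,

    Q = P_fusion / P_heating.

    JET’s 16 megawatts fell short of the power pumped in, so its Q stayed below 1; Q = 1 is scientific breakeven, and ITER’s stated design goal is a gain of roughly Q = 10.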

    In the 1980s, a team at Oak Ridge National Laboratory in Tennessee explored how a simple shape change could affect tokamak performance. They focused on the aspect ratio—the radius of the whole tokamak compared to the radius of the vacuum tube. (A Hula-Hoop has a very high aspect ratio, a bagel a lower one.) Their calculations suggested that making the aspect ratio very low, so that the tokamak was essentially a sphere with a narrow hole through the middle, could have many advantages.
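
    In symbols (a quick aside not in the original article): the aspect ratio is A = R / a, where R is the major radius, measured from the machine’s central axis to the middle of the plasma tube, and a is the minor radius, the radius of the tube itself. Conventional tokamaks sit at roughly A = 3; spherical tokamaks push A down toward 1.3 or so, which is what turns the doughnut into the cored apple described above.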

    Near a spherical tokamak’s central hole, the Oak Ridge researchers predicted, particles would enjoy unusual stability. Instead of corkscrewing lazily around the tube as in a conventional tokamak, the magnetic field lines wind tightly around the central column, holding particles there for extended periods before they return to the outside surface. The D-shaped cross section of the plasma would also help suppress turbulence, improving energy confinement. And they reckoned that the new shape would use magnetic fields more efficiently—achieving more plasma pressure for a given magnetic pressure, a ratio known as beta. Higher beta means more bang for your magnetic buck. “The general idea of spherical tokamaks was to produce electricity on a smaller scale, and more cheaply,” Culham’s Chapman says.
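
    Beta has a precise definition (again, a gloss not in the original article): it compares the plasma’s thermal pressure with the pressure exerted by the confining magnetic field,

    β = p / (B² / 2μ₀) = 2μ₀p / B²,

    where p is the plasma pressure, B the magnetic field strength, and μ₀ the permeability of free space. A conventional tokamak typically runs at a beta of order 10%, so the 40% figures quoted below mean that the same magnetic “buck” confines roughly four times the plasma pressure.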

    But such a design posed a practical problem. The narrow central hole in a spherical tokamak didn’t leave enough room for the equipment that needs to fit there: part of each vertical magnet plus the central solenoid. In 1984, Martin Peng of Oak Ridge came up with an elegant, space-saving solution: replace the multitude of vertical ring magnets with C-shaped rings that share a single conductor down the center of the reactor (see graphic, below).

    Graphic: James Provost

    U.S. fusion funding was in short supply at that time, so Oak Ridge could not build a spherical machine to test Peng’s design. A few labs overseas converted some small devices designed for other purposes into spherical tokamaks, but the first true example was built at the Culham lab in 1990. “It was put together on a shoestring with parts from other machines,” Chapman says. Known as the Small Tight Aspect Ratio Tokamak (START), the device soon achieved a beta of 40%, more than three times that of any conventional tokamak.

    It also bested traditional machines in terms of stability. “It smashed the world record at the time,” Chapman says. “People got more interested.” Other labs rushed to build small spherical tokamaks, some in countries not known for their fusion research, including Australia, Brazil, Egypt, Kazakhstan, Pakistan, and Turkey.

    The next question, Chapman says, was “can we build a bigger machine and get similar performance?” Princeton and Culham’s machines were meant to answer that question. Completed in 1999, NSTX and MAST both hold plasmas about 3 meters across, roughly three times bigger than START’s but a third the size of JET’s. The performance of the pair showed that START wasn’t a one-off: again they achieved a beta of about 40%, reduced instabilities, and good confinement.

    Now, both machines are moving to the next stage: more heating power to make a hotter plasma and stronger magnets to hold it in place. MAST is now in pieces, the empty vacuum vessel looking like a giant tin can adorned with portholes, while its €30 million worth of new magnets, pumps, power supplies, and heating systems are prepared. At Princeton, technicians are putting the finishing touches to a similar $94 million upgrade of NSTX’s magnets and neutral beam heating. Like most experimental tokamaks, the two machines are not aiming to produce lots of energy, just learning how to control and confine plasma under fusionlike conditions. “It’s a big step,” Chapman says. “NSTX-U will have really high injected power in a small plasma volume. Can you control that plasma? This is a necessary step before you could make a spherical tokamak power plant.”

    Engineers lift out MAST’s vacuum vessel for modifications during the €30 million upgrade. © CCFE

    The upgraded machines will each have a different emphasis. NSTX-U, with the greater heating power, will focus on controlling instabilities and improving confinement when it restarts this summer.

    “If we can get reasonable beta values, [NSTX-U] will reach plasma [properties] similar to conventional tokamaks,” says NSTX chief Masayuki Ono. MAST-Upgrade, due to fire up in 2017, will address a different problem: capturing the fusion energy that would build up in a full-scale plant.

    Fusion reactions generate most of their energy in the form of high-energy neutrons, which, being neutral, are immune to magnetic fields and can shoot straight out of the reactor. In a future power plant, a neutron-absorbing material will capture them, converting their energy to heat that will drive a steam turbine and generate electricity. But 20% of the reaction energy heats the plasma directly and must somehow be tapped. Modern tokamaks remove heat by shaping the magnetic field into a kind of exhaust pipe, called a divertor, which siphons off some of the outermost layer of plasma and pipes it away. But fusion heat will build up even faster in a spherical tokamak because of its compact size. MAST-Upgrade has a flexible magnet system so that researchers can try out various divertor designs, looking for one that can cope with the heat.

    Researchers know from experience that when a tokamak steps up in size or power, plasma can start misbehaving in new ways. “We need MAST and NSTX to make sure there are no surprises at low aspect ratio,” says Dennis Whyte, director of the Plasma Science and Fusion Center at the Massachusetts Institute of Technology in Cambridge. Once NSTX and MAST have shown what they are capable of, Wilson says, “we can pin down what a [power-producing] spherical tokamak will look like. If confinement is good, we can make a very compact machine, around MAST size.”

    BUT GENERATING ELECTRICITY isn’t the only potential goal. The fusion community will soon have to build a reactor to test how components for a future power plant would hold up under years of bombardment by high-energy neutrons. That’s the goal of a proposed machine known in Europe as the Component Test Facility (CTF), which could run stably around the clock, generating as much heat from fusion as it consumes. A CTF is “absolutely necessary,” Chapman says. “It’s very important to test materials to make reactors out of.” The design of CTF hasn’t been settled, but spherical tokamak proponents argue their design offers an efficient route to such a testbed—one that “would be relatively compact and cheap to build and run,” Ono says.

    With ITER construction consuming much of the world’s fusion budget, that promise won’t be tested anytime soon. But one company hopes to go from a standing start to a small power-producing spherical tokamak in a decade. In 2009, a couple of researchers from Culham created a spinoff company—Tokamak Solutions—to build small spherical tokamaks as neutron sources for research. Later, one of the company’s suppliers showed them a new multilayered conducting tape, made with the high-temperature superconductor yttrium-barium-copper-oxide, that promised a major performance boost.

    Lacking electrical resistance, superconductors can be wound into electromagnets that produce much stronger fields than conventional copper magnets. ITER will use low-temperature superconductors for its magnets, but they require massive and expensive cooling. High-temperature materials are cheaper to use but were thought to be unable to withstand the strong magnetic fields around a tokamak—until the new superconducting tape came along. The company changed direction, was renamed Tokamak Energy, and is now testing a first-generation superconducting spherical tokamak no taller than a person.

    Superconductors allow a tokamak to confine a plasma for longer. Whereas NSTX and MAST can run for only a few seconds, the team at Tokamak Energy this year ran their machine—albeit at low temperature and pressure—for more than 15 minutes. In the coming months, they will attempt a 24-hour pulse—smashing the tokamak record of slightly over 5 hours.

    Next year, the company will put together a slightly larger machine able to produce twice the magnetic field of NSTX-U. The next step—investors permitting—will be a machine slightly smaller than Princeton’s but with three times the magnetic field. Company CEO David Kingham thinks that will be enough to beat ITER to the prize: a net gain of energy. “We want to get fusion gain in 5 years. That’s the challenge,” he says.

    “It’s a high-risk approach,” Wilson says. “They’re buying their lottery ticket. If they win, it’ll be great. If they don’t, they’ll likely disappear. Even if it doesn’t work, we’ll learn from it; it will accelerate the fusion program.”

    It’s a spirit familiar to everyone trying to reshape the future of fusion.

    See the full article here.

    The American Association for the Advancement of Science is an international non-profit organization dedicated to advancing science for the benefit of all people.

    Please help promote STEM in your local schools.
    Stem Education Coalition

     
  • richardmitnick 8:26 am on May 15, 2015
    Tags: Applied Research & Technology

    From BNL: “Intense Lasers Cook Up Complex, Self-Assembled Nanomaterials” 

    May 13, 2015
    Justin Eure

    New technique developed at Brookhaven Lab makes self-assembly 1,000 times faster and could be used for industrial-scale solar panels and electronics

    Brookhaven Lab scientist Kevin Yager (left) and postdoctoral researcher Pawel Majewski with the new Laser Zone Annealing instrument at the Center for Functional Nanomaterials.

    Nanoscale materials feature extraordinary, billionth-of-a-meter qualities that transform everything from energy generation to data storage. But while a nanostructured solar cell may be fantastically efficient, that precision is notoriously difficult to achieve on industrial scales. The solution may be self-assembly, or training molecules to stitch themselves together into high-performing configurations.

    Now, scientists at the U.S. Department of Energy’s Brookhaven National Laboratory have developed a laser-based technique to execute nanoscale self-assembly with unprecedented ease and efficiency.

    “We design materials that build themselves,” said Kevin Yager, a scientist at Brookhaven’s Center for Functional Nanomaterials (CFN). “Under the right conditions, molecules will naturally snap into a perfect configuration. The challenge is giving these nanomaterials the kick they need: the hotter they are, the faster they move around and settle into the desired formation. We used lasers to crank up the heat.”

    Yager and Brookhaven Lab postdoctoral researcher Pawel Majewski built a one-of-a-kind machine that sweeps a focused laser line across a sample to generate intense and instantaneous spikes in temperature. This new technique, called Laser Zone Annealing (LZA), drives self-assembly at rates more than 1,000 times faster than traditional industrial ovens. The results are described in the journal ACS Nano.

    “We created extremely uniform self-assembled structures in less than a second,” Majewski said. “Beyond the extraordinary speed, our laser also reduced the defects and degradations present in oven-heated materials. That combination makes LZA perfect for carrying small-scale laboratory breakthroughs into industry.”

    The scientists prepared the materials and built the LZA instrument at the CFN. They then analyzed samples using advanced electron microscopy at CFN and x-ray scattering at Brookhaven’s now-retired National Synchrotron Light Source (NSLS)—both DOE Office of Science User Facilities.

    “It was enormously gratifying to see that our predictions were accurate—the enormous thermal gradients led to a correspondingly enormous acceleration!” Yager said.

    Illustration of the Laser Zone Annealing instrument showing the precise laser (green) striking the unassembled polymer (purple). The extreme thermal gradients produced by the laser sweeping across the sample cause rapid and pristine self-assembly.

    Ovens versus lasers

    Imagine preparing a complex cake, but instead of baking it in the oven, a barrage of lasers heats it to perfection in an instant. Beyond that, the right cooking conditions will make the ingredients mix themselves into a picture-perfect dish. This nanoscale recipe achieves something equally extraordinary and much more impactful.

    The researchers focused on so-called block copolymers, molecules containing two linked blocks with different chemical structures and properties. These blocks tend to repel each other, which can drive the spontaneous formation of complex and rigid nanoscale structures.

    “The price of their excellent mechanical properties is the slow kinetics of their self-assembly,” Majewski said. “They need energy and time to explore possibilities until they find the right configuration.”

    In traditional block copolymer self-assembly, materials are heated in a vacuum-sealed oven. The sample is typically “baked” for a period of 24 hours or longer to provide enough kinetic energy for the molecules to snap into place—much too long for commercial viability. The long exposure to high heat also causes inevitable thermal degradation, leaving cracks and imperfections throughout the sample.

    The LZA process, however, offers sharp spikes of heat to rapidly excite the polymers without the sustained energy that damages the material.

    “Within milliseconds, the entire sample is beautifully aligned,” Yager said. “As the laser sweeps across the material, the localized thermal spikes actually remove defects in the nanostructured film. LZA isn’t just faster, it produces superior results.”

    LZA generates temperatures greater than 500 degrees Celsius, but the thermal gradients—temperature variations tied to direction and location in a material—can reach more than 4,000 degrees per millimeter. While scientists know that higher temperatures can accelerate self-assembly, this is the first proof of dramatic enhancement by extreme gradients.
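
    To get a feel for how steep those gradients are, here is a back-of-envelope check in Python. The 500-degree and 4,000-degree-per-millimeter figures are the article’s; the assumption of a roughly linear falloff is an illustrative simplification.

    # Rough width of the laser's hot zone, taking the article's numbers
    # at face value and assuming an approximately linear falloff.
    peak_temp_c = 500.0         # peak temperature under the laser line, in C
    gradient_c_per_mm = 4000.0  # quoted thermal gradient, in C per mm
    hot_zone_mm = peak_temp_c / gradient_c_per_mm
    print(f"Hot zone width ~ {hot_zone_mm:.3f} mm ({hot_zone_mm * 1000:.0f} micrometers)")
    # -> ~0.125 mm: the extreme heating is confined to a sliver of the film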

    Built from scratch

    “Years ago, we observed a subtle hint that thermal gradients could improve self-assembly,” Yager said. “I became obsessed with the idea of creating more and more extreme gradients, which ultimately led to building this laser setup, and pioneering a new technique.”

    The researchers needed a high concentration of technical expertise and world-class facilities to move the LZA from proposal to execution.

    “Only at the CFN could we develop this technique so quickly,” Majewski said. “We could do rapid instrument prototyping and sample preparation with the on-site clean room, machine shop, and polymer processing lab. We then combined CFN electron microscopy with x-ray studies at NSLS for an unbeatable evaluation of the LZA in action.”

    Added Yager, “The ability to make new samples at the CFN and then walk across the street to characterize them in seconds at NSLS was key to this discovery. The synergy between these two facilities is what allowed us to rapidly iterate to an optimized design.”

    The scientists also developed a new microscale surface thermometry technique called melt-mark analysis to track the exact heat generated by the laser pulses and tune the instrument accordingly.

    “We burned a few films initially before we learned the right operating conditions,” Majewski said. “It was really exciting to see the first samples being rastered by the laser and then using NSLS to discover exactly what happened.”

    Future of the technique

    The LZA is the first machine of its kind in the world, but it signals a dramatic step forward in scaling up meticulously designed nanotechnology. The laser can even be used to “draw” structures across the surface, meaning the nanostructures can assemble in well-defined patterns. This unparalleled synthesis control opens the door to complex applications, including electronics.

    “There’s really no limit to the size of a sample this technique could handle,” Yager said. “In fact, you could run it in a roll-to-roll mode—one of the leading manufacturing technologies.”

    The scientists plan to further develop the new technique to create multi-layer structures that could have immediate impacts on anti-reflective coatings, improved solar cells, and advanced electronics.

    This research and operations at CFN and NSLS were funded by the DOE Office of Science.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 7:01 pm on May 14, 2015
    Tags: Applied Research & Technology, Outsmart Ebola Together

    From WCG: “A milestone and a roadmap: progress in the fight against Ebola”

    5 May 2015
    The Outsmart Ebola Together research team

    Summary
    Thanks to the huge level of support from World Community Grid, the team at the Scripps Research Institute has already received most of the matching data for the first target protein of the Ebola virus. While this data is being analyzed, the search now moves to another related protein with potential to help the fight against hemorrhagic fevers.

    Outsmart Ebola Together, a long-term scientific project whose goal is to find new drugs for curing Ebola and related life-threatening viral hemorrhagic fevers, is still in its early stages, but we’ve already reached a major milestone. Our first target was the newly revealed receptor-binding site of the Ebola surface protein, GP. GP is the molecule Ebola virus uses to fuse with a human cell and force its way inside. Armed with a new model of the binding site, and with the vast resources of World Community Grid, we set out to test this site against drugs that could potentially bond with it and prevent Ebola infection. This stage of work is now close to complete: we have received back from World Community Grid most of the data for the planned matchings of the Ebola surface protein against 5.4 million candidate chemical compounds.

    We are now analyzing this data. Drugs that simulations predict will bind well with the Ebola surface protein will go on to a next round of experiments, conducted in the lab with actual proteins and actual drug molecules. Our analysis may also yield general insights about how classes of drugs interact with viral proteins.
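
    The “matching” described here is a virtual screen: score every candidate compound against the target site, rank the results, and send only the best predicted binders to the wet lab. The Python sketch below shows that score-and-rank logic in miniature; dock_score is a hypothetical stand-in for a real docking engine (the article does not describe the Scripps pipeline’s internals), and the random scores are placeholders.

    import random

    def dock_score(target, compound):
        # Stand-in for a docking engine's predicted binding energy
        # (kcal/mol; more negative means tighter predicted binding).
        return random.uniform(-12.0, 0.0)

    def virtual_screen(target, compounds, top_n=100):
        # Rank every candidate by predicted binding strength and keep
        # the top scorers for follow-up experiments with real proteins.
        ranked = sorted(compounds, key=lambda c: dock_score(target, c))
        return ranked[:top_n]

    hits = virtual_screen("Ebola GP receptor-binding site",
                          [f"compound_{i}" for i in range(10_000)])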

    Moreover, we are excited to announce that we are beginning work on a second target protein, the Lassa virus nucleoprotein.

    Like Ebola, Lassa is a “Group V” virus: in other words, both are viruses that have at their core a genome composed of “negative-sense”, single-stranded RNA. Both viruses produce a deadly hemorrhagic fever. While Lassa has received less publicity than Ebola, it is a more consistent killer. There are hundreds of thousands of cases of Lassa Fever every year in Western Africa, with tens of thousands of deaths. It is also the viral hemorrhagic fever most frequently transported out of Africa to the United States and Europe. There are no treatments approved for use in Lassa virus infection. Identification of a potent inhibitor of Lassa virus is imperative for public health.

    The Lassa virus’s nucleoprotein (NP) is so named because its first discovered function is to bind with, and so enclose and protect, the virus’s central strand of RNA. However, Lassa NP is a complex beast that has other functions as well. In particular, our lab discovered that the NP (almost paradoxically) is also responsible for digesting double-stranded RNA (dsRNA) created by the virus itself. Having gained entry to a human cell, the Lassa virus must copy its single-stranded RNA in order to produce viral proteins and replicate itself. This requires creating double-stranded RNA. However, the virus must keep this work secret. The presence of double-stranded RNA in the cytoplasm is a clear sign of a viral infection, and human cells are smart enough to detect this, triggering an effective immune response. Hence the importance of the Lassa NP, which rips apart the virus’s own dsRNA byproducts in order to keep its activities secret.

    We approach Lassa NP armed with our lab’s crystallographic structures, which clearly identify the shape of the NP and the site where the NP carries out its function of destroying double-stranded RNA. This site is a large cavity in the side of the protein; it is negatively charged, but is also bordered by a positively charged, protruding protein “arm”. These distinctive features are key to the site’s binding with dsRNA, and, we believe, should make it a good candidate for screenings against possible drugs.

    Figure: Our lab’s structure for the Lassa NP protein. Portions important to the protein’s function of digesting double-stranded RNA include the “cavity” (glowing, particularly a manganese atom that helps bond RNA) and the adjacent “arm” (yellow).

    We will now prepare this target protein for matchings against millions of drugs using the resources of the World Community Grid. As with our previous matchings against the Ebola surface protein, drugs that do well in this “virtual screening” will go on to further tests with actual proteins in the lab. While this work is difficult and carries no guarantees, we hope that it will lead to the discovery of a drug that can prevent the Lassa NP from hiding the virus’s double-stranded RNA. We have already determined that doing this would allow human cells to detect and act against the Lassa virus more promptly and effectively, potentially saving lives.

    It’s amazing to us that we’ve been able to receive so many results so quickly, and we want to say thank you to everyone in the World Community Grid family who helped make this possible. There is much work ahead, but it’s immensely encouraging to know that we have the resources available to carry it out.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    World Community Grid (WCG) brings people together from across the globe to create the largest non-profit computing grid benefiting humanity. It does this by pooling surplus computer processing power. We believe that innovation combined with visionary scientific research and large-scale volunteerism can help make the planet smarter. Our success depends on like-minded individuals – like you.

    WCG projects run on BOINC software from UC Berkeley.

    BOINC (properly, the Berkeley Open Infrastructure for Network Computing) is a leader in distributed computing, grid computing and citizen cyberscience.

    CAN ONE PERSON MAKE A DIFFERENCE? YOU BET!!

    “Download and install secure, free software that captures your computer’s spare power when it is on, but idle. You will then be a World Community Grid volunteer. It’s that simple!” You can download the software at either WCG or BOINC.

    Please visit the project pages:

    Outsmart Ebola Together

    Mapping Cancer Markers

    Uncovering Genome Mysteries

    Say No to Schistosoma

    GO Fight Against Malaria

    Drug Search for Leishmaniasis

    Computing for Clean Water

    The Clean Energy Project

    Discovering Dengue Drugs – Together

    Help Cure Muscular Dystrophy

    Help Fight Childhood Cancer

    Help Conquer Cancer

    Human Proteome Folding

    FightAIDS@Home

    Computing for Sustainable Water

    World Community Grid is a social initiative of IBM Corporation.

     
  • richardmitnick 2:23 pm on May 14, 2015
    Tags: Applied Research & Technology

    From LBL: “CLAIRE Brings Electron Microscopy to Soft Materials” 

    May 14, 2015
    Lynn Yarris (510) 486-5375

    Berkeley Researchers Develop Breakthrough Technique for Non-invasive Nano-scale Imaging

    CLAIRE image of Al nanostructures with an inset that shows a cluster of six Al nanostructures.

    Soft matter encompasses a broad swath of materials, including liquids, polymers, gels, foam and – most importantly – biomolecules. At the heart of soft materials, governing their overall properties and capabilities, are the interactions of nano-sized components. Observing the dynamics behind these interactions is critical to understanding key biological processes, such as protein crystallization and metabolism, and could help accelerate the development of important new technologies, such as artificial photosynthesis or high-efficiency photovoltaic cells. Observing these dynamics at sufficient resolution has been a major challenge, but this challenge is now being met with a new non-invasive nanoscale imaging technique that goes by the acronym of CLAIRE.

    CLAIRE stands for “cathodoluminescence activated imaging by resonant energy transfer.” Invented by researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley, CLAIRE extends the incredible resolution of electron microscopy to the dynamic imaging of soft matter.

    “Traditional electron microscopy damages soft materials and has therefore mainly been used to provide topographical or compositional information about robust inorganic solids or fixed sections of biological specimens,” says chemist Naomi Ginsberg, who leads CLAIRE’s development. “CLAIRE allows us to convert electron microscopy into a new non-invasive imaging modality for studying soft materials and providing spectrally specific information about them on the nanoscale.”

    Naomi Ginsberg

    Ginsberg holds appointments with Berkeley Lab’s Physical Biosciences Division and its Materials Sciences Division, as well as UC Berkeley’s departments of chemistry and physics. She is also a member of the Kavli Energy NanoScience Institute (Kavli-ENSI) at Berkeley. She and her research group recently demonstrated CLAIRE’s imaging capabilities by applying the technique to aluminum nanostructures and polymer films that could not have been directly imaged with electron microscopy.

    “What microscopic defects in molecular solids give rise to their functional optical and electronic properties? By what potentially controllable process do such solids form from their individual microscopic components, initially in the solution phase? The answers require observing the dynamics of electronic excitations or of molecules themselves as they explore spatially heterogeneous landscapes in condensed phase systems,” Ginsberg says. “In our demonstration, we obtained optical images of aluminum nanostructures with 46 nanometer resolution, then validated the non-invasiveness of CLAIRE by imaging a conjugated polymer film. The high resolution, speed and non-invasiveness we demonstrated with CLAIRE positions us to transform our current understanding of key biomolecular interactions.”

    CLAIRE imaging chip consists of a YAlO3:Ce scintillator film supported by LaAlO3 and SrTiO3 buffer layers and a Si frame. Al nanostructures embedded in SiO2 are positioned below and directly against the scintillator film. ProTEK B3 serves as a protective layer for etching.

    CLAIRE works by essentially combining the best attributes of optical and scanning electron microscopy into a single imaging platform. Scanning electron microscopes use beams of electrons rather than light for illumination and magnification. With much shorter wavelengths than photons of visible light, electron beams can be used to observe objects hundreds of times smaller than those that can be resolved with an optical microscope. However, these electron beams destroy most forms of soft matter and are incapable of spectrally specific molecular excitation.

    Ginsberg and her colleagues get around these problems by employing a process called “cathodoluminescence,” in which an ultrathin scintillating film, about 20 nanometers thick, composed of cerium-doped yttrium aluminum perovskite, is inserted between the electron beam and the sample. When the scintillating film is excited by a low-energy electron beam (about 1 keV), it emits energy that is transferred to the sample, causing the sample to radiate. This luminescence is recorded and correlated to the electron beam position to form an image that is not restricted by the optical diffraction limit.
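
    That last clause is the key point. An ordinary optical microscope is bounded by the Abbe diffraction limit, roughly d ≈ λ / (2·NA); for green light (λ ≈ 550 nm) and even an excellent oil-immersion objective (NA ≈ 1.4), that works out to about 200 nanometers. The 46-nanometer resolution quoted above is therefore more than four times finer than the best a conventional light microscope could manage. (The worked numbers are illustrative; the article itself quotes only the 46 nm figure.)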

    Developing the scintillating film and integrating it into a microchip imaging device was an enormous undertaking, Ginsberg says, and she credits the “talent and dedication” of her research group for the success. She also gives much credit to the staff and capabilities of the Molecular Foundry, a DOE Office of Science User Facility, where the CLAIRE imaging demonstration was carried out.

    “The Molecular Foundry truly enabled CLAIRE imaging to come to life,” she says. “We collaborated with staff scientists there to design and install a high efficiency light collection apparatus in one of the Foundry’s scanning electron microscopes and their advice and input were fantastic. That we can work with Foundry scientists to modify the instrumentation and enhance its capabilities not only for our own experiments but also for other users is unique.”

    While there is still more work to do to make CLAIRE widely accessible, Ginsberg and her group are moving forward with further refinements for several specific applications.

    “We’re interested in non-invasively imaging soft functional materials like the active layers in solar cells and light-emitting devices,” she says. “It is especially true in organics and organic/inorganic hybrids that the morphology of these materials is complex and requires nanoscale resolution to correlate morphological features to functions.”

    Ginsberg and her group are also working on the creation of liquid cells for observing biomolecular interactions under physiological conditions. Since electron microscopes can only operate in a high vacuum, as molecules in the air disrupt the electron beam, and since liquids evaporate in high vacuum, aqueous samples must either be freeze-dried or hermetically sealed in special cells.

    “We need liquid cells for CLAIRE to study the dynamic organization of light-harvesting proteins in photosynthetic membranes,” Ginsberg says. “We should also be able to perform other studies in membrane biophysics to see how molecules diffuse in complex environments, and we’d like to be able to study molecular recognition at the single molecule level.”

    In addition, Ginsberg and her group will be using CLAIRE to study the dynamics of nanoscale systems for soft materials in general.

    “We would love to be able to observe crystallization processes or to watch a material made of nanoscale components anneal or undergo a phase transition,” she says. “We would also love to be able to watch the electric double layer at a charged surface as it evolves, as this phenomenon is crucial to battery science.”

    A paper describing the most recent work on CLAIRE has been published in the journal Nano Letters. The paper is titled “Cathodoluminescence-Activated Nanoimaging: Noninvasive Near-Field Optical Microscopy in an Electron Microscope.” Ginsberg is the corresponding author. Other authors are Connor Bischak, Craig Hetherington, Zhe Wang, Jake Precht, David Kaz and Darrell Schlom.

    This research was primarily supported by the DOE Office of Science and by the National Science Foundation.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

     
  • richardmitnick 9:09 am on May 14, 2015
    Tags: Applied Research & Technology, India Science

    From Nature: “India: The fight to become a science superpower” 

    13 May 2015
    T. V. Padma

    Despite great strides in some areas of research and development, the nation still has a long way to go.

    India is one of the leading nations in wind power and it has ambitious goals for increasing solar power over the next decade.

    With her jeans, T-shirt and spirited attitude, Tapasya Srivastava could pass for a student as she works in her brightly lit cancer-biology lab on the University of Delhi South Campus. Srivastava, who oversees a team of eight researchers, is thrilled that she earned “a small research space of my own” in 2010, while still in her thirties. “With a decent list of publications under my belt, I am one of the few who have studied and undergone training entirely in India,” she says.

    Eight kilometres away, in the chemical-engineering department of the Indian Institute of Technology Delhi, Shalini Gupta’s team is developing sensors to detect early-stage sepsis and typhoid. Gupta did her doctorate in the United States but returned to India to focus on its needs: “I am more connected to society and its challenges,” she says.

    Srivastava and Gupta are part of a wave of young Indian scientists convinced that they can do high-quality research at home rather than having to move abroad. Such optimism reaches all the way to the top: in January, Indian Prime Minister Narendra Modi told an assembly of scientists to “dream, imagine and explore. You will have no better supporter than me.”

    India has much to be proud of. Last year, it became the first nation to reach Mars on its initial attempt. It boasts a thriving pharmaceutical industry that produces low-cost medications that are desperately needed by the developing world. And in his first year in office, Modi launched an ambitious plan to make India a leader in solar power.

    Such successes cannot hide the huge challenges facing this country of 1.3 billion people, which leads the world in tuberculosis incidence and maternal deaths, and lacks electricity for one-quarter of its citizens. India is expected to become the world’s most populous nation within a generation, and it will require a robust science and technology sector to supply the needed energy, food, health care, jobs and growth. Yet researchers in India and abroad say that the country has a relatively weak foundation in science and engineering.

    Indian research is hampered by stifling bureaucracy, poor-quality education at most universities and insufficient funding. Successive governments have pledged to increase support for research and development to 2% of India’s gross domestic product (GDP), but it has remained static at less than 0.9% of GDP since 2005. Despite its huge size, India has a relatively tiny number of researchers, and many of its budding scientists leave for other countries, never to return. Only by tackling its systemic problems can India compete with other emerging powerhouses such as Brazil and China.

    “The density of scientists and engineers in India is one of the lowest in the world,” says Sunil Mani, an economist at the Centre for Development Studies in Trivandrum, who is assessing Indian science and engineering for an upcoming report by the United Nations Educational, Scientific and Cultural Organization. “There are very many important areas where we are not able to do research.”

    Space to grow

    In one of the cleanest rooms in India, Mylswamy Annadurai is busy conducting fitness tests on a 750-kilogram patient — a gleaming satellite called ASTROSAT.

    The probe is strapped to a table, where it is being shaken at six times the strength of gravity to simulate the intense forces of lift-off. ASTROSAT must also pass tests in extreme high and low temperatures and vacuum conditions, followed by checks on its solar arrays and antennas. If all goes well, the satellite will blast into orbit by September, armed with two telescopes and four other instruments to study both nearby and distant stars.

    Annadurai, who is head of the satellite centre of the Indian Space Research Organisation (ISRO) in Bangalore, says that ASTROSAT will be India’s “first full-fledged science mission” in space. It will carry instruments ten times heavier than those on India’s first mission to the Moon, 2008’s Chandrayaan-1, and its 2014 Mars Orbiter Mission, nicknamed Mangalyaan.

    With its run of recent accomplishments, India has earned international acclaim for its ambitious space programme, which includes launch vehicles, communication satellites and one of the world’s largest constellations of remote sensing satellites, as well as its science missions. Since ISRO was founded in 1969, the government has invested heavily in it, and even established a dedicated university in 2007 to train personnel. “The ISRO technical test, assembly and launch facilities are first class,” says Paul Spudis, senior staff scientist at the Lunar and Planetary Institute in Houston, Texas, who was the principal investigator for one of Chandrayaan-1’s experiments.

    Chandrayaan-1 carried an orbiter and a 35-kilogram probe that took images as it smashed into the Moon at high speed. ISRO plans to follow it in 2017 with Chandrayaan-2, which will gently set down a lander and a six-wheeled rover; together with an orbiter, they will study the composition of the Moon’s surface. Up next after that is the Aditya mission to study the Sun’s corona, in 2018.

    Spudis is critical of last year’s Mars mission, calling it “largely irrelevant” and saying that it would have been better to return quickly to the Moon. ISRO, he says, “seems to lack a strategic vision of what it wants to accomplish in space”. But the agency counters that it is pursuing several missions in parallel; the Mars mission just proceeded faster than Chandrayaan-2.

    And the success in reaching Mars has convinced others at ISRO that they can carry out world-class space-science missions, says Annadurai. “The Mars mission experience has once again strengthened our belief that we can.”

    Biotech bonanza

    In Genome Valley, a biotechnology park in Hyderabad, entrepreneur Krishna Ella is confounding expectations. Ella returned home from the United States in 1996 with a 12-metre shipping container filled with vaccine-making equipment to support his grand plan of producing a US$1 vaccine for hepatitis B. That goal, which would make his vaccine at least an order of magnitude cheaper than the available one, struck investors as crazy, he says. But within three years, Ella’s company Bharat Biotech International Limited (BBIL) succeeded in producing the Revac-B+ hepatitis vaccine at $3 a dose, which has since dropped to 30 cents per dose. It went on to develop vaccines against Japanese encephalitis, rabies, Haemophilus influenzae type b and, most recently, rotavirus. Each costs barely a dollar per dose.

    Affordable medicines are the cornerstone of India’s health-care sector, where publicly funded hospitals struggle to provide treatment. The country has long battled infectious diseases such as tuberculosis, malaria and dengue, but is now facing rising numbers of non-communicable illnesses, including diabetes and coronary heart disease. A 2014 report from the World Economic Forum and Harvard School of Public Health estimates that non-communicable diseases and mental illness could cost India $4.58 trillion by 2030.

    Low-price vaccines and generic drugs have helped India to carve out a niche in the international pharmaceutical industry. The global medical charity Médecins Sans Frontières (also known as Doctors Without Borders), which relies on Indian generics for 80% of its anti-HIV drugs, hails the country as the “pharmacy of the developing world”. Other international organizations, including the UN children’s charity UNICEF and the Global Fund to Fight AIDS, Tuberculosis and Malaria, routinely use Indian vaccines and generic drugs to treat infectious diseases (see Nature 468, 143; 2010).

    But India is battling criticism over the quality of some of its pharmaceuticals. In 2012, for example, the World Health Organization took BBIL’s hepatitis B vaccine and oral polio vaccine off the list of drugs preapproved for use by the UN. Ella says that the issues related to documentation submission and have since been sorted out. The vaccines are now back on the list.

    In 2014, the US Food and Drug Administration (FDA) sent warning letters to seven Indian firms over various concerns relating to pharmaceutical production there. An FDA spokesperson told Nature: “While some Indian companies meet US product quality standards, others have been found to lack sufficient controls and systems to assure drug quality, both of finished product and active ingredients.” The FDA has an India office to work closely with Indian drug regulators to solve those problems.

    And some in the biotech sector warn that India has a long way to go to create a thriving enterprise in developing new drugs. The country’s success in the generics industry relies on a different set of skills: reverse-engineering pharmaceuticals created elsewhere by breaking them into their components and remaking them through cheaper routes.

    “The challenge for the sector will be to graduate from reverse engineering to new-drug discovery,” says Pallu Reddanna, a biotechnologist at the University of Hyderabad. “There is need for incentives and promotion of academy–industry interactions.”

    The government and private sector are trying to jump-start such efforts by setting up incubators that help transfer university and lab know-how to industry, and provide infrastructure and financial support to start-ups. Such incubators are the “greatest changer in the drug-discovery sector in India”, says P. Yogeeswari, a chemist at the Hyderabad campus of Birla Institute of Technology and Science.

    Krishnaswamy VijayRaghavan, secretary of the government’s Department of Biotechnology, commends “incredible growth” in India’s biotech entrepreneurship — despite the lack of big drug companies and the relatively low domestic investment in drug discovery. International and industry collaborations with academia are helping to advance the sector, he says (see page 148).

    In 2013, the department started two major projects seeking drugs for drug-resistant tuberculosis and chronic disorders such as heart disease. In early leads, scientists have zeroed in on some human proteins that are crucial for the survival of multidrug-resistant tuberculosis strains. Proof-of-concept studies in mice have demonstrated that targeting such host proteins could help to kill the drug-resistant strains, says VijayRaghavan (S. Jayaswal et al. PLoS Pathog. 6, e1000839; 2010). “We are at an exciting early applied stage,” he says.

    Power hungry

    Nearly 2,000 kilometres north of Genome Valley, 9.7 hectares of solar panels cover a building in Punjab state, generating 7.5 megawatts of electricity. This project is India’s largest roof-top solar installation that is connected to an electrical power grid, and it signals India’s outsize ambitions in renewable energy.

    Coal supplies two-thirds of the electricity in India and will remain king for some time. But the government has set aggressive goals for installing solar-energy capacity. In 2014, Modi’s government announced that it would develop 100 gigawatts of solar-energy capacity by 2022. This is a huge leap from the existing 3.7 gigawatts of solar capacity — just 1.4% of India’s total electricity generation today.
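
    The scale of that leap is easy to understate. A quick sanity check in Python, assuming steady compounding from the 2014 announcement to the 2022 target (the arithmetic is illustrative, not from the article):

    # From 3.7 GW installed to the 100 GW target in roughly 8 years.
    start_gw, target_gw, years = 3.7, 100.0, 8
    cagr = (target_gw / start_gw) ** (1 / years) - 1
    print(f"{target_gw / start_gw:.0f}x scale-up -> ~{cagr:.0%} growth per year")
    # -> a 27x scale-up, i.e. roughly 51% compound annual growth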

    “India is one of the most attractive markets in the world,” says Pashupathy Gopalan, Asia Pacific head of SunEdison, a global solar-energy company based in Maryland Heights, Missouri, which is joining Adani Enterprises of Ahmedabad to build India’s largest solar-panel-manufacturing facility. “We are entering a new era where solar electricity is competitive and has achieved ‘socket parity’ with other sources of energy in India.”

    There are other big international collaborations. The Solar Energy Research Institute for India and the United States was established in 2012 to target emerging research areas. In one project, researchers are trying to generate solar thermal power by using sunlight to heat up a highly compressed fluid form of carbon dioxide so that it turns electricity-generating turbines. This could be used in much smaller plants than conventional steam-driven turbines.

    But some analysts say that India suffers from “gigawatt obsession”. The focus on giant solar plants comes at the expense of smaller facilities that do not require large parcels of land, but could provide electricity to isolated towns, even without being connected to the grid.

    “The gigawatt rush must pay attention to the pace with which the capacity is to be built in India,” says Satish Agnihotri, former secretary of India’s Ministry of New and Renewable Energy. Plans to build large plants could run into opposition in densely populated or heavily farmed areas, and in remote areas it can be difficult to hook gigawatt projects up to the electrical system.

    News and debates about the government’s current focus on solar power have overshadowed past successes in wind energy. India has more than 23 gigawatts of installed wind-power capacity, which puts it roughly even with Spain as the world’s fourth biggest producer. And Mumbai-based Suzlon is the world’s seventh-largest turbine manufacturer.

    India has been able to develop its wind power in part because of long-term government policies and financial incentives, as well as a growing interest from independent power producers and financiers, says Shantanu Jaiswal, lead analyst at Bloomberg New Energy Finance in New Delhi. But some of the concerns about solar power also hamper wind projects, which face difficulty acquiring land, encounter lengthy permitting processes and often have trouble connecting to the electrical power grid.

    Education outlook

    Back on her leafy campus in Delhi, Srivastava and her fellow young faculty members are less concerned about big national projects than about producing their own high-quality research. They are lucky, they acknowledge, to work in one of India’s top federally funded universities, which has superior faculty members and equipment.

    Others are not so fortunate. India has some 700 universities of varying quality, from the elite institutions funded by the central government to more than 300 state universities and about 200 private ones. “The landscape of science education is uneven,” says Sri Krishna Joshi, former director-general of India’s Council of Scientific and Industrial Research (CSIR) and former chair of the advisory committee of the University Grants Commission, which funds and oversees university education in India.

    In the top institutions, he says, “science students are doing world-class research, publishing in leading journals and boosting the global reputation of our country”. National scientific research institutes and leading universities have all contributed to India’s growing strengths in research: the country’s output of scientific publications quadrupled between 2000 and 2013.

    Even so, India is not keeping pace with some other emerging nations, which have increased their scientific output more quickly (see page 142). And the advances in India’s global science metrics mask some signs of declining quality in university science education, especially at the cash-starved universities funded by state governments that account for the majority of India’s science undergraduates, says Joshi. Publicly supported universities depend on the Ministry of Human Resource Development for funds, and the higher-education budget was hit by a 3% cut in the 2014–15 budget cycle.

    “Lack of even bare, minimal and sustainable funds for teaching, let alone research, has seriously plagued the quality and standards of science education,” says Krishna Ganesh, a chemist and director of the Indian Institute of Science Education and Research in Pune, one of five top universities set up in India since 2006.

    Many students at state universities are receiving a substandard education, says Joshi. “Here, there are no good science teachers, no good Indian textbooks, and most of the science laboratories are poorly equipped,” he says.

    “We are caught in a vicious circle of mediocrity,” says geneticist Deepak Pental, former vice-chancellor of the University of Delhi.

    Most analysts are concerned over the plight of science departments in state universities. At the University of Calcutta, for example, even procuring a laptop involves endless red tape, says physicist Amitava Raychaudhuri. At some other institutions, support from funding agencies helps to purchase equipment, but there is a shortage of qualified faculty members to train the students.

    Beyond questions of quality, the quantity of available university spots is a persistent problem. India has gone through a university building boom, but there still is a huge shortage of slots for students (see Nature 472, 24–26; 2011).

    “There is a rise in the number of students going for higher education in India, which reflects the rising aspirations of its society. But this rise should be matched by better infrastructure and financial support,” says Joshi.

    Research investments

    Investments in science have also dragged. India’s research intensity — the share of its gross domestic product devoted to research and development (GERD) — remains lower than those of many other nations, including Brazil and Russia. Twenty years ago, India’s GERD exceeded China’s. Now, it is less than half.

    But those numbers do not tell the whole story, says Ashutosh Sharma, secretary of the government’s Department of Science and Technology — one of India’s largest research-funding agencies. “The total funding is, perhaps, not as poor as it seems in terms of absolute numbers, because the number of full-time scientists doing research is also low.”

    India averages about 4 full-time researchers per 10,000 people in the labour force, whereas China boasts 18 and nations with advanced science and technology sectors have around 80. “India spends about $150,000 per scientist per year, which is probably not too far from the optimal levels,” says Sharma.
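
    As a back-of-the-envelope check on Sharma’s point, the sketch below multiplies the two quoted ratios together; the labour-force size is an assumption added for illustration, and nothing here comes from official GERD statistics.

    ```python
    # Only the two ratios (4 researchers per 10,000 workers; $150,000 per
    # scientist per year) come from the article; the labour force is assumed.

    labour_force = 500e6            # assumed Indian labour force (~500 million)
    researchers_per_10k = 4         # full-time researchers per 10,000 workers
    spend_per_scientist = 150_000   # USD per scientist per year

    researchers = labour_force / 10_000 * researchers_per_10k
    total_spend = researchers * spend_per_scientist

    print(f"Implied full-time researchers: {researchers:,.0f}")
    print(f"Implied annual R&D spend: ${total_spend / 1e9:.0f} billion")
    # A small researcher head count keeps total GERD looking low even when
    # per-scientist funding is close to what Sharma calls optimal.
    ```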

    India’s notorious bureaucracy deserves part of the blame for the problems afflicting science education and research. The administrators of several state universities are political appointees rather than leading academics. “Often the appointed person has never been exposed to a good university in India or abroad,” says Kizhakeyil Sebastian, chair of the science-education panel of the Indian Academy of Sciences in Bangalore.

    “There is over-bureaucratization within the universities and their controlling bodies,” says Pental. It often takes two years to recruit an academic after announcing an open post, which means that the best applicants can slip away, says Raychaudhuri.

    The governmental quagmire has begun to affect some elite national research institutes, too. Of the 38 national laboratories that are part of the CSIR, only 25 have full-time directors. The rest are making do with acting directors or temporary arrangements.

    Even the CSIR headquarters in New Delhi has been without a full-time leader since January 2014. Interim director-general Madhukar Garg says that “the current situation is indeed challenging. CSIR is the backbone of scientific and technological research in the country. In case the prevailing scenario continues, it will affect the national innovation system as a whole.”

    Sharma acknowledges that red tape is “all-pervasive”, but he says that the challenges are not bogging down Indian science. “In terms of output indicators such as the number of papers per dollar spent, Indian science is among the very top performers in the world,” he says.

    And there are some signs that India might be slowing its debilitating brain drain. Although the vast majority of Indians who obtain science doctorates in the United States remain there for at least 5 years after graduation, the proportion has declined: from 89% in 2001 to 82% in 2011, the most recent year for which data are available.

    Kaustuv Datta, a geneticist at Delhi University South Campus, is one of those who returned. Datta may “hate the red-tapism” at universities in India, but he still prefers doing research back home. “My parents are here, in India. And academics have a strong, positive influence on the next generation of students,” says Datta. “I wanted to make that contribution in India”.

    Nature 521, 144–147 (14 May 2015) doi:10.1038/521144a

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

     
  • richardmitnick 7:56 am on May 14, 2015 Permalink | Reply
    Tags: Applied Research & Technology, , , , , ,   

    From MIT: “Researchers build new fermion microscope” 


    MIT News

    May 13, 2015
    Jennifer Chu

    1
    Graduate student Lawrence Cheuk adjusts the optics setup for laser cooling of sodium atoms. Photo: Jose-Luis Olivares/MIT

    2
    Laser beams are precisely aligned before being sent into the vacuum chamber. Photo: Jose-Luis Olivares/MIT

    3
    Sodium atoms diffuse out of an oven to form an atomic beam, which is then slowed and trapped using laser light. Photo: Jose-Luis Olivares/MIT

    4
    A quantum gas microscope for fermionic atoms. The atoms, potassium-40, are cooled during imaging by laser light, allowing thousands of photons to be collected by the microscope. Credit: Lawrence Cheuk/MIT

    5
    The Fermi gas microscope group: (from left) graduate students Katherine Lawrence and Melih Okan, postdoc Thomas Lompe, graduate student Matt Nichols, Professor Martin Zwierlein, and graduate student Lawrence Cheuk. Photo: Jose-Luis Olivares/MIT

    Instrument freezes and images 1,000 individual fermionic atoms at once.

    Fermions are the building blocks of matter, interacting in a multitude of permutations to give rise to the elements of the periodic table. Without fermions, the physical world would not exist.

    Examples of fermions are electrons, protons, neutrons, quarks, and atoms consisting of an odd number of these elementary particles. Because of their fermionic nature, electrons and nuclear matter are difficult to understand theoretically, so researchers are trying to use ultracold gases of fermionic atoms as stand-ins for other fermions.

    But atoms are extremely sensitive to light: When a single photon hits an atom, it can knock the particle out of place — an effect that has made imaging individual fermionic atoms devilishly hard.

    Now a team of MIT physicists has built a microscope that is able to see up to 1,000 individual fermionic atoms. The researchers devised a laser-based technique to trap and freeze fermions in place, and image the particles simultaneously.

    The new imaging technique uses two laser beams trained on a cloud of fermionic atoms in an optical lattice. The two beams, each of a different wavelength, cool the cloud, causing individual fermions to drop down an energy level, eventually bringing them to their lowest energy states — cool and stable enough to stay in place. At the same time, each fermion releases light, which is captured by the microscope and used to image the fermion’s exact position in the lattice — to an accuracy better than the wavelength of light.

    With the new technique, the researchers are able to cool and image over 95 percent of the fermionic atoms making up a cloud of potassium gas. Martin Zwierlein, a professor of physics at MIT, says an intriguing result from the technique appears to be that it can keep fermions cold even after imaging.

    “That means I know where they are, and I can maybe move them around with a little tweezer to any location, and arrange them in any pattern I’d like,” Zwierlein says.

    Zwierlein and his colleagues, including first author and graduate student Lawrence Cheuk, have published their results today in the journal Physical Review Letters.

    Seeing fermions from bosons

    For the past two decades, experimental physicists have studied ultracold atomic gases of the two classes of particles: fermions and bosons — particles such as photons that, unlike fermions, can occupy the same quantum state in limitless numbers. In 2009, physicist Marcus Greiner at Harvard University devised a microscope that successfully imaged individual bosons in a tightly spaced optical lattice. This milestone was followed, in 2010, by a second boson microscope, developed by Immanuel Bloch’s group at the Max Planck Institute of Quantum Optics.

    These microscopes revealed, in unprecedented detail, the behavior of bosons under strong interactions. However, no one had yet developed a comparable microscope for fermionic atoms.

    “We wanted to do what these groups had done for bosons, but for fermions,” Zwierlein says. “And it turned out it was much harder for fermions, because the atoms we use are not so easily cooled. So we had to find a new way to cool them while looking at them.”

    Techniques to cool atoms ever closer to absolute zero have been devised in recent decades. Carl Wieman, Eric Cornell, and MIT’s Wolfgang Ketterle were able to achieve Bose-Einstein condensation in 1995, a milestone for which they were awarded the 2001 Nobel Prize in physics. Other techniques include a process using lasers to cool atoms from 300 degrees Celsius to a few ten-thousandths of a degree above absolute zero.

    A clever cooling technique

    And yet, to see individual fermionic atoms, the particles need to be cooled further still. To do this, Zwierlein’s group created an optical lattice using laser beams, forming a structure resembling an egg carton, each well of which could potentially trap a single fermion. Through various stages of laser cooling, magnetic trapping, and further evaporative cooling of the gas, the atoms were prepared at temperatures just above absolute zero — cold enough for individual fermions to settle onto the underlying optical lattice. The team placed the lattice a mere 7 microns from an imaging lens, through which they hoped to see individual fermions.

    However, seeing fermions requires shining light on them, causing a photon to essentially knock a fermionic atom out of its well, and potentially out of the system entirely.

    “We needed a clever technique to keep the atoms cool while looking at them,” Zwierlein says.

    His team decided to use a two-laser approach to further cool the atoms; the technique manipulates an atom’s particular energy level, or vibrational energy. Each atom occupies a certain energy state — the higher that state, the more active the particle is. The team shone two laser beams of differing frequencies at the lattice, with the difference in frequencies corresponding to the energy gap between a fermion’s energy levels. As a result, when both beams were directed at a fermion, the particle would absorb a photon from the lower-frequency beam and emit a photon at the higher frequency, in turn dropping one energy level to a cooler, more inert state. The lens above the lattice collects the emitted photon, recording its precise position, and that of the fermion.
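
    For readers who want the energy bookkeeping, here is a minimal sketch of the arithmetic behind the two-beam scheme; the 300 kHz vibrational spacing is a typical optical-lattice value assumed for illustration, not a number quoted in the article.

    ```python
    # Energy removed per two-beam cooling cycle, under an assumed trap spacing.

    h = 6.62607015e-34    # Planck constant, J*s
    k_B = 1.380649e-23    # Boltzmann constant, J/K

    nu_trap = 300e3       # assumed spacing between vibrational levels, Hz

    # The beams differ in frequency by nu_trap, so absorbing from the
    # lower-frequency beam and emitting at the higher frequency removes
    # one vibrational quantum per cycle.
    energy_per_cycle = h * nu_trap
    print(f"Energy removed per cycle: {energy_per_cycle:.2e} J")
    print(f"Equivalent temperature: {energy_per_cycle / k_B * 1e6:.0f} microkelvin")
    ```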

    Zwierlein says such high-resolution imaging of more than 1,000 fermionic atoms simultaneously would enhance our understanding of the behavior of other fermions in nature — particularly the behavior of electrons. This knowledge may one day advance our understanding of high-temperature superconductors, which enable lossless energy transport, as well as quantum systems such as solid-state systems or nuclear matter.

    “The Fermi gas microscope, together with the ability to position atoms at will, might be an important step toward the realization of a quantum computer based on fermions,” Zwierlein says. “One would thus harness the power of the very same intricate quantum rules that so far hamper our understanding of electronic systems.”

    Zwierlein says it is a good time for Fermi gas microscopists: Around the same time his group first reported its results, teams from Harvard and the University of Strathclyde in Glasgow also reported imaging individual fermionic atoms in optical lattices, indicating a promising future for such microscopes.

    Zoran Hadzibabic, a professor of physics at Trinity College, says the group’s microscope is able to detect individual atoms “with almost perfect fidelity.”

    “They detect them reliably, and do so without affecting their positions — that’s all you want,” says Hadzibabic, who did not contribute to the research. “So far they demonstrated the technique, but we know from the experience with bosons that that’s the hardest step, and I expect the scientific results to start pouring out.”

    This research was funded in part by the National Science Foundation, the Air Force Office of Scientific Research, the Office of Naval Research, the Army Research Office, and the David and Lucile Packard Foundation.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 8:44 am on May 13, 2015 Permalink | Reply
    Tags: Applied Research & Technology, ,   

    From Sandia: “Starving cancer instead of feeding it poison” 


    Sandia Lab

    May 13, 2015
    Neal Singer
    nsinger@sandia.gov
    (505) 845-7078

    1
    Sandia National Laboratories researcher Susan Rempe says a new approach to treating cancer is being tested on laboratory mice. If successful, human testing will follow. (Photo by Randy Montoya)

    A patent application for a drug that could destroy the deadly childhood disease known as acute lymphoblastic leukemia — and potentially other cancers as well — has been submitted by researchers at Sandia National Laboratories, the University of Maryland and the MD Anderson Cancer Center in Houston.

    “Most drugs have to go inside a cell to kill it,” said Sandia researcher Susan Rempe. “Instead, our method withholds an essential nutrient from the cell, essentially starving it until it self-destructs.”

    The removed nutrient is called asparagine, which cancer cells can’t produce on their own. But there’s more to the story.

    It is well known that drugs used to kill cancers often sicken the patient. In the case of the cancer drug L-asparaginase type 2 (L-ASN2), whose primary effect is depleting asparagine, side effects are generally attributed to the corresponding depletion of a chemically similar molecule called glutamine. All human cells need asparagine and glutamine to survive because each is essential to key biological processes. While most normal cells can synthesize their own asparagine, certain cancer cells cannot. So the ideal nutrient-deprivation strategy for cancers requires a difficult balancing act: Remove enough asparagine from the blood to cripple the cancer, but leave enough glutamine that the patient can tolerate chemotherapy.

    The researchers at Sandia and the university did molecular computer simulations to predict what mutations would produce that desirable result when introduced into the enzyme-drug L-ASN2, commonly used to treat certain types of leukemia. The scientists’ simulations succeeded in identifying a point in that enzyme’s chain of amino acids where a mutation theoretically would eliminate the drug’s unwanted attack on glutamine.

    “Technically,” said Rempe, “we simulated which parts of the two molecules came in contact with the enzyme. Then we realized that by substituting a single amino acid in the enzyme’s chain, we might avoid glutamine degradation by removing it from contact with the enzyme.”

    In computer simulations, the change looked promising because the most notable difference between asparagine and glutamine was the way they interacted with that specific amino acid.
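
    To make the idea of a contact analysis concrete, here is a toy sketch in Python; the coordinates, the 4.0 angstrom cutoff, and the function name are all invented for illustration, and the actual study used full molecular simulations rather than this simple distance test.

    ```python
    # Toy contact analysis: count ligand/residue atom pairs within a cutoff.
    from math import dist

    def contacts(ligand_atoms, residue_atoms, cutoff=4.0):
        """Return (ligand, residue) atom index pairs closer than cutoff (angstroms)."""
        return [(i, j)
                for i, a in enumerate(ligand_atoms)
                for j, b in enumerate(residue_atoms)
                if dist(a, b) < cutoff]

    residue = [(0.0, 0.0, 0.0), (1.2, 0.5, 0.0)]   # hypothetical residue atoms
    asparagine = [(5.5, 0.0, 0.0)]                  # shorter side chain: out of reach
    glutamine = [(3.1, 0.0, 0.0)]                   # longer side chain: in contact

    print("ASN contacts:", contacts(asparagine, residue))  # -> []
    print("GLN contacts:", contacts(glutamine, residue))   # -> two pairs
    ```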

    “That made us feel that a chemical change at that single location was the key,” said Rempe.

    It required a mutation to change the amino acid’s chemistry. The mutation was achieved by collaborators at MD Anderson who used DNA substitutions to effect the change.

    “Most researchers agree that removing glutamine from a patient’s blood was the problem in previous use of this enzyme-drug,” said Rempe. “Our simulations, as it turned out, showed how to avoid that.”

    In test tube experiments, the new drug left glutamine untouched. Follow-up tests in petri dishes showed that the mutated enzyme killed a variety of cancers.

    Tests underway on laboratory mice at MD Anderson should be completed by early 2016, and if they are successful, Rempe said, human testing will follow.

    2
    A simulation by researchers at Sandia National Laboratories and the University of Maryland demonstrates that a mutated enzyme will degrade asparagine — food for some cancers — but leave glutamine, necessary for all proteins, untouched. (Graphic by Juan Vanegas)

    “If we’re wrong, and keeping glutamine intact is not the answer to the cancer problem, we’ll continue investigating because we think we’re onto something,” she said.

    That’s because, she said, “we used high-resolution computational methods to redesign the cancer drug to act differently, in this case to act only on asparagine. Laboratory tests showed that the predictions worked and that the new drug kills a variety of leukemias. We hope our method can do that in a patient, and for many more cancers. But if it doesn’t, then we’ll test the opposite strategy: redesign the enzyme to destroy glutamine and keep asparagine intact. Or fine-tune the enzyme to degrade the two molecules in a chosen ratio. We’re learning to control this enzyme.”

    The joint work among Sandia, the University of Maryland and MD Anderson began in 2009. Sandia managers Wahid Hermina and Steve Casalnuovo spearheaded the collaboration to use Sandia’s computational and biochemical expertise developed in national defense to help cure cancer.

    Sandia’s cancer-fighting research also can be applied to building enzymes that can assist with biodefense.

    Said Rempe, “If we could redesign an enzyme to break down specific small molecules, and not get diverted by interactions with non-toxic molecules, then we could apply our technique to develop safer and more effective enzymes.”

    Classical modeling was performed at the University of Maryland by Andriy Anishkin and Sergei Sukharev; at Sandia, post-doctoral researcher David Rogers (now at the University of South Florida) also carried out modeling studies. Sandia post-doctoral researcher Juan Vanegas is performing quantum modeling to map out the chemical degradation process to better understand how to optimize the enzyme, said Rempe. The experiments at MD Anderson were carried out by Wai Kin Chan, Phil Lorenzi, and colleagues in John Weinstein’s group. Earlier results have been published in the journal Blood.

    The work is supported by Sandia’s Laboratory Directed Research and Development office.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Sandia Campus
    Sandia National Laboratory

    Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.

     
  • richardmitnick 1:15 pm on May 12, 2015 Permalink | Reply
    Tags: Applied Research & Technology, ,   

    From U Washington: “Dying cells can protect their stem cells from destruction” 

    U Washington

    University of Washington

    05.11.2015
    Michael McCarthy

    1
    At a University of Washington stem cell research lab, Hannele Ruohola-Baker (right) examines a lab specimen with Julie Mathieu.

    Cells dying as the result of radiation exposure or chemotherapy can send a warning to nearby stem cells. The chemical signal allows the stem cells to escape the same fate, University of Washington researchers report in the May 11 issue of the journal Nature Communications.

    Read the Nature Communications paper.

    The discovery may explain why many cancers return after initially responding to treatment and could lead to new, more effective cancer drugs, says Yalan Xing, a postdoctoral fellow in the UW Department of Biochemistry and lead author of the study.

    Most human tissues harbor adult stem cells. These cells regenerate endlessly and produce daughter cells that can mature into a wide variety of cell types. This makes it possible for our bodies to grow and to replace damaged or aging cells. Many cancers, however, harbor similar cells, called “tumor-initiating cells” or “cancer stem cells.” They, too, can give rise to new tumor cells.

    Both cell types share another property: they often can survive radiation and chemotherapy. In the case of stem cells, this ability allows our bodies to recover from these treatments. In the case of tumor-initiating cells, however, this survival strategy makes it possible for tumors to resist treatment and return.

    To find out how these cells survive exposure to what should be lethal doses of radiation and chemotherapy, Yalan Xing and Hannele Ruohola-Baker, professor of biochemistry, associate director of the UW Institute for Stem Cell and Regenerative Medicine and the senior author of the study, looked at adult stem cells in the fruit fly and the natural weeding of unneeded cells called apoptosis.

    Apoptosis, also known as programmed cell death, is an orderly clean-up method that our bodies follow to eliminate cells that are damaged, aged or no longer needed. In the case of radiation exposure and chemotherapy, damage to a cell’s DNA from these treatments triggers apoptosis and cell death.

    In their study, the researchers show that, after exposure to radiation, dying daughter cells release a protein called Pvf1, similar to the human protein angiopoietin, which binds to receptors called Tie receptors on nearby mother stem cells. This protein-receptor binding causes the stem cells to produce a short piece of RNA, called microRNA bantam, that represses the production of Hid/Diablo/Smac, a key protein needed to trigger apoptosis.

    The upshot is that while the stem cells, like their daughter cells, may have suffered severe DNA damage, with Hid production repressed, they are prevented from self-destructing.
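
    The logic of the cascade can be summarized in a few lines of code. This is only a schematic boolean rendering of the pathway as described above; the function and variable names are invented, and it is not a quantitative model of the biology.

    ```python
    # Pvf1 binds Tie, Tie signalling induces microRNA bantam, and bantam
    # represses Hid, which is required to trigger apoptosis.

    def stem_cell_dies(dna_damage: bool, pvf1_signal: bool) -> bool:
        tie_active = pvf1_signal            # Pvf1 from dying daughters binds Tie
        bantam = tie_active                 # Tie signalling induces bantam
        hid = dna_damage and not bantam     # bantam represses Hid production
        return hid                          # Hid is needed for apoptosis

    print(stem_cell_dies(dna_damage=True, pvf1_signal=False))  # True: apoptosis
    print(stem_cell_dies(dna_damage=True, pvf1_signal=True))   # False: protected
    ```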

    “Essentially, the children tell the mom to protect herself,” said Xing.

    Because they are prevented from immediately destroying themselves, the stem cells survive until it is time for them to reproduce and regenerate tissue. The research team proposes that, as the stem cells get ready to reproduce, they activate a battery of mechanisms that repair damaged DNA. By blocking apoptosis, the dying daughter cells have enabled their mother stem cells to survive until they have the best chance of recovering from the treatment-induced DNA damage and of regenerating damaged tissue.

    This system makes evolutionary sense, Ruohola-Baker said. By protecting the mother stem cells, the matriarchs, the dying daughter cells preserve the only cell type that can regenerate the tissue, the adult stem cell.

    “There are very similar genes and proteins in human cancers that are likely playing the same role of protecting the tumor-initiating cells from destruction. As a result, the tumor-initiating cells survive and the cancers return. By targeting these factors, perhaps by blocking Tie receptors, it may be possible to block the protective signal from the daughter cells, and thereby allow programmed cell death to proceed in the mother stem cells and prevent cancer.”

    This work is supported in part by the Western State Affiliates Postdoctoral Fellowship from the American Heart Association, a gift from the Hahn Family and grants from the National Institutes of Health.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The University of Washington is one of the world’s preeminent public universities. Our impact on individuals, on our region, and on the world is profound — whether we are launching young people into a boundless future or confronting the grand challenges of our time through undaunted research and scholarship. Ranked number 10 in the world in Shanghai Jiao Tong University rankings and educating more than 54,000 students annually, our students and faculty work together to turn ideas into impact and in the process transform lives and our world. For more about our impact on the world, every day.

    So what defines us — the students, faculty and community members at the University of Washington? Above all, it’s our belief in possibility and our unshakable optimism. It’s a connection to others, both near and far. It’s a hunger that pushes us to tackle challenges and pursue progress. It’s the conviction that together we can create a world of good. Join us on the journey.

     
  • richardmitnick 9:29 am on May 11, 2015 Permalink | Reply
    Tags: Applied Research & Technology, ,   

    From Rutgers: “Stroke Centers Reduce Risk of Dying, Rutgers Study Finds” 

    Rutgers University
    Rutgers University

    May 10, 2015
    Jennifer Forbes at 732-235-6356
    jenn.forbes@rwjms.rutgers.edu.

    1

    You are more likely to survive a hemorrhagic stroke if you are treated at a comprehensive stroke center, according to a new Rutgers and Robert Wood Johnson University Hospital study.

    The research, published in the Journal of the American Heart Association, indicates that patients suffering a brain bleed – including those transferred within 24 hours from other hospitals – were more likely to survive if they were cared for at such a facility.

    “Hemorrhagic stroke is complex and requires skilled medical interventions to improve a patient’s outcome,” said James S. McKinney, assistant professor of neurology at Rutgers Robert Wood Johnson Medical School and medical director of the Comprehensive Stroke Center at Robert Wood Johnson University Hospital and lead author of the study. “Based on the evidence presented in our study, we believe that more patients can survive hemorrhagic stroke with better utilization of the state’s comprehensive stroke centers.”

    Stroke is a leading cause of death and disability in the United States, according to the American Heart Association and American Stroke Association. Previous research had shown that comprehensive stroke centers improved clinical outcomes and reduced disparities in ischemic stroke, which is caused by a blockage in blood vessels. The same may be true for patients who experience hemorrhagic stroke, which causes bleeding in or around the brain and has a mortality rate of 40 to 50 percent.

    The researchers reviewed more than 36,000 anonymous patient records from 1996 to 2012, including admissions and discharge data for 87 New Jersey hospitals, each designated as a comprehensive stroke center, primary stroke center or non-stroke center by the New Jersey Department of Health and Human Services. Their findings indicate that the neurosurgical and endovascular treatments available at state-designated comprehensive stroke centers are associated with lower mortality rates in patients with hemorrhagic stroke.

    There are 13 designated comprehensive stroke centers in New Jersey, which must be staffed 24 hours a day, seven days a week, with a neurosurgical team including diagnostic and interventional neuroradiologists. Despite this availability, the study noted, only 40 percent of patients were admitted to a comprehensive stroke center during the study period, while the remaining 60 percent were admitted to either a primary stroke center or a non-stroke center, said McKinney.

    According to McKinney, variables other than comprehensive treatment contributed to improved outcomes, including age. “In our analysis, patients admitted to comprehensive stroke centers were, on average, five years younger than patients admitted to other hospitals,” he said. “In addition, patients transferred to comprehensive stroke centers were significantly younger in age than patients who remained in primary stroke or non-stroke centers.”
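
    For a sense of the kind of comparison underlying the finding, the sketch below computes a crude relative risk from invented admission and death counts; the published analysis drew on roughly 36,000 real records and adjusted for age and other covariates, which this toy calculation does not.

    ```python
    # Crude mortality comparison by hospital designation (all counts invented).

    def mortality(deaths, admissions):
        return deaths / admissions

    csc = mortality(deaths=1_400, admissions=4_000)     # comprehensive stroke center
    other = mortality(deaths=2_700, admissions=6_000)   # primary / non-stroke center

    print(f"CSC mortality:   {csc:.0%}")
    print(f"Other mortality: {other:.0%}")
    print(f"Crude relative risk (CSC vs other): {csc / other:.2f}")
    ```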

    The research team, all members of the Cardiovascular Institute of New Jersey, included Jerry Q. Cheng, assistant professor of medicine; Igor Rybinnik, assistant professor of neurology; and John B. Kostis, John G. Detwiler Professor of Cardiology, associate dean for Cardiovascular Research and director, Cardiovascular Institute of New Jersey. The study was funded, in part, by the Robert Wood Johnson Foundation.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Rutgers, The State University of New Jersey, is a leading national research university and the state’s preeminent, comprehensive public institution of higher education. Rutgers is dedicated to teaching that meets the highest standards of excellence; to conducting research that breaks new ground; and to providing services, solutions, and clinical care that help individuals and the local, national, and global communities where they live.

    Founded in 1766, Rutgers teaches across the full educational spectrum: preschool to precollege; undergraduate to graduate; postdoctoral fellowships to residencies; and continuing education for professional and personal advancement.

    Rutgers Seal

     
  • richardmitnick 9:15 am on May 11, 2015 Permalink | Reply
    Tags: Applied Research & Technology, , ,   

    From DOE: “Women in STEM for Mother’s Day” 

    DOE Seal Huge

    US Department of Energy

    May 8, 2015
    AnneMarie Horowitz

    Pam Richmond, Internet Project Manager at Argonne

    1
    Argonne Internet project manager Pam Richmond designs websites and e-learning to support programs in environmental remediation.

    Pam Richmond is an Internet project manager in the Environmental Science Division at Argonne National Laboratory. For over 15 years, Pam has managed, designed, and developed websites and online learning to support programs in the areas of environmental remediation, decision support, and public outreach. She designs web-based tools with the end user in mind, striving to communicate technical information to a range of audiences in a compelling way. She received her Bachelor of Science degree in Computer Science and her Master of Science degree in Computer Information Systems.

    Dr. Cynthia Jenks, Research Lead at Ames Lab

    2
    Dr. Cynthia Jenks is the Assistant Director for Scientific Planning and the Division Director of Chemical and Biological Science at the U.S. Department of Energy’s Ames Laboratory in Ames, Iowa.

    Dr. Cynthia Jenks is the Assistant Director for Scientific Planning and the Division Director of Chemical and Biological Science at the U.S. Department of Energy’s Ames Laboratory in Ames, Iowa. She received her B.S. in Chemical Engineering in 1986 from the University of California, Los Angeles. She received an M.S. degree in Chemical Engineering in 1988 and an M.Phil. and Ph.D. in Chemistry from Columbia University in 1991 and 1992, respectively. Jenks did her postdoctoral work at Iowa State University and the U.S. Department of Energy’s Ames Laboratory, joining the scientific staff of the Ames Laboratory in 1995. Her research interests are in the areas of surface structure and reactivity, surface structure-property relationships, catalysis, and thin film growth. She is a Fellow of the American Association for the Advancement of Science.

    Dr. Paula Gant, Deputy Assistant Secretary for Oil and Natural Gas

    3
    Dr. Paula Gant is the Deputy Assistant Secretary for Oil and Natural Gas in the Department of Energy’s (DOE) Office of Fossil Energy.

    Dr. Paula Gant is the Deputy Assistant Secretary for Oil and Natural Gas in the Department of Energy’s (DOE) Office of Fossil Energy. As Deputy Assistant Secretary, Dr. Gant administers domestic and international oil and gas programs, including policy analysis, and liquefied natural gas (LNG) import and export authorization. Dr. Gant’s work at DOE is focused on realizing the promise presented by America’s abundant natural gas and oil resources, which hinges on prudent production, environmental stewardship and efficient use. The Department of Energy’s research efforts seek to deploy the best available science, analysis and technologies to ensure a more secure energy future by leveraging our domestic natural gas and oil resources and protecting our air, land, and communities. Dr. Gant previously worked for the American Gas Association and Duke Energy. She has also served on the faculties of Louisiana State University and the University of Louisville. Paula is a native of Louisiana. She received a Bachelor of Arts degree in Economics from McNeese State University and a Ph.D. in Economics from Auburn University. She lives in Washington, D.C. with her family.

    Alina Deshpande, Biomedical Researcher at Los Alamos

    4
    Los Alamos biomedical researcher Alina Deshpande is dedicated to strengthening the world’s fight against infectious diseases by providing new tools for early detection and mitigation of disease outbreaks.

    Hailing from the world’s second most populous country, microbiologist Alina Deshpande understands how our highly mobile and connected world may be susceptible to pandemics. A researcher at Los Alamos, Deshpande left India with her husband to become a student at the Lab and pursue a doctorate in biomedicine. Her graduate research focused on cervical cancer genetics. She received a post-doctoral fellowship in 2004 to pursue research on host-pathogen interactions, studying cholera and anthrax.

    Always fascinated by infectious diseases, Deshpande leads a global disease surveillance project at the Lab, a multi-million dollar effort that explores the most critical aspects of international disease awareness. Deshpande’s team believes it is essential that nations are able to quickly detect and characterize a biological threat affecting human, animal or agricultural health. Detection and characterization enables lives to be saved and offers improved outcomes in various scenarios such as the purposeful release of a biothreat agent, an emerging infectious disease outbreak, pandemic, environmental disaster, or food-borne illness.

    Lara Leininger, Engineer at Lawrence Livermore National Lab

    8
    Lara D. Leininger, Ph.D. has been a full-time Engineer at Lawrence Livermore National Laboratory (LLNL) for over 14 years with experience as a Computational Analyst, Principal Investigator, and Program Manager of the Joint DoD / DOE Munitions Technology Development Program.

    Lara D. Leininger, Ph.D. has been a full-time Engineer at Lawrence Livermore National Laboratory (LLNL) for over 14 years with experience as a Computational Analyst, Principal Investigator, and Program Manager of the Joint DoD / DOE Munitions Technology Development Program. She has contributed to a variety of projects, in various roles and levels of responsibility, to solve technical problems that often have a great deal of uncertainty, and a high risk of failure.

    Lara was also employed for over 2 years as a full-time Managing Engineer at Hinman Consulting Engineers in San Francisco. Hinman is a small consulting firm that specializes in Anti-Terrorism/Force Protection (AT/FP) and blast consulting for a range of projects. Lara’s responsibilities included military facility design and upgrade, high threat/high risk developments for international clients and federally-owned facilities, managing technical innovation, and staff management. Lara earned her Ph.D. from the University of California, Davis, in Computational Engineering Mechanics, her M.S. from Stanford University in Thermosciences and Microdevices, and her B.S. from the University of California, Santa Barbara, in Mechanical Engineering.

    Simerjeet Gill, Materials Scientist at Brookhaven National Lab

    6
    Simerjeet Gill of the U.S. Department of Energy’s Brookhaven National Laboratory works on materials in extreme environments, including supercritical CO2, which is an initiative in the President’s fiscal year 2015 budget proposal.

    Simerjeet Gill of the U.S. Department of Energy’s Brookhaven National Laboratory works on materials in extreme environments, including supercritical CO2, which is an initiative in the President’s fiscal year 2015 budget proposal. Her work is focused on utilizing synchrotron X-ray techniques and microscopy to investigate materials in extreme environments to advance applied energy technologies for nuclear and enhanced geothermal systems. She has developed an in-situ reaction cell and is the lead scientist on a “first-light” experiment to study corrosion in nuclear materials when operations begin at the National Synchrotron Light Source II in 2015. Before joining the Lab in 2010, Simerjeet was a research assistant at Texas Tech University, where she earned her Ph.D. in inorganic chemistry. She earned her BS and MS from the Honours School in Chemistry at Panjab University in India.

    Check out other profiles in the Women @ Energy series

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The mission of the Energy Department is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions.

     