Tagged: Clean Energy

  • richardmitnick 4:29 pm on September 29, 2014
    Tags: Clean Energy

    From PPPL: “PPPL successfully tests system for mitigating instabilities called ‘ELMs’”


    September 29, 2014
    John Greenwald

    PPPL has successfully tested a Laboratory-designed device for diminishing the size of instabilities known as edge localized modes (ELMs) on the DIII–D tokamak, which General Atomics operates for the U.S. Department of Energy in San Diego. Such instabilities can damage the interior of fusion facilities.

    The PPPL device injects granular lithium particles into tokamak plasmas to increase the frequency of the ELMs. The method aims to make the ELMs smaller and to reduce the amount of heat that strikes the divertor, the component that exhausts heat from fusion facilities.
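
    The intuition behind this “ELM pacing” can be captured in a rough energy balance (a simplification for illustration, not a model stated in the article): if the ELMs must carry away a roughly fixed power \(P_{\mathrm{ELM}}\), the energy released per event scales inversely with the ELM frequency,

    \[ \Delta W_{\mathrm{ELM}} \approx \frac{P_{\mathrm{ELM}}}{f_{\mathrm{ELM}}} \]

    so raising the frequency with injected lithium granules makes each individual burst of heat onto the divertor smaller.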

    The system could serve as a possible model for mitigating ELMs on ITER, the fusion facility under construction in France to demonstrate the feasibility of fusion energy.

    “ELMs are a big issue for ITER,” said Mickey Wade, director of the DIII-D national fusion program at General Atomics. Large-scale ELMs, he noted, could melt plasma-facing components inside the ITER tokamak.

    General Atomics plans to install the PPPL-designed device, developed by physicist Dennis Mansfield and engineer Lane Roquemore, on DIII-D this fall. Previous experiments using deuterium injection rather than lithium injection have demonstrated the ability to increase the ELM frequency on DIII-D, on the ASDEX Upgrade in Germany and on the Joint European Torus in the United Kingdom.

    Researchers at DIII-D now want to see how the results for lithium-injection compare with those obtained in the deuterium experiments on the San Diego facility. “We want to put them side-by-side,” Wade said.

    PPPL-designed systems have proven successful in mitigating ELMs on the EAST tokamak in Hefei, China, and have been used on a facility operated by the Italian National Agency for New Technologies in Frascati, Italy. A system also is planned for PPPL’s National Spherical Torus Experiment (NSTX), the Laboratory’s major fusion experiment, which is undergoing a $94 million upgrade.

    PPPL used salt-grain-sized plastic pellets as proxies for lithium granules in testing the system for DIII-D. The pellets fell through a pinhole-sized opening inside a dropper to a rotating high-speed propeller that projected them onto a target precisely as planned.

    Joining Mansfield and Roquemore for the tests were physicists Erik Gilson and Alessandro Bortolon, a former University of Tennessee researcher now at PPPL who will begin an assignment to the DIII-D tokamak at General Atomics this fall. Also participating were Rajesh Maingi, the head of research on edge physics and plasma-facing components at PPPL, and engineer Alexander Nagy, who is on assignment to DIII-D.

    See the full article here.

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University.

     
  • richardmitnick 1:59 pm on September 26, 2014
    Tags: Clean Energy

    From PNNL: “Off-shore Power Potential Floating in the Wind” 


    September 2014
    Web Publishing Services

    Results: Two bright yellow buoys – each worth $1.3 million – are being deployed by Pacific Northwest National Laboratory in Washington State’s Sequim Bay. The massive, 20,000-pound buoys are decked out with the latest in meteorological and oceanographic equipment to enable more accurate predictions of the power-producing potential of winds that blow off U.S. shores. Starting in November, they will be commissioned for up to a year at two offshore wind demonstration projects: one near Coos Bay, Oregon, and another near Virginia Beach, Virginia.

    PNNL staff conduct tests in Sequim Bay, Washington, while aboard one of two new research buoys being commissioned to more accurately predict offshore wind’s power-producing potential.

    “We know offshore winds are powerful, but these buoys will allow us to better understand exactly how strong they really are at the heights of wind turbines,” said PNNL atmospheric scientist Dr. William J. Shaw. “Data provided by the buoys will give us a much clearer picture of how much power can be generated at specific sites along the American coastline – and enable us to generate that clean, renewable power sooner.”

    Why It Matters: Offshore wind is a new frontier for U.S. renewable energy developers. There’s tremendous power-producing potential, but limited information is available about ocean-based wind resources. A recent report estimated the U.S. could power nearly 17 million homes by generating more than 54 gigawatts of offshore wind energy, but more information is needed.
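
    That estimate can be sanity-checked with simple arithmetic. A minimal sketch, assuming a typical offshore capacity factor of about 40% and average U.S. household consumption of roughly 11,000 kWh per year (both numbers are our assumptions, not the report’s):

    ```python
    # Back-of-the-envelope check of the report's "54 GW -> ~17 million homes".
    # Capacity factor and household consumption are illustrative assumptions.
    CAPACITY_KW = 54e6          # 54 GW of offshore wind, in kilowatts
    CAPACITY_FACTOR = 0.40      # assumed average offshore capacity factor
    HOME_KWH_PER_YEAR = 11_000  # assumed average U.S. household use

    HOURS_PER_YEAR = 8_760
    annual_kwh = CAPACITY_KW * CAPACITY_FACTOR * HOURS_PER_YEAR
    homes_millions = annual_kwh / HOME_KWH_PER_YEAR / 1e6
    print(f"~{homes_millions:.1f} million homes")  # ~17.2 million
    ```

    Under these assumptions the arithmetic lands almost exactly on the report’s figure.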

    See the full article here.

    Pacific Northwest National Laboratory (PNNL) is one of the United States Department of Energy National Laboratories, managed by the Department of Energy’s Office of Science. The main campus of the laboratory is in Richland, Washington.

    PNNL scientists conduct basic and applied research and development to strengthen U.S. scientific foundations for fundamental research and innovation; prevent and counter acts of terrorism through applied research in information analysis, cyber security, and the nonproliferation of weapons of mass destruction; increase the U.S. energy capacity and reduce dependence on imported oil; and reduce the effects of human activity on the environment. PNNL has been operated by Battelle Memorial Institute since 1965.

     
  • richardmitnick 8:14 pm on September 21, 2014
    Tags: Clean Energy

    From M.I.T.: “Magnetic fields make the excitons go ’round” 


    September 21, 2014
    David L. Chandler | MIT News Office

    A major limitation in the performance of solar cells happens within the photovoltaic material itself: When photons strike the molecules of a solar cell, they transfer their energy, producing quasi-particles called excitons — an energized state of molecules. That energized state can hop from one molecule to the next until it’s transferred to electrons in a wire, which can light up a bulb or turn a motor.

    But as the excitons hop through the material, they are prone to getting stuck in minuscule defects, or traps — causing them to release their energy as wasted light.

    Now a team of researchers at MIT and Harvard University has found a way of rendering excitons immune to these traps, possibly improving photovoltaic devices’ efficiency. The work is described in a paper in the journal Nature Materials.

    Their approach is based on recent research on exotic electronic states known as topological insulators, in which the bulk of a material is an electrical insulator — that is, it does not allow electrons to move freely — while its surface is a good conductor.

    The MIT-Harvard team used this underlying principle, called topological protection, but applied it to excitons instead of electrons, explains lead author Joel Yuen, a postdoc in MIT’s Center for Excitonics, part of the Research Laboratory of Electronics. Topological protection, he says, “has been a very popular idea in the physics and materials communities in the last few years,” and has been successfully applied to both electronic and photonic materials.

    Moving on the surface

    Topological excitons would move only at the surface of a material, Yuen explains, with the direction of their motion determined by the direction of an applied magnetic field. In that respect, their behavior is similar to that of topological electrons or photons.

    In its theoretical analysis, the team studied the behavior of excitons in an organic material, a porphyrin thin film, and determined that their motion through the material would be immune to the kind of defects that tend to trap excitons in conventional solar cells.

    The choice of porphyrin for this analysis was based on the fact that it is a well-known and widely studied family of materials, says co-author Semion Saikin, a postdoc at Harvard and an affiliate of the Center for Excitonics. The next step, he says, will be to extend the analysis to other kinds of materials.

    Structure of porphine, the simplest porphyrin

    While the work so far has been theoretical, experimentalists are eager to pursue the concept. Ultimately, this approach could lead to novel circuits that are similar to electronic devices but based on controlling the flow of excitons rather than electrons, Yuen says. “If there are ever excitonic circuits,” he says, “this could be the mechanism” that governs their functioning. But the likely first application of the work would be in creating solar cells that are less vulnerable to the trapping of excitons.

    Eric Bittner, a professor of chemistry at the University of Houston who was not associated with this work, says, “The work is interesting on both the fundamental and practical levels. On the fundamental side, it is intriguing that one may be able to create excitonic materials with topological properties. This opens a new avenue for both theoretical and experimental work. … On the practical side, the interesting properties of these materials and the fact that we’re talking about pretty simple starting components — porphyrin thin films — makes them novel materials for new devices.”

    The work received support from the U.S. Department of Energy and the Defense Threat Reduction Agency. Norman Yao, a graduate student at Harvard, was also a co-author.

    See the full article here.

     
  • richardmitnick 7:53 pm on September 21, 2014
    Tags: Clean Energy

    From M.I.T.: “New formulation leads to improved liquid battery” 


    September 21, 2014
    David L. Chandler | MIT News Office

    Cheaper, longer-lasting materials could enable batteries that make wind and solar energy more competitive.

    Researchers at MIT have improved a proposed liquid battery system that could enable renewable energy sources to compete with conventional power plants.

    Donald Sadoway and colleagues have already started a company to produce electrical-grid-scale liquid batteries, whose layers of molten material automatically separate due to their differing densities. But the new formula — published in the journal Nature by Sadoway, former postdocs Kangli Wang and Kai Jiang, and seven others — substitutes different metals for the molten layers used in a battery previously developed by the team.

    Sadoway, the John F. Elliott Professor of Materials Chemistry, says the new formula allows the battery to work at a temperature more than 200 degrees Celsius lower than the previous formulation. In addition to the lower operating temperature, which should simplify the battery’s design and extend its working life, the new formulation will be less expensive to make, he says.

    The battery uses two layers of molten metal, separated by a layer of molten salt that acts as the battery’s electrolyte (the layer that charged particles pass through as the battery is charged or discharged). Because each of the three materials has a different density, they naturally separate into layers, like oil floating on water.

    The original system, using magnesium for one of the battery’s electrodes and antimony for the other, required an operating temperature of 700 C. But with the new formulation, with one electrode made of lithium and the other a mixture of lead and antimony, the battery can operate at temperatures of 450 to 500 C.

    Extensive testing has shown that even after 10 years of daily charging and discharging, the system should retain about 85 percent of its initial efficiency — a key factor in making such a technology an attractive investment for electric utilities.
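
    As a rough illustration of what that figure implies per cycle (our arithmetic, assuming one charge-discharge cycle per day and uniformly compounding fade):

    ```python
    import math

    # 85% retention after ~3,650 daily cycles implies a tiny per-cycle fade
    # (illustrative arithmetic, assuming the fade compounds uniformly).
    cycles = 10 * 365
    per_cycle_retention = math.exp(math.log(0.85) / cycles)
    fade_percent = (1 - per_cycle_retention) * 100
    print(f"retention per cycle: {per_cycle_retention:.6f}")  # ~0.999955
    print(f"fade per cycle: {fade_percent:.4f}%")             # ~0.0045%
    ```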

    Currently, the only widely used system for utility-scale storage of electricity is pumped hydro, in which water is pumped uphill to a storage reservoir when excess power is available, and then flows back down through a turbine to generate power when it is needed. Such systems can be used to match the intermittent production of power from irregular sources, such as wind and solar power, with variations in demand. Because of inevitable losses from the friction in pumps and turbines, such systems return about 70 percent of the power that is put into them (which is called the “round-trip efficiency”).
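
    In symbols, the round-trip efficiency is simply the ratio of energy recovered to energy stored:

    \[ \eta_{\text{round-trip}} = \frac{E_{\text{out}}}{E_{\text{in}}} \approx 0.70 \ \text{for pumped hydro} \]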

    Sadoway says his team’s new liquid-battery system can already deliver the same 70 percent efficiency, and with further refinements may be able to do better. And unlike pumped hydro systems — which are only feasible in locations with sufficient water and an available hillside — the liquid batteries could be built virtually anywhere, and at virtually any size. “The fact that we don’t need a mountain, and we don’t need lots of water, could give us a decisive advantage,” Sadoway says.

    The biggest surprise for the researchers was that the antimony-lead electrode performed so well. They found that while antimony could produce a high operating voltage, and lead gave a low melting point, a mixture of the two combined both advantages, with a voltage as high as antimony alone, and a melting point between that of the two constituents — contrary to expectations that lowering the melting point would come at the expense of also reducing the voltage.

    “We hoped [the characteristics of the two metals] would be nonlinear,” Sadoway says — that is, that the operating voltage would not end up halfway between that of the two individual metals. “They proved to be [nonlinear], but beyond our imagination. There was no decline in the voltage. That was a stunner for us.”
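
    A toy comparison of the naive linear mixing rule against the reported behavior; the electrode voltages below are hypothetical placeholders, not values from the paper:

    ```python
    # Toy comparison: linear mixing rule vs. the reported behavior.
    # V_SB and V_PB are hypothetical placeholder voltages, not measured values.
    V_SB, V_PB = 0.9, 0.4   # volts vs. the lithium electrode (illustrative)
    x_sb = 0.5              # antimony fraction in the antimony-lead mixture

    v_linear = x_sb * V_SB + (1 - x_sb) * V_PB  # naive expectation: halfway
    v_reported = V_SB                           # observed: as high as Sb alone
    print(f"linear guess: {v_linear:.2f} V vs. reported: ~{v_reported:.2f} V")
    ```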

    Not only did that provide significantly improved materials for the group’s battery system, but it opens up whole new avenues of research, Sadoway says. Going forward, the team will continue to search for other combinations of metals that might provide even lower-temperature, lower-cost, and higher-performance systems. “Now we understand that liquid metals bond in ways that we didn’t understand before,” he says.

    With this fortuitous finding, Sadoway says, “Nature tapped us on the shoulder and said, ‘You know, there’s a better way!’” And because there has been little commercial interest in exploring the properties and potential uses of liquid metals and alloys of the type that are most attractive as electrodes for liquid metal batteries, he says, “I think there’s still room for major discoveries in this field.”

    Robert Metcalfe, professor of innovation at the University of Texas at Austin, who was not involved in this work, says, “The Internet gave us cheap and clean connectivity using many kinds of digital storage. Similarly, we will solve cheap and clean energy with many kinds of storage. Energy storage will absorb the increasing randomness of energy supply and demand, shaving peaks, increasing availability, improving efficiency, lowering costs.”

    Metcalfe adds that Sadoway’s approach to storage using liquid metals “is very promising.”

    The research was supported by the U.S. Department of Energy’s Advanced Research Projects Agency-Energy and by French energy company Total.

    See the full article here.

     
  • richardmitnick 6:11 am on July 22, 2014
    Tags: Clean Energy

    From ESO: “Solar Farm to be Installed at La Silla” 


    21 July 2014
    Roberto Tamai
    E-ELT Programme Manager
    Garching bei München, Germany
    Tel: +49 89 3200 6367
    Email: rtamai@eso.org

    Lars Lindberg Christensen
    Head of ESO ePOD
    ESO ePOD, Garching, Germany
    Tel: +49 89 3200 6761
    Cellular: +49 173 3872 621
    E-mail: lars@eso.org

    As part of its green initiatives, ESO has signed an agreement with the Chilean company Astronomy and Energy (a subsidiary of the Spanish LKS Group) to install a solar farm at the La Silla Observatory. ESO has been working on green solutions for supplying energy to its sites for several years, and these are now coming to fruition. Looking to the future, renewables are considered vital to satisfy energy needs in a sustainable manner.

    ESO’s ambitious programme is focused on achieving the highest quality of astronomical research. This requires the design, construction and operation of the most powerful ground-based observing facilities in the world. However, the operations at ESO’s observatories present significant challenges in terms of their energy usage.

    Despite the abundance of sunshine at the ESO sites, it has not been possible up to now to make efficient use of this natural source of power. Astronomy and Energy will supply a means of effectively exploiting solar energy using crystalline photovoltaic modules (solar panels), which will be installed at La Silla.

    The installation will cover an area of more than 100 000 square metres, with the aim of being ready to supply the site by the end of the year.
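
    For a sense of scale, a rough capacity estimate can be made from the quoted area; every parameter below is our assumption, since the article gives only the area:

    ```python
    # Rough peak-capacity estimate from the quoted area alone.
    # Every parameter below is an assumption for illustration.
    AREA_M2 = 100_000      # more than 100,000 m^2, per the article
    PACKING = 0.5          # assumed fraction of ground covered by modules
    EFFICIENCY = 0.15      # assumed crystalline-module efficiency
    PEAK_SUN_W_M2 = 1_000  # standard peak irradiance

    peak_mw = AREA_M2 * PACKING * EFFICIENCY * PEAK_SUN_W_M2 / 1e6
    print(f"~{peak_mw:.1f} MW peak under these assumptions")  # ~7.5 MW
    ```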

    The global landscape for energy has changed considerably over the last 20 years. As energy prices increase and vary unpredictably, ESO has been keen to look into ways to control its energy costs and also limit its ecological impact. The organisation has already managed to successfully reduce its power consumption at La Silla, and despite the additions of the VISTA and VST survey telescopes, power use has remained stable over the past few years at the Paranal Observatory, site of the VLT.

    The much-improved efficiency of solar cells has meant they have become a viable alternative to exploit solar energy. Solar cells of the latest generation are considered to be very reliable and almost maintenance-free, characteristics that contribute to a high availability of electric power, as required at astronomical observatories.

    As ESO looks to the future, it is seeking further sustainable energy sources compatible with all its sites, including Cerro Armazones — close to Cerro Paranal and the site of the future European Extremely Large Telescope (E-ELT). This goal will be pursued not only by installing primary sources of renewable energy, as at La Silla, but also by realising connections to the Chilean interconnected power systems, where non-conventional renewable energy sources are going to constitute an ever-growing share of the power and energy mixes.

    The installation of a solar farm at La Silla is one of a series of initiatives ESO is taking to tackle the environmental impacts of its operations, as can be viewed here. Green energy is strongly supported by the Chilean government, which aims to increase the Chilean green energy share to 25% in 2020, with a possible target of 30% by 2030.

    See the full article, with note, here.

    ESO, European Southern Observatory, builds and operates a suite of the world’s most advanced ground-based astronomical telescopes.


     
  • richardmitnick 12:19 pm on July 15, 2014
    Tags: Clean Energy

    From PPPL: “Experts assemble at PPPL to discuss mitigation of tokamak disruptions” 


    July 15, 2014
    John Greenwald

    Some 35 physicists from around the world gathered at PPPL last week for the second annual Laboratory-led workshop on improving ways to predict and mitigate disruptions in tokamaks. Avoiding or mitigating such disruptions, which occur when heat or electric current is suddenly reduced during fusion experiments, will be crucial for ITER, the international experiment under construction in France to demonstrate the feasibility of fusion power.

    Amitava Bhattacharjee, left, and John Mandrekas, a program manager in the U.S. Department of Energy’s Office of Fusion Energy Sciences. (Photo by Elle Starkman/Princeton Office of Communications)

    Presentations at the three-day session, titled “Theory and Simulation of Disruptions Workshop,” focused on the development of models that can be validated by experiment. “This is a really urgent task for ITER,” said Amitava Bhattacharjee, who heads the PPPL Theory Department and organized the workshop. The United States is responsible for designing disruption-mitigation systems for ITER, he noted, and faces a deadline of 2017.

    Speakers at the workshop included theorists and experimentalists from the ITER Organization, PPPL, General Atomics and several U.S. universities, and from fusion facilities in the United Kingdom, China, Italy and India. Topics ranged from coping with the currents and forces that strike tokamak walls to suppressing runaway electrons that can be unleashed during experiments.

    Bringing together theorists and experimentalists is essential for developing solutions to disruptions, Bhattacharjee said. “I already see that major fusion facilities in the United States, as well as international tokamaks, are embarking on experiments that are ideal validation tools for theory and simulation,” he said. “And it is very important that theory and simulation ideas that can be validated with experimental results are presented and discussed in detail in focused workshops such as this one.”

    See the full article here.

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University.


     
  • richardmitnick 4:47 am on July 14, 2014
    Tags: Clean Energy

    From NASA/ESA Hubble: “The oldest cluster in its cloud”

    14 July 2014
    No Writer Credit

    This image shows NGC 121, a globular cluster in the constellation of Tucana (The Toucan). Globular clusters are big balls of old stars that orbit the centres of their galaxies like satellites — the Milky Way, for example, has around 150.

    Credit: NASA/ESA Hubble. Acknowledgement: Stefano Campani

    NGC 121 belongs to one of our neighbouring galaxies, the Small Magellanic Cloud (SMC). It was discovered in 1835 by English astronomer John Herschel, and in recent years it has been studied in detail by astronomers wishing to learn more about how stars form and evolve.

    Stars do not live forever — they develop differently depending on their original mass. In many clusters, all the stars seem to have formed at the same time, although in others we see distinct populations of stars that are different ages. By studying old stellar populations in globular clusters, astronomers can effectively use them as tracers for the stellar population of their host galaxies. With an object like NGC 121, which lies close to the Milky Way, Hubble is able to resolve individual stars and get a very detailed insight.

    NGC 121 is around 10 billion years old, making it the oldest cluster in its galaxy; all of the SMC’s other globular clusters are 8 billion years old or younger. However, NGC 121 is still several billions of years younger than its counterparts in the Milky Way and in other nearby galaxies like the Large Magellanic Cloud. The reason for this age gap is not completely clear, but it could indicate that cluster formation was initially delayed for some reason in the SMC, or that NGC 121 is the sole survivor of an older group of star clusters.

    This image was taken using Hubble’s Advanced Camera for Surveys (ACS). A version of this image was submitted to the Hubble’s Hidden Treasures image processing competition by contestant Stefano Campani.

    See the full article here.

    The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA’s Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute (STScI), a free-standing science center located on the campus of The Johns Hopkins University and operated by the Association of Universities for Research in Astronomy (AURA) for NASA, conducts Hubble science operations.


     
  • richardmitnick 8:25 pm on July 13, 2014
    Tags: Clean Energy, Clean Energy Project

    From the Clean Energy Project: “Hello everybody… We’ve been overdue…”

    Hello everybody,

    We’ve been overdue for another progress report on the Clean Energy Project for some time, so here it finally comes. We hope you’ll enjoy this summary of the things that have happened since our last full report in April.

    Let’s start with the news on the CEP team again: Last time we reported that Alan was promoted to Full Professor and Sule got a job as an Assistant Professor in Ankara (Turkey). In the meantime, Johannes also landed an Assistant Professorship in the Department of Chemical and Biological Engineering at the University at Buffalo, The State University of New York, with an affiliation to the New York State Center of Excellence in Materials Informatics.
    http://www.cbe.buffalo.edu/people/full_time/j_hachmann.php

    Johannes will, however, stay involved in the Clean Energy Project and has already recruited students at Buffalo who will strengthen the CEP research efforts. Laszlo is gearing up to go out into the world as well and he will start graduate school next summer.
    http://aspuru.chem.harvard.edu/laszlo-seress/

    To compensate for these losses, Ed Pyzer-Knapp from the Day Group at the University of Cambridge (UK) will join the CEP team in January 2014.
    http://www-day.ch.cam.ac.uk/epk.html

    Prof. Carlos Amador-Bedolla from UNAM in Mexico, who was active in the project a few years ago, has also started to be more active again.
    http://www.quimica.unam.mx/ficha_investigador.php?ID=77&tipo=2

    Continuity is always a big concern in a large-scale project such as the CEP, but we hope that we’ll manage the transition without too much trouble. Having the additional project branch in Buffalo will hopefully put our work on a broader foundation in the long run.

    Our work in the CEP was again recognized, e.g., by winning the 2013 Computerworld Data+ Award and the RSC Scholarship Award for Scientific Excellence of the ACS Division of Chemical Information for Johannes. CEP work has been presented at many conferences, webcasts, seminars, and talks over the last half year. It is by now a fairly well-known effort in the materials science community and it has taken its place amongst the other big virtual screening projects such as the Materials Project, the Computational Materials Repository, and AFLOWLIB.

    Now to the progress on the research front: After a number of the other WCG projects concluded, the CEP has seen a dramatic increase in computing time and returned results since the spring. These days we average between 24 and 28 CPU-years of donated computation per day (an increase of about 50% over our previous averages), and we have passed the mark of 21,000 years of harvested CPU time. By now we have performed over 200 million density functional theory calculations on over 3 million compounds, accumulating well over half a petabyte of data. We are currently in the process of expanding our storage capacity towards the 1PB mark by building Jabba 7 and 8. Thanks again to HGST for their generous sponsorship and Harvard FAS Research Computing for their support.
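
    Those totals are self-consistent, as a quick check shows (illustrative arithmetic only):

    ```python
    # Quick consistency check of the reported totals (illustrative only).
    cpu_years = 21_000            # harvested CPU time reported so far
    calculations = 200_000_000    # density functional theory calculations

    cpu_hours = cpu_years * 365.25 * 24
    print(f"~{cpu_hours / calculations:.2f} CPU-hours per calculation")  # ~0.92
    ```

    That is, just under an hour of donated CPU time per DFT calculation on average.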

    Over the summer we finally released the CEPDB on our new platform http://www.molecularspace.org. The launch made quite a splash. We received a lot of positive feedback in the news and from the community, and it was also nicely synchronized with the two-year anniversary of the Materials Genome Initiative. We used the CEPDB release to also launch our new project webpage.

    Our latest research results and data analysis were published in “Energy and Environmental Science”, and you can read all the details in this paper.

    There is still a lot of exciting research waiting to be done, and we are looking forward to tackling all this work together with you. Thanks so much for all your generous support, hard work, and enthusiasm – you guys and gals are the best! CEP would not be possible without you. CRUNCH ON!

    Best wishes from

    Your Harvard Clean Energy Project team

    The Harvard Clean Energy Project Database contains data and analyses on 2.3 million candidate compounds for organic photovoltaics. It is an open resource designed to give researchers in the field of organic electronics access to promising leads for new material developments.

    Would you like to help find new compounds for organic solar cells? By participating in the Harvard Clean Energy Project you can donate idle computer time on your PC for the discovery and design of new materials. Visit WorldCommunityGrid to get the BOINC software on which the project runs.


     
  • richardmitnick 7:43 am on July 12, 2014
    Tags: Clean Energy

    From NERSC: “Hot Plasma Partial to Bootstrap Current” 

    July 9, 2014
    Kathy Kincade, +1 510 495 2124, kkincade@lbl.gov

    Supercomputers at NERSC are helping plasma physicists “bootstrap” a potentially more affordable and sustainable fusion reaction. If successful, fusion reactors could provide almost limitless clean energy.

    In a fusion reaction, energy is released when two hydrogen isotopes are fused together to form a heavier nucleus, helium. To achieve high enough reaction rates to make fusion a useful energy source, hydrogen contained inside the reactor core must be heated to extremely high temperatures—more than 100 million degrees Celsius—which transforms it into hot plasma. Another key requirement of this process is magnetic confinement, the use of strong magnetic fields to keep the plasma from touching the vessel walls (and cooling) and compressing the plasma to fuse the isotopes.
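
    For reference, the reaction most commonly pursued, and the one planned for reactors like ITER, is deuterium-tritium fusion:

    \[ \mathrm{^{2}_{1}D} + \mathrm{^{3}_{1}T} \;\rightarrow\; \mathrm{^{4}_{2}He}\ (3.5\ \mathrm{MeV}) + \mathrm{n}\ (14.1\ \mathrm{MeV}) \]

    About 80 percent of the 17.6 MeV released is carried off by the neutron; the rest stays with the helium nucleus and helps keep the plasma hot.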

    A calculation of the self-generated plasma current in the W7-X reactor, performed using the SFINCS code on Edison. The colors represent the amount of electric current along the magnetic field, and the black lines show magnetic field lines. Image: Matt Landreman

    So there’s a lot going on inside the plasma as it heats up, not all of it good. Driven by electric and magnetic forces, charged particles swirl around and collide into one another, and the central temperature and density are constantly evolving. In addition, plasma instabilities disrupt the reactor’s ability to produce sustainable energy by increasing the rate of heat loss.

    Fortunately, research has shown that other, more beneficial forces are also at play within the plasma. For example, if the pressure of the plasma varies across the radius of the vessel, a self-generated current will spontaneously arise within the plasma—a phenomenon known as the “bootstrap” current.
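
    A standard textbook estimate (not taken from these papers) makes the connection explicit: in a large-aspect-ratio tokamak the bootstrap current density scales with the radial pressure gradient,

    \[ j_{\mathrm{bs}} \sim -\frac{\sqrt{\epsilon}}{B_{\theta}}\,\frac{dp}{dr} \]

    where \(\epsilon = r/R\) is the inverse aspect ratio and \(B_{\theta}\) is the poloidal magnetic field. The steeper the pressure profile, the stronger the self-generated current.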

    Now an international team of researchers has used NERSC supercomputers to further study the bootstrap current, which could help reduce or eliminate the need for an external current driver and pave the way to a more cost-effective fusion reactor. Matt Landreman, research associate at the University of Maryland’s Institute for Research in Electronics and Applied Physics, collaborated with two research groups to develop and run new codes at NERSC that more accurately calculate this self-generated current. Their findings appear in Plasma Physics and Controlled Fusion and Physics of Plasmas.

    “The codes in these two papers are looking at the average plasma flow and average rate at which particles escape from the confinement, and it turns out that plasma in a curved magnetic field will generate some average electric current on its own,” Landreman said. “Even if you aren’t trying to drive a current, if you take the hydrogen and heat it up and confine it in a curved magnetic field, it creates this current that turns out to be very important. If we ever want to make a tokamak fusion plant down the road, for economic reasons the plasma will have to supply a lot of its own current.”

    One of the unique things about plasmas is that there is often a complicated interaction between where particles are in space and their velocity, Landreman added.

    “To understand some of their interesting and complex behaviors, we have to solve an equation that takes into account both the position and the velocity of the particle,” he said. “That is the core of what these computations are designed to do.”
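
    The generic equation of this type is a kinetic equation for the distribution function \(f(\mathbf{x}, \mathbf{v}, t)\), with collisions entering through an operator \(C(f)\):

    \[ \frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla f + \frac{q}{m}\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\nabla_{\mathbf{v}} f = C(f) \]

    Codes such as PERFECT and SFINCS solve reduced (drift-kinetic) forms of an equation like this, but the essential feature is the one Landreman describes: the unknown lives in both position and velocity space.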

    Evolving Plasma Behavior

    Interior of the Alcator C-Mod tokamak at the Massachusetts Institute of Technology’s Plasma Science and Fusion Center. Image: Mike Garrett

    The Plasma Physics and Controlled Fusion paper focuses on plasma behavior in tokamak reactors using PERFECT, a code Landreman wrote. Tokamak reactors, first introduced in the 1950s, are today considered by many to be the best candidate for producing controlled thermonuclear fusion power. A tokamak features a torus (doughnut-shaped) vessel and a combination of external magnets and a current driven in the plasma required to create a stable confinement system.

    In particular, PERFECT was designed to examine the plasma edge, a region of the tokamak where “lots of interesting things happen,” Landreman said. Before PERFECT, other codes were used to predict the flows and bootstrap current in the central plasma and solve equations that assume the gradients of density and temperature are gradual.

    “The problem with the plasma edge is that the gradients are very strong, so these previous codes are not necessarily valid in the edge, where we must solve a more complicated equation,” he said. “PERFECT was built to solve such an equation.”

    For example, in most of the inner part of the tokamak there is a fairly gradual gradient of the density and temperature. “But at the edge there is a fairly big jump in density and temperature—what people call the edge pedestal. What is different about PERFECT is that we are trying to account for some of this very strong radial variation,” Landreman explained.

    These findings are important because researchers are concerned that the bootstrap current may affect edge stability. PERFECT is also used to calculate plasma flow, which also may affect edge stability.

    “My co-authors had previously done some analytic calculations to predict how the plasma flow and heat flux would change in the pedestal region compared to places where radial gradients aren’t as strong,” Landreman said. “We used PERFECT to test these calculations with a brute force numerical calculation at NERSC and found that they agreed really well. The analytic calculations provide insight into how the plasma flow and heat flux will be affected by these strong radial gradients.”

    From Tokamak to Stellarator

    In the Physics of Plasmas study, the researchers used a second code, SFINCS, to focus on related calculations in a different kind of confinement concept: a stellarator. In a stellarator the magnetic field is not axisymmetric, meaning that it looks different as you circle around the donut hole. As Landreman put it, “A tokamak is to a stellarator as a standard donut is to a cruller.”

    HSX stellarator

    First introduced in the 1950s, stellarators have played a central role in the German and Japanese fusion programs and were popular in the U.S. until the 1970s when many fusion scientists began favoring the tokamak design. In recent years several new stellarators have appeared, including the Wendelstein 7-X (W7-X) in Germany, the Helically Symmetric Experiment in the U.S. and the Large Helical Device in Japan. Two of Landreman’s coauthors on the Physics of Plasmas paper are physicists from the Max Planck Institute for Plasma Physics, where W7-X is being constructed.

    “In the W7-X design, the amount of plasma current has a strong effect on where the heat is exhausted to the wall,” Landreman explained. “So at Max Planck they are very concerned about exactly how much self-generated current there will be when they turn on their machine. Based on a prediction for this current, a set of components called the ‘divertor’ was located inside the vacuum vessel to accept the large heat exhaust. But if the plasma makes more current than expected, the heat will come out in a different location, and you don’t want to be surprised.”

    Their concerns stemmed from the fact that the previous code was developed when computers were too slow to solve the “real” 4D equation, he added.

    “The previous code made an approximation that you could basically ignore all the dynamics in one of the dimensions (particle speed), thereby reducing 4D to 3D,” Landreman said. “Now that computers are faster, we can test how good this approximation was. And what we found was that basically the old code was pretty darn accurate and that the predictions made for this bootstrap current are about right.”

    The calculations for both studies were run on Hopper and Edison using some additional NERSC resources, Landreman noted.

    “I really like running on NERSC systems because if you have a problem, you ask a consultant and they get back to you quickly,” Landreman said. “Also knowing that all the software is up to date and it works. I’ve been using NX lately to speed up the graphics. It’s great because you can plot results quickly without having to download any data files to your local computer.”

    See the full article here.

    The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

    More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

    The NERSC Hopper system, a Cray XE6 with a peak theoretical performance of 1.29 Petaflop/s. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made more user-friendly.

    Historical photo of Grace Hopper courtesy of the Hagley Museum & Library, PC20100423_201. (Design: Caitlin Youngquist/LBNL; Photo: Roy Kaltschmidt/LBNL)

    NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as material sciences, physics, chemistry and astronomy, well-equipped to help researchers apply computational resources to specialized science problems.


     
  • richardmitnick 4:14 am on June 12, 2014
    Tags: Clean Energy

    From Berkeley Lab: “Manipulating and Detecting Ultrahigh Frequency Sound Waves” 

    June 11, 2014
    Lynn Yarris

    Gold plasmonic nanostructures shaped like Swiss crosses can convert laser light into ultrahigh frequency (10 GHz) sound waves.

    An advance has been achieved towards next generation ultrasonic imaging with potentially 1,000 times higher resolution than today’s medical ultrasounds. Researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) have demonstrated a technique for producing, detecting and controlling ultrahigh frequency sound waves at the nanometer scale.

    Through a combination of subpicosecond laser pulses and unique nanostructures, a team led by Xiang Zhang, a faculty scientist with Berkeley Lab’s Materials Sciences Division, produced acoustic phonons – quasi-particles of vibrational energy that move through an atomic lattice as sound waves – at a frequency of 10 gigahertz (10 billion cycles per second). By comparison, medical ultrasounds today typically reach a frequency of only about 20 megahertz (20 million cycles per second). The 10 GHz phonons not only promise unprecedented resolution for acoustic imaging, they also can be used to “see” subsurface structures in nanoscale systems that optical and electron microscopes cannot.

    “We have demonstrated optical coherent manipulation and detection of the acoustic phonons in nanostructures that offer new possibilities in the development of coherent phonon sources and nano-phononic devices for chemical sensing, thermal energy management and communications,” says Zhang, who also holds the Ernest S. Kuh Endowed Chair Professor at the University of California (UC) Berkeley. In addition, he directs the National Science Foundation’s Nano-scale Science and Engineering Center, and is a member of the Kavli Energy NanoSciences Institute at Berkeley.

    Zhang is the corresponding author of a paper describing this research in Nature Communications. The paper is titled Ultrafast Acousto-plasmonic Control and Sensing in Complex Nanostructures. The lead authors are Kevin O’Brien and Norberto Daniel Lanzillotti-Kimura, members of Zhang’s research group. Other co-authors are Junsuk Rho, Haim Suchowski and Xiaobo Yin.

    Acoustic imaging offers certain advantages over optical imaging. The ability of sound waves to safely pass through biological tissue has made sonograms a popular medical diagnostic tool. Sound waves have also become a valuable tool for the non-destructive testing of materials. In recent years, ultrahigh frequency sound waves have been the subject of intense scientific study. Phonons at GHz frequencies can pass through materials that are opaque to photons, the particles that carry light. Ultrahigh frequency phonons also travel at the small wavelengths that yield a sharper resolution in ultrasound imaging.
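
    The resolution argument follows from the relation wavelength = sound speed / frequency. A minimal sketch, using an assumed representative sound speed for soft tissue:

    ```python
    # Acoustic wavelength scales as (sound speed) / (frequency).
    # The soft-tissue sound speed is an assumed representative value.
    V_TISSUE = 1_540.0  # m/s, assumed

    for freq_hz, label in [(20e6, "20 MHz medical ultrasound"),
                           (10e9, "10 GHz phonons")]:
        wavelength_um = V_TISSUE / freq_hz * 1e6
        print(f"{label}: ~{wavelength_um:.3f} micrometres")
    # 20 MHz -> ~77 um; 10 GHz -> ~0.15 um, a 500-fold shorter wavelength
    ```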

    Xiang Zhang, Haim Suchowski and Kevin O’Brien were part of the team that produced, detected and controlled ultrahigh frequency sound waves at the nanometer scale. (Photo by Roy Kaltschmidt)

    The biggest challenge has been to find effective ways of generating, detecting and controlling ultrahigh frequency sound waves. Zhang, O’Brien, Lanzillotti-Kimura and their colleagues were able to meet this challenge through the design of nanostructures that support multiple modes of both phonons and plasmons. A plasmon is a wave that rolls through the conduction electrons on the surface of a metal.

    “Through the interplay between phonons and localized surface plasmons, we can detect the spatial properties of complex phonon modes below the optical wavelength,” O’Brien says. “This allows us to detect complex nanomechanical dynamics using polarization-resolved transient absorption spectroscopy.”

    Plasmons can be used to confine light in subwavelength dimensions and are considered to be good candidates for manipulating nanoscale mechanical motion because of their large absorption cross-sections, subwavelength field localization, and high sensitivity to geometry and refractive index changes.

    “To generate 10 GHz acoustic frequencies in our plasmonic nanostructures we use a technique known as picosecond ultrasonics,” O’Brien says. “Sub-picosecond pulses of laser light excite plasmons which dissipate their energy as heat. The nanostructure rapidly expands and generates coherent acoustic phonons. This process transduces photons from the laser into coherent phonons.”

    To detect these coherent phonons, a second laser pulse is used to excite probe surface plasmons. As these plasmons move across the surface of the nanostructure, their resonance frequency shifts as the nanostructure geometry becomes distorted by the phonons. This enables the researchers to optically detect mechanical motion on the nanoscale.

    “We’re able to sense ultrafast motion along the different axes of our nanostructures simply by rotating the polarization of the probe pulse,” says Lanzillotti-Kimura. “Since we’ve shown that the polarization of the pump pulse doesn’t make a difference in our nanostructures due to hot electron diffusion, we can tailor the phonon modes which are excited by designing the symmetry of the nanostructure.”

    The plasmonic nanostructures that Zhang, O’Brien, Lanzillotti-Kimura and their colleagues designed are made of gold and shaped like a Swiss-cross. Each cross is 35 nanometers thick with horizontal and vertical arm lengths of 120 and 90 nanometers, respectively. When the two arms oscillate in phase, the crosses generate symmetric phonons. When the arms oscillate out of phase, anti-symmetric phonons are generated.
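
    As an order-of-magnitude check that structures of this size ring at roughly 10 GHz, one can treat each arm as a resonator with fundamental frequency f ≈ v/(2L); the sound speed is an assumed textbook value for gold, while the arm lengths come from the article:

    ```python
    # Order-of-magnitude check: the fundamental acoustic mode of an arm of
    # length L rings at roughly f = v / (2L). The sound speed is an assumed
    # textbook value for gold; the arm lengths come from the article.
    V_GOLD = 3_240.0  # m/s, approximate longitudinal sound speed in gold

    for arm_nm in (120, 90):
        f_ghz = V_GOLD / (2 * arm_nm * 1e-9) / 1e9
        print(f"{arm_nm} nm arm: ~{f_ghz:.1f} GHz")  # ~13.5 GHz and ~18.0 GHz
    ```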

    When the two arms of this Swiss-cross nanostructure oscillate in phase, symmetric phonons are produced. When the arms oscillate out of phase, anti-symmetric phonons are generated. The differences enable the detection of nanoscale motion.

    “The phase differences in the phonon modes produce an interference effect that allow us to distinguish between symmetric and anti-symmetric phonon modes using localized surface plasmons,” O’Brien says. “Being able to generate and detect phonon modes with different symmetries or spatial distributions in a structure improves our ability to detect nanoscale motion and is a step towards some potential applications of ultrahigh frequency acoustic phonons.”

    By allowing researchers to selectively excite and detect GHz mechanical motion, the Swiss-cross design of the plasmonic nanostructures provides the control and sensing capabilities needed for ultrahigh frequency acoustic imaging. For the material sciences, the acoustic vibrations can be used as nanoscale “hammers” to impose physical strains along different axes at ultrahigh frequencies. This strain can then be detected by observing the plasmonic response. Zhang and his research group are planning to use these nanoscale hammers to generate and detect ultrafast vibrations in other systems such as two-dimensional materials.

    This research was supported by the DOE Office of Science through the Energy Frontier Research Center program.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California


     