Tagged: Clean Energy

  • richardmitnick 6:11 am on July 22, 2014 Permalink | Reply
    Tags: Clean Energy

    From ESO: “Solar Farm to be Installed at La Silla” 


    European Southern Observatory

    21 July 2014
    Roberto Tamai
    E-ELT Programme Manager
    Garching bei München, Germany
    Tel: +49 89 3200 6367
    Email: rtamai@eso.org

    Lars Lindberg Christensen
    Head of ESO ePOD
    ESO ePOD, Garching, Germany
    Tel: +49 89 3200 6761
    Cellular: +49 173 3872 621
    E-mail: lars@eso.org

    As part of its green initiatives, ESO has signed an agreement with the Chilean company, Astronomy and Energy (a subsidiary of the Spanish LKS Group), to install a solar farm at the La Silla Observatory. ESO has been working on green solutions for supplying energy to its sites for several years, and these are now coming to fruition. Looking to the future, renewables are considered vital to satisfy energy needs in a sustainable manner.

    ESO at La Silla


    ESO’s ambitious programme is focused on achieving the highest quality of astronomical research. This requires the design, construction and operation of the most powerful ground-based observing facilities in the world. However, the operations at ESO’s observatories present significant challenges in terms of their energy usage.

    Despite the abundance of sunshine at the ESO sites, it has not been possible up to now to make efficient use of this natural source of power. Astronomy and Energy will supply a means of effectively exploiting solar energy using crystalline photovoltaic modules (solar panels), which will be installed at La Silla.

    The installation will cover an area of more than 100 000 square metres, with the aim of being ready to supply the site by the end of the year.
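
    The article gives only the area, but a rough back-of-the-envelope calculation conveys the scale involved. The short sketch below assumes illustrative values for module efficiency, ground coverage and site insolation; none of these figures come from ESO.

    ```python
    # Rough estimate of the La Silla solar farm's scale.  The article gives only
    # the ~100,000 m^2 area; packing fraction, module efficiency and insolation
    # below are illustrative assumptions, not ESO figures.

    area_m2 = 100_000            # installation area quoted in the article
    packing_fraction = 0.5       # assumed fraction of ground covered by modules
    module_efficiency = 0.16     # assumed crystalline-silicon module efficiency
    peak_irradiance = 1_000      # W/m^2, standard test conditions
    annual_insolation = 2_500    # kWh/m^2/yr, assumed for this high-insolation site

    peak_power_mw = area_m2 * packing_fraction * module_efficiency * peak_irradiance / 1e6
    annual_energy_gwh = area_m2 * packing_fraction * module_efficiency * annual_insolation / 1e6

    print(f"Estimated peak capacity: ~{peak_power_mw:.0f} MWp")
    print(f"Estimated annual yield:  ~{annual_energy_gwh:.0f} GWh")
    ```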

    The global landscape for energy has changed considerably over the last 20 years. As energy prices increase and vary unpredictably, ESO has been keen to look into ways to control its energy costs and also limit its ecological impact. The organisation has already managed to reduce its power consumption at La Silla, and despite the addition of the VISTA and VST survey telescopes, power use has remained stable over the past few years at the Paranal Observatory, site of the VLT.

    ESO VISTA Telescope

    ESO VST Telescope

    The much-improved efficiency of solar cells has meant they have become a viable alternative to exploit solar energy. Solar cells of the latest generation are considered to be very reliable and almost maintenance-free, characteristics that contribute to a high availability of electric power, as required at astronomical observatories.

    As ESO looks to the future, it seeks further sustainable energy sources to be compatible across all its sites, including Cerro Armazones — close to Cerro Paranal and the site of the future European Extremely Large Telescope (E-ELT). This goal will be pursued not only by installing primary sources of renewable energy, as at La Silla, but also by realising connections to the Chilean interconnected power systems, where non-conventional renewable energy sources are going to constitute an ever-growing share of the power and energy mixes.

    The installation of a solar farm at La Silla is one of a series of initiatives ESO is taking to tackle the environmental impacts of its operations, as can be viewed here. Green energy is strongly supported by the Chilean government, which aims to increase the Chilean green energy share to 25% in 2020, with a possible target of 30% by 2030.

    See the full article, with note, here.

    Visit ESO on Social Media:

    Facebook

    Twitter

    YouTube

    ESO Main

    ESO, European Southern Observatory, builds and operates a suite of the world’s most advanced ground-based astronomical telescopes.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 12:19 pm on July 15, 2014 Permalink | Reply
    Tags: Clean Energy

    From PPPL: “Experts assemble at PPPL to discuss mitigation of tokamak disruptions” 


    PPPL

    July 15, 2014
    John Greenwald

    Some 35 physicists from around the world gathered at PPPL last week for the second annual Laboratory-led workshop on improving ways to predict and mitigate disruptions in tokamaks. Avoiding or mitigating such disruptions, which occur when heat or electric current is suddenly reduced during fusion experiments, will be crucial for ITER, the international experiment under construction in France to demonstrate the feasibility of fusion power.

    Amitava Bhattacharjee, left, and John Mandrekas, a program manager in the U.S. Department of Energy’s Office of Fusion Energy Sciences. (Photo by Elle Starkman/Princeton Office of Communications)

    Tokamak at PPPL

    Presentations at the three-day session, titled “Theory and Simulation of Disruptions Workshop,” focused on the development of models that can be validated by experiment. “This is a really urgent task for ITER,” said Amitava Bhattacharjee, who heads the PPPL Theory Department and organized the workshop. The United States is responsible for designing disruption-mitigation systems for ITER, he noted, and faces a deadline of 2017.

    Speakers at the workshop included theorists and experimentalists from the ITER Organization, PPPL, General Atomics and several U.S. universities, and from fusion facilities in the United Kingdom, China, Italy and India. Topics ranged from coping with the currents and forces that strike tokamak walls to suppressing runaway electrons that can be unleashed during experiments.

    Bringing together theorists and experimentalists is essential for developing solutions to disruptions, Bhattacharjee said. “I already see that major fusion facilities in the United States, as well as international tokamaks, are embarking on experiments that are ideal validation tools for theory and simulation,” he said. “And it is very important that theory and simulation ideas that can be validated with experimental results are presented and discussed in detail in focused workshops such as this one.”

    See the full article here.

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University.


    ScienceSprings is powered by Maingear computers

     
  • richardmitnick 4:47 am on July 14, 2014 Permalink | Reply
    Tags: Clean Energy

    From NASA/ESA Hubble: “The oldest cluster in its cloud” 

    NASA/ESA Hubble Space Telescope

    14 July 2014
    No Writer Credit

    This image shows NGC 121, a globular cluster in the constellation of Tucana (The Toucan). Globular clusters are big balls of old stars that orbit the centres of their galaxies like satellites — the Milky Way, for example, has around 150.

    NGC 121. Credit: NASA/ESA Hubble; Acknowledgement: Stefano Campani

    NASA/ESA Hubble Advanced Camera for Surveys (ACS)

    NGC 121 belongs to one of our neighbouring galaxies, the Small Magellanic Cloud (SMC). It was discovered in 1835 by English astronomer John Herschel, and in recent years it has been studied in detail by astronomers wishing to learn more about how stars form and evolve.

    Stars do not live forever — they develop differently depending on their original mass. In many clusters, all the stars seem to have formed at the same time, although in others we see distinct populations of stars that are different ages. By studying old stellar populations in globular clusters, astronomers can effectively use them as tracers for the stellar population of their host galaxies. With an object like NGC 121, which lies close to the Milky Way, Hubble is able to resolve individual stars and get a very detailed insight.

    NGC 121 is around 10 billion years old, making it the oldest cluster in its galaxy; all of the SMC’s other globular clusters are 8 billion years old or younger. However, NGC 121 is still several billion years younger than its counterparts in the Milky Way and in other nearby galaxies like the Large Magellanic Cloud. The reason for this age gap is not completely clear, but it could indicate that cluster formation was initially delayed for some reason in the SMC, or that NGC 121 is the sole survivor of an older group of star clusters.

    This image was taken using Hubble’s Advanced Camera for Surveys (ACS). A version of this image was submitted to the Hubble’s Hidden Treasures image processing competition by contestant Stefano Campani.

    See the full article here.

    The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA’s Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute (STScI), a free-standing science center located on the campus of The Johns Hopkins University and operated by the Association of Universities for Research in Astronomy (AURA) for NASA, conducts Hubble science operations.



    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 8:25 pm on July 13, 2014 Permalink | Reply
    Tags: Clean Energy, Clean Energy Project

    Clean Energy Project: “Hello everybody, We’ve been overdue…” 

    Clean Energy

    Clean Energy Project

    Hello everybody,

    We’ve been overdue for another progress report on the Clean Energy Project for some time, so here it finally comes. We hope you’ll enjoy this summary of the things that have happened since our last full report in April.

    Let’s start with the news on the CEP team again: Last time we reported that Alan was promoted to Full Professor and Sule got a job as an Assistant Professor in Ankara (Turkey). In the meantime, Johannes also landed an Assistant Professorship in the Department of Chemical and Biological Engineering at the University at Buffalo, The State University of New York, with an affiliation to the New York State Center of Excellence in Materials Informatics.
    http://www.cbe.buffalo.edu/people/full_time/j_hachmann.php

    Johannes will, however, stay involved in the Clean Energy Project and has already recruited students at Buffalo who will strengthen the CEP research efforts. Laszlo is gearing up to go out into the world as well and he will start graduate school next summer.
    http://aspuru.chem.harvard.edu/laszlo-seress/

    To compensate for these losses, Ed Pyzer-Knapp from the Day Group at the University of Cambridge (UK) will join the CEP team in January 2014.
    http://www-day.ch.cam.ac.uk/epk.html

    Prof. Carlos Amador-Bedolla from UNAM in Mexico, who was active in the project a few years ago, has also become more active again.
    http://www.quimica.unam.mx/ficha_investigador.php?ID=77&tipo=2

    Continuity is always a big concern in a large-scale project such as the CEP, but we hope that we’ll manage the transition without too much trouble. Having the additional project branch in Buffalo will hopefully put our work on a broader foundation in the long run.

    Our work in the CEP was again recognized, e.g., by winning the 2013 Computerworld Data+ Award and the RSC Scholarship Award for Scientific Excellence of the ACS Division of Chemical Information for Johannes. CEP work has been presented at many conferences, webcasts, seminars, and talks over the last half year. It is by now a fairly well-known effort in the materials science community and it has taken its place amongst the other big virtual screening projects such as the Materials Project, the Computational Materials Repository, and AFLOWLIB.

    Now to the progress on the research front: after a number of the other WCG projects concluded, the CEP has seen a dramatic increase in computing time and returned results since the spring. These days we average between 24 and 28 y/d (CPU-years of computation per day, an increase of about 50% over our previous averages) and we have passed the mark of 21,000 years of harvested CPU time. By now we have performed over 200 million density functional theory calculations on over 3 million compounds, accumulating well over half a petabyte of data. We are currently in the process of expanding our storage capacity towards the 1 PB mark by building Jabba 7 and 8. Thanks again to HGST for their generous sponsorship and Harvard FAS Research Computing for their support.
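
    For a sense of what those totals imply per calculation, here is a quick back-of-the-envelope computation using only the round numbers quoted above, so the results are approximate.

    ```python
    # Back-of-the-envelope figures implied by the round numbers in this update;
    # the results are only approximate.

    cpu_years   = 21_000          # harvested CPU time, years
    n_calcs     = 200_000_000     # density functional theory calculations
    n_compounds = 3_000_000       # screened compounds
    data_pb     = 0.5             # accumulated data, petabytes

    cpu_hours_per_calc = cpu_years * 365 * 24 / n_calcs
    calcs_per_compound = n_calcs / n_compounds
    mb_per_calc        = data_pb * 1e9 / n_calcs   # 1 PB is roughly 1e9 MB

    print(f"~{cpu_hours_per_calc:.2f} CPU-hours per DFT calculation")
    print(f"~{calcs_per_compound:.0f} calculations per compound")
    print(f"~{mb_per_calc:.1f} MB of data per calculation")
    ```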

    Over the summer we finally released the CEPDB on our new platform http://www.molecularspace.org. The launch made quite a splash. We received a lot of positive feedback in the news and from the community, and it was also nicely synchronized with the two-year anniversary of the Materials Genome Initiative. We used the CEPDB release to also launch our new project webpage.

    Our latest research results and data analysis were published in “Energy and Environmental Science”, and you can read all the details in this paper.

    There is still a lot of exciting research waiting to be done, and we are looking forward to tackling all this work together with you. Thanks so much for all your generous support, hard work, and enthusiasm – you guys and gals are the best! CEP would not be possible without you. CRUNCH ON!

    Best wishes from

    Your Harvard Clean Energy Project team

    The Harvard Clean Energy Project Database contains data and analyses on 2.3 million candidate compounds for organic photovoltaics. It is an open resource designed to give researchers in the field of organic electronics access to promising leads for new material developments.

    Would you like to help find new compounds for organic solar cells? By participating in the Harvard Clean Energy Project you can donate idle computer time on your PC for the discovery and design of new materials. Visit WorldCommunityGrid to get the BOINC software on which the project runs.



    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 7:43 am on July 12, 2014 Permalink | Reply
    Tags: Clean Energy

    From NERSC: “Hot Plasma Partial to Bootstrap Current” 

    NERSC

    July 9, 2014
    Kathy Kincade, +1 510 495 2124, kkincade@lbl.gov

    Supercomputers at NERSC are helping plasma physicists “bootstrap” a potentially more affordable and sustainable fusion reaction. If successful, fusion reactors could provide almost limitless clean energy.

    In a fusion reaction, energy is released when two hydrogen isotopes are fused together to form a heavier nucleus, helium. To achieve high enough reaction rates to make fusion a useful energy source, the hydrogen contained inside the reactor core must be heated to extremely high temperatures—more than 100 million degrees Celsius—which transforms it into hot plasma. Another key requirement of this process is magnetic confinement: the use of strong magnetic fields to keep the plasma from touching the vessel walls (and cooling) and to compress the plasma enough to fuse the isotopes.
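
    As a reminder of where the energy comes from, the short sketch below computes the yield of a single fusion event from the mass defect, assuming the deuterium-tritium reaction that most fusion-energy designs target. The atomic masses are standard reference values, not figures from the article.

    ```python
    # Energy released by a single deuterium-tritium fusion event, computed from
    # the mass defect with standard atomic-mass values (textbook physics, not a
    # figure taken from the article).

    U_TO_MEV = 931.494           # energy equivalent of 1 atomic mass unit, MeV

    m_deuterium = 2.014102       # u
    m_tritium   = 3.016049       # u
    m_helium4   = 4.002602       # u
    m_neutron   = 1.008665       # u

    mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
    energy_mev = mass_defect * U_TO_MEV

    print(f"Mass defect: {mass_defect:.6f} u")
    print(f"Energy per D-T reaction: ~{energy_mev:.1f} MeV")   # about 17.6 MeV
    ```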

    A calculation of the self-generated plasma current in the W7-X reactor, performed using the SFINCS code on Edison. The colors represent the amount of electric current along the magnetic field, and the black lines show magnetic field lines. Image: Matt Landreman

    So there’s a lot going on inside the plasma as it heats up, not all of it good. Driven by electric and magnetic forces, charged particles swirl around and collide into one another, and the central temperature and density are constantly evolving. In addition, plasma instabilities disrupt the reactor’s ability to produce sustainable energy by increasing the rate of heat loss.

    Fortunately, research has shown that other, more beneficial forces are also at play within the plasma. For example, if the pressure of the plasma varies across the radius of the vessel, a self-generated current will spontaneously arise within the plasma—a phenomenon known as the “bootstrap” current.
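
    A minimal numerical sketch of that idea follows, using the textbook large-aspect-ratio scaling for the bootstrap current density rather than the full kinetic calculations described in this article. All profile and field values are invented, tokamak-like numbers.

    ```python
    # Toy numerical sketch of the textbook large-aspect-ratio scaling
    # j_bs ~ -(sqrt(epsilon)/B_theta) * dp/dr.  All profile and field values are
    # invented, tokamak-ish numbers; this is not the PERFECT or SFINCS physics.

    import numpy as np

    a = 1.0                      # minor radius, m (assumed)
    R = 3.0                      # major radius, m (assumed)
    B_theta = 0.5                # poloidal magnetic field, T (assumed)

    r = np.linspace(0.01, a, 200)
    epsilon = r / R                          # local inverse aspect ratio
    p = 1.0e5 * (1 - (r / a) ** 2) ** 2      # assumed pressure profile, Pa

    dp_dr = np.gradient(p, r)
    j_bs = -np.sqrt(epsilon) / B_theta * dp_dr   # A/m^2, rough scaling only

    i_max = int(np.argmax(j_bs))
    print(f"Peak bootstrap current density: ~{j_bs[i_max] / 1e3:.0f} kA/m^2")
    print(f"Located at r/a = {r[i_max] / a:.2f}, where the pressure gradient is steep")
    ```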

    Now an international team of researchers has used NERSC supercomputers to further study the bootstrap current, which could help reduce or eliminate the need for an external current driver and pave the way to a more cost-effective fusion reactor. Matt Landreman, research associate at the University of Maryland’s Institute for Research in Electronics and Applied Physics, collaborated with two research groups to develop and run new codes at NERSC that more accurately calculate this self-generated current. Their findings appear in Plasma Physics and Controlled Fusion and Physics of Plasmas.

    “The codes in these two papers are looking at the average plasma flow and average rate at which particles escape from the confinement, and it turns out that plasma in a curved magnetic field will generate some average electric current on its own,” Landreman said. “Even if you aren’t trying to drive a current, if you take the hydrogen and heat it up and confine it in a curved magnetic field, it creates this current that turns out to be very important. If we ever want to make a tokamak fusion plant down the road, for economic reasons the plasma will have to supply a lot of its own current.”

    One of the unique things about plasmas is that there is often a complicated interaction between where particles are in space and their velocity, Landreman added.

    “To understand some of their interesting and complex behaviors, we have to solve an equation that takes into account both the position and the velocity of the particle,” he said. “That is the core of what these computations are designed to do.”

    Evolving Plasma Behavior

    Interior of the Alcator C-Mod tokamak at the Massachusetts Institute of Technology’s Plasma Science and Fusion Center. Image: Mike Garrett

    The Plasma Physics and Controlled Fusion paper focuses on plasma behavior in tokamak reactors using PERFECT, a code Landreman wrote. Tokamak reactors, first introduced in the 1950s, are today considered by many to be the best candidate for producing controlled thermonuclear fusion power. A tokamak features a torus (doughnut-shaped) vessel and a combination of external magnets and a current driven in the plasma required to create a stable confinement system.

    In particular, PERFECT was designed to examine the plasma edge, a region of the tokamak where “lots of interesting things happen,” Landreman said. Before PERFECT, other codes were used to predict the flows and bootstrap current in the central plasma and solve equations that assume the gradients of density and temperature are gradual.

    “The problem with the plasma edge is that the gradients are very strong, so these previous codes are not necessarily valid in the edge, where we must solve a more complicated equation,” he said. “PERFECT was built to solve such an equation.”

    For example, in most of the inner part of the tokamak there is a fairly gradual gradient of the density and temperature. “But at the edge there is a fairly big jump in density and temperature—what people call the edge pedestal. What is different about PERFECT is that we are trying to account for some of this very strong radial variation,” Landreman explained.

    These findings are important because researchers are concerned that the bootstrap current may affect edge stability. PERFECT is also used to calculate plasma flow, which also may affect edge stability.

    “My co-authors had previously done some analytic calculations to predict how the plasma flow and heat flux would change in the pedestal region compared to places where radial gradients aren’t as strong,” Landreman said. “We used PERFECT to test these calculations with a brute force numerical calculation at NERSC and found that they agreed really well. The analytic calculations provide insight into how the plasma flow and heat flux will be affected by these strong radial gradients.”

    From Tokamak to Stellarator

    In the Physics of Plasmas study, the researchers used a second code, SFINCS, to focus on related calculations in a different kind of confinement concept: a stellarator. In a stellarator the magnetic field is not axisymmetric, meaning that it looks different as you circle around the donut hole. As Landreman put it, “A tokamak is to a stellarator as a standard donut is to a cruller.”

    HSX stellarator

    First introduced in the 1950s, stellarators have played a central role in the German and Japanese fusion programs and were popular in the U.S. until the 1970s when many fusion scientists began favoring the tokamak design. In recent years several new stellarators have appeared, including the Wendelstein 7-X (W7-X) in Germany, the Helically Symmetric Experiment in the U.S. and the Large Helical Device in Japan. Two of Landreman’s coauthors on the Physics of Plasmas paper are physicists from the Max Planck Institute for Plasma Physics, where W7-X is being constructed.

    “In the W7-X design, the amount of plasma current has a strong effect on where the heat is exhausted to the wall,” Landreman explained. “So at Max Planck they are very concerned about exactly how much self-generated current there will be when they turn on their machine. Based on a prediction for this current, a set of components called the ‘divertor’ was located inside the vacuum vessel to accept the large heat exhaust. But if the plasma makes more current than expected, the heat will come out in a different location, and you don’t want to be surprised.”

    Their concerns stemmed from the fact that the previous code was developed when computers were too slow to solve the “real” 4D equation, he added.

    “The previous code made an approximation that you could basically ignore all the dynamics in one of the dimensions (particle speed), thereby reducing 4D to 3D,” Landreman said. “Now that computers are faster, we can test how good this approximation was. And what we found was that basically the old code was pretty darn accurate and that the predictions made for this bootstrap current are about right.”

    The calculations for both studies were run on Hopper and Edison using some additional NERSC resources, Landreman noted.

    “I really like running on NERSC systems because if you have a problem, you ask a consultant and they get back to you quickly,” Landreman said. “Also knowing that all the software is up to date and it works. I’ve been using NX lately to speed up the graphics. It’s great because you can plot results quickly without having to download any data files to your local computer.”

    See the full article here.

    The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

    More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

    The NERSC Hopper system is a Cray XE6 with a peak theoretical performance of 1.29 petaflop/s. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made more user friendly.

    (Historical photo of Grace Hopper courtesy of the Hagley Museum & Library, PC20100423_201. Design: Caitlin Youngquist/LBNL Photo: Roy Kaltschmidt/LBNL)

    NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as material sciences, physics, chemistry and astronomy, well-equipped to help researchers apply computational resources to specialized science problems.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 4:14 am on June 12, 2014 Permalink | Reply
    Tags: Clean Energy

    From Berkeley Lab: “Manipulating and Detecting Ultrahigh Frequency Sound Waves” 

    Berkeley Lab

    June 11, 2014
    Lynn Yarris

    Gold plasmonic nanostructures shaped like Swiss-crosses can convert laser light into ultrahigh frequency (10 GHz) sound waves.

    An advance has been achieved towards next generation ultrasonic imaging with potentially 1,000 times higher resolution than today’s medical ultrasounds. Researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) have demonstrated a technique for producing, detecting and controlling ultrahigh frequency sound waves at the nanometer scale.

    Through a combination of subpicosecond laser pulses and unique nanostructures, a team led by Xiang Zhang, a faculty scientist with Berkeley Lab’s Materials Sciences Division, produced acoustic phonons – quasi-particles of vibrational energy that move through an atomic lattice as sound waves – at a frequency of 10 gigahertz (10 billion cycles per second). By comparison, medical ultrasounds today typically reach a frequency of only about 20 megahertz (20 million cycles per second). The 10 GHz phonons not only promise unprecedented resolution for acoustic imaging, they also can be used to “see” subsurface structures in nanoscale systems that optical and electron microscopes cannot.

    “We have demonstrated optical coherent manipulation and detection of the acoustic phonons in nanostructures that offer new possibilities in the development of coherent phonon sources and nano-phononic devices for chemical sensing, thermal energy management and communications,” says Zhang, who also holds the Ernest S. Kuh Endowed Chair Professor at the University of California (UC) Berkeley. In addition, he directs the National Science Foundation’s Nano-scale Science and Engineering Center, and is a member of the Kavli Energy NanoSciences Institute at Berkeley.

    Zhang is the corresponding author of a paper describing this research in Nature Communications. The paper is titled Ultrafast Acousto-plasmonic Control and Sensing in Complex Nanostructures. The lead authors are Kevin O’Brien and Norberto Daniel Lanzillotti-Kimura, members of Zhang’s research group. Other co-authors are Junsuk Rho, Haim Suchowski and Xiaobo Yin.

    Acoustic imaging offers certain advantages over optical imaging. The ability of sound waves to safely pass through biological tissue has made sonograms a popular medical diagnostic tool. Sound waves have also become a valuable tool for the non-destructive testing of materials. In recent years, ultrahigh frequency sound waves have been the subject of intense scientific study. Phonons at GHz frequencies can pass through materials that are opaque to photons, the particles that carry light. Ultrahigh frequency phonons also travel at the small wavelengths that yield a sharper resolution in ultrasound imaging.
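
    To put numbers on that, the wavelength is simply the sound speed divided by the frequency. The sketch below compares a 20 MHz medical scan in soft tissue with 10 GHz phonons in gold, using typical textbook sound speeds; the values are illustrative assumptions, not numbers from the paper.

    ```python
    # Wavelength comparison, lambda = v / f.  Sound speeds are typical textbook
    # values chosen for illustration, not numbers from the paper.

    v_tissue = 1540.0      # m/s, typical speed of sound in soft tissue (assumed)
    v_gold   = 3240.0      # m/s, approximate longitudinal sound speed in gold (assumed)

    f_medical = 20e6       # Hz, high-end medical ultrasound frequency cited above
    f_phonon  = 10e9       # Hz, phonon frequency reported in this work

    lam_medical = v_tissue / f_medical
    lam_phonon  = v_gold / f_phonon

    print(f"20 MHz ultrasound in tissue: wavelength ~ {lam_medical * 1e6:.0f} micrometres")
    print(f"10 GHz phonons in gold:      wavelength ~ {lam_phonon * 1e9:.0f} nanometres")
    ```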

    Xiang Zhang, Haim Suchowski and Kevin O’Brien were part of the team that produced, detected and controlled ultrahigh frequency sound waves at the nanometer scale. (Photo by Roy Kaltschmidt)

    The biggest challenge has been to find effective ways of generating, detecting and controlling ultrahigh frequency sound waves. Zhang, O’Brien, Lanzillotti-Kimura and their colleagues were able to meet this challenge through the design of nanostructures that support multiple modes of both phonons and plasmons. A plasmon is a wave that rolls through the conduction electrons on the surface of a metal.

    “Through the interplay between phonons and localized surface plasmons, we can detect the spatial properties of complex phonon modes below the optical wavelength,” O’Brien says. “This allows us to detect complex nanomechanical dynamics using polarization-resolved transient absorption spectroscopy.”

    Plasmons can be used to confine light in subwavelength dimensions and are considered to be good candidates for manipulating nanoscale mechanical motion because of their large absorption cross-sections, subwavelength field localization, and high sensitivity to geometry and refractive index changes.

    “To generate 10 GHz acoustic frequencies in our plasmonic nanostructures we use a technique known as picosecond ultrasonics,” O’Brien says. “Sub-picosecond pulses of laser light excite plasmons which dissipate their energy as heat. The nanostructure rapidly expands and generates coherent acoustic phonons. This process transduces photons from the laser into coherent phonons.”

    To detect these coherent phonons, a second laser pulse is used to excite probe surface plasmons. As these plasmons move across the surface of the nanostructure, their resonance frequency shifts as the nanostructure geometry becomes distorted by the phonons. This enables the researchers to optically detect mechanical motion on the nanoscale.

    “We’re able to sense ultrafast motion along the different axes of our nanostructures simply by rotating the polarization of the probe pulse,” says Lanzillotti-Kimura. “Since we’ve shown that the polarization of the pump pulse doesn’t make a difference in our nanostructures due to hot electron diffusion, we can tailor the phonon modes which are excited by designing the symmetry of the nanostructure.”

    The plasmonic nanostructures that Zhang, O’Brien, Lanzillotti-Kimura and their colleagues designed are made of gold and shaped like a Swiss-cross. Each cross is 35 nanometers thick with horizontal and vertical arm lengths of 120 and 90 nanometers, respectively. When the two arms oscillate in phase, the crosses generate symmetric phonons. When the arms oscillate out of phase, anti-symmetric phonons are generated.
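
    A crude way to see why arms of roughly this size ring at around 10 GHz is to treat each arm as a bar whose fundamental extensional mode is f ≈ v/(2L). The sketch below uses a typical sound speed for gold; the real mode frequencies depend on the full three-dimensional geometry, so this only gives the order of magnitude.

    ```python
    # Order-of-magnitude check: treat each arm as a bar whose fundamental
    # extensional mode is f ~ v / (2L).  The sound speed for gold is a typical
    # value; real mode frequencies depend on the full 3D geometry.

    v_gold = 3240.0    # m/s, approximate longitudinal sound speed in gold (assumed)

    for name, length_nm in [("horizontal arm", 120), ("vertical arm", 90)]:
        length_m = length_nm * 1e-9
        f_ghz = v_gold / (2 * length_m) / 1e9
        print(f"{name} ({length_nm} nm): fundamental mode ~ {f_ghz:.0f} GHz")
    ```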

    When the two arms of this Swiss-cross nanostructure oscillate in phase, symmetric phonons are produced. When the arms oscillate out of phase, anti-symmetric phonons are generated. The differences enable the detection of nanoscale motion.

    “The phase differences in the phonon modes produce an interference effect that allow us to distinguish between symmetric and anti-symmetric phonon modes using localized surface plasmons,” O’Brien says. “Being able to generate and detect phonon modes with different symmetries or spatial distributions in a structure improves our ability to detect nanoscale motion and is a step towards some potential applications of ultrahigh frequency acoustic phonons.”

    By allowing researchers to selectively excite and detect GHz mechanical motion, the Swiss-cross design of the plasmonic nanostructures provides the control and sensing capabilities needed for ultrahigh frequency acoustic imaging. For the material sciences, the acoustic vibrations can be used as nanoscale “hammers” to impose physical strains along different axes at ultrahigh frequencies. This strain can then be detected by observing the plasmonic response. Zhang and his research group are planning to use these nanoscale hammers to generate and detect ultrafast vibrations in other systems such as two-dimensional materials.

    This research was supported by the DOE Office of Science through the Energy Frontier Research Center program.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California



    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 6:22 pm on May 27, 2014 Permalink | Reply
    Tags: Clean Energy

    From M.I.T.: “Improving a new breed of solar cells” 


    MIT News

    May 27, 2014
    David L. Chandler | MIT News Office

    Solar-cell technology has advanced rapidly, as hundreds of groups around the world pursue more than two dozen approaches using different materials and technologies to improve efficiency and reduce costs. Now a team at MIT has set a new record for the most efficient quantum-dot cells — a type of solar cell that is seen as especially promising because of its inherently low cost, versatility, and light weight.

    While the overall efficiency of this cell is still low compared to other types — about 9 percent of the energy of sunlight is converted to electricity — the rate of improvement of this technology is one of the most rapid seen for a solar technology. The development is described in a paper, published in the journal Nature Materials, by MIT professors Moungi Bawendi and Vladimir Bulović and graduate students Chia-Hao Chuang and Patrick Brown.
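
    For a concrete sense of what 9 percent efficiency means, the snippet below converts it into electrical output per square metre under the standard 1000 W/m² test irradiance; the silicon figure is a typical commercial value added for comparison, not a number from the paper.

    ```python
    # Electrical output per square metre implied by the quoted efficiencies,
    # using the standard 1000 W/m^2 test irradiance.  The silicon efficiency is
    # a typical commercial value added for comparison, not from the paper.

    IRRADIANCE = 1000.0    # W/m^2, standard test conditions

    for tech, eff in [("MIT quantum-dot cell", 0.09), ("typical commercial silicon", 0.20)]:
        print(f"{tech}: ~{eff * IRRADIANCE:.0f} W of electricity per m^2 of cell")
    ```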

    The new process is an extension of work by Bawendi, the Lester Wolfe Professor of Chemistry, to produce quantum dots with precisely controllable characteristics — and as uniform thin coatings that can be applied to other materials. These minuscule particles are very effective at turning light into electricity, and vice versa. Since the first progress toward the use of quantum dots to make solar cells, Bawendi says, “The community, in the last few years, has started to understand better how these cells operate, and what the limitations are.”

    The new work represents a significant leap in overcoming those limitations, increasing the current flow in the cells and thus boosting their overall efficiency in converting sunlight into electricity.

    Many approaches to creating low-cost, large-area flexible and lightweight solar cells suffer from serious limitations — such as short operating lifetimes when exposed to air, or the need for high temperatures and vacuum chambers during production. By contrast, the new process does not require an inert atmosphere or high temperatures to grow the active device layers, and the resulting cells show no degradation after more than five months of storage in air.

    Bulović, the Fariborz Maseeh Professor of Emerging Technology and associate dean for innovation in MIT’s School of Engineering, explains that thin coatings of quantum dots “allow them to do what they do as individuals — to absorb light very well — but also work as a group, to transport charges.” This allows those charges to be collected at the edge of the film, where they can be harnessed to provide an electric current.

    The new work brings together developments from several fields to push the technology to unprecedented efficiency for a quantum-dot based system: The paper’s four co-authors come from MIT’s departments of physics, chemistry, materials science and engineering, and electrical engineering and computer science. The solar cell produced by the team has now been added to the National Renewable Energy Laboratory’s listing of record-high efficiencies for each kind of solar-cell technology.

    The overall efficiency of the cell is still lower than for most other types of solar cells. But Bulović points out, “Silicon had six decades to get where it is today, and even silicon hasn’t reached the theoretical limit yet. You can’t hope to have an entirely new technology beat an incumbent in just four years of development.” And the new technology has important advantages, notably a manufacturing process that is far less energy-intensive than other types.

    Chuang adds, “Every part of the cell, except the electrodes for now, can be deposited at room temperature, in air, out of solution. It’s really unprecedented.”

    The system is so new that it also has potential as a tool for basic research. “There’s a lot to learn about why it is so stable. There’s a lot more to be done, to use it as a testbed for physics, to see why the results are sometimes better than we expect,” Bulović says.

    A companion paper, written by three members of the same team along with MIT’s Jeffrey Grossman, the Carl Richard Soderberg Associate Professor of Power Engineering, and three others, appears this month in the journal ACS Nano, explaining in greater detail the science behind the strategy employed to reach this efficiency breakthrough.

    The new work represents a turnaround for Bawendi, who had spent much of his career working with quantum dots. “I was somewhat of a skeptic four years ago,” he says. But his team’s research since then has clearly demonstrated quantum dots’ potential in solar cells, he adds.

    Arthur Nozik, a research professor in chemistry at the University of Colorado who was not involved in this research, says, “This result represents a significant advance for the applications of quantum-dot films and the technology of low-temperature, solution-processed, quantum-dot photovoltaic cells. … There is still a long way to go before quantum-dot solar cells are commercially viable, but this latest development is a nice step toward this ultimate goal.”

    The work was supported by the Samsung Advanced Institute of Technology, the Fannie and John Hertz Foundation, and the National Science Foundation.

    See the full article here.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 3:10 pm on April 24, 2014 Permalink | Reply
    Tags: Clean Energy

    From PNNL Lab: “How a plant beckons the bacteria that will do it harm” 


    PNNL Lab

    April 24, 2014
    Tom Rickey, PNNL, (509) 375-3732

    Work on microbial signaling offers a window into better biofuels, human health

    A common plant puts out a welcome mat to bacteria seeking to invade, and scientists have discovered the mat’s molecular mix.

    The study published this week in the Proceedings of the National Academy of Sciences reveals new targets during the battle between microbe and host that researchers can exploit to protect plants.

    The team showed that the humble and oft-studied plant Arabidopsis puts out a molecular signal that invites an attack from a pathogen. It’s as if a hostile army were unknowingly passing by a castle, and the sentry stood up and yelled, “Over here!” — focusing the attackers on a target they would have otherwise simply passed by.

    “This signaling system triggers a structure in bacteria that actually looks a lot like a syringe, which is used to inject virulence proteins into its target. It’s exciting to learn that metabolites excreted by the host can play a role in triggering this system in bacteria,” said Thomas Metz, an author of the paper and a chemist at the Department of Energy’s Pacific Northwest National Laboratory.

    The findings come from a collaboration of scientists led by Scott Peck of the University of Missouri that includes researchers from Missouri, the Biological Sciences Division at PNNL, and EMSL, DOE’s Environmental Molecular Sciences Laboratory.

    The research examines a key moment in the relationship between microbe and host, when a microbe recognizes a host as a potential target and employs its molecular machinery to pierce it, injecting its contents into the plant’s cells — a crucial step in infecting an organism.

    The work focused on bacteria known as Pseudomonas syringae pv. tomato DC3000, which can ruin tomatoes as well as Arabidopsis. The bacteria employ a molecular system known as the Type 3 Secretion System, or T3SS, to infect plants. In tomatoes, the infection leads to unsightly brown spots.

    Infection of tomatoes by Pseudomonas syringae. Image courtesy of Cornell University.

    Peck’s team at the University of Missouri had discovered a mutant type of the plant, known as Arabidopsis mkp1, which is resistant to infection by Pseudomonas syringae. The Missouri and PNNL groups compared levels of metabolites in Arabidopsis to those in the mutant mkp1 form of the plant. Peck’s group used those findings as a guide to find the compounds that had the biggest effect — a combination that invites infection.

    The researchers discovered a group of five acids that collectively had the biggest effect on turning on the bacteria’s T3SS: pyroglutamic, citric, shikimic, 4-hydroxybenzoic, and aspartic acids.

    They found that the mutant has a much lower level of these cellular products on the surface of the plant than is found in normal plants. Because the resistant plants lack high levels of these acids, the bacteria never unfurl the “syringe” in the plant’s presence. But when the combination of acids is introduced onto mkp1, it quickly becomes a target for infection.
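
    A minimal sketch of the kind of comparison described above: ranking surface metabolites by how depleted they are in the mutant relative to wild type. The numbers below are entirely hypothetical placeholders, not data from the study.

    ```python
    # Sketch of the comparison described above: rank surface metabolites by how
    # depleted they are in the mkp1 mutant relative to wild type.  All numbers
    # are hypothetical placeholders, not data from the PNAS study.

    wild_type = {"pyroglutamic acid": 10.0, "citric acid": 8.0, "shikimic acid": 5.0,
                 "4-hydroxybenzoic acid": 3.0, "aspartic acid": 6.0, "unrelated metabolite": 4.0}
    mkp1_mutant = {"pyroglutamic acid": 2.0, "citric acid": 1.5, "shikimic acid": 1.0,
                   "4-hydroxybenzoic acid": 0.8, "aspartic acid": 1.2, "unrelated metabolite": 3.9}

    fold_change = {m: wild_type[m] / mkp1_mutant[m] for m in wild_type}

    for metabolite, fc in sorted(fold_change.items(), key=lambda kv: kv[1], reverse=True):
        flag = "  <- candidate T3SS trigger" if fc > 2 else ""
        print(f"{metabolite:25s} wild-type/mutant ratio: {fc:4.1f}{flag}")
    ```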

    “We know that microbes can disguise themselves by altering the proteins or molecules that the plant uses to recognize the bacteria, as a strategy for evading detection,” said Peck, associate professor of biochemistry at the University of Missouri and lead author of the PNAS paper. “Our results now show that the plant can also disguise itself from pathogen recognition by removing the signals needed by the pathogen to become fully virulent.”

    While Peck’s study focused on bacteria known mostly for damaging tomatoes, the findings also could have implications for people. The same molecular machinery employed by Pseudomonas syringae is also used by a host of microbes to cause diseases that afflict people, including salmonella, the plague, respiratory disease, and chlamydia.

    On the energy front, the findings will help scientists grow plants that can serve as an energy source and are more resistant to infection. Also, a better understanding of the signals that microbes use helps scientists who rely on such organisms for converting materials like switchgrass and wood chips into useable fuel.

    The work opens the door to new ways of rendering harmful bacteria harmless, by modifying plants so that the bacteria never become invasive.

    “There isn’t a single solution for disease resistance in the field, which is part of the reason these findings are important,” said Peck. “The concept of another layer of interaction between host and microbe provides an additional conceptual strategy for how resistance might be manipulated. Rather than trying to kill the bacteria, eliminating the recognition signals in the plant makes the bacteria fairly innocuous, giving the natural immune system more time to defend itself.”

    The research was funded by the National Science Foundation, and some of the research took place at EMSL. PNNL authors of the paper include Metz, Young-Mo Kim, and Ljiljana Pasa-Tolic. Missouri authors include Peck, Ying Wan, and first author Jeffrey C. Anderson.

    See the full article here.

    Pacific Northwest National Laboratory (PNNL) is one of the United States Department of Energy National Laboratories, managed by the Department of Energy’s Office of Science. The main campus of the laboratory is in Richland, Washington.

    PNNL scientists conduct basic and applied research and development to strengthen U.S. scientific foundations for fundamental research and innovation; prevent and counter acts of terrorism through applied research in information analysis, cyber security, and the nonproliferation of weapons of mass destruction; increase the U.S. energy capacity and reduce dependence on imported oil; and reduce the effects of human activity on the environment. PNNL has been operated by Battelle Memorial Institute since 1965.



    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 9:08 pm on March 31, 2014 Permalink | Reply
    Tags: Clean Energy

    From Argonne Lab via PPPL: “Plasma Turbulence Simulations Reveal Promising Insight for Fusion Energy” 

    March 31, 2014
    By Argonne National Laboratory

    With the potential to provide clean, safe, and abundant energy, nuclear fusion has been called the “holy grail” of energy production. But harnessing energy from fusion, the process that powers the sun, has proven to be an extremely difficult challenge.

    Simulation of microturbulence in a tokamak fusion device. (Credit: Chad Jones and Kwan-Liu Ma, University of California, Davis; Stephane Ethier, Princeton Plasma Physics Laboratory)

    Scientists have been working to accomplish efficient, self-sustaining fusion reactions for decades, and significant research and development efforts continue in several countries today.

    For one such effort, researchers from the Princeton Plasma Physics Laboratory (PPPL), a DOE collaborative national center for fusion and plasma research in New Jersey, are running large-scale simulations at the Argonne Leadership Computing Facility (ALCF) to shed light on the complex physics of fusion energy. Their most recent simulations on Mira, the ALCF’s 10-petaflops Blue Gene/Q supercomputer, revealed that turbulent losses in the plasma are not as large as previously estimated.


    Good news

    This is good news for the fusion research community as plasma turbulence presents a major obstacle to attaining an efficient fusion reactor in which light atomic nuclei fuse together and produce energy. The balance between fusion energy production and the heat losses associated with plasma turbulence can ultimately determine the size and cost of an actual reactor.

    “Understanding and possibly controlling the underlying physical processes is key to achieving the efficiency needed to ensure the practicality of future fusion reactors,” said William Tang, PPPL principal research physicist and project lead.

    Tang’s work at the ALCF is focused on advancing the development of magnetically confined fusion energy systems, especially ITER, a multi-billion dollar international burning plasma experiment supported by seven governments including the United States.

    Currently under construction in France, ITER will be the world’s largest tokamak system, a device that uses strong magnetic fields to contain the burning plasma in a doughnut-shaped vacuum vessel. In tokamaks, unavoidable variations in the plasma’s ion temperature drive microturbulence, which can significantly increase the transport rate of heat, particles, and momentum across the confining magnetic field.

    “Simulating tokamaks of ITER’s physical size could not be done with sufficient accuracy until supercomputers as powerful as Mira became available,” said Tang.

    To prepare for the architecture and scale of Mira, Tim Williams of the ALCF worked with Tang and colleagues to benchmark and optimize their Gyrokinetic Toroidal Code – Princeton (GTC-P) on the ALCF’s new supercomputer. This allowed the research team to perform the first simulations of multiscale tokamak plasmas with very high phase-space resolution and long temporal duration. They are simulating a sequence of tokamak sizes up to and beyond the scale of ITER to validate the turbulent losses for large-scale fusion energy systems.

    Decades of experiments

    Decades of experimental measurements and theoretical estimates have shown turbulent losses to increase as the size of the experiment increases; this phenomenon occurs in the so-called Bohm regime. However, when tokamaks reach a certain size, it has been predicted that there will be a turnover point into a Gyro-Bohm regime, where the losses level off and become independent of size. For ITER and other future burning plasma experiments, it is important that the systems operate in this Gyro-Bohm regime.
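
    A toy model makes the distinction easier to picture: turbulent diffusivity, normalised to gyro-Bohm units, grows with machine size in the Bohm regime and saturates in the gyro-Bohm regime. The saturating form and the turnover point in the sketch below are invented for illustration; they are not GTC-P results.

    ```python
    # Toy model of the Bohm-to-gyro-Bohm transition: turbulent diffusivity,
    # normalised to gyro-Bohm units, versus device size in ion gyroradii.  The
    # saturating form and turnover scale are invented; they are not GTC-P results.

    import numpy as np

    a_over_rho = np.logspace(1.5, 3.5, 9)   # device size measured in ion gyroradii
    turnover = 400.0                        # assumed transition scale (illustrative)

    # Small machines: chi/chi_GB grows roughly linearly with size (Bohm-like);
    # large machines: it saturates and becomes size-independent (gyro-Bohm).
    chi_norm = a_over_rho / (1.0 + a_over_rho / turnover)

    for size, chi in zip(a_over_rho, chi_norm):
        bar = "#" * int(chi / 10)
        print(f"a/rho_i = {size:7.0f}   chi/chi_GB = {chi:6.1f}  {bar}")
    ```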

    The recent simulations on Mira led the PPPL researchers to discover that the magnitude of turbulent losses in the Gyro-Bohm regime is up to 50% lower than indicated by earlier simulations carried out at much lower resolution and significantly shorter duration. The team also found that transition from the Bohm regime to the Gyro-Bohm regime is much more gradual as the plasma size increases. With a clearer picture of the shape of the transition curve, scientists can better understand the basic plasma physics involved in this phenomenon.

    “Determining how turbulent transport and associated confinement characteristics will scale to the much larger ITER-scale plasmas is of great interest to the fusion research community,” said Tang. “The results will help accelerate progress in worldwide efforts to harness the power of nuclear fusion as an alternative to fossil fuels.”

    This project has received computing time at the ALCF through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. The effort was also awarded pre-production time on Mira through the ALCF’s Early Science Program, which allowed researchers to pursue science goals while preparing their GTC-P code for Mira.

    See the full article here.

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University.


    ScienceSprings is powered by Maingear computers

     
  • richardmitnick 6:33 pm on March 20, 2014 Permalink | Reply
    Tags: Clean Energy

    From Oak Ridge via PPPL: “The Bleeding ‘Edge’ of Fusion Research” 

    March 20, 2014

    Few problems have vexed physicists like fusion, the process by which stars fuel themselves and by which researchers on Earth hope to create the energy source of the future.

    By heating the hydrogen isotopes tritium and deuterium to more than five times the temperature of the Sun’s surface, scientists create a reaction that could eventually produce electricity. It turns out, however, that confining the engine of a star in a manmade vessel and using it to produce energy is tricky business.

    Big problems, such as this one, require big solutions. Luckily, few solutions are bigger than Titan, the Department of Energy’s flagship Cray XK7 supercomputer managed by the Oak Ridge Leadership Computing Facility.

    Titan

    Inside Titan

    Titan allows advanced scientific applications to reach unprecedented speeds, enabling scientific breakthroughs faster than ever with only a marginal increase in power consumption. This unique marriage of number-crunching hardware enables Titan, located at Oak Ridge National Laboratory (ORNL), to reach a peak performance of 27 petaflops and claim the title of the world’s fastest computer dedicated solely to scientific research.

    PPPL fusion code

    And fusion is at the head of the research pack. In fact, a team led by Princeton Plasma Physics Laboratory’s (PPPL’s) C.S. Chang increased the performance of its XGC1 fusion code fourfold over its previous CPU-only incarnation by using Titan’s GPUs and CPUs together, after a six-month performance engineering period during which the team tweaked the code to best take advantage of Titan’s revolutionary hybrid architecture.

    “In nature, there are two types of physics,” said Chang. The first is equilibrium, in which changes happen in a “closed” world toward a static state, making the calculations comparatively simple. “This science has been established for a couple hundred years,” he said. Unfortunately, plasma physics falls in the second category, in which a system has inputs and outputs that constantly drive the system to a nonequilibrium state, which Chang refers to as an “open” world.

    Most magnetic fusion research is centered on a tokamak, a donut-shaped vessel that shows the most promise for magnetically confining the extremely hot and fragile plasma. Because the plasma is constantly coming into contact with the vessel wall and losing mass and energy, which in turn introduces neutral particles back into the plasma, equilibrium physics generally does not apply at the edge, and simulating the environment is difficult using conventional computational fluid dynamics.

    The Tokamak Fusion Test Reactor (TFTR) at Princeton Plasma Physics Laboratory. Image credit: Princeton.

    Another major reason the simulations are so complex is their multiscale nature. The distance scales involved range from millimeters (what’s going on among the gyrating particles and turbulence eddies inside the plasma itself) to meters (looking at the entire vessel that contains the plasma). The time scales introduce even more complexity, as researchers want to see how the edge plasma evolves from microseconds in particle motions and turbulence fluctuations to milliseconds and seconds in its full evolution. Furthermore, these two scales are coupled. “The simulation scale has to be very large, but still has to include the small-scale details,” said Chang.

    And few machines are as capable of delivering in that regard as is Titan. “The bigger the computer, the higher the fidelity,” he said, simply because researchers can incorporate more physics, and few problems require more physics than simulating a fusion plasma.

    On the hunt for blobs

    Studying the plasma edge is critical to understanding the plasma as a whole. “What happens at the edge is what determines the steady fusion performance at the core,” said Chang. But when it comes to studying the edge, “the effort hasn’t been very successful because of its complexity,” he added.

    Chang’s team is shedding light on a long-known and little-understood phenomenon known as “blobby” turbulence in which formations of strong plasma density fluctuations or clumps flow together and move around large amounts of edge plasma, greatly affecting edge and core performance in the DIII-D tokamak at General Atomics in San Diego, CA. DIII-D-based simulations are considered a critical stepping-stone for the full-scale, first principles simulation of the ITER plasma edge. ITER is a tokamak reactor to be built in France to test the science feasibility of fusion energy.

    ITER

    The phenomenon was discovered more than 10 years ago, and is one of the “most important things in understanding edge physics,” said Chang, adding that people have tried to model it using fluids (i.e., equilibrium physics quantities). However, because the plasma inhabits an open world, it requires first-principles, ab-initio simulations. Now, for the first time, researchers have verified the existence and modeled the behavior of these blobs using a gyrokinetic code (or one that uses the most fundamental plasma kinetic equations, with analytic treatment of the fast gyrating particle motions) and the DIII-D geometry.

    This same first-principles approach also revealed the divertor heat load footprint. The divertor will extract heat and helium ash from the plasma, acting as a vacuum system and ensuring that the plasma remains stable and the reaction ongoing.

    These discoveries were made possible because the team’s XGC1 code exhibited highly efficient weak and strong scalability on Titan’s hybrid architecture up to the full size of the machine. Collaborating with Ed D’Azevedo, supported by the OLCF and by the DOE Scientific Discovery through Advanced Computing (SciDAC) project, the Center for Edge Physics Simulation (EPSi), along with Pat Worley (ORNL), Jianying Liand (PPPL) and Seung-Hoe Ku (PPPL), also supported by EPSi, the team optimized XGC1 for Titan’s GPUs using the maximum number of nodes, boosting performance fourfold over the previous CPU-only code. This performance increase has enormous implications for predicting fusion energy efficiency in ITER.

    Full-scale simulations

    “We can now use both the CPUs and GPUs efficiently in full-scale production simulations of the tokamak plasma,” said Chang.

    Furthermore, added Chang, Titan is beginning to allow the researchers to model physics, such as electron-scale turbulence, that were out of reach altogether as little as a year ago. Jaguar—Titan’s CPU-only predecessor—was fine for ion-scale edge turbulence because ions are both slower and heavier than electrons (for which the computing requirement is 60 times greater), but fell seriously short when it came to calculating electron-scale turbulence. While Titan is still not quite powerful enough to model electrons as accurately as Chang would like, the team has developed a technique that allows them to simulate electron physics approximately 10 times faster than on Jaguar.
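
    The factor of roughly 60 follows from the ion-to-electron mass ratio: electron dynamics are faster by about the square root of that ratio, so the time resolution, and hence the computation, must scale accordingly. The sketch below assumes deuterium ions, which the article does not specify.

    ```python
    # Where the "60 times greater" figure comes from: electron dynamics are
    # faster than ion dynamics by roughly the square root of the mass ratio.
    # Deuterium is assumed as the ion species; the article does not specify it.

    import math

    DEUTERON_TO_ELECTRON_MASS_RATIO = 3670.48   # CODATA value

    speed_ratio = math.sqrt(DEUTERON_TO_ELECTRON_MASS_RATIO)
    print(f"Electron/ion thermal speed ratio at equal temperature: ~{speed_ratio:.0f}")
    print(f"So electron-scale turbulence needs roughly {speed_ratio:.0f}x more computation.")
    ```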

    And they are just getting started. The researchers plan on eventually simulating the full volume plasma with electron-scale turbulence to understand how these newly modeled blobs affect the fusion core, because whatever happens at the edge determines conditions in the core. “We think this blob phenomenon will be a key to understanding the core,” said Chang, adding, “All of these are critical physics elements that must be understood to raise the confidence level of successful ITER operation. These phenomena have been observed experimentally for a long time, but have not been understood theoretically at a predictable confidence level.”

    Given that the team can currently use all of Titan’s more than 18,000 nodes, a better understanding of fusion is certainly in the works. A better understanding of blobby turbulence and its effects on plasma performance is a significant step toward that goal, proving yet again that few tools are more critical than simulation if mankind is to use the engines of stars to solve its most pressing dilemma: clean, abundant energy.

    See the full article here.

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University.


    ScienceSprings is powered by Maingear computers

     