Tagged: Lawrence Berkeley National Laboratory

  • richardmitnick 4:27 pm on November 17, 2014 Permalink | Reply
    Tags: Lawrence Berkeley National Laboratory

    From LBL: “As Temperatures Rise, Soil Will Relinquish Less Carbon to the Atmosphere Than Currently Predicted” 

    Berkeley Logo

    Berkeley Lab

    November 17, 2014
    Dan Krotz 510-486-4019

    New Berkeley Lab model quantifies interactions between soil microbes and their surroundings

    Here’s another reason to pay close attention to microbes: Current climate models probably overestimate the amount of carbon that will be released from soil into the atmosphere as global temperatures rise, according to research from the US Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab).

    The findings are from a new computer model that explores the feedbacks between soil carbon and climate change. It’s the first such model to include several physiologically realistic representations of how soil microbes break down organic matter, a process that annually unleashes about ten times as much carbon into the atmosphere as fossil fuel emissions. In contrast, today’s models include a simplistic representation of microbial behavior.

    The research is published Nov. 17 on the website of the journal Nature Climate Change.

    Based on their results, the Berkeley Lab scientists recommend that future Earth system models include a more nuanced and dynamic depiction of how soil microbes go about the business of degrading organic matter and freeing up carbon.

    This approach could help scientists more accurately predict what will happen to soil carbon as Earth’s climate changes. These predictions are especially important in vulnerable regions like the Arctic, which is expected to warm considerably this century, and which holds a vast amount of carbon in the tundra.

    “We know that microbes are the agents of change when it comes to decomposing organic matter. But the question is: How important is it to explicitly quantify complex microbial interactions in climate models?” says Jinyun Tang, a scientist in Berkeley Lab’s Earth Sciences Division who conducted the research with fellow Berkeley Lab scientist William Riley.

    “We found that it makes a big difference,” Tang says. “We showed that warming temperatures would return less soil carbon to the atmosphere than current models predict.”

    The complex and dynamic livelihood of soil microbes is captured in this schematic. For the first time, these processes are represented in a computer model that predicts the fate of soil carbon as temperatures rise. (Credit: Berkeley Lab)

    Terrestrial ecosystems, such as the Arctic tundra and Amazon rainforest, contain a huge amount of carbon in organic matter such as decaying plant material. Thanks to soil microbes that break down organic matter, these ecosystems also contribute a huge amount of carbon to the atmosphere.

    The soil above the Arctic Circle near Barrow, Alaska contains a tremendous amount of carbon. New research may help scientists better predict how much of this carbon will be released as the climate warms.

    Because soil is such a major player in the carbon cycle, even a small change in the amount of carbon it releases can have a big effect on atmospheric carbon concentrations. This dynamic implies that climate models should represent soil-carbon processes as accurately as possible.

    But here’s the problem: Numerous empirical experiments have shown that the ways in which soil microbes decompose organic matter, and respond to changes in temperature, vary over time and from place to place. This variability is not captured in today’s ecosystem models, however. Microbes are depicted statically. They respond instantaneously when they’re perturbed, and then revert back as if nothing happened.

    To better portray the variability of the microbial world, Tang and Riley developed a numerical model that quantifies the costs incurred by microbes to respire, grow, and consume energy. Their model accounts for internal physiology, such as the production of enzymes that help microbes break down organic matter. It includes external processes, such as the competition for these enzymes once they’re outside the microbe. Some enzymes adsorb onto mineral surfaces, which means they are not available to chew through organic matter. The model also includes competition between different microbial populations.
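
    The flavor of such a cost-accounting approach can be conveyed with a toy simulation. This is a minimal sketch for intuition only, not the Tang-Riley model: the pools, rate constants, carbon-use efficiency and temperature response below are all invented placeholders.

        # Toy soil-carbon sketch (not the Tang-Riley model): microbes pay a carbon
        # cost to make enzymes, enzymes can adsorb onto minerals and become inactive,
        # and only free enzymes decompose substrate. All parameters are invented.
        import numpy as np

        def cumulative_respiration(T_celsius, days=365, dt=0.01):
            S, B, E_free, E_sorbed = 100.0, 1.0, 0.1, 0.0    # carbon pools (arbitrary units)
            vmax = 0.6 * np.exp(0.06 * (T_celsius - 15.0))   # warmer -> faster decomposition
            Km, k_prod, k_sorb, k_turn, cue = 50.0, 0.05, 0.2, 0.02, 0.4
            respired = 0.0
            for _ in range(int(days / dt)):
                decomp = vmax * E_free * S / (Km + S)        # enzyme-mediated breakdown
                S += (-decomp + k_turn * B) * dt             # dead biomass returns to substrate
                B += (cue * decomp - k_turn * B - k_prod * B) * dt   # growth minus turnover and enzyme cost
                E_free += (k_prod * B - k_sorb * E_free) * dt
                E_sorbed += k_sorb * E_free * dt             # adsorbed enzymes no longer chew substrate
                respired += (1.0 - cue) * decomp * dt        # carbon lost to the atmosphere
            return respired

        print(cumulative_respiration(15.0), cumulative_respiration(19.0))   # baseline vs +4 C

    Even in this caricature, the warming response depends on how enzyme production, mineral adsorption and biomass turnover interact, the kind of feedback a single first-order decay constant cannot capture.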

    Together, these interactions—from enzymes to minerals to populations—represent microbial networks as ever-changing systems, much like what’s observed in experiments.

    The result? When the model was subjected to a 4 degrees Celsius change, it predicted more variable but weaker soil-carbon and climate feedbacks than current approaches.

    “There’s less carbon flux to the atmosphere in response to warming,” says Riley. “Our representation is more complex, which has benefits in that it’s likely more accurate. But it also has costs, in that the parameters used in the model need to be further studied and quantified.”

    Tang and Riley recommend more research be conducted on these microbial and mineral interactions. They also recommend that these features ultimately be included in next-generation Earth system models, such as the Department of Energy’s Accelerated Climate Modeling for Energy, or ACME.

    The research was supported by the Department of Energy’s Office of Science.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 5:39 pm on November 12, 2014 Permalink | Reply
    Tags: Lawrence Berkeley National Laboratory

    From LBL: “Latest Supercomputers Enable High-Resolution Climate Models, Truer Simulation of Extreme Weather” 

    Berkeley Logo

    Berkeley Lab

    November 12, 2014
    Julie Chao (510) 486-6491

    Not long ago, it would have taken several years to run a high-resolution simulation on a global climate model. But using some of the most powerful supercomputers now available, Lawrence Berkeley National Laboratory (Berkeley Lab) climate scientist Michael Wehner was able to complete a run in just three months.

    What he found was that not only were the simulations much closer to actual observations, but the high-resolution models were far better at reproducing intense storms, such as hurricanes and cyclones. The study, The effect of horizontal resolution on simulation quality in the Community Atmospheric Model, CAM5.1, has been published online in the Journal of Advances in Modeling Earth Systems.

    “I’ve been calling this a golden age for high-resolution climate modeling because these supercomputers are enabling us to do gee-whiz science in a way we haven’t been able to do before,” said Wehner, who was also a lead author for the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). “These kinds of calculations have gone from basically intractable to heroic to now doable.”

    Michael Wehner, Berkeley Lab climate scientist

    Using version 5.1 of the Community Atmospheric Model, developed by the Department of Energy (DOE) and the National Science Foundation (NSF) for use by the scientific community, Wehner and his co-authors conducted an analysis for the period 1979 to 2005 at three spatial resolutions: 25 km, 100 km, and 200 km. They then compared those results to each other and to observations.

    One simulation generated 100 terabytes of data, or 100,000 gigabytes. The computing was performed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility. “I’ve literally waited my entire career to be able to do these simulations,” Wehner said.


    The higher resolution was particularly helpful in mountainous areas, since the models take an average of the altitude within each grid cell (roughly 25 km across for the high-resolution run, 200 km across for the low-resolution run). With a more accurate representation of mountainous terrain, the higher resolution model is better able to simulate snow and rain in those regions.
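
    A quick way to see the effect of that averaging (illustrative only; the elevations below are invented, and the model's actual treatment of topography is more involved):

        # Averaging elevation over coarse grid cells flattens mountain peaks, so a
        # coarse model "sees" gentler terrain than a fine one. Elevations are invented.
        import numpy as np

        elev_25km = np.array([200, 400, 900, 2500, 3400, 2100, 800, 300], dtype=float)  # eight 25 km cells
        elev_200km = elev_25km.mean()              # one 200 km cell spanning the same transect
        print(elev_25km.max(), round(elev_200km))  # 3400 m peak vs a 1325 m average

    With the peak smeared out, a coarse grid underestimates the altitude that moist air must climb, which is one reason coarser runs do a poorer job with snow and rain in those regions.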

    “High resolution gives us the ability to look at intense weather, like hurricanes,” said Kevin Reed, a researcher at the National Center for Atmospheric Research (NCAR) and a co-author on the paper. “It also gives us the ability to look at things locally at a lot higher fidelity. Simulations are much more realistic at any given place, especially if that place has a lot of topography.”

    The high-resolution model produced stronger storms and more of them, which was closer to the actual observations for most seasons. “In the low-resolution models, hurricanes were far too infrequent,” Wehner said.

    The IPCC chapter on long-term climate change projections that Wehner was a lead author on concluded that a warming world will cause some areas to be drier and others to see more rainfall, snow, and storms. Extremely heavy precipitation was projected to become even more extreme in a warmer world. “I have no doubt that is true,” Wehner said. “However, knowing it will increase is one thing, but having a confident statement about how much and where as a function of location requires the models do a better job of replicating observations than they have.”

    Wehner says the high-resolution models will help scientists to better understand how climate change will affect extreme storms. His next project is to run the model for a future-case scenario. Further down the line, Wehner says scientists will be running climate models with 1 km resolution. To do that, they will have to have a better understanding of how clouds behave.

    “A cloud system-resolved model can reduce one of the greatest uncertainties in climate models, by improving the way we treat clouds,” Wehner said. “That will be a paradigm shift in climate modeling. We’re at a shift now, but that is the next one coming.”

    The paper’s other co-authors include Fuyu Li, Prabhat, and William Collins of Berkeley Lab; and Julio Bacmeister, Cheng-Ta Chen, Christopher Paciorek, Peter Gleckler, Kenneth Sperber, Andrew Gettelman, and Christiane Jablonowski from other institutions. The research was supported by the Biological and Environmental Division of the Department of Energy’s Office of Science.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 3:42 pm on November 8, 2014 Permalink | Reply
    Tags: Lawrence Berkeley National Laboratory, Synthetic Biology

    From LBL: “Synthetic Biology for Space Exploration” 

    Berkeley Logo

    Berkeley Lab

    November 5, 2014
    Lynn Yarris (510) 486-5375

    Does synthetic biology hold the key to manned space exploration of the Moon and Mars? Berkeley Lab researchers have used synthetic biology to produce an inexpensive and reliable microbial-based alternative to the world’s most effective anti-malaria drug, and to develop clean, green and sustainable alternatives to gasoline, diesel and jet fuels. In the future, synthetic biology could also be used to make manned space missions more practical.

    “Not only does synthetic biology promise to make the travel to extraterrestrial locations more practical and bearable, it could also be transformative once explorers arrive at their destination,” says Adam Arkin, director of Berkeley Lab’s Physical Biosciences Division (PBD) and a leading authority on synthetic and systems biology.

    “During flight, the ability to augment fuel and other energy needs, to provide small amounts of needed materials, plus renewable, nutritional and taste-engineered food, and drugs-on-demand can save costs and increase astronaut health and welfare,” Arkin says. “At an extraterrestrial base, synthetic biology could make even more effective use of the catalytic activities of diverse organisms.”

    Adam Arkin is a leading authority on synthetic and systems biology.

    Arkin is the senior author of a paper in the Journal of the Royal Society Interface that reports on a techno-economic analysis demonstrating “the significant utility of deploying non-traditional biological techniques to harness available volatiles and waste resources on manned long-duration space missions.” The paper is titled Towards Synthetic Biological Approaches to Resource Utilization on Space Missions. The lead and corresponding author is Amor Menezes, a postdoctoral scholar in Arkin’s research group at the University of California (UC) Berkeley. Other co-authors are John Cumbers and John Hogan with the NASA Ames Research Center.

    One of the biggest challenges to manned space missions is the expense. The NASA rule-of-thumb is that every unit mass of payload launched requires the support of an additional 99 units of mass, with “support” encompassing everything from fuel to oxygen to food and medicine for the astronauts, etc. Most of the current technologies now deployed or under development for providing this support are abiotic, meaning non-biological. Arkin, Menezes and their collaborators have shown that providing this support with technologies based on existing biological processes is a more than viable alternative.

    Amor Menezes

    “Because synthetic biology allows us to engineer biological processes to our advantage, we found in our analysis that technologies, when using common space metrics such as mass, power and volume, have the potential to provide substantial cost savings, especially in mass,” Menezes says.

    In their study, the authors looked at four target areas: fuel generation, food production, biopolymer synthesis, and pharmaceutical manufacture. They showed that for a 916-day manned mission to Mars, the use of microbial biomanufacturing capabilities could reduce the mass of fuel manufacturing by 56 percent, the mass of food shipments by 38 percent, and the shipped mass to 3D-print a habitat for six by a whopping 85 percent. In addition, microbes could also completely replenish expired or irradiated stocks of pharmaceuticals, which would provide independence from unmanned re-supply spacecraft that take up to 210 days to arrive.
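
    To put those percentages together with the 99-to-1 rule of thumb above, here is some back-of-the-envelope bookkeeping. Only the support ratio and the percentage reductions come from the article; the baseline masses are hypothetical placeholders.

        # Back-of-the-envelope bookkeeping. The 99:1 support ratio and the percentage
        # reductions are quoted above; the baseline masses are hypothetical.
        support_ratio = 99                          # ~99 units of support mass per unit of payload
        payload_kg = 100
        print(payload_kg * support_ratio)           # ~9,900 kg of supporting mass for 100 kg of payload

        baseline_kg = {"fuel manufacturing": 10_000, "food shipments": 8_000, "habitat 3D printing": 20_000}
        reduction = {"fuel manufacturing": 0.56, "food shipments": 0.38, "habitat 3D printing": 0.85}
        for item, mass in baseline_kg.items():
            print(item, mass * (1 - reduction[item]))   # shipped mass if biomanufacturing is used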

    “Space has always provided a wonderful test of whether technology can meet strict engineering standards for both effect and safety,” Arkin says. “NASA has worked decades to ensure that the specifications that new technologies must meet are rigorous and realistic, which allowed us to perform up-front techno-economic analysis.”

    Microbial-based biomanufacturing could be transformative once explorers arrive at an extraterrestrial site. (Image courtesy of the Journal of the Royal Society Interface)

    The big advantage biological manufacturing holds over abiotic manufacturing is the remarkable ability of natural and engineered microbes to transform very simple starting substrates, such as carbon dioxide, water, biomass or minerals, into materials that astronauts on long-term missions will need. This capability should prove especially useful for future extraterrestrial settlements.

    “The mineral and carbon composition of other celestial bodies is different from the bulk of Earth, but the earth is diverse with many extreme environments that have some relationship to those that might be found at possible bases on the Moon or Mars,” Arkin says. “Microbes could be used to greatly augment the materials available at a landing site, enable the biomanufacturing of food and pharmaceuticals, and possibly even modify and enrich local soils for agriculture in controlled environments.”

    The authors acknowledge that much of their analysis is speculative and that their calculations show a number of significant challenges to making biomanufacturing a feasible augmentation and replacement for abiotic technologies. However, they argue that the investment to overcome these barriers offers dramatic potential payoff for future space programs.

    “We’ve got a long way to go since experimental proof-of-concept work in synthetic biology for space applications is just beginning, but long-duration manned missions are also a ways off,” says Menezes. “Abiotic technologies were developed for many, many decades before they were successfully utilized in space, so of course biological technologies have some catching-up to do. However, this catching-up may not be that much, and in some cases, the biological technologies may already be superior to their abiotic counterparts.”

    This research was supported by the National Aeronautics and Space Administration (NASA) and the University of California, Santa Cruz.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 2:58 pm on October 30, 2014 Permalink | Reply
    Tags: Lawrence Berkeley National Laboratory

    From LBL: “Lord of the Microrings” 

    Berkeley Logo

    Berkeley Lab

    October 30, 2014
    Lynn Yarris (510) 486-5375

    A significant breakthrough in laser technology has been reported by the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley. Scientists led by Xiang Zhang, a physicist with joint appointments at Berkeley Lab and UC Berkeley, have developed a unique microring laser cavity that can produce single-mode lasing even from a conventional multi-mode laser cavity. This ability to provide single-mode lasing on demand holds ramifications for a wide range of applications including optical metrology and interferometry, optical data storage, high-resolution spectroscopy and optical communications.

    “Losses are typically undesirable in optics but, by deliberately exploiting the interplay between optical loss and gain based on the concept of parity-time symmetry, we have designed a microring laser cavity that exhibits intrinsic single-mode lasing regardless of the gain spectral bandwidth,” says Zhang, who directs Berkeley Lab’s Materials Sciences Division and is UC Berkeley’s Ernest S. Kuh Endowed Chair Professor. “This approach also provides an experimental platform to study parity-time symmetry and phase transition phenomena that originated from quantum field theory yet have been inaccessible so far in experiments. It can fundamentally broaden optical science at both semi-classical and quantum levels.”

    Xiang Zhang, director of Berkeley Lab’s Materials Sciences Division. (Photo by Roy Kaltschmidt)

    Zhang, who also directs the National Science Foundation’s Nano-scale Science and Engineering Center, and is a member of the Kavli Energy NanoSciences Institute at Berkeley, is the corresponding author of a paper in Science that describes this work. The paper is titled Single-Mode Laser by Parity-time Symmetry Breaking. Co-authors are Liang Feng, Zi Jing Wong, Ren-Min Ma and Yuan Wang.

    A laser cavity or resonator is the mirrored component of a laser in which light reflected multiple times yields a standing wave at certain resonance frequencies called modes. Laser cavities typically support multiple modes because their dimensions are much larger than optical wavelengths. Competition between modes limits the optical gain in amplitude and results in random fluctuations and instabilities in the emitted laser beams.
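
    For reference, the standard resonance condition for a ring cavity (textbook optics, not a result specific to this paper) is that an integer number of wavelengths fits around the ring:

        m \, \lambda_m = n_{\mathrm{eff}} \, \pi D, \qquad m = 1, 2, 3, \ldots

    Here D is the ring diameter and n_eff the effective refractive index. Because the circumference spans many optical wavelengths, many integers m fall within the gain bandwidth, which is why an ordinary cavity supports several competing modes.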

    “For many applications, single-mode lasing is desirable for its stable operation, better beam quality, and easier manipulation,” Zhang says. “Light emission from a single-mode laser is monochromatic with low phase and intensity noises, but creating sufficiently modulated optical gain and loss to obtain single-mode lasing has been a challenge.”

    Scanning electron microscope image of the fabricated PT symmetry microring laser cavity.

    While mode manipulation and selection strategies have been developed to achieve single-mode lasing, each of these strategies has only been applicable to specific configurations. The microring laser cavity developed by Zhang’s group is the first successful concept for a general design. The key to their success is using the concept of the breaking of parity-time (PT) symmetry. The law of parity-time symmetry dictates that the properties of a system, like a beam of light, remain the same even if the system’s spatial configuration is reversed, like a mirror image, or the direction of time runs backward. Zhang and his group discovered a phenomenon called “thresholdless parity-time symmetry breaking” that provides them with unprecedented control over the resonant modes of their microring laser cavity, a critical requirement for emission control in laser physics and applications.

    Liang Feng

    “Thresholdless PT symmetry breaking means that our light beam undergoes symmetry breaking once the gain/loss contrast is introduced no matter how large this contrast is,” says Liang Feng, lead author of the Science paper, a recent postdoc in Zhang’s group and now an assistant professor with the University at Buffalo. “In other words, the threshold for PT symmetry breaking is zero gain/loss contrast.”

    Zhang, Feng and the other members of the team were able to exploit the phenomenon of thresholdless PT symmetry breaking through the fabrication of a unique microring laser cavity. This cavity consists of bilayered structures of chromium/germanium arranged periodically in the azimuthal direction on top of a microring resonator made from an indium-gallium-arsenide-phosphide compound on a substrate of indium phosphide. The diameter of the microring is 9 micrometers.

    “The introduced rotational symmetry in our microring resonator is continuous, mimicking an infinite system,” says Feng. “The counterintuitive discovery we made is that PT symmetry does not hold even at an infinitesimal gain/loss modulation when a system is rotationally symmetric. This was not observed in previous one-dimensional PT modulation systems because those finite systems did not support any continuous symmetry operations.”

    Using the continuous rotational symmetry of their microring laser cavity to facilitate thresholdless PT symmetry breaking, Zhang, Feng and their collaborators are able to delicately manipulate optical gain and loss in such a manner as to ultimately yield single-mode lasing.

    “PT symmetry breaking means an optical mode can be gain-dominant for lasing, whereas PT symmetry means all the modes remain passive,” says Zi-Jing Wong, co-lead author and a graduate student in Zhang’s group. “With our microring laser cavity, we facilitate a desired mode in PT symmetry breaking, while keeping all other modes PT symmetric. Although PT symmetry breaking by itself cannot guarantee single-mode lasing, when acting together with PT symmetry for all other modes, it facilitates single-mode lasing.”

    In their Science paper, the researchers suggest that single-mode lasing through PT-symmetry breaking could pave the way to next generation optoelectronic devices for communications and computing as it enables the independent manipulation of multiple laser beams without the “crosstalk” problems that plague today’s systems. Their microring laser cavity concept might also be used to engineer optical modes in a typical multi-mode laser cavity to create a desired lasing mode and emission pattern.

    “Our microring laser cavities could also replace the large laser boxes that are routinely used in labs and industry today,” Feng says. “Moreover, the demonstrated single-mode operation regardless of gain spectral bandwidth may create a laser chip carrying trillions of informational signals at different frequencies. This would make it possible to shrink a huge datacenter onto a tiny photonic chip.”

    This research was supported by the Office of Naval Research MURI program.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 3:20 pm on October 29, 2014 Permalink | Reply
    Tags: Lawrence Berkeley National Laboratory

    From LBL: “New Lab Startup Afingen Uses Precision Method to Enhance Plants” 

    Berkeley Logo

    Berkeley Lab

    October 29, 2014
    Julie Chao (510) 486-6491

    Imagine being able to precisely control specific tissues of a plant to enhance desired traits without affecting the plant’s overall function. Thus a rubber tree could be manipulated to produce more natural latex. Trees grown for wood could be made with higher lignin content, making for stronger yet lighter-weight lumber. Crops could be altered so that only the leaves and certain other tissues had more wax, thus enhancing the plant’s drought tolerance, while its roots and other functions were unaffected.

    By manipulating a plant’s metabolic pathways, two scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), Henrik Scheller and Dominique Loqué, have figured out a way to genetically rewire plants to allow for an exceptionally high level of control over the spatial pattern of gene expression, while at the same time boosting expression to very high levels. Now they have launched a startup company called Afingen to apply this technology for developing low-cost biofuels that could be cost-competitive with gasoline and corn ethanol.

    Henrik Scheller (left) and Dominique Loqué hold a tray of Arabidopsis thaliana plants, which they used in their research. (Berkeley Lab photo)

    “With this tool we seem to have found a way to control very specifically what tissue or cell type expresses whatever we want to express,” said Scheller. “It’s a new way that people haven’t thought about to increase metabolic pathways. It could be for making more cell wall, for increasing the stress tolerance response in a specific tissue. We think there are many different applications.”

    Cost-competitive biofuels

    Afingen was awarded a Small Business Innovation Research (SBIR) grant earlier this year for $1.72 million to engineer switchgrass plants that will contain 20 percent more fermentable sugar and 40 percent less lignin in selected structures. The grant was provided under a new SBIR program at DOE that combines an SBIR grant with an option to license a specific technology produced at a national laboratory or university through DOE-supported research.

    “Techno-economic modeling done at (the Joint BioEnergy Institute, or JBEI) has shown that you would get a 23 percent reduction in the price of the biofuel with just a 20 percent reduction in lignin,” said Loqué. “If we could also increase the sugar content and make it easier to extract, that would reduce the price even further. But of course it also depends on the downstream efficiency.”

    Scheller and Loqué are plant biologists with the Department of Energy’s Joint BioEnergy Institute (JBEI), a Berkeley Lab-led research center established in 2007 to pursue breakthroughs in the production of cellulosic biofuels. Scheller heads the Feedstocks Division and Loqué leads the cell wall engineering group.

    The problem with too much lignin in biofuel feedstocks is that it is difficult and expensive to break down; reducing lignin content would allow the carbohydrates to be released and converted into fuels much more cost-effectively. Although low-lignin plants have been engineered, they grow poorly because important tissues lack the strength and structural integrity provided by the lignin. With Afingen’s technique, the plant can be manipulated to retain high lignin levels only in its water-carrying vascular cells, where cell-wall strength is needed for survival, but low levels throughout the rest of the plant.

    The centerpiece of Afingen’s technology is an “artificial positive feedback loop,” or APFL. The concept targets master transcription factors, which are molecules that regulate the expression of genes involved in certain biosynthetic processes, that is, whether certain genes are turned “on” or “off.” The APFL technology is a breakthrough in plant biotechnology, and Loqué and Scheller recently received an R&D 100 Award for the invention.

    An APFL is a segment of artificially produced DNA coded with instructions to make additional copies of a master transcription factor; when it is inserted at the start of a chosen biosynthetic pathway—such as the pathway that produces cellulose in fiber tissues—the plant cell will synthesize the cellulose and also make a copy of the master transcription factor that launched the cycle in the first place. Thus the cycle starts all over again, boosting cellulose production.
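
    The self-reinforcing character of the loop can be caricatured with a toy iteration. This is purely illustrative; the rate constants and the saturation term are invented and do not describe any real promoter or pathway.

        # Toy caricature of an artificial positive feedback loop (APFL): each round of
        # pathway activity also produces more of the master transcription factor that
        # drives the pathway, so output ratchets up until resources saturate.
        tf, product = 1.0, 0.0                     # transcription factor level, pathway output
        for step in range(10):
            rate = 2.0 * tf / (1.0 + 0.05 * tf)    # pathway activity driven by the TF, with saturation
            product += rate                        # e.g., cellulose deposited this round
            tf += 0.3 * rate                       # the APFL re-synthesizes the TF as well
            print(step, round(tf, 2), round(product, 2))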

    The process differs from classical genetic engineering. “Some people distinguish between ‘transgenic’ and ‘cisgenic.’ We’re using only pieces of DNA that are already in that plant and just rearranging them in a new way,” said Scheller. “We’re not bringing in foreign DNA.”

    Other licensees and applications

    This breakthrough technique can also be used in fungi and for a wide variety of uses in plants, for example, to increase food crop yields or to boost production of highly specialized molecules used by the pharmaceutical and chemical industries. “It could also increase the quality of forage crops, such as hay fed to cows, by increasing the sugar content or improving the digestibility,” Loqué said.

    Another intriguing application is for biomanufacturing. By engineering plants to grow entirely new pharmaceuticals, specialty chemicals, or polymer materials, the plant essentially becomes a “factory.” “We’re interested in using the plant itself as a host for production,” Scheller said. “Just like you can upregulate pathways in plants that make cell walls or oil, you can also upregulate pathways that make other compounds or properties of interest.”

    Separately, two other companies are using the APFL technology. Tire manufacturer Bridgestone has a cooperative research and development agreement (CRADA) with JBEI to develop more productive rubber-producing plants. FuturaGene, a Brazilian paper and biomass company, has licensed the technology for exclusive use with eucalyptus trees and several other crops; APFL can enhance or develop traits to optimize wood quality for pulping and bioenergy applications.

    “The inventors/founders of Afingen made the decision to not compete for a license in fields of use that were of interest to other companies that had approached JBEI. This allowed JBEI to move the technology forward more quickly on several fronts,” said Robin Johnston, Berkeley Lab’s Acting Deputy Chief Technology Transfer Officer. “APFL is a very insightful platform technology, and I think only a fraction of the applications have even been considered yet.”

    Afingen currently has one employee—Ai Oikawa, a former postdoctoral researcher and now the director of plant engineering—and will be hiring three more in November. It is the third startup company to spin out of JBEI. The first two were Lygos, which uses synthetic biology tools to produce chemical compounds, and TeselaGen, which makes tools for DNA synthesis and cloning.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 4:05 pm on October 28, 2014 Permalink | Reply
    Tags: CUORE collaboration, Lawrence Berkeley National Laboratory

    From LBL: “Creating the Coldest Cubic Meter in the Universe” 

    Berkeley Logo

    Berkeley Lab

    October 28, 2014
    Kate Greene 510-486-4404

    In an underground laboratory in Italy, an international team of scientists has created the coldest cubic meter in the universe. The cooled chamber—roughly the size of a vending machine—was chilled to 6 milliKelvin or -273.144 degrees Celsius in preparation for a forthcoming experiment that will study neutrinos, ghostlike particles that could hold the key to the existence of matter around us.

    Scientists inspect the cryostat of the Cryogenic Underground Observatory for Rare Events. Credit: CUORE collaboration

    The collaboration responsible for the record-setting refrigeration is called the Cryogenic Underground Observatory for Rare Events (CUORE), supported jointly by the Istituto Nazionale di Fisica Nucleare (INFN) in Italy, and the Department of Energy’s Office of Science and National Science Foundation in the US. Lawrence Berkeley National Lab (Berkeley Lab) manages the CUORE project in the US. The CUORE collaboration is made up of 157 scientists from the U.S., Italy, China, Spain, and France, and is based in the underground Italian facility called Laboratori Nazionali del Gran Sasso (LNGS) of the INFN.

    “We’ve been building this experiment for almost ten years,” says Yury Kolomensky, senior faculty scientist in the Physics Division of Berkeley Lab, professor of physics at UC Berkeley, and U.S. spokesperson for the CUORE collaboration. “This is a tremendous feat of cryogenics. We’ve exceeded our goal of 10 milliKelvin. Nothing in the universe this large has ever been as cold.”

    The chamber, technically called a cryostat, was designed and built in Italy, and maintained the ultra-cold temperature for more than two weeks. An international team of physicists, including students and postdoctoral scholars from Italy and the US, worked for over two years to assemble the cryostat, iron out the kinks, and demonstrate its record-breaking performance. The claim that no other object of similar size and temperature – either natural or man-made – exists in the universe was detailed in a recent paper by Jonathan Ouellet, Berkeley Lab Nuclear Science staff and UC Berkeley graduate student.

    In order to achieve such a low-temperature cryostat, the team used a multi-chamber design that looks something like Russian nesting dolls: six chambers in total, each becoming progressively smaller and colder.

    An illustration of the cross-section of the cryostat with a human figure for scale. Credit: CUORE collaboration

    The chambers are evacuated, isolating the insides from the room temperature, like in a thermos. The outer chambers are cooled to the temperature of liquid helium with mechanical coolers called pulse tubes – which do not require expensive cryogenic liquids. The innermost chamber is cooled using a process similar to traditional refrigeration in which a fluid evaporates and takes heat along with it. The only fluid that operates at such cold temperatures, however, is liquid helium. The researchers use a mixture of Helium-3 and Helium-4 that continuously circulates in a specialized cryogenic unit called a dilution refrigerator, removing any remnant heat energy from the smallest chamber. The CUORE dilution refrigerator, built by Leiden Cryogenics in the Netherlands, is one of the most powerful in the world. “It’s a Mack truck of dilution refrigerators,” Kolomensky says.

    The ultimate purpose for the coldest cubic meter in the universe is to house a new ultra-sensitive detector. The goal of CUORE is to observe a hypothesized rare process called neutrinoless double-beta decay. Detection of this process would allow researchers to demonstrate, for the first time, that neutrinos are their own antiparticles, thereby offering a possible explanation for the abundance of matter over anti-matter in our universe —in other words, why the galaxies, stars, and ultimately people exist in the universe at all.

    To detect neutrinoless double-beta decay, the team is using a detector made of 19 independent towers of tellurium dioxide (TeO2) crystals. Fifty-two crystals, each a little smaller than a Rubik’s cube, make up each tower. The team expects that they would be able to see evidence of the rare radioactive process within these cube-shaped crystals because the phenomenon would produce a barely detectable temperature rise, picked up by highly sensitive temperature sensors.
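
    The detector arithmetic, plus the generic bolometric relation behind those temperature sensors, in one place; the energy and heat-capacity values below are rough placeholders, not CUORE specifications.

        # 19 towers of 52 crystals each, as described above.
        print(19 * 52)                           # 988 TeO2 crystals in total

        # Bolometric principle (generic, not CUORE's actual numbers): an energy deposit E
        # warms a crystal by roughly dT = E / C. Heat capacity falls steeply as the crystal
        # is cooled, which is why millikelvin operation makes tiny deposits measurable.
        E_deposit_J = 4.0e-13                    # ~2.5 MeV decay energy, order of magnitude only
        C_crystal_J_per_K = 1.0e-9               # hypothetical heat capacity near 10 mK
        print(E_deposit_J / C_crystal_J_per_K)   # temperature rise in kelvin (~0.4 mK here)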

    Berkeley Lab, with Lawrence Livermore National Lab, has supplied roughly half the crystals for the CUORE project. In addition, Berkeley Lab designed and fabricated the highly sensitive temperature sensors – Neutron Transmutation Doped thermistors invented by Eugene Haller, UC Berkeley faculty and senior faculty scientist in the Materials Sciences Division.

    UC postdocs Tom Banks and Tommy O’Donnell, who also have joint appointments with the Nuclear Science Division at Berkeley Lab, led the international team of physicists, engineers, and technicians that assembled over ten thousand parts into towers in nitrogen-filled glove boxes, including bonding almost 8,000 25-micron gold wires to 100-micron-sized pads on the temperature sensors and on copper pads connected to detector wiring.

    The last of the 19 towers has recently been completed; all towers are now safely stored underground at LNGS, waiting to occupy the record-breaking vessel. The coldest cubic meter in the known universe is not just a feat of engineering; it will become a premier science instrument next year.

    The US CUORE team was led by the late Prof. Stuart Freedman until his untimely passing in 2012. Other current and former Berkeley Lab members of the CUORE collaboration not previously mentioned include US Contractor Project Manager Sergio Zimmermann (Engineering Division), former US Contractor Project Manager Richard Kadel (Physics Division, retired), staff scientists Jeffrey Beeman (Materials Sciences Division), Brian Fujikawa (Nuclear Science Division), Sarah Morgan (Engineering), Alan Smith (EH&S), postdocs Raul Hennings-Yeomans (UCB and NSD), Ke Han (NSD, now Yale), and Yuan Mei (NSD), graduate students Alexey Drobizhev and Sachi Wagaarachchi (UCB and NSD), and engineers David Biare, Lucio di Paolo (NSD and LNGS), and Joseph Wallig (Engineering).

    For more information: CUORE collaboration news release here.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 8:43 pm on October 3, 2014 Permalink | Reply
    Tags: Lawrence Berkeley National Laboratory

    From LBL: “RCas9: A Programmable RNA Editing Tool” 

    Berkeley Logo

    Berkeley Lab

    October 3, 2014
    Lynn Yarris (510) 486-5375

    A powerful scientific tool for editing the DNA instructions in a genome can now also be applied to RNA, the molecule that translates DNA’s genetic instructions into the production of proteins. A team of researchers with Berkeley Lab and the University of California (UC) Berkeley has demonstrated a means by which the CRISPR/Cas9 protein complex can be programmed to recognize and cleave RNA at sequence-specific target sites. This finding has the potential to transform the study of RNA function by paving the way for direct RNA transcript detection, analysis and manipulation.

    Schematic shows how RNA-guided Cas9 working with PAMmer can target ssRNA for programmable, sequence-specific cleavage.

    Led by Jennifer Doudna, a biochemist and leading authority on the CRISPR/Cas9 complex, the Berkeley team showed how the Cas9 enzyme can work with short DNA sequences known as “PAM,” for protospacer adjacent motif, to identify and bind with a specific site of single-stranded RNA (ssRNA). The team is designating this RNA-targeting CRISPR/Cas9 complex as RCas9.

    “Using specially designed PAM-presenting oligonucleotides, or PAMmers, RCas9 can be specifically directed to bind or cut RNA targets while avoiding corresponding DNA sequences, or it can be used to isolate specific endogenous messenger RNA from cells,” says Doudna, who holds joint appointments with Berkeley Lab’s Physical Biosciences Division and UC Berkeley’s Department of Molecular and Cell Biology and Department of Chemistry, and is also an investigator with the Howard Hughes Medical Institute (HHMI). “Our results reveal a fundamental connection between PAM binding and substrate selection by RCas9, and highlight the utility of RCas9 for programmable RNA transcript recognition without the need for genetically introduced tags.”

    Biochemist Jennifer Doudna is a leading authority on the CRISPR/Cas9 complex. (Photo by Roy Kaltschmidt)

    From safer, more effective medicines and clean, green, renewable fuels, to the clean-up and restoration of our air, water and land, the potential is there for genetically engineered bacteria and other microbes to produce valuable goods and perform critical services. To exploit the vast potential of microbes, scientists must be able to precisely edit their genetic information.

    In recent years, the CRISPR/Cas complex has emerged as one of the most effective tools for doing this. CRISPR, which stands for Clustered Regularly Interspaced Short Palindromic Repeats, is a central part of the bacterial immune system and handles sequence recognition. Cas9 – Cas stands for CRISPR-associated – is an RNA-guided enzyme that handles the snipping of DNA strands at the specified sequence site.

    Together, CRISPR and Cas9 can be used to precisely edit the DNA instructions in a targeted genome for making desired types of proteins. The DNA is cut at a specific location so that old DNA instructions can be removed and/or new instructions inserted.

    Until now, it was thought that Cas9 could not be used on the RNA molecules that transcribe those DNA instructions into the desired proteins.

    “Just as Cas9 can be used to cut or bind DNA in a sequence-specific manner, RCas9 can cut or bind RNA in a sequence-specific manner,” says Mitchell O’Connell, a member of Doudna’s research group and the lead author of a paper in Nature that describes this research titled Programmable RNA recognition and cleavage by CRISPR/Cas9. Doudna is the corresponding author. Other co-authors are Benjamin Oakes, Samuel Sternberg, Alexandra East Seletsky and Matias Kaplan.

    Benjamin Oakes and Mitch O’Connell are part of the collaboration led by Jennifer Doudna that showed how the CRISPR/Cas9 complex can serve as a programmable RNA editor. (Photo by Roy Kaltschmidt)

    In an earlier study, Doudna and her group showed that the genome editing ability of Cas9 is made possible by the presence of PAM, which marks where cutting is to commence and activates the enzyme’s strand-cleaving activity. In this latest study, Doudna, O’Connell and their collaborators show that PAMmers, in a similar manner, can also stimulate site-specific endonucleolytic cleavage of ssRNA targets. They used Cas9 enzymes from the bacterium Streptococcus pyogenes to perform a variety of in vitro cleavage experiments using a panel of RNA and DNA targets.

    “While RNA interference has proven useful for manipulating gene regulation in certain organisms, there has been a strong motivation to develop an orthogonal nucleic-acid-based RNA-recognition system such as RCas9,” Doudna says. “The molecular basis for RNA recognition by RCas9 is now clear and requires only the design and synthesis of a matching guide RNA and complementary PAMmer.”
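
    As a toy illustration of that last point (this is not the authors' design pipeline, and the 20-nucleotide target below is invented): once a site on the single-stranded RNA is chosen, the guide-RNA spacer and the core of the PAMmer are essentially its reverse complements, written as RNA and DNA respectively.

        # Toy sequence bookkeeping only; real guide/PAMmer design involves many more
        # constraints described in the paper. The target sequence is invented.
        def revcomp(seq, as_rna=False):
            pairs = {"A": "U" if as_rna else "T", "U": "A", "G": "C", "C": "G"}
            return "".join(pairs[base] for base in reversed(seq))

        target_ssrna = "AUGGCUAGCUUAGGCAUCCG"                # hypothetical 20-nt stretch of an mRNA
        guide_spacer = revcomp(target_ssrna, as_rna=True)    # RNA strand complementary to the target
        pammer_core  = revcomp(target_ssrna)                 # DNA strand complementary to the target
        print(guide_spacer)
        print(pammer_core)
        # A real PAMmer additionally carries the PAM motif, positioned according to the
        # paper's design rules, which this sketch does not attempt to reproduce.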

    The researchers envision a wide range of potential applications for RCas9. For example, an RCas9 tethered to a protein translation initiation factor and targeted to a specific mRNA could essentially act as a designer translation factor to “up-” or “down-” regulate protein synthesis from that mRNA.

    “Tethering RCas9 to beads could be used to isolate RNA or native RNA–protein complexes of interest from cells for downstream analysis or assays,” O’Connell says. “RCas9 fused to select protein domains could promote or exclude specific introns or exons, and RCas9 tethered to a fluorescent protein could be used to observe RNA localization and transport in living cells.”

    This research was primarily supported by the NIH-funded Center for RNA Systems Biology.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 8:21 pm on October 2, 2014 Permalink | Reply
    Tags: Lawrence Berkeley National Laboratory

    From LBL: “A Closer Look at the Perfect Fluid” 

    Berkeley Logo

    Berkeley Lab

    October 2, 2014
    Kate Greene 510-486-4404

    Researchers at Berkeley Lab and their collaborators have honed a way to probe the quark-gluon plasma, the kind of matter that dominated the universe immediately after the big bang.

    A simulated collision of lead ions, courtesy of the ALICE experiment at CERN.

    By combining data from two high-energy accelerators, nuclear scientists have refined the measurement of a remarkable property of exotic matter known as quark-gluon plasma. The findings reveal new aspects of the ultra-hot, “perfect fluid” that give clues to the state of the young universe just microseconds after the big bang.

    The multi-institutional team known as the JET Collaboration, led by researchers at the U.S. Department of Energy’s Lawrence Berkeley National Lab (Berkeley Lab), published their results in a recent issue of Physical Review C. The JET Collaboration is one of the Topical Collaborations in nuclear theory established by the DOE Office of Science in 2010. JET, which stands for Quantitative Jet and Electromagnetic Tomography, aims to study the probes used to investigate high-energy, heavy-ion collisions. The Collaboration currently has 12 participating institutions with Berkeley Lab as the leading institute.

    “We have made, by far, the most precise extraction to date of a key property of the quark-gluon plasma, which reveals the microscopic structure of this almost perfect liquid,” says Xin-Nian Wang, physicist in the Nuclear Science Division at Berkeley Lab and managing principal investigator of the JET Collaboration. Perfect liquids, Wang explains, have the lowest viscosity-to-density ratio allowed by quantum mechanics, which means they essentially flow without friction.
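
    The quantum limit Wang alludes to is usually stated as the conjectured lower bound on the ratio of shear viscosity to entropy density (the Kovtun-Son-Starinets bound), which the quark-gluon plasma comes close to saturating:

        \frac{\eta}{s} \;\ge\; \frac{\hbar}{4 \pi k_B}

    Here η is the shear viscosity and s the entropy density; the bound is conjectured rather than proven for all systems.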

    Hot Plasma Soup

    To create and study the quark-gluon plasma, nuclear scientists used particle accelerators called the Relativistic Heavy-ion Collider (RHIC) at the Brookhaven National Laboratory in New York and the Large Hadron Collider (LHC) at CERN in Switzerland. By accelerating heavy atomic nuclei to high energies and blasting them into each other, scientists are able to recreate the hot temperature conditions of the early universe.

    RHIC at BNL

    LHC at CERN

    Inside protons and neutrons that make up the colliding atomic nuclei are elementary particles called quarks, which are bound together tightly by other elementary particles called gluons. Only under extreme conditions, such as collisions in which temperatures exceed by a million times those at the center of the sun, do quarks and gluons pull apart to become the ultra-hot, frictionless perfect fluid known as quark-gluon plasma.

    “The temperature is so high that the boundaries between different nuclei disappear so everything becomes a hot-plasma soup of quarks and gluons,” says Wang. This ultra-hot soup is contained within a chamber in the particle accelerator, but it is short-lived—quickly cooling and expanding—making it a challenge to measure. Experimentalists have developed sophisticated tools to overcome the challenge, but translating experimental observations into precise quantitative understanding of the quark-gluon plasma has been difficult to achieve until now, he says.

    Looking Inside

    In this new work, Wang’s team refined a probe that makes use of a phenomenon researchers at Berkeley Lab first theoretically outlined 20 years ago: energy loss of a high-energy particle, called a jet, inside the quark gluon plasma.

    “When a hot quark-gluon plasma is generated, sometimes you also produce these very energetic particles with an energy a thousand times larger than that of the rest of the matter,” says Wang. This jet propagates through the plasma, scatters, and loses energy on its way out.

    Since the researchers know the energy of the jet when it is produced, and can measure its energy coming out, they can calculate its energy loss, which provides clues to the density of the plasma and the strength of its interaction with the jet. “It’s like an x-ray going through a body so you can see inside,” says Wang.
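
    The bookkeeping behind that x-ray analogy is simple; the numbers below are invented for illustration.

        # Schematic jet-quenching bookkeeping (numbers invented): the jet's initial energy
        # is set by the hard scattering, its final energy is measured in the detector, and
        # the difference is what was deposited in the plasma.
        E_initial_GeV = 200.0        # hypothetical jet energy at production
        E_measured_GeV = 120.0       # hypothetical jet energy after crossing the plasma
        energy_loss_GeV = E_initial_GeV - E_measured_GeV
        print(energy_loss_GeV, energy_loss_GeV / E_initial_GeV)   # absolute and fractional loss
        # Comparing such losses across path lengths and collision energies is what pins
        # down the jet transport coefficient discussed below.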

    Xin-Nian Wang, physicist in the Nuclear Science Division at Berkeley Lab and managing principal investigator of the JET Collaboration.

    One difficulty in using a jet as an x-ray of the quark-gluon plasma is the fact that a quark-gluon plasma is a rapidly expanding ball of fire—it doesn’t sit still. “You create this hot fireball that expands very fast as it cools down quickly to ordinary matter,” Wang says. So it’s important to develop a model to accurately describe the expansion of plasma, he says. The model must rely on a branch of theory called relativistic hydrodynamics in which the motion of fluids is described by equations from Einstein’s theory of special relativity.

    Over the past few years, researchers from the JET Collaboration have developed such a model that can describe the process of expansion and the observed phenomena of an ultra-hot perfect fluid. “This allows us to understand how a jet propagates through this dynamic fireball,” says Wang.

    Employing this model for the quark-gluon plasma expansion and jet propagation, the researchers analyzed combined data from the PHENIX and STAR experiments at RHIC and the ALICE and CMS experiments at LHC since each accelerator created quark-gluon plasma at different initial temperatures. The team determined one particular property of the quark-gluon plasma, called the jet transport coefficient, which characterizes the strength of interaction between the jet and the ultra-hot matter. “The determined values of the jet transport coefficient can help to shed light on why the ultra-hot matter is the most ideal liquid the universe has ever seen,” Wang says.

    PHENIX at BNL

    STAR at BNL

    ALICE at CERN

    CMS at CERN

    Peter Jacobs, head of the experimental group at Berkeley Lab that carried out the first jet and flow measurements with the STAR Collaboration at RHIC, says the new result is “very valuable as a window into the precise nature of the quark gluon plasma. The approach taken by the JET Collaboration to achieve it, by combining efforts of several groups of theorists and experimentalists, shows how to make other precise measurements of properties of the quark gluon plasma in the future.”

    The team’s next steps are to analyze future data at lower RHIC energies and higher LHC energies to see how these temperatures might affect the plasma’s behavior, especially near the phase transition between ordinary matter and the exotic matter of the quark-gluon plasma.

    This work was supported by the DOE Office of Science, Office of Nuclear Physics and used the facilities of the National Energy Research Scientific Computing Center (NERSC) located at Berkeley Lab.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 4:04 pm on September 29, 2014 Permalink | Reply
    Tags: Lawrence Berkeley National Laboratory

    From LBL: “MaxBin: Automated Sorting Through Metagenomes” 

    Berkeley Logo

    Berkeley Lab

    September 29, 2014
    Lynn Yarris (510) 486-5375

    Microbes – the single-celled organisms that dominate every ecosystem on Earth – have an amazing ability to feed on plant biomass and convert it into other chemical products. Tapping into this talent has the potential to revolutionize energy, medicine, environmental remediation and many other fields. The success of this effort hinges in part on metagenomics, the emerging technology that enables researchers to read all the individual genomes of a sample microbial community at once. However, given that even a teaspoon of soil can contain billions of microbes, there is a great need to be able to cull the genomes of individual microbial species from a metagenomic sequence.

    Enter MaxBin, an automated software program for binning (sorting) the genomes of individual microbial species from metagenomic sequences. Developed at the U.S. Department of Energy (DOE)’s Joint BioEnergy Institute (JBEI), under the leadership of Steve Singer, who directs JBEI’s Microbial Communities Group, MaxBin facilitates the genomic analysis of uncultivated microbial populations that can hold the key to the production of new chemical materials, such as advanced biofuels or pharmaceutical drugs.

    MaxBin, an automated software program for binning the genomes of individual microbial species from metagenomic sequences, is available on-line through JBEI.

    “MaxBin automates the binning of assembled metagenomic scaffolds using an expectation-maximization algorithm after the assembly of metagenomic sequencing reads,” says Singer, a chemist who also holds an appointment with Berkeley Lab’s Earth Sciences Division. “Previous binning methods either required a significant amount of work by the user, or required a large number of samples for comparison. MaxBin requires only a single sample and is a push-button operation for users.”

    JBEI researchers Yu-Wei Wu, Steve Singer and Danny Tang developed MaxBin to automatically recover individual genomes from metagenomes using an expectation-maximization algorithm. (Photo by Roy Kaltschmidt)

    The key to the success of MaxBin is its expectation-maximization algorithm, which was developed by Yu-Wei Wu, a post-doctoral researcher in Singer’s group. This algorithm enables the classification of metagenomic sequences into discrete bins that represent the genomes of individual microbial populations within a sample community.

    “Using our expectation-maximization algorithm, MaxBin combines information from tetranucleotide frequencies and scaffold coverage levels to organize metagenomic sequences into the individual bins, which are predicted from an initial identification of marker genes in assembled sequences,” Wu says.
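
    To make the expectation-maximization idea concrete, here is a minimal sketch of EM-style binning, not MaxBin's actual implementation: each scaffold is reduced to two invented features (a composition score standing in for tetranucleotide frequencies, and a coverage value), and a two-component Gaussian mixture is fit by alternating E- and M-steps.

        # Minimal EM sketch, not MaxBin's implementation. Synthetic "scaffolds" are drawn
        # from two hidden populations with different composition and coverage, then
        # softly assigned to bins with a two-component diagonal Gaussian mixture.
        import numpy as np

        rng = np.random.default_rng(0)
        data = np.vstack([rng.normal([0.2, 30.0], [0.05, 3.0], (50, 2)),
                          rng.normal([0.6, 80.0], [0.05, 5.0], (50, 2))])

        K = 2
        means = np.array([[0.3, 40.0], [0.5, 70.0]])   # crude initial guesses for the two bins
        var = np.full((K, 2), data.var(axis=0))
        weights = np.full(K, 1.0 / K)

        for _ in range(50):
            # E-step: responsibility of each bin for each scaffold.
            dens = np.stack([weights[k] * np.prod(
                np.exp(-(data - means[k]) ** 2 / (2 * var[k])) / np.sqrt(2 * np.pi * var[k]),
                axis=1) for k in range(K)], axis=1)
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: update bin weights, means and variances from the responsibilities.
            Nk = resp.sum(axis=0)
            weights = Nk / len(data)
            means = (resp.T @ data) / Nk[:, None]
            var = (resp[:, :, None] * (data[:, None, :] - means) ** 2).sum(axis=0) / Nk[:, None]

        print(np.bincount(resp.argmax(axis=1)))   # roughly 50 scaffolds recovered per bin

    MaxBin itself works with the full set of tetranucleotide frequencies, marker-gene-based initialization and read-coverage statistics, so this sketch only conveys the shape of the E- and M-steps.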

    MaxBin was successfully tested on samples from the Human Microbiome Project and from green waste compost. In these tests, which were carried out by Yung-Tsu Tang, a student intern from the City College of San Francisco, MaxBin proved to be highly accurate in its ability to recover individual genomes from metagenomic datasets with variable sequencing coverages.

    “Applying MaxBin to an enriched cellulolytic consortia enabled us to identify a number of uncultivated cellulolytic bacteria, including a myxobacterium that possesses a remarkably reduced genome and expanded set of genes for biomass deconstruction compared to its closest sequenced relatives,” Singer says. “This demonstrates that the processes required for recovering genomes from metagenomic datasets can be applied to understanding biomass breakdown in the environment”.

    MaxBin is now being used at JBEI in its efforts to use microbes for the production of advanced biofuels – gasoline, diesel and jet fuel – from plant biomass. MaxBin is also available for downloading. To date, more than 150 researchers have accessed it.

    A paper describing MaxBin in detail has been published in the journal Microbiome. The paper is titled “MaxBin: an automated binning method to recover individual genomes from metagenomes using an expectation-maximization algorithm.” Co-authoring the paper, in addition to Singer, Wu and Tang, were Susannah Tringe of the Joint Genome Institute and Blake Simmons of JBEI.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

     
  • richardmitnick 3:41 pm on September 27, 2014 Permalink | Reply
    Tags: , CO2 studies, Lawrence Berkeley National laboratory, ,   

    From LBL: “Pore models track reactions in underground carbon capture” 

    Berkeley Logo

    Berkeley Lab

    September 25, 2014

    Using tailor-made software running on top-tier supercomputers, a Lawrence Berkeley National Laboratory team is creating microscopic pore-scale simulations that complement or push beyond laboratory findings.

    Computed pH on calcite grains at 1 micron resolution. The iridescent grains mimic crushed material geoscientists extract from saline aquifers deep underground to study with microscopes. Researchers want to model what happens to the crystals’ geochemistry when the greenhouse gas carbon dioxide is injected underground for sequestration. Image courtesy of David Trebotich, Lawrence Berkeley National Laboratory.

    The models of microscopic underground pores could help scientists evaluate ways to store carbon dioxide produced by power plants, keeping it from contributing to global climate change.

    The models could be a first, says David Trebotich, the project’s principal investigator. “I’m not aware of any other group that can do this, not at the scale at which we are doing it, both in size and computational resources, as well as the geochemistry.” His evidence is a colorful portrayal of jumbled calcite crystals derived solely from mathematical equations.

    The iridescent menagerie is intended to act just like the real thing: minerals geoscientists extract from saline aquifers deep underground. The goal is to learn what will happen when fluids pass through the material should power plants inject carbon dioxide underground.

    Lab experiments can only measure what enters and exits the model system. Modelers now want to identify more of what happens within the tiny pores of underground materials, where chemicals dissolve in some places and precipitate in others, potentially creating preferential flow paths or even clogs.

    Geoscientists give Trebotich’s group of modelers microscopic computerized tomography (CT, similar to the scans done in hospitals) images of their field samples. That lets both camps probe an anomaly: reactions in the tiny pores happen much more slowly in real aquifers than they do in laboratories.

    Going deep

    Deep saline aquifers are underground formations of salty water found in sedimentary basins all over the planet. Scientists consider them the best type of deep geological formation for storing carbon dioxide from power plants.

    But experts need to know whether the greenhouse gas will stay bottled up as more and more of it is injected, spreading a fluid plume and building up pressure. “If it’s not going to stay there, [geoscientists] will want to know where it is going to go and how long that is going to take,” says Trebotich, who is a computational scientist in Berkeley Lab’s Applied Numerical Algorithms Group.

    He hopes their simulation results ultimately will translate to field scale, where “you’re going to be able to model a CO2 plume over a hundred years’ time and kilometers in distance.” But for now his group’s focus is at the microscale, with attention toward the even smaller nanoscale.

    At such tiny dimensions, flow, chemical transport, mineral dissolution and mineral precipitation occur within the pores where individual grains and fluids commingle, says a 2013 paper Trebotich coauthored with geoscientists Carl Steefel (also of Berkeley Lab) and Sergi Molins in the journal Reviews in Mineralogy and Geochemistry.

    These dynamics, the paper added, create uneven conditions that can produce new structures and self-organized materials – nonlinear behavior that can be hard to describe mathematically.

    Modeling at 1 micron resolution, his group has achieved “the largest pore-scale reactive flow simulation ever attempted” as well as “the first-ever large-scale simulation of pore-scale reactive transport processes on real-pore-space geometry as obtained from experimental data,” says the 2012 annual report of the lab’s National Energy Research Scientific Computing Center (NERSC).

    The simulation required about 20 million processor hours using 49,152 of the 153,216 computing cores in Hopper, a Cray XE6 that at the time was NERSC’s flagship supercomputer.
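
    For scale: if the job had run on all 49,152 of those cores continuously, 20 million processor hours works out to roughly 20,000,000 ÷ 49,152 ≈ 407 hours, or about 17 days of wall-clock time.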

    Cray Hopper at NERSC

    “As CO2 is pumped underground, it can react chemically with underground minerals and brine in various ways, sometimes resulting in mineral dissolution and precipitation, which can change the porous structure of the aquifer,” the NERSC report says. “But predicting these changes is difficult because these processes take place at the pore scale and cannot be calculated using macroscopic models.

    “The dissolution rates of many minerals have been found to be slower in the field than those measured in the laboratory. Understanding this discrepancy requires modeling the pore-scale interactions between reaction and transport processes, then scaling them up to reservoir dimensions. The new high-resolution model demonstrated that the mineral dissolution rate depends on the pore structure of the aquifer.”

    Trebotich says “it was the hardest problem that we could do for the first run.” But the group redid the simulation about 2½ times faster in an early trial of Edison, a Cray XC30 that succeeded Hopper. Edison, Trebotich says, has higher memory bandwidth.

    Cray Edison at NERSC

    Rapid changes

    Generating 1-terabyte data sets for each microsecond time step, the Edison run demonstrated how quickly conditions can change inside each pore. It also provided a good workout for the combination of interrelated software packages the Trebotich team uses.

    The first, Chombo, takes its name from a Swahili word meaning “toolbox” or “container” and was developed by a different Applied Numerical Algorithms Group team. Chombo is a supercomputer-friendly platform that’s scalable: “You can run it on multiple processor cores, and scale it up to do high-resolution, large-scale simulations,” he says.

    Trebotich modified Chombo to add flow and reactive transport solvers. The group also incorporated the geochemistry components of CrunchFlow, a package Steefel developed, to create Chombo-Crunch, the code used for their modeling work. The simulations produce resolutions “very close to imaging experiments,” the NERSC report said, combining simulation and experiment to achieve a key goal of the Department of Energy’s Energy Frontier Research Center for Nanoscale Control of Geologic CO2.
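
    One common way to couple a scalable transport solver to a geochemistry package is operator splitting: each time step first advances flow and transport, then applies local chemical reactions cell by cell. The sketch below illustrates that split on a deliberately tiny problem. It is not Chombo, CrunchFlow or Chombo-Crunch code, and all of its coefficients are made up for illustration; it simply advances one dissolved species on a small 2D grid with an explicit advection-diffusion step followed by a first-order mineral-dissolution step.

```python
# Toy operator-splitting sketch of pore-scale reactive transport.
# NOT Chombo/CrunchFlow code; coefficients below are assumed, not measured.
import numpy as np

NX, NY = 64, 32          # grid cells
DX = 1e-6                # 1 micron cell size, matching the article's resolution
D = 1e-9                 # diffusion coefficient (m^2/s), assumed
UX = 1e-4                # pore-water velocity in +x (m/s), assumed
K_DISS = 5.0             # first-order dissolution rate constant (1/s), assumed
C_EQ = 1.0               # equilibrium (saturation) concentration, normalized
DT = 0.2 * DX * DX / D   # time step within the explicit diffusion limit

conc = np.zeros((NX, NY))        # dissolved species, initially absent
mineral = np.ones((NX, NY))      # normalized mineral content of each cell

def transport_step(c):
    """Explicit upwind advection in x plus 5-point diffusion."""
    cp = np.pad(c, 1, mode="edge")
    lap = (cp[2:, 1:-1] + cp[:-2, 1:-1] + cp[1:-1, 2:] + cp[1:-1, :-2]
           - 4.0 * c) / DX**2
    dcdx = (c - cp[:-2, 1:-1]) / DX      # upwind difference for flow in +x
    return c + DT * (D * lap - UX * dcdx)

def reaction_step(c, m):
    """Local dissolution: mineral releases solute until c reaches C_EQ."""
    rate = K_DISS * m * np.clip(C_EQ - c, 0.0, None)   # no precipitation here
    dc = rate * DT
    return c + dc, np.clip(m - dc, 0.0, None)

for step in range(2000):
    conc = transport_step(conc)
    conc[0, :] = 0.0                     # fresh, undersaturated fluid at the inlet
    conc, mineral = reaction_step(conc, mineral)

print("mean outlet concentration:", conc[-1, :].mean())
print("mineral remaining (fraction):", mineral.mean())
```

    Even this toy hints at why pore structure matters: cells near the inlet see undersaturated fluid and dissolve quickly, while cells farther along sit in nearly saturated fluid and dissolve far more slowly.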

    Now Trebotich’s team has three huge allocations on DOE supercomputers to make their simulations even more detailed. The Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program is providing 80 million processor hours on Mira, an IBM Blue Gene/Q at Argonne National Laboratory. Through the Advanced Scientific Computing Research Leadership Computing Challenge (ALCC), the group has another 50 million hours on NERSC computers and 50 million on Titan, a Cray XK7 at the Oak Ridge Leadership Computing Facility. The team also held an ALCC award last year for 80 million hours at Argonne and 25 million at NERSC.

    Mira at Argonne

    Titan at Oak Ridge

    With the computer time, the group wants to refine their image resolutions to half a micron (half of a millionth of a meter). “This is what’s known as the mesoscale: an intermediate scale that could make it possible to incorporate atomistic-scale processes involving mineral growth at precipitation sites into the pore scale flow and transport dynamics,” Trebotich says.
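
    The computational cost of that refinement is steep: halving the grid spacing in three dimensions multiplies the number of cells by roughly eight, and explicit time steps generally have to shrink along with the cells, which is part of why allocations measured in tens of millions of processor hours are needed.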

    Meanwhile, he thinks their micron-scale simulations already are good enough to provide “ground-truthing” in themselves for the lab experiments geoscientists do.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

     