Tagged: Energy

  • richardmitnick 10:23 am on August 15, 2015 Permalink | Reply
    Tags: Energy

    From U Chicago: “Copper clusters capture and convert carbon dioxide to make fuel” 


    University of Chicago

    August 14, 2015
    Payal Marathe

    A copper tetramer catalyst created by researchers at Argonne National Laboratory may help capture and convert carbon dioxide in a way that ultimately saves energy. It consists of small clusters of four copper atoms each, supported on a thin film of aluminum oxide. These catalysts work by binding to carbon dioxide molecules, orienting them in a way that is ideal for chemical reactions.
    Courtesy of Larry Curtiss

    Capture and convert: This is the motto of carbon dioxide reduction—a process that stops the greenhouse gas before it escapes from chimneys and power plants into the atmosphere and instead turns it into a useful product.

    One possible end product is methanol, a liquid fuel and the focus of a recent study conducted at Argonne National Laboratory. The chemical reactions that make methanol from carbon dioxide rely on a catalyst to speed up the conversion, and Argonne scientists identified a new material that could fill this role. With its unique structure, this catalyst can capture and convert carbon dioxide in a way that ultimately saves energy.

    They call it a copper tetramer.

    It consists of small clusters of four copper atoms each, supported on a thin film of aluminum oxide. These catalysts work by binding to carbon dioxide molecules, orienting them in a way that is ideal for chemical reactions. The structure of the copper tetramer is such that most of its binding sites are open, which means it can attach more strongly to carbon dioxide and can better accelerate the conversion.

    The current industrial process to reduce carbon dioxide to methanol uses a catalyst of copper, zinc oxide and aluminum oxide. A number of its binding sites are occupied merely in holding the compound together, which limits how many atoms can catch and hold carbon dioxide.
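    For reference, the overall hydrogenation reaction that both the industrial catalyst and the copper tetramer accelerate is the textbook route from carbon dioxide to methanol (the stoichiometry is not spelled out in the article):

    ```latex
    \mathrm{CO_2} + 3\,\mathrm{H_2} \;\longrightarrow\; \mathrm{CH_3OH} + \mathrm{H_2O}
    ```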

    “With our catalyst, there is no inside,” said paper co-author Stefan Vajda, a senior chemist at Argonne and a fellow of the University of Chicago’s Institute for Molecular Engineering. “All four copper atoms are participating because with only a few of them in the cluster, they are all exposed and able to bind.”

    To compensate for a catalyst with fewer binding sites, the current method of reduction creates high-pressure conditions to facilitate stronger bonds with carbon dioxide molecules. But compressing gas into a high-pressure mixture takes a lot of energy.

    The benefit of enhanced binding is that the new catalyst requires lower pressure and less energy to produce the same amount of methanol.

    Carbon dioxide emissions are an ongoing environmental problem, and according to the authors, it’s important that research identifies optimal ways to deal with the waste.

    “We’re interested in finding new catalytic reactions that will be more efficient than the current catalysts, especially in terms of saving energy,” said Larry Curtiss, an Argonne distinguished fellow who co-authored this paper.

    Copper tetramers could allow for the capture and conversion of carbon dioxide on a larger scale—reducing an environmental threat and creating a useful product like methanol, which can be transported and burned for fuel.

    The catalyst still has a long journey ahead from the lab to industry. Potential obstacles include instability and figuring out how to manufacture mass quantities. There’s a chance that copper tetramers may decompose when put to use in an industrial setting, so ensuring long-term durability is a critical step for future research, Curtiss said. And while the scientists needed only nanograms of the material for this study, that number would have to be multiplied dramatically for industrial purposes.

    Meanwhile, researchers are interested in searching for other catalysts that might even outperform their copper tetramer. These catalysts can be varied in size, composition and support material, resulting in a list of more than 2,000 potential combinations, Vajda said.

    But the scientists don’t have to run thousands of different experiments, said Peter Zapol, an Argonne physicist and co-author of this paper. Instead, they will use advanced calculations to make predictions [simulations], and then test the catalysts that seem most promising.

    “We haven’t yet found a catalyst better than the copper tetramer, but we hope to,” Vajda said. “With global warming becoming a bigger burden, it’s pressing that we keep trying to turn carbon dioxide emissions back into something useful.”

    For this research, the team used the Center for Nanoscale Materials as well as beamline 12-ID-C of the Advanced Photon Source [APS].


    Curtiss said the Advanced Photon Source allowed the scientists to observe ultra-low loadings of their small clusters, down to a few nanograms, which was a critical piece of this investigation.

    The study, “Carbon dioxide conversion to methanol over size-selected Cu4 clusters at low pressures,” was published in the Journal of the American Chemical Society and was funded by the DOE’s Office of Basic Energy Sciences. Co-authors also included researchers from the University of Freiburg and Yale University.

    This article first appeared on the Argonne National Laboratory website.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


    An intellectual destination

    One of the world’s premier academic and research institutions, the University of Chicago has driven new ways of thinking since our 1890 founding. Today, UChicago is an intellectual destination that draws inspired scholars to our Hyde Park and international campuses, keeping UChicago at the nexus of ideas that challenge and change the world.

  • richardmitnick 7:34 am on April 23, 2015 Permalink | Reply
    Tags: Energy

    From ANU: “Australia can cut emissions and grow its economy” 


    Australian National University

    22 April 2015
    No Writer Credit


    Australia can make deep cuts to its carbon emissions and move to full renewable energy for its electricity supply at a relatively low cost, an ANU report has found.

    The report, written by Associate Professor Frank Jotzo and PhD scholar Luke Kemp, reviews the evidence from major studies over the past eight years.

    It finds that the cost estimates for Australia reaching ambitious emissions reduction goals came down in every successive major report.

    “Deep cuts to Australia’s emissions can be achieved, at a low cost,” said Associate Professor Jotzo, director of the ANU Centre for Climate Economics and Policy at the Crawford School of Public Policy.

    Australia has committed to cut greenhouse gas emissions to five per cent below year 2000 levels by 2020, and is due in coming months to decide on emissions reduction targets for after 2020.

    Australia has among the world’s highest per-capita carbon emissions, due to a heavy reliance on coal for electricity generation.

    Associate Professor Jotzo’s report, commissioned by WWF Australia (World Wildlife Fund), found the cost of moving to renewable energy was becoming cheaper, and strong climate action could be achieved while maintaining economic growth.

    “At the heart of a low-carbon strategy for Australia is a carbon-free power system,” he said.

    “Australia has among the best prerequisites in the world for moving to a fully renewable energy electricity supply.”

    He said the costs of carbon-free technology, such as wind and solar power, have fallen faster than expected.

    “For example, large-scale solar panel power stations are already only half the cost that the Treasury’s 2008 and 2011 modelling studies estimated they would be in the year 2030,” he said.

    The report is available at the WWF Australia website.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition


    ANU is a world-leading university in Australia’s capital city, Canberra. Our location points to our unique history, ties to the Australian Government and special standing as a resource for the Australian people.

    Our focus on research as an asset, and an approach to education, ensures our graduates are in demand the world-over for their abilities to understand, and apply vision and creativity to addressing complex contemporary challenges.

  • richardmitnick 9:06 am on April 22, 2015 Permalink | Reply
    Tags: Energy

    From NOVA: “The EPA’s Natural Gas Problem” 



    11 Feb 2015
    Phil McKenna

    When U.S. President Barack Obama recently announced plans to rein in greenhouse gas emissions from oil and gas production, the opposing drumbeats from industry and environmental groups were as fast as they were relentless. The industry group America’s Natural Gas Alliance bombarded Twitter with paid advertisements stating how little their industry actually emits. Press releases from leading environmental organizations deploring the plan’s reliance on largely voluntary actions flooded email inboxes.

    Opposition to any new regulation by industry, however, isn’t as lockstep as its lobbying groups would have us believe. At the same time, environmentalists’ focus on voluntary versus mandatory measures misses a much graver concern.

    The White House and EPA are seeking to regulate methane emissions from the oil and gas industry.

    The joint White House and U.S. Environmental Protection Agency proposal would reduce emissions of methane, the primary component of natural gas, by 40–45% from 2012 levels in the coming decade. It’s a laudable goal. While natural gas is relatively clean burning—emitting roughly half the amount of carbon dioxide per unit of energy as coal—it is an incredibly potent greenhouse gas if it escapes into the atmosphere unburned.

    Methane emissions from the oil and gas sector are estimated to be equivalent to the pollution from 180 coal-fired power plants, according to studies done by the Environmental Defense Fund (EDF), an environmental organization. Yet there is a problem: despite that estimate, no one, including EDF, knows for certain how much methane the oil and gas industry actually emits.

    The EPA publishes an annual inventory of U.S. Greenhouse Gas emissions, which it describes as “the most comprehensive accounting of total greenhouse gas emissions for all man-made sources in the United States.” But their estimates for the natural gas industry are, by their own admission, outdated, based on limited data, and likely significantly lower than actual emissions.

    The Baseline

    Getting the number right is extremely important as it will serve as the baseline for any future reductions. “The smaller the number they start with, the smaller the amount they have to reduce in coming years by regulation,” says Anthony Ingraffea, a professor of engineering at Cornell University in Ithaca, New York. “A 45% reduction on a rate that is too low will be a very small reduction. From a scientific perspective, this doesn’t amount to a hill of beans.”

    Ingraffea says methane emissions are likely several times higher than what the EPA estimates. (Currently, the EPA says that up to 1.8% of the natural gas produced and distributed in the U.S. escapes to the atmosphere.) Even if Ingraffea is right, it’s still a small percentage, but methane’s potency as a greenhouse gas makes even a small release incredibly significant. Over 100 years, methane traps 34 times more heat in the atmosphere than carbon dioxide. If you are only looking 20 years into the future, a time frame given equal weight by the United Nations’ Intergovernmental Panel on Climate Change, methane is 86 times more potent than carbon dioxide.
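    The arithmetic behind those multipliers is simple to sketch. Here is a minimal illustration (the function name and structure are illustrative, not from any EPA or IPCC tool) converting a methane release into CO2-equivalent tonnes using the two global warming potentials quoted above:

    ```python
    # Global warming potentials for methane quoted above:
    # 34x CO2 over a 100-year horizon, 86x over a 20-year horizon.
    GWP_CH4 = {"100yr": 34, "20yr": 86}

    def co2_equivalent(tonnes_ch4: float, horizon: str = "100yr") -> float:
        """Tonnes of CO2 with the same warming effect as the given methane."""
        return tonnes_ch4 * GWP_CH4[horizon]

    # One tonne of leaked methane warms like 34 tonnes of CO2 over a
    # century, and like 86 tonnes over the next two decades.
    print(co2_equivalent(1.0, "100yr"))  # 34.0
    print(co2_equivalent(1.0, "20yr"))   # 86.0
    ```

    The choice of horizon matters: a leak rate that looks tolerable on the 100-year metric looks far worse on the 20-year one.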

    After being damaged during Hurricane Ike in September 2008, a natural gas tank spews methane near Sabine Pass, Texas.

    If Ingraffea is right, the amount of methane released into the atmosphere from oil and gas wells, pipelines, and processing and storage facilities has a warming effect approaching that of the country’s 557 coal-fired power plants. Reducing such a high rate of emissions by 40–45% would certainly help stall climate change. It would also likely be much more difficult to achieve than the cuts industry and environmental groups are currently debating.

    Ingraffea first called attention to what he and others believe are EPA underestimates in 2011 when he published a highly controversial paper along with fellow Cornell professor Robert Howarth. Their research suggested the amount of methane emitted by the natural gas industry was so great that relying on natural gas was actually worse for the climate than burning coal.

    Following the recent White House and EPA announcement, industry group America’s Natural Gas Alliance (ANGA) stated that they have reduced emissions by 17% since 1990 while increasing production by 37%. “We question why the administration would single out our sector for regulation, given our demonstrated reductions,” the organization wrote in a press release following the White House’s proposed policies. ANGA bases its emissions reduction on the EPA’s own figures and stands by the data. “We like to have independent third party verification, and we use the EPA’s figures for that,” says ANGA spokesman Daniel Whitten.

    Shifting Estimates

    But are the EPA estimates correct, and are they sufficiently independent? To come up with its annual estimate, the EPA doesn’t make direct measurements of methane emissions each year. Rather, it multiplies emission factors—the volume of gas thought to be emitted by a particular source, like a mile of pipeline or a belching cow—by the number of such sources in a given area. For the natural gas sector, emission factors are based on a limited number of measurements conducted in the early 1990s in industry-funded studies.
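    The bottom-up inventory method described above can be sketched in a few lines. The emission factors and source counts below are made-up illustrative numbers, not the agency’s actual figures:

    ```python
    # Bottom-up inventory: (gas emitted per source) x (number of sources),
    # summed over source categories. All numbers here are hypothetical.
    emission_factors = {       # tonnes CH4 per source per year
        "pipeline_mile": 0.8,
        "wellhead": 2.5,
    }
    activity_counts = {        # number of sources in the region
        "pipeline_mile": 1000,
        "wellhead": 200,
    }

    total_ch4 = sum(emission_factors[s] * activity_counts[s]
                    for s in emission_factors)
    print(total_ch4)  # 0.8*1000 + 2.5*200 = 1300.0 tonnes CH4
    ```

    The method’s weakness is visible in the sketch: if the per-source factors are too low, every downstream total inherits the error, no matter how carefully the sources are counted.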

    In 2010 the EPA increased its emissions factors for methane from the oil and natural gas sector, citing “outdated and potentially understated” emissions. The end result was a more than doubling of its annual emissions estimate from the prior year. In 2013, however, the EPA reversed course, lowering estimates for key emissions factors for methane at wells and processing facilities by 25–30%. When reached for comment, the EPA pointed me to their existing reports.

    The change was not driven by better scientific understanding but by political pressure, Howarth says. “The EPA got huge pushback from industry and decreased their emissions again, and not by collecting new data.” The EPA states that the reduction in emissions factors was based on “a significant amount of new information” that the agency received about the natural gas industry.

    However, a 2013 study published in the journal Geophysical Research Letters concludes that “the main driver for the 2013 reduction in production emissions was a report prepared by the oil and gas industry.” The report was a non-peer reviewed survey of oil and gas companies conducted by ANGA and the American Petroleum Institute.

    The EPA’s own inspector general released a report that same year that was highly critical of the agency’s estimates of methane and other harmful gases. “Many of EPA’s existing oil and gas production emission factors are of questionable quality because they are based on limited and/or low quality data.” The report concluded that the agency likely underestimates emissions, which “hampers [the] EPA’s ability to accurately assess risks and air quality impacts from oil and gas production activities.”


    Soon after the EPA lowered its emissions estimates, a number of independent studies based on direct measurements found higher methane emissions. In November 2013, a study based on direct measurements of atmospheric methane concentrations across the United States concluded actual emissions from the oil and gas sector were 1.5 times higher than EPA estimates. The study authors noted, “the US EPA recently decreased its methane emission factors for fossil fuel extraction and processing by 25–30% but we find that [methane] data from across North America instead indicate the need for a larger adjustment of the opposite sign.”

    In February 2014, a study published in the journal Science reviewed 20 years of technical literature on natural gas emissions in the U.S. and Canada and concluded that “official inventories consistently underestimate actual CH4 emissions.”

    “When you actually go out and measure methane emissions directly, you tend to come back with measurements that are higher than the official inventory,” says Adam Brandt, lead author of the study and an assistant professor of energy resources engineering at Stanford University. Brandt and his colleagues did not attempt to make an estimate of their own, but stated that in a worst-case scenario total methane emissions from the oil and gas sector could be three times higher than the EPA’s estimate.

    On January 22, eight days after the White House’s announcement, another study found similarly high emissions from a sector of the natural gas industry that is often overlooked. The study made direct measurements of methane emissions from natural gas pipelines and storage facilities in and around Boston, Massachusetts, and found that they were 3.9 times higher than the EPA’s estimate for the “downstream” sector, the parts of the system that transmit, distribute, and store natural gas.

    Most natural gas leaks are small, but large ones can have catastrophic consequences. The wreckage above was caused by a leak in San Bruno, California, in 2010.

    Boston’s aging, leak-prone, cast-iron pipelines likely make the city leakier than most, but the high volume of emissions—losses around the city total roughly $1 billion worth of natural gas per decade—is nonetheless surprising. The majority of methane emissions were previously believed to occur “upstream,” at wells and processing facilities. Efforts to curb emissions, including the recent goals set by the White House, have overlooked the smaller pipelines that deliver gas to end users.

    “Emissions from end users have been only a very small part of conversation on emissions from natural gas,” says lead author Kathryn McKain, an atmospheric scientist at Harvard University. “Our findings suggest that we don’t understand the underlying emission processes which is essential for creating effective policy for reducing emissions.”

    The Boston study was one of 16 recent or ongoing studies coordinated by EDF to try to determine just how much methane is actually being emitted from the industry as a whole. Seven studies, focusing on different aspects of oil and gas industry infrastructure, have been published thus far. Two of the studies, including the recent Boston study, have found significantly higher emission rates. One study, conducted in close collaboration with industry, found lower emissions. EDF says it hopes to have all studies completed by the end of 2015. The EPA told me it will take the studies into account for possible changes in its current methane emission factors.

    Fraction of a Percent

    EDF is simultaneously working with industry to try to reduce methane emissions. A recent study commissioned by the environmental organization concluded the US oil and gas industry could cut methane emissions by 40% from projected 2018 levels at a cost of less than one cent per thousand cubic feet of natural gas, which today sells for about $5. The reductions could be achieved with existing emissions-control technologies and policies.

    “We are talking about one third or one fourth of a percent of the price of gas to meet these goals,” says Steven Hamburg, chief scientist for EDF. The 40–45% reduction goal recently announced by the White House is nearly identical to the level of cuts analyzed by EDF. To achieve the reduction, the White House proposes mandatory changes for new oil and gas infrastructure as well as voluntary measures for existing infrastructure.
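    Hamburg’s fraction is easy to check against the figures above. A back-of-the-envelope sketch, taking the study’s “less than one cent” as exactly one cent per thousand cubic feet:

    ```python
    # Abatement cost as a share of the gas price, per the numbers cited
    # above (illustrative upper bound, not the study's exact figures).
    cost_per_mcf = 0.01    # dollars per thousand cubic feet (one cent)
    price_per_mcf = 5.00   # approximate market price cited in the article

    fraction = cost_per_mcf / price_per_mcf
    print(f"{fraction:.2%}")  # 0.20%
    ```

    That lands at a fifth of a percent, the same order of magnitude as the quarter-to-third of a percent Hamburg cites.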

    Thomas Pyle, president of the Institute for Energy Research, an industry organization, says industry is already reducing its methane emissions and doesn’t need additional rules. “It’s like regulating ice cream producers not to spill any ice cream during the ice cream making process,” he says. “It is self-evident for producers to want to capture this product with little or no emissions and make money from it.”

    Unlike making ice cream, however, natural gas producers often vent their product intentionally as part of the production process. One of the biggest sources of methane emissions in natural gas production is gas that is purposely vented from pneumatic devices, which use pressurized methane to open and close valves and operate pumps. These devices typically release, or “bleed,” small amounts of gas during their operation.

    Such equipment is widely used throughout natural gas extraction, processing, and transmission. A recent study by the Natural Resources Defense Council (NRDC) estimates that natural gas-driven pneumatic equipment vents 1.6–1.9 million metric tons of methane each year. The figure accounts for nearly one-third of all methane lost by the natural gas industry, as estimated by the EPA.

    A natural gas distribution facility

    “Low-bleed” or “zero-bleed” controllers are available, though they are more expensive. The latter use compressed air or electricity to operate instead of pressurized natural gas, or they capture methane that would otherwise be vented and reuse it. “Time and time again we see that we can operate this equipment without emissions or with very low emissions,” Hamburg says. Increased monitoring and repair of unintended leaks at natural gas facilities could cut the industry’s methane emissions by an additional third, according to the NRDC study.

    Environmental organizations have come out in strong opposition to the lack of mandatory regulations for existing infrastructure, which will account for nearly 90% of methane emissions in 2018, according to a recent EDF report.

    While industry groups oppose mandatory regulations on new infrastructure, at least one industry leader isn’t concerned. “I don’t believe the new regulations will hurt us at all,” says Mark Boling, an executive vice president at Houston-based Southwestern Energy Company, the nation’s fourth-largest producer of natural gas.

    Boling says leak monitoring and repair programs his company initiated in late 2013 will pay for themselves in 12 to 18 months through reduced methane emissions. The company has also replaced a number of pneumatic devices with zero-bleed, solar-powered electric pumps, and is now testing air compressors powered by fuel cells to replace additional methane-bleeding equipment, Boling says. In November, Southwestern Energy launched ONE Future, a coalition of companies from across the natural gas industry. Their goal is to lower the industry’s methane emissions below one percent.

    Based on the EPA emissions rate of 1.8% and fixes identified by EDF and NRDC, their goal seems attainable. But what if the actual emissions rate is significantly higher, as Howarth and Ingraffea have long argued and recent studies seem to suggest? “We can sit here and debate whose numbers are right, ‘Is it 4%? 8%? Whatever,’ ” Boling says. “But there are cost effective opportunities out there to reduce emissions, and we need to step up and do it.”

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 3:52 pm on March 9, 2015 Permalink | Reply
    Tags: Energy

    From Caltech: “One Step Closer to Artificial Photosynthesis and ‘Solar Fuels’” 


    Ker Than

    Ke Sun, a Caltech postdoc in the lab of George L. Argyros Professor and Professor of Chemistry Nate Lewis, peers into a sample of a new, protective film that he has helped develop to aid in the process of harnessing sunlight to generate fuels.
    Credit: Lance Hayashida/Caltech Marcomm

    Caltech scientists, inspired by a chemical process found in leaves, have developed an electrically conductive film that could help pave the way for devices capable of harnessing sunlight to split water into hydrogen fuel.

    When applied to semiconducting materials such as silicon, the nickel oxide film prevents rust buildup and facilitates an important chemical process in the solar-driven production of fuels such as methane or hydrogen.

    “We have developed a new type of protective coating that enables a key process in the solar-driven production of fuels to be performed with record efficiency, stability, and effectiveness, and in a system that is intrinsically safe and does not produce explosive mixtures of hydrogen and oxygen,” says Nate Lewis, the George L. Argyros Professor and professor of chemistry at Caltech and a coauthor of a new study, published the week of March 9 in the online issue of the journal Proceedings of the National Academy of Sciences, that describes the film.

    The development could help lead to safe, efficient artificial photosynthetic systems—also called solar-fuel generators or “artificial leaves”—that replicate the natural process of photosynthesis that plants use to convert sunlight, water, and carbon dioxide into oxygen and fuel in the form of carbohydrates, or sugars.

    The artificial leaf that Lewis’s team is developing in part at Caltech’s Joint Center for Artificial Photosynthesis (JCAP) consists of three main components: two electrodes—a photoanode and a photocathode—and a membrane. The photoanode uses sunlight to oxidize water molecules to generate oxygen gas, protons, and electrons, while the photocathode recombines the protons and electrons to form hydrogen gas. The membrane, which is typically made of plastic, keeps the two gases separate in order to eliminate any possibility of an explosion, and lets the gas be collected under pressure to safely push it into a pipeline.
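    The chemistry at the two electrodes corresponds to the standard water-splitting half-reactions (not written out in the article, but implied by its description):

    ```latex
    \begin{align*}
    \text{Photoanode (oxidation):} &\quad 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
    \text{Photocathode (reduction):} &\quad 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2}
    \end{align*}
    ```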

    Scientists have tried building the electrodes out of common semiconductors such as silicon or gallium arsenide—which absorb light and are also used in solar panels—but a major problem is that these materials develop an oxide layer (that is, rust) when exposed to water.

    Lewis and other scientists have experimented with creating protective coatings for the electrodes, but all previous attempts have failed for various reasons. “You want the coating to be many things: chemically compatible with the semiconductor it’s trying to protect, impermeable to water, electrically conductive, highly transparent to incoming light, and highly catalytic for the reaction to make oxygen and fuels,” says Lewis, who is also JCAP’s scientific director. “Creating a protective layer that displayed any one of these attributes would be a significant leap forward, but what we’ve now discovered is a material that can do all of these things at once.”

    The team has shown that its nickel oxide film is compatible with many different kinds of semiconductor materials, including silicon, indium phosphide, and cadmium telluride. When applied to photoanodes, the nickel oxide film far exceeded the performance of other similar films—including one that Lewis’s group created just last year. That film was more complicated—it consisted of two layers versus one and used as its main ingredient titanium dioxide (TiO2, also known as titania), a naturally occurring compound that is also used to make sunscreens, toothpastes, and white paint.

    “After watching the photoanodes run at record performance without any noticeable degradation for 24 hours, and then 100 hours, and then 500 hours, I knew we had done what scientists had failed to do before,” says Ke Sun, a postdoc in Lewis’s lab and the first author of the new study.

    Lewis’s team developed a technique for creating the nickel oxide film that involves smashing atoms of argon into a pellet of nickel atoms at high speeds, in an oxygen-rich environment. “The nickel fragments that sputter off of the pellet react with the oxygen atoms to produce an oxidized form of nickel that gets deposited onto the semiconductor,” Lewis says.

    Crucially, the team’s nickel oxide film works well in conjunction with the membrane that separates the photoanode from the photocathode and staggers the production of hydrogen and oxygen gases.

    “Without a membrane, the photoanode and photocathode are close enough to each other to conduct electricity, and if you also have bubbles of highly reactive hydrogen and oxygen gases being produced in the same place at the same time, that is a recipe for disaster,” Lewis says. “With our film, you can build a safe device that will not explode, and that lasts and is efficient, all at once.”

    Lewis cautions that scientists are still a long way off from developing a commercial product that can convert sunlight into fuel. Other components of the system, such as the photocathode, will also need to be perfected.

    “Our team is also working on a photocathode,” Lewis says. “What we have to do is combine both of these elements together and show that the entire system works. That will not be easy, but we now have one of the missing key pieces that has eluded the field for the past half-century.”

    Along with Lewis and Sun, additional authors on the paper, “Stable solar-driven oxidation of water by semiconducting photoanodes protected by transparent catalytic nickel oxide films,” include Caltech graduate students Fadl Saadi, Michael Lichterman, Xinghao Zhou, Noah Plymale, and Stefan Omelchenko; William Hale, from the University of Southampton; Hsin-Ping Wang and Jr-Hau He, from King Abdullah University in Saudi Arabia; Kimberly Papadantonakis, a scientific research manager at Caltech; and Bruce Brunschwig, the director of the Molecular Materials Research Center at Caltech. Funding was provided by the Office of Science at the U.S. Department of Energy, the National Science Foundation, the Beckman Institute, and the Gordon and Betty Moore Foundation.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

  • richardmitnick 10:00 am on February 26, 2015 Permalink | Reply
    Tags: Energy

    From NYT: “Bill Gates and Other Business Leaders Urge U.S. to Increase Energy Research” 

    The New York Times

    FEB. 23, 2015

    This Duke Energy battery project in Texas, supported by federal research dollars, stores power from wind turbines for later use. A new report calls on the government to increase its spending on energy research.

    The government is spending far too little money on energy research, putting at risk the long-term goals of reducing carbon emissions and alleviating energy poverty, according to a new report from some of the country’s top business leaders.

    The American Energy Innovation Council, a group of six executives that includes the Microsoft co-founder Bill Gates and the General Electric chief Jeffrey R. Immelt, urged Congress and the White House to make expanded energy research a strategic national priority.

    The leaders pointed out that the United States had fallen behind a slew of other countries in the percentage of economic output being spent on energy research, among them China, Japan, France and South Korea. Their report urged leaders of both political parties to start increasing funds to ultimately triple today’s level of research spending, about $5 billion a year.

    “Growing and consistent appropriations for energy innovation should be a top U.S. priority over the next decade,” the business leaders recommended in their report. “The budget numbers over the last five years are a major failure in U.S. energy policy.”

    At stake, Mr. Gates said in an interview, are not just long-term goals like reducing emissions of greenhouse gases, but also American leadership in industries of the future, including advanced nuclear reactors and coal-burning power plants that could capture and bury their emissions.

    “Our universities, our national labs are the best in the world,” Mr. Gates said, but he added that a chronic funding shortfall was holding back the pace of their work.

    The report did credit the Obama administration and Congress with some gains, including a one-time injection of funds in the economic stimulus bill of 2009. But subsequent budgets have essentially dropped back to prior levels, and spending on American energy research remains far below the high point it reached just after the energy crises of the 1970s.

    In the past, the report found, investments in energy innovation have paid major dividends. Mr. Gates cited the example of hydraulic fracturing to unlock gas and oil in shale deposits, a technique developed in part with federal research money that has led to a newfound abundance of oil and gas, lowering prices for consumers.

    Similar innovation is needed in low-emission sources of energy, the report found, if the goal of limiting global warming is to be met while making energy more available to poor people around the world. Experts involved in writing the report said the needed breakthroughs included safer types of nuclear reactors, cheaper methods of capturing carbon dioxide emissions at power plants and improved batteries that can store large amounts of energy.

    The new report is an update on similar recommendations the same business leaders made five years ago. While the report found that the picture remained generally bleak, it did cite some progress.

    For instance, Congress established the Advanced Research Projects Agency-Energy, or ARPA-E, modeled on the Pentagon research agency that helped create the Internet. And the Energy Department has funded a string of energy innovation hubs across the country.

    “There’s some very promising things that are in these centers, but the pace is absolutely limited by the modest funding level,” Mr. Gates said. “Those should be funded at a much higher level.”

    The report pointed out that funding for ARPA-E was less than $300 million per year, and urged that it be raised closer to $1 billion. The entire federal appropriation for energy research is less than Americans spend every year buying potato and tortilla chips, the report noted.

    The recommendations in the report are similar to those made by other groups in recent years. But with the federal budget under pressure, the idea of a major push on energy research has gained little traction in Washington.

    The business leaders hope to change that as the 2016 presidential race gets under way, urging both parties to embrace ambitious research plans.

    Aside from Mr. Gates and Mr. Immelt, the American Energy Innovation Council comprises Norman R. Augustine, a former chairman and chief executive of Lockheed Martin; John Doerr, the Silicon Valley venture capitalist; Chad Holliday, a former chairman and chief executive of DuPont who soon will become chairman of Shell; and Tom Linebarger, chairman and chief executive of Cummins.

    In pushing their case in Washington, the leaders are likely to encounter reluctance on the right to increase government spending, as well as some philosophical objections to expanding the government’s role in the energy market. On the left, they may encounter wariness from environmentalists who, while not opposing new research, do not want that push to detract from rapid deployment of current clean-energy technologies, like wind and solar power.

    “I am 100 percent for more research, since who could possibly oppose that?” said Joseph J. Romm, who helped manage federal energy research in the Bill Clinton administration and later founded a widely read blog on climate change. “But it is only a small part of the answer, and certainly not the most important.”

    He added that aggressive deployment of existing technologies and a price on emissions of carbon dioxide would go a long way to reduce emissions, and that the latter would help unlock more private innovation.


  • richardmitnick 7:24 am on February 17, 2015 Permalink | Reply
    Tags: , , Energy, , ,   

    From physicsworld: “Smaller fusion reactors could deliver big gains” 


    Feb 16, 2015
    Michael Banks

    Hot topic: size may not be everything in tokamak design

    Researchers from the UK firm Tokamak Energy say that future fusion reactors could be made much smaller than previously envisaged – yet still deliver the same energy output. That claim is based on calculations showing that the fusion power gain – a measure of the ratio of the power from a fusion reactor to the power required to maintain the plasma in steady state – does not depend strongly on the size of the reactor. The company’s finding goes against conventional thinking, which says that a large power output is only possible by building bigger fusion reactors.

    The largest fusion reactor currently under construction is the €16bn ITER facility in Cadarache, France.

    ITER Tokamak

    This will weigh about 23,000 tonnes when completed in the coming decade and consist of a deuterium–tritium plasma held in a 60 m-tall, doughnut-shaped “tokamak”. ITER aims to produce a fusion power gain (Q) of 10, meaning that, in theory, the reactor will emit 10 times the power it expends by producing 500 MW from 50 MW of input power. While ITER has a “major” plasma radius of 6.21 m, it is thought that an actual future fusion power plant delivering power to the grid would need a 9 m radius to generate 1 GW.
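    The fusion power gain quoted here is simply the ratio of fusion power out to heating power in. A minimal sketch using the ITER numbers from this paragraph:

    ```python
    def fusion_gain(p_fusion_mw, p_input_mw):
        """Fusion power gain Q: ratio of fusion power produced
        to the heating power required to sustain the plasma."""
        return p_fusion_mw / p_input_mw

    # ITER's design target: 500 MW of fusion power from 50 MW of input power.
    q_iter = fusion_gain(500, 50)
    print(q_iter)  # 10.0
    ```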

    Low power brings high performance

    The new study, led by Alan Costley from Tokamak Energy, which builds compact tokamaks, shows that smaller, lower-power, and therefore lower-cost reactors could still deliver a value of Q similar to ITER. The work focused on a key parameter in determining plasma performance called the plasma “beta”, which is the ratio of the plasma pressure to the magnetic pressure. By using scaling expressions consistent with existing experiments, the researchers show that the power needed for high fusion performance can be three or four times lower than previously thought.
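    The plasma beta described above can be sketched numerically: the magnetic pressure in the denominator is B²/(2μ0). The field and pressure values below are round illustrative numbers, not figures from the Tokamak Energy study:

    ```python
    MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, T·m/A

    def plasma_beta(pressure_pa, b_field_t):
        """Plasma beta: ratio of plasma pressure to magnetic pressure B^2/(2*mu0)."""
        magnetic_pressure = b_field_t**2 / (2 * MU0)
        return pressure_pa / magnetic_pressure

    # Illustrative: a 1e5 Pa plasma confined by a 3 T field
    # gives a beta of a few percent, typical of tokamak plasmas.
    print(plasma_beta(1e5, 3.0))  # ~0.028
    ```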

    Combined with the finding on the size-dependence of Q, these results imply the possibility of building lower-power, smaller and cheaper pilot plants and reactors. “The consequence of beta-independent scaling is that tokamaks could be much smaller, but still have a high power gain,” David Kingham, Tokamak Energy chief executive, told Physics World.

    The researchers propose that a reactor with a radius of just 1.35 m would be able to generate 180 MW, with a Q of 5. This would result in a reactor just 1/20th of the size of ITER. “Although there are still engineering challenges to overcome, this result is underpinned by good science,” says Kingham. “We hope that this work will attract further investment in fusion energy.”

    Many challenges remain

    Howard Wilson, director of the York Plasma Institute at the University of York in the UK, points out, however, that the result relies on being able to achieve a very high magnetic field. “We have long been aware that a high magnetic field enables compact fusion devices – the breakthrough would be in discovering how to create such high magnetic fields in the tokamak,” he says. “A compact fusion device may indeed be possible, provided one can achieve high confinement of the fuel, demonstrate efficient current drive in the plasma, exhaust the heat and particles effectively without damaging material surfaces, and create the necessary high magnetic fields.”

    The work by Tokamak Energy follows an announcement late last year that the US firm Lockheed Martin plans to build a “truck-sized” compact fusion reactor by 2019 that would be capable of delivering 100 MW. However, the latest results from Tokamak Energy might not be such bad news for ITER. Kingham adds that his firm’s work means that, in principle, ITER is actually being built much larger than necessary – and so should outperform its Q target of 10.

    The research is published in Nuclear Fusion.


    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

  • richardmitnick 5:28 pm on February 9, 2015 Permalink | Reply
    Tags: , Energy, , ,   

    From LBL: “New Design Tool for Metamaterials” 

    Berkeley Logo

    Berkeley Lab

    February 9, 2015
    Lynn Yarris (510) 486-5375

    Confocal microscopy confirmed that the nonlinear optical properties of metamaterials can be predicted using a theory about light passing through nanostructures.

    Metamaterials – artificial nanostructures engineered with electromagnetic properties not found in nature – offer tantalizing future prospects such as high resolution optical microscopes and superfast optical computers. To realize the vast potential of metamaterials, however, scientists will need to hone their understanding of the fundamental physics behind them. This will require accurately predicting nonlinear optical properties – those in which interaction with light changes a material’s properties so that, for example, light emerges from the material at a different frequency than when it entered. Help has arrived.

    Scientists with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley have shown, using a recent theory for nonlinear light scattering when light passes through nanostructures, that it is possible to predict the nonlinear optical properties of metamaterials.

    “The key question has been whether one can determine the nonlinear behavior of metamaterials from their exotic linear behavior,” says Xiang Zhang, director of Berkeley Lab’s Materials Sciences Division and an international authority on metamaterial engineering who led this study. “We’ve shown that the relative nonlinear susceptibility of large classes of metamaterials can be predicted using a comprehensive nonlinear scattering theory. This will allow us to efficiently design metamaterials with strong nonlinearity for important applications such as coherent Raman sensing, entangled photon generation and frequency conversion.”

    Xiang Zhang, Haim Suchowski and Kevin O’Brien were part of the team that discovered a way to predict the nonlinear optical properties of metamaterials. (Photo by Roy Kaltschmidt)

    Zhang, who holds the Ernest S. Kuh Endowed Chair at UC Berkeley and is a member of the Kavli Energy NanoSciences Institute at Berkeley (Kavli ENSI), is the corresponding author of a paper describing this research in the journal Nature Materials. The paper is titled “Predicting nonlinear properties of metamaterials from the linear response.” The other authors are Kevin O’Brien, Haim Suchowski, Junsuk Rho, Alessandro Salandrino, Boubacar Kante and Xiaobo Yin.

    The unique electromagnetic properties of metamaterials stem from their physical structure rather than their chemical composition. This structure, for example, provides certain metamaterials with a negative refractive index, an optical property in which the phase front of light moving through a material propagates backward towards the source. The phase front of light moving through natural materials always propagates forward, away from its source.
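    One textbook consequence of a negative refractive index is that Snell’s law returns a refraction angle of opposite sign, so the transmitted ray bends to the same side of the surface normal as the incident ray. A minimal sketch (the index values are illustrative, not from the study):

    ```python
    import math

    def refraction_angle_deg(n1, n2, incidence_deg):
        """Snell's law: n1*sin(t1) = n2*sin(t2). A negative n2 flips the
        sign of the refraction angle: the ray bends to the same side
        of the normal as the incident ray."""
        s = n1 * math.sin(math.radians(incidence_deg)) / n2
        return math.degrees(math.asin(s))

    print(refraction_angle_deg(1.0, 1.5, 30))   # ~19.5 deg (ordinary glass)
    print(refraction_angle_deg(1.0, -1.5, 30))  # ~-19.5 deg (negative-index material)
    ```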

    Zhang and his group have already exploited the linear optical properties of metamaterials to create the world’s first optical invisibility cloak and mimic black holes. Most recently they used a nonlinear metamaterial with a refractive index of zero to generate “phase mismatch–free nonlinear light,” meaning light waves moved through the material gaining strength in all directions. However, engineering nonlinear metamaterials remains in its infancy, with no general conclusion on the relationship between linear and nonlinear properties.

    Metamaterial arrays whose geometry varied gradually from a symmetric bar to an asymmetric U-shape were used to compare the predictive abilities of Miller’s rule and a nonlinear light scattering theory.

    For the past several decades, scientists have estimated the nonlinear optical properties in natural crystals using a formulation known as “Miller’s rule,” for the physicist Robert Miller who authored it. In this new study, Zhang and his group found that Miller’s rule doesn’t work for a number of metamaterials. That’s the bad news. The good news is that a nonlinear light scattering theory, developed for nanostructures by Dutch scientist Sylvie Roke, does.

    “From the linear properties, one calculates the nonlinear polarization and the mode of the nanostructure at the second harmonic,” says Kevin O’Brien, co-lead author of the Nature Materials paper and a member of Zhang’s research group. “We found the nonlinear emission is proportional to the overlap integral between these, not simply determined by their linear response.”

    Zhang, O’Brien, Suchowski, and the other contributors to this study evaluated Miller’s rule and the nonlinear light scattering theory by comparing their predictions to experimental results obtained using a nonlinear stage-scanning confocal microscope.

    “Nonlinear stage-scanning confocal microscopy is critical because it allows us to rapidly measure the nonlinear emission from thousands of different nanostructures while minimizing the potential systematic errors, such as intensity or beam pointing variations, often associated with tuning the wavelength of an ultrafast laser,” O’Brien says.

    The researchers used confocal microscopy to observe the second harmonic generation from metamaterial arrays whose geometry was gradually shifted from a symmetric bar-shape to an asymmetric U-shape. Second harmonic light is a nonlinear optical property in which photons with the same frequency interact with a nonlinear material to produce new photons at twice the energy and half the wavelength of the originals. It was the discovery of optical second harmonic generation in 1961 that started modern nonlinear optics.
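    The frequency-doubling arithmetic behind second harmonic generation is simple: twice the energy means half the wavelength. A minimal sketch (the 800 nm input is a common ultrafast-laser wavelength chosen for illustration, not a value from the paper):

    ```python
    def second_harmonic(wavelength_nm):
        """Second harmonic generation: two photons combine into one photon
        at twice the energy, i.e. half the wavelength."""
        return wavelength_nm / 2

    # Near-infrared in, blue light out.
    print(second_harmonic(800))  # 400.0 nm
    ```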

    “Our results show that nonlinear scattering theory can be a valuable tool in the design of nonlinear metamaterials not only for second-order but also higher order nonlinear optical responses over a broad range of wavelengths,” O’Brien says. “We’re now using these experimental and theoretical techniques to explore other nonlinear processes in metamaterials, such as parametric amplification and entangled photon generation.”

    This research was supported by the DOE Office of Science.


    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

  • richardmitnick 8:20 am on January 30, 2015 Permalink | Reply
    Tags: , , Energy, , Solar Fuels   

    From Science 2.0: “Calculating The Future Of Solar-fuel Refineries” 

    Science 2.0 bloc

    Science 2.0

    January 30th 2015
    News Staff

    The process of converting the sun’s energy into liquid fuels requires a sophisticated, interrelated series of choices, but a solar refinery is especially tricky to map out because the designs involve newly developed or experimental technologies. This makes it difficult to develop realistic plans that are economically viable and energy efficient.

    In a paper recently published in the journal Energy & Environmental Science, a team led by University of Wisconsin-Madison chemical and biological engineering Professors Christos Maravelias and George Huber outlined a tool to help engineers better gauge the overall yield, efficiency and costs associated with scaling solar-fuel production processes up into large-scale refineries.


    That’s where the new UW-Madison tool comes in. It’s a framework that focuses on accounting for general variables and big-picture milestones associated with scaling up energy technologies to the refinery level. This means it’s specifically designed to remain relevant even as solar-fuel producers and researchers experiment with new technologies and ideas for technologies that don’t yet exist.

    Renewable-energy researchers at UW-Madison have long emphasized the importance of considering energy production as a holistic process, and Maravelias says the new framework could be used by a wide range of solar energy stakeholders, from basic science researchers to business decision-makers. The tool could also play a role in wider debates about which renewable-energy technologies are most appropriate for society to pursue on a large scale.

    “The nice thing about it being general is that if a researcher develops a different technology – and there are many different ways to generate solar fuels – our framework would still be applicable, and if someone wants a little more detail, our framework can be adjusted accordingly,” Maravelias says.

    In addition to bringing clarity to the solar refinery conversation, the framework could also be adapted to help analyze and plan any number of other energy-related processes, says Jeff Herron, a postdoc in Maravelias’ group and the paper’s lead author.

    “People tend to be narrowly focused on their particular role within a bigger picture,” Herron says. “I think bringing all that together is unique to our work, and I think that’s going to be one of the biggest impacts.”

    Ph.D. student Aniruddha Upadhye and postdoc Jiyong Kim also contributed to the project. The research was funded by the U.S. Department of Energy.


  • richardmitnick 3:17 pm on January 23, 2015 Permalink | Reply
    Tags: , , , Energy, ,   

    From BNL: “Self-Assembled Nanotextures Create Antireflective Surface on Silicon Solar Cells” 

    Brookhaven Lab

    January 21, 2015
    Karen McNulty Walsh, (631) 344-8350 or Peter Genzer, (631) 344-3174

    Nanostructured surface textures—with shapes inspired by the structure of moths’ eyes—prevent the reflection of light off silicon, improving conversion of sunlight to electricity

    Chuck Black of the Center for Functional Nanomaterials displays a nanotextured square of silicon on top of an ordinary silicon wafer. The nanotextured surface is completely antireflective and could boost the production of solar energy from silicon solar cells.

    Reducing the amount of sunlight that bounces off the surface of solar cells helps maximize the conversion of the sun’s rays to electricity, so manufacturers use coatings to cut down on reflections. Now scientists at the U.S. Department of Energy’s Brookhaven National Laboratory show that etching a nanoscale texture onto the silicon material itself creates an antireflective surface that works as well as state-of-the-art thin-film multilayer coatings.

    The surface nanotexture … drastically cut down on reflection of many wavelengths of light simultaneously.

    Their method, described in the journal Nature Communications and submitted for patent protection, has potential for streamlining silicon solar cell production and reducing manufacturing costs. The approach may find additional applications in reducing glare from windows, providing radar camouflage for military equipment, and increasing the brightness of light-emitting diodes.

    “For antireflection applications, the idea is to prevent light or radio waves from bouncing at interfaces between materials,” said physicist Charles Black, who led the research at Brookhaven Lab’s Center for Functional Nanomaterials (CFN), a DOE Office of Science User Facility.

    Preventing reflections requires controlling an abrupt change in “refractive index,” a property that affects how waves such as light propagate through a material. This occurs at the interface where two materials with very different refractive indices meet, for example at the interface between air and silicon. Adding a coating with an intermediate refractive index at the interface eases the transition between materials and reduces the reflection, Black explained.
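    The reflection loss Black describes can be estimated from the normal-incidence Fresnel formula, R = ((n1 − n2)/(n1 + n2))². A quick sketch (the index values are round illustrative numbers, not figures from the Brookhaven paper) shows why an intermediate-index layer helps:

    ```python
    def fresnel_reflectance(n1, n2):
        """Normal-incidence reflectance at an interface between indices n1 and n2."""
        return ((n1 - n2) / (n1 + n2)) ** 2

    n_air, n_si = 1.0, 3.9          # silicon's visible-range index is roughly 3.5-4
    print(fresnel_reflectance(n_air, n_si))    # ~0.35: bare silicon reflects ~35%

    # A single quarter-wave coating works best near the geometric-mean index:
    n_coat = (n_air * n_si) ** 0.5             # ~1.97
    print(fresnel_reflectance(n_air, n_coat))  # ~0.11 at each interface
    print(fresnel_reflectance(n_coat, n_si))   # ~0.11
    ```

    Each intermediate interface reflects far less than the bare air–silicon step, and a texture that grades the index continuously pushes each step’s mismatch toward zero.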

    “The issue with using such coatings for solar cells,” he said, “is that we’d prefer to fully capture every color of the light spectrum within the device, and we’d like to capture the light irrespective of the direction it comes from. But each color of light couples best with a different antireflection coating, and each coating is optimized for light coming from a particular direction. So you deal with these issues by using multiple antireflection layers. We were interested in looking for a better way.”

    For inspiration, the scientists turned to a well-known example of an antireflective surface in nature, the eyes of common moths. The surfaces of their compound eyes have textured patterns made of many tiny “posts,” each smaller than the wavelengths of light. This textured surface improves moths’ nighttime vision, and also prevents the “deer in the headlights” reflecting glow that might allow predators to detect them.

    “We set out to recreate moth eye patterns in silicon at even smaller sizes using methods of nanotechnology,” said Atikur Rahman, a postdoctoral fellow working with Black at the CFN and first author of the study.

    A closeup shows how the nanotextured square of silicon completely blocks reflection compared with the surrounding silicon wafer.

    The scientists started by coating the top surface of a silicon solar cell with a polymer material called a “block copolymer,” which can be made to self-organize into an ordered surface pattern with dimensions measuring only tens of nanometers. The self-assembled pattern served as a template for etching posts like those in the moth eye into the solar cell, using a plasma of reactive gases—a technique commonly used in the manufacture of semiconductor electronic circuits.

    The resulting surface nanotexture gradually changed the refractive index, drastically cutting down on reflection of many wavelengths of light simultaneously, regardless of the direction of light impinging on the solar cell.

    “Adding these nanotextures turned the normally shiny silicon surface absolutely black,” Rahman said.

    Solar cells textured in this way outperform those coated with a single antireflective film by about 20 percent, and couple light into the device as effectively as the best multilayer coatings used in the industry.

    “We are working to understand whether there are economic advantages to assembling silicon solar cells using our method, compared to other, established processes in the industry,” Black said.

    Hidden layer explains better-than-expected performance

    One intriguing aspect of the study was that the scientists achieved the antireflective performance by creating nanoposts only half as tall as the required height predicted by a mathematical model describing the effect. So they called upon the expertise of colleagues at the CFN and other Brookhaven scientists to help sort out the mystery.

    Details of the nanotextured antireflective surface as revealed by a scanning electron microscope at the Center for Functional Nanomaterials. The tiny posts, each smaller than the wavelengths of light, are reminiscent of the structure of moths’ eyes, an example of an antireflective surface found in nature.

    “This is a powerful advantage of doing research at the CFN—both for us and for academic and industrial researchers coming to use our facilities,” Black said. “We have all these experts around who can help you solve your problems.”

    Using a combination of computational modeling, electron microscopy, and surface science, the team deduced that a thin layer of silicon oxide, similar to what typically forms when silicon is exposed to air, was having an outsized effect.

    “On a flat surface, this layer is so thin that its effect is minimal,” explained Matt Eisaman of Brookhaven’s Sustainable Energy Technologies Department and a professor at Stony Brook University. “But on the nanopatterned surface, with the thin oxide layer surrounding all sides of the nanotexture, the oxide can have a larger effect because it makes up a significant portion of the nanotextured material.”

    Said Black, “This ‘hidden’ layer was the key to the extra boost in performance.”

    The scientists are now interested in developing their self-assembly based method of nanotexture patterning for other materials, including glass and plastic, for antiglare windows and coatings for solar panels.

    This research was supported by the DOE Office of Science.


    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

  • richardmitnick 5:03 pm on December 22, 2014 Permalink | Reply
    Tags: , Energy, ,   

    From LBL: “Piezoelectricity in a 2D Semiconductor” 

    Berkeley Logo

    Berkeley Lab

    December 22, 2014
    Lynn Yarris (510) 486-5375

    The first observation of piezoelectricity in a free-standing two-dimensional semiconductor, made by a team of researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab), opens a door to low-power on/off switches in micro-electro-mechanical systems (MEMS) and nanoelectronic devices, as well as ultrasensitive bio-sensors.

    Xiang Zhang, director of Berkeley Lab’s Materials Sciences Division and an international authority on nanoscale engineering, led a study in which piezoelectricity – the conversion of mechanical energy into electricity or vice versa – was demonstrated in a free-standing single layer of molybdenum disulfide, a 2D semiconductor that is a potential successor to silicon for faster electronic devices in the future.

    “Piezoelectricity is a well-known effect in bulk crystals, but this is the first quantitative measurement of the piezoelectric effect in a single layer of molecules that has intrinsic in-plane dipoles,” Zhang says. “The discovery of piezoelectricity at the molecular level not only is fundamentally interesting, but also could lead to tunable piezo-materials and devices for extremely small force generation and sensing.”

    Xiang Zhang directs Berkeley Lab’s Materials Sciences Division (photo by Roy Kaltschmidt, Berkeley Lab)

    Zhang, who holds the Ernest S. Kuh Endowed Chair at the University of California (UC) Berkeley and is a member of the Kavli Energy NanoSciences Institute at Berkeley, is the corresponding author of a paper in Nature Nanotechnology describing this research. The paper is titled “Observation of Piezoelectricity in Free-standing Monolayer MoS2.” The co-lead authors are Hanyu Zhu and Yuan Wang, both members of Zhang’s UC Berkeley research group. (See below for a complete list of co-authors.)

    Since its discovery in 1880, the piezoelectric effect has found wide application in bulk materials, including actuators, sensors and energy harvesters. There is rising interest in using nanoscale piezoelectric materials to provide the lowest possible power consumption for on/off switches in MEMS and other types of electronic computing systems. However, when material thickness approaches a single molecular layer, the large surface energy can cause piezoelectric structures to be thermodynamically unstable.

    Over the past couple of years, Zhang and his group have been carrying out detailed studies of molybdenum disulfide, a 2D semiconductor that features high electrical conductance comparable to that of graphene, but, unlike graphene, has natural energy band-gaps, which means its conductance can be switched off.

    “Transition metal dichalcogenides such as molybdenum disulfide can retain their atomic structures down to the single layer limit without lattice reconstruction, even in ambient conditions,” Zhang says. “Recent calculations predicted the existence of piezoelectricity in these 2D crystals due to their broken inversion symmetry. To test this, we combined a laterally applied electric field with nano-indentation in an atomic force microscope for the measurement of piezoelectrically-generated membrane stress.”

    To maximize piezoelectric coupling, electrodes (yellow dashed lines) were defined parallel to the zigzag edges (white dashed lines) of the MoS2 monolayer. Green and red colors denote the intensity of reflection and photoluminescence respectively.

    Zhang and his group used a free-standing molybdenum disulfide single-layer crystal to avoid any substrate effects, such as doping and parasitic charge, in their measurements of the intrinsic piezoelectricity. They recorded a piezoelectric coefficient of 2.9×10^-10 C/m, comparable to that of many widely used materials such as zinc oxide and aluminum nitride.
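    In a simple linear model, a 2D piezoelectric coefficient of this kind relates in-plane strain to polarization charge per unit edge length. A minimal sketch using the measured coefficient (the 0.5% strain value below is an assumption for illustration, not a figure from the paper):

    ```python
    E11_MOS2 = 2.9e-10  # 2D piezoelectric coefficient of monolayer MoS2, C/m (from the study)

    def induced_line_charge(strain):
        """Polarization charge per unit edge length (C/m) induced in the
        monolayer by an in-plane strain, assuming a simple linear response."""
        return E11_MOS2 * strain

    # Illustrative: a 0.5% in-plane strain (assumed value).
    print(induced_line_charge(0.005))  # 1.45e-12 C/m
    ```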

    “Knowing the piezoelectric coefficient is important for designing atomically thin devices and estimating their performance,” says Nature Nanotechnology paper co-lead author Zhu. “The piezoelectric coefficient we found in molybdenum disulfide is sufficient for use in low-power logic switches and biological sensors that are sensitive to molecular mass limits.”

    Zhang, Zhu and their co-authors also discovered that when several single layers of molybdenum disulfide crystal were stacked on top of one another, piezoelectricity was present only in odd numbers of layers (1, 3, 5, etc.).

    “This discovery is interesting from a physics perspective, since no other material has shown similar layer-number sensitivity,” Zhu says. “The phenomenon might also prove useful for applications in which we want devices made from as few material types as possible, where some areas of the device need to be non-piezoelectric.”
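The odd/even effect can be pictured with a toy parity model (an illustration, not the paper's analysis): in the common 2H stacking, each successive layer is rotated 180 degrees, so per-layer piezoelectric contributions alternate in sign and cancel pairwise, leaving a net effect only when an unpaired layer remains.

```python
# Toy parity model of the layer-number sensitivity described above.
# Each layer contributes +1 or -1 depending on its orientation in the
# 2H stack; even-layer stacks cancel exactly, odd-layer stacks do not.

def net_piezo_layers(n_layers):
    """Sum of alternating per-layer contributions (+1, -1, +1, ...)."""
    return sum((-1) ** i for i in range(n_layers))

for n in range(1, 7):
    label = "piezoelectric" if net_piezo_layers(n) != 0 else "not piezoelectric"
    print(f"{n} layer(s): {label}")
```

This is only a symmetry argument in miniature; the real cancellation follows from the restoration of inversion symmetry in even-layer stacks.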

    In addition to logic switches and biological sensors, piezoelectricity in molybdenum disulfide crystals might also find use in the potential new route to quantum computing and ultrafast data-processing called “valleytronics.” In valleytronics, information is encoded in the spin and momentum of an electron moving through a crystal lattice as a wave with energy peaks and valleys.

    “Some types of valleytronic devices depend on absolute crystal orientation, and piezoelectric anisotropy can be employed to determine this,” says Nature paper co-lead author Wang. “We are also investigating the possibility of using piezoelectricity to directly control valleytronic properties such as circular dichroism in molybdenum disulfide.”

    In addition to Zhang, Zhu and Wang, other co-authors of the Nature paper were Jun Xiao, Ming Liu, Shaomin Xiong, Zi Jing Wong, Ziliang Ye, Yu Ye and Xiaobo Yin.

    This research was supported by Light-Material Interactions in Energy Conversion, an Energy Frontier Research Center led by the California Institute of Technology, in which Berkeley Lab is a major partner. The Energy Frontier Research Center program is supported by DOE’s Office of Science.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

