Tagged: Applied Research & Technology

  • richardmitnick 12:22 pm on April 24, 2015 Permalink | Reply
    Tags: Applied Research & Technology

    From phys.org: “Silicon Valley marks 50 years of Moore’s Law” 

    phys.org

    April 24, 2015
    Pete Carey, San Jose Mercury News

    Plot of CPU transistor counts against dates of introduction; note the logarithmic vertical scale; the line corresponds to exponential growth with transistor count doubling every two years. Credit: Wikipedia

    Computers were the size of refrigerators when an engineer named Gordon Moore laid the foundations of Silicon Valley with a vision that became known as “Moore’s Law.”

    Moore, then the 36-year-old head of research at Fairchild Semiconductor, predicted in a trade magazine article published 50 years ago Sunday that computer chips would double in complexity every year, at little or no added cost, for the next 10 years. In 1975, based on industry developments, he updated the prediction to doubling every two years.
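
    The arithmetic behind the prediction is plain exponential growth: a count that doubles every two years grows by a factor of 2^(t/2) after t years. A minimal sketch of that projection (illustrative only; the 2,300-transistor, 1971 starting point is Intel’s first microprocessor, which the article mentions further down):

    ```python
    def projected_transistors(initial_count, years, doubling_period_years=2.0):
        """Moore's Law as revised in 1975: the count doubles every `doubling_period_years`."""
        return initial_count * 2 ** (years / doubling_period_years)

    # Starting from Intel's first microprocessor (2,300 transistors, introduced in 1971),
    # a strict two-year doubling lands in the billions by 2015 - the right order of magnitude.
    print(f"{projected_transistors(2_300, 2015 - 1971):,.0f}")  # roughly 9.6 billion
    ```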

    And for the past five decades, chipmakers have proved him right – spawning scores of new companies and shaping Silicon Valley to this day.

    “If Silicon Valley has a heartbeat, it’s Moore’s Law. It drove the valley at what has been a historic speed, unmatched in history, and allowed it to lead the rest of the world,” said technology consultant Rob Enderle.

    Moore’s prediction quickly became a business imperative for chip companies. Those that ignored the timetable went out of business. Companies that followed it became rich and powerful, led by Intel, the company Moore co-founded.

    Thanks to Moore’s Law, people carry smartphones in their pocket or purse that are more powerful than the biggest computers made in 1965 – or 1995, for that matter. Without it, there would be no slender laptops, no computers powerful enough to chart a genome or design modern medicine’s lifesaving drugs. Streaming video, social media, search, the cloud: none of that would be possible on today’s scale.

    “It fueled the information age,” said Craig Hampel, chief scientist at Rambus, a Sunnyvale semiconductor company. “As you drive around Silicon Valley, 99 percent of the companies you see wouldn’t be here” without cheap computer processors due to Moore’s Law.

    Moore was asked in 1964 by Electronics magazine to write about the future of integrated circuits for the magazine’s April 1965 edition.

    The basic building blocks of the digital age, integrated circuits are chips of silicon that hold tiny switches called transistors. More transistors meant better performance and capabilities.

    Taking stock of how semiconductor manufacturing was shrinking transistors and regularly doubling the number that would fit on an integrated circuit, Moore got some graph paper and drew a line for the predicted annual growth in the number of transistors on a chip. It shot up like a missile, with a doubling of transistors every year for at least a decade.

    It seemed clear to him what was coming, if not to others.

    “Integrated circuits will lead to such wonders as home computers – or at least terminals connected to a central computer – automatic controls for automobiles, and personal portable communications equipment,” he wrote.

    California Institute of Technology professor Carver Mead coined the name Moore’s Law, and as companies competed to produce the most powerful chips, it became a law of survival: double the transistors every year or die.

    “In the beginning, it was just a way of chronicling the progress,” Moore, now 86, said in an interview conducted by Intel. “But gradually, it became something that the various industry participants recognized. … You had to be at least that fast or you were falling behind.”

    Moore’s Law also held prices down because advancing technology made it inexpensive to pack chips with increasing numbers of transistors. If transistors hadn’t gotten cheaper as they grew in number on a chip, integrated circuits would still be a niche product for the military and others able to afford a very high price. Intel’s first microprocessor, or computer on a chip, with 2,300 transistors, cost more than $500 in current dollars. Today, an Intel Core i5 microprocessor has more than a billion transistors and costs $276.
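
    A back-of-the-envelope check of that price comparison, using only the figures quoted above (a sketch, not an official cost metric):

    ```python
    # Figures quoted in the article: Intel's first microprocessor vs. a modern Core i5.
    first_chip  = {"transistors": 2_300,         "price_usd": 500}  # "more than $500 in current dollars"
    modern_chip = {"transistors": 1_000_000_000, "price_usd": 276}  # "more than a billion transistors"

    cost_then = first_chip["price_usd"] / first_chip["transistors"]   # dollars per transistor
    cost_now  = modern_chip["price_usd"] / modern_chip["transistors"]

    print(f"then: about ${cost_then:.2f} per transistor")                 # ~$0.22
    print(f"now:  about ${cost_now * 1e6:.2f} per million transistors")   # ~$0.28 per million
    print(f"roughly a {cost_then / cost_now:,.0f}-fold drop in cost per transistor")
    ```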

    “That was my real objective: to communicate that we have a technology that’s going to make electronics cheap,” Moore said.

    The reach of Moore’s Law extends beyond personal tech gadgets.

    “The really cool thing about it is it’s not just iPhones,” said G. Dan Hutcheson of VLSI Research, a technology market research company based in Santa Clara. “Every drug developed in the past 20 years or so had to have the computing power to get down and model molecules. They never would have been able to without that power. DNA analysis, genomes, wouldn’t exist; you couldn’t do the genetic testing. It all boils down to transistors.”

    Hutcheson says what Moore predicted was much more than a self-fulfilling prophecy. He had foreseen that optics, chemistry and physics would be combined to shrink transistors over time without substantial added cost.

    As transistors become vanishingly small, it’s harder to keep Moore’s Law going.

    About a decade ago, the shrinking of the physical dimensions led to overheating and stopped major performance boosts for every new generation of chips. Companies responded by introducing so-called multicore processors, which put several computing cores on a single chip.

    “What’s starting to happen is people are looking to other innovations on silicon to give them performance” as a way to extend Moore’s Law, said Spike Narayan, director of science and technology at IBM’s Almaden Research Center.

    Then, about a year and a half ago, “something even more drastic started happening,” Narayan said. The wires connecting transistors became so small that they became more resistant to electrical current. “Big problem,” he said.

    “That’s why you see all the materials research and innovation,” he said of new efforts to find alternative materials and structures for chips.

    Another issue confronting Moore’s Law is that the energy consumed by chips has begun to rise as transistors shrink. “Our biggest challenge” is energy efficiency, said Alan Gara, chief architect of the Aurora supercomputer Intel is building for Argonne National Laboratory near Chicago.

    Intel says it sees a path to continue the growth predicted by Moore’s Law through the next decade. The next generation of processors is in “full development mode,” said Mark Bohr, an Intel senior fellow who leads a group that decides how each generation of Intel chips will be made. Bohr is spending his time on the generation after that, in which transistors will shrink to 7 nanometers. The average human hair is 25,000 nanometers wide.

    At some point the doubling will slow down, says Chenming Hu, an electrical engineering and computer science professor at the University of California, Berkeley. Hu is a key figure in the development of a new transistor structure that’s helping keep Moore’s Law going.

    “It’s totally understandable that a company, in order to gain more market share and beat out all competitors, needs to double and triple if you can,” Hu said. “That’s why this scaling has been going on at such a fast pace. But no exponential growth can go on forever.”

    Hu says what’s likely is that at some point the doubling every two years will slow to every four or five years.

    “And that’s probably a better thing than flash and fizzle out. You really want to have the same growth at a lower pace.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About Phys.org in 100 Words

    Phys.org™ (formerly Physorg.com) is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004, Phys.org’s readership has grown steadily to include 1.75 million scientists, researchers, and engineers every month. Phys.org publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quancast 2009 includes Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

     
  • richardmitnick 2:59 pm on April 23, 2015 Permalink | Reply
    Tags: Applied Research & Technology, Yellowstone Caldera

    From phys.org: “Scientists see deeper Yellowstone magma”

    phys.org

    April 23, 2015
    No Writer Credit

    A new University of Utah study in the journal Science provides the first complete view of the plumbing system that supplies hot and partly molten rock from the Yellowstone hotspot to the Yellowstone supervolcano. The study revealed a gigantic magma reservoir beneath the previously known magma chamber. This cross-section illustration cutting southwest-northeast under Yellowstone depicts the view revealed by seismic imaging. Seismologists say new techniques have provided a better view of Yellowstone’s plumbing system, and that it hasn’t grown larger or closer to erupting. They estimate the annual chance of a Yellowstone supervolcano eruption is 1 in 700,000. Credit: Hsin-Hua Huang, University of Utah

    University of Utah seismologists discovered and made images of a reservoir of hot, partly molten rock 12 to 28 miles beneath the Yellowstone supervolcano, and it is 4.4 times larger than the shallower, long-known magma chamber.

    The hot rock in the newly discovered, deeper magma reservoir would fill the 1,000-cubic-mile Grand Canyon 11.2 times, while the previously known magma chamber would fill the Grand Canyon 2.5 times, says postdoctoral researcher Jamie Farrell, a co-author of the study published online today in the journal Science.

    “For the first time, we have imaged the continuous volcanic plumbing system under Yellowstone,” says first author Hsin-Hua Huang, also a postdoctoral researcher in geology and geophysics. “That includes the upper crustal magma chamber we have seen previously plus a lower crustal magma reservoir that has never been imaged before and that connects the upper chamber to the Yellowstone hotspot plume below.”

    Contrary to popular perception, the magma chamber and magma reservoir are not full of molten rock. Instead, the rock is hot, mostly solid and spongelike, with pockets of molten rock within it. Huang says the new study indicates the upper magma chamber averages about 9 percent molten rock – consistent with earlier estimates of 5 percent to 15 percent melt – and the lower magma reservoir is about 2 percent melt.

    So there is about one-quarter of a Grand Canyon worth of molten rock within the much larger volumes of either the magma chamber or the magma reservoir, Farrell says.
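
    That comparison can be checked directly from the volumes and melt fractions quoted above; here is a minimal sketch using the article’s 1,000-cubic-mile Grand Canyon as the yardstick:

    ```python
    GRAND_CANYON = 1_000  # cubic miles, the yardstick used in the article

    # (total volume in cubic miles, melt fraction), as quoted above
    magma_bodies = {
        "upper magma chamber":   (2_500, 0.09),   # fills the Grand Canyon ~2.5 times
        "lower magma reservoir": (11_200, 0.02),  # fills the Grand Canyon ~11.2 times
    }

    for name, (volume, melt_fraction) in magma_bodies.items():
        molten = volume * melt_fraction
        print(f"{name}: ~{molten:,.0f} cubic miles of melt, "
              f"about {molten / GRAND_CANYON:.2f} of a Grand Canyon")
    ```

    Both bodies work out to roughly 225 cubic miles of melt, consistent with Farrell’s “one-quarter of a Grand Canyon” figure.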

    No increase in the danger

    The researchers emphasize that Yellowstone’s plumbing system is no larger – nor closer to erupting – than before, only that they now have used advanced techniques to make a complete image of the system that carries hot and partly molten rock upward from the top of the Yellowstone hotspot plume – about 40 miles beneath the surface – to the magma reservoir and the magma chamber above it.

    “The magma chamber and reservoir are not getting any bigger than they have been, it’s just that we can see them better now using new techniques,” Farrell says.

    Study co-author Fan-Chi Lin, an assistant professor of geology and geophysics, says: “It gives us a better understanding of the Yellowstone magmatic system. We can now use these new models to better estimate the potential seismic and volcanic hazards.”

    The researchers point out that the previously known upper magma chamber was the immediate source of three cataclysmic eruptions of the Yellowstone caldera 2 million, 1.2 million and 640,000 years ago, and that isn’t changed by discovery of the underlying magma reservoir that supplies the magma chamber.

    “The actual hazard is the same, but now we have a much better understanding of the complete crustal magma system,” says study co-author Robert B. Smith, a research and emeritus professor of geology and geophysics at the University of Utah.

    The gorgeous colors of Yellowstone National Park’s Grand Prismatic hot spring are among the park’s myriad hydrothermal features created by the fact Yellowstone is a supervolcano – the largest type of volcano on Earth. A new University of Utah study reports discovery of a huge magma reservoir beneath Yellowstone’s previously known magma chamber. That doesn’t increase the risk of an eruption, but means scientists are getting a better view of Yellowstone’s volcanic plumbing system. Credit: “Windows into the Earth,” Robert B. Smith and Lee J. Siegel

    The three supervolcano eruptions at Yellowstone – on the Wyoming-Idaho-Montana border – covered much of North America in volcanic ash. A supervolcano eruption today would be cataclysmic, but Smith says the annual chance is 1 in 700,000.

    Before the new discovery, researchers had envisioned partly molten rock moving upward from the Yellowstone hotspot plume via a series of vertical and horizontal cracks, known as dikes and sills, or as blobs. They still believe such cracks move hot rock from the plume head to the magma reservoir and from there to the shallow magma chamber.

    Anatomy of a supervolcano

    The study in Science is titled, The Yellowstone magmatic system from the mantle plume to the upper crust. Huang, Lin, Farrell and Smith conducted the research with Brandon Schmandt at the University of New Mexico and Victor Tsai at the California Institute of Technology. Funding came from the University of Utah, National Science Foundation, Brinson Foundation and William Carrico.

    Yellowstone is among the world’s largest supervolcanoes, with frequent earthquakes and Earth’s most vigorous continental geothermal system.

    The three ancient Yellowstone supervolcano eruptions were only the latest in a series of more than 140 as the North American plate of Earth’s crust and upper mantle moved southwest over the Yellowstone hotspot, starting 17 million years ago at the Oregon-Idaho-Nevada border. The hotspot eruptions progressed northeast before reaching Yellowstone 2 million years ago.

    Here is how the new study depicts the Yellowstone system, from bottom to top:

    — Previous research has shown the Yellowstone hotspot plume rises from a depth of at least 440 miles in Earth’s mantle. Some researchers suspect it originates 1,800 miles deep at Earth’s core. The plume rises from the depths northwest of Yellowstone. The plume conduit is roughly 50 miles wide as it rises through Earth’s mantle and then spreads out like a pancake as it hits the uppermost mantle about 40 miles deep. Earlier Utah studies indicated the plume head was 300 miles wide. The new study suggests it may be smaller, but the data aren’t good enough to know for sure.

    — Hot and partly molten rock rises in dikes from the top of the plume at 40 miles depth up to the bottom of the 11,200-cubic mile magma reservoir, about 28 miles deep. The top of this newly discovered blob-shaped magma reservoir is about 12 miles deep, Huang says. The reservoir measures 30 miles northwest to southeast and 44 miles southwest to northeast. “Having this lower magma body resolved the missing link of how the plume connects to the magma chamber in the upper crust,” Lin says.

    — The 2,500-cubic mile upper magma chamber sits beneath Yellowstone’s 40-by-25-mile caldera, or giant crater. Farrell says it is shaped like a gigantic frying pan about 3 to 9 miles beneath the surface, with a “handle” rising to the northeast. The chamber is about 19 miles from northwest to southeast and 55 miles southwest to northeast. The handle is the shallowest, long part of the chamber that extends 10 miles northeast of the caldera.

    Scientists once thought the shallow magma chamber was 1,000 cubic miles. But at science meetings and in a published paper this past year, Farrell and Smith showed the chamber was 2.5 times bigger than once thought. That has not changed in the new study.

    Discovery of the magma reservoir below the magma chamber solves a longstanding mystery: Why Yellowstone’s soil and geothermal features emit more carbon dioxide than can be explained by gases from the magma chamber, Huang says. Farrell says a deeper magma reservoir had been hypothesized because of the excess carbon dioxide, which comes from molten and partly molten rock.

    A better, deeper look at Yellowstone

    As with past studies that made images of Yellowstone’s volcanic plumbing, the new study used seismic imaging, which is somewhat like a medical CT scan but uses earthquake waves instead of X-rays to distinguish rock of various densities. Quake waves go faster through cold rock, and slower through hot and molten rock.
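
    A toy travel-time calculation illustrates the principle (a sketch only; the velocities and path length are placeholders, not values from the study): a wave that crosses a hotter, slower region arrives late, and tomography combines many such delays along crossing ray paths to map where the slow rock sits.

    ```python
    def travel_time_s(path_km, velocity_km_per_s):
        """Time for a seismic wave to cover a path at a given average velocity."""
        return path_km / velocity_km_per_s

    COLD_CRUST_V = 6.0  # km/s, placeholder wave speed in cold rock
    HOT_ROCK_V   = 5.5  # km/s, placeholder speed in hot, partly molten rock

    path_km = 100.0     # illustrative length of a ray path crossing the anomaly
    delay = travel_time_s(path_km, HOT_ROCK_V) - travel_time_s(path_km, COLD_CRUST_V)
    print(f"extra delay from the hot region: {delay:.2f} s")  # ~1.5 s late arrival
    ```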

    For the new study, Huang developed a technique to combine two kinds of seismic information: Data from local quakes detected in Utah, Idaho, the Teton Range and Yellowstone by the University of Utah Seismograph Stations and data from more distant quakes detected by the National Science Foundation-funded EarthScope array of seismometers, which was used to map the underground structure of the lower 48 states.

    The Utah seismic network has closely spaced seismometers that are better at making images of the shallower crust beneath Yellowstone, while EarthScope’s seismometers are better at making images of deeper structures.

    “It’s a technique combining local and distant earthquake data better to look at this lower crustal magma reservoir,” Huang says.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About Phys.org in 100 Words

    Phys.org™ (formerly Physorg.com) is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004, Phys.org’s readership has grown steadily to include 1.75 million scientists, researchers, and engineers every month. Phys.org publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quancast 2009 includes Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

     
  • richardmitnick 7:34 am on April 23, 2015 Permalink | Reply
    Tags: Applied Research & Technology

    From ANU: “Australia can cut emissions and grow its economy” 

    Australian National University

    22 April 2015
    No Writer Credit

    Australia can make deep cuts to its carbon emissions and move to full renewable energy for its electricity supply at a relatively low cost, an ANU report has found.

    The report, written by Associate Professor Frank Jotzo and PhD scholar Luke Kemp, reviews the evidence from major studies over the past eight years.

    It finds that the cost estimates for Australia reaching ambitious emissions reduction goals came down in every successive major report.

    “Deep cuts to Australia’s emissions can be achieved, at a low cost,” said Associate Professor Jotzo, director of the ANU Centre for Climate Economics and Policy at the Crawford School of Public Policy.

    Australia has committed to cut greenhouse gas emissions to five per cent below year 2000 levels by 2020, and is due in coming months to decide on emissions reduction targets for after 2020.

    Australia has among the world’s highest per-capita carbon emissions, due to its heavy reliance on coal for electricity generation.

    Associate Professor Jotzo’s report, commissioned by WWF Australia (World Wildlife Fund), found the cost of moving to renewable energy was becoming cheaper, and strong climate action could be achieved while maintaining economic growth.

    “At the heart of a low-carbon strategy for Australia is a carbon-free power system,” he said.

    “Australia has among the best prerequisites in the world for moving to a fully renewable energy electricity supply.”

    He said the costs of carbon-free technology, such as wind and solar power, have fallen faster than expected.

    “For example, large-scale solar panel power stations are already only half the cost that the Treasury’s 2008 and 2011 modelling studies estimated they would be in the year 2030,” he said.

    The report is available at the WWF Australia website.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ANU Campus

    ANU is a world-leading university in Australia’s capital city, Canberra. Our location points to our unique history, ties to the Australian Government and special standing as a resource for the Australian people.

    Our focus on research as an asset, and an approach to education, ensures our graduates are in demand the world over for their ability to understand, and to apply vision and creativity to addressing, complex contemporary challenges.

     
  • richardmitnick 7:04 am on April 23, 2015 Permalink | Reply
    Tags: Applied Research & Technology

    From DESY: “Scientists X-ray anti-inflammatory drug candidates” 

    DESY

    2015/04/22
    No Writer Credit

    Structure of the Spiegelmer NOX-E36 bound to its target protein CCL2. Credit: Dominik Oberthür/CFEL

    Using DESY’s ultra-bright X-ray source PETRA III, scientists have decoded the molecular and three-dimensional structure of two promising drug candidates from the new group of Spiegelmers for the first time.

    The results provide a deeper understanding of the mode of action of these substances that have already entered clinical trials. The researchers from the Universities of Hamburg and Aarhus (Denmark) together with colleagues from the biotech company NOXXON in Berlin present their work in the journal Nature Communications.

    Spiegelmers are a young group of promising pharmaceutical substances. They rely on the same building blocks as the nucleic acids RNA and DNA that fulfil various tasks in the organism – from storing genetic information and messaging to the regulation of genes. Artificial RNA or DNA molecules called aptamers can be tailored to bind to certain proteins with high specificity, blocking their function. Aptamers are well tolerated in the organism as they consist of natural building blocks. For these reasons, aptamers are seen as promising drug candidates. An aptamer for the treatment of age-related macular degeneration [AMD], an eye condition that can lead to blindness, has been approved and on the market since 2006.

    Usually, RNA and DNA molecules are quickly degraded by enzymes within the body. This severely limits their application as pharmaceutical drugs. However, most biomolecules come in two mirror-image variants, the L-form and the D-form. Natural nucleic acids always exist in the D-form, while proteins are always built in their L-form in the body. Artificial aptamers that are constructed in the L-form – which does not occur naturally for nucleic acids – are not degraded by the organism. These mirror-image variants of aptamers are called Spiegelmers. “An advantage of Spiegelmers is that they are not targeted by the body’s enzymes,” explains Prof. Christian Betzel from the University of Hamburg.

    “Spiegelmers can be identified and optimised in the lab through a sophisticated evolutionary procedure. However, exact structure data of Spiegelmers have not been available until now,” says first author Dr. Dominik Oberthür from the Center for Free-Electron Laser Science CFEL, a cooperation of DESY, Max Planck Society and the University of Hamburg. If the exact structure of a Spiegelmer and its binding site at the target protein is known, its mode of action can be decoded and its structure could be further fine-tuned, if necessary.

    The team led by Betzel used PETRA III’s bright X-rays to analyse the Spiegelmer NOX-E36 from NOXXON. It blocks the protein CCL2 that is involved in many inflammatory processes in the body. “If you target an inflammatory protein with a Spiegelmer, you have a good chance to tone down the inflammation in the body,” notes Betzel. NOX-E36 has already been successfully tested in a phase IIa clinical trial with patients.

    In order to analyse the structure of the drug candidate, the scientists first had to grow crystals of the Spiegelmer bound to its target protein CCL2. “Growing these crystals was quite a challenge,” recalls Betzel. Most biomolecules are notoriously hard to crystallise, because packing into a rigid, ordered crystal runs counter to their natural function.

    The crystals were analysed at the PETRA III measuring station P13, run by the European Molecular Biology Laboratory EMBL. Crystals diffract X-ray light, producing a characteristic pattern on the detector. From this diffraction pattern the structure of the crystal’s building blocks can be calculated – in this case the Spiegelmer’s structure, bound to its target protein. In the same manner, a group led by Laure Yatime from the University of Aarhus solved the structure of another Spiegelmer: NOX-D20 binds to the protein C5a, which is also involved in many inflammatory processes. The group also reports the structure in Nature Communications.

    The analyses reveal the structure of both Spiegelmers with a spatial resolution of 0.2 nanometres (millionths of a millimetre) – that’s on the order of individual atoms. “I am delighted to finally have a high resolution visualization of the remarkable shapes of two Spiegelmer drug candidates,” comments Dr. Sven Klussmann, founder and chief scientific officer of NOXXON, and also co-author on both articles. “The structural data not only provide the first look at the unusual interaction of a mirror-image oligonucleotide with a natural protein but also deepens our understanding of the two molecules’ mode of action.”
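
    How a diffraction pattern relates to that 0.2-nanometre figure can be sketched with Bragg’s law, nλ = 2d·sin(θ) (an illustration only; the 0.1 nm wavelength below is a typical value for macromolecular crystallography, not the reported P13 beamline setting):

    ```python
    import math

    wavelength_nm = 0.1  # assumed ~1 Angstrom X-rays, typical for macromolecular crystallography
    spacing_nm    = 0.2  # the resolution reported for the Spiegelmer structures

    # Bragg's law with n = 1: lambda = 2 * d * sin(theta)
    theta_deg = math.degrees(math.asin(wavelength_nm / (2 * spacing_nm)))
    print(f"0.2 nm spacings diffract at theta of about {theta_deg:.1f} degrees "
          f"(2-theta about {2 * theta_deg:.1f} degrees)")
    ```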

    Reference:
    Crystal structure of a mirror-image L-RNA aptamer (Spiegelmer) in complex with the natural L-protein target CCL2; Dominik Oberthür, John Achenbach, Azat Gabdulkhakov, Klaus Buchner, Christian Maasch, Sven Falke, Dirk Rehders, Sven Klussmann & Christian Betzel; Nature Communications, 2015; DOI: 10.1038/ncomms7923

    Structural basis for the targeting of complement anaphylatoxin C5a using a mixed L-RNA/L-DNA aptamer; Laure Yatime, Christian Maasch, Kai Hoehlig, Sven Klussmann, Gregers R. Andersen & Axel Vater; Nature Communications, 2015; DOI: 10.1038/ncomms7481

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    DESY is one of the world’s leading accelerator centres. Researchers use the large-scale facilities at DESY to explore the microcosm in all its variety – from the interactions of tiny elementary particles and the behaviour of new types of nanomaterials to biomolecular processes that are essential to life. The accelerators and detectors that DESY develops and builds are unique research tools. The facilities generate the world’s most intense X-ray light, accelerate particles to record energies and open completely new windows onto the universe. 
That makes DESY not only a magnet for more than 3000 guest researchers from over 40 countries every year, but also a coveted partner for national and international collaborations. Committed young researchers find an exciting interdisciplinary setting at DESY. The research centre offers specialized training for a large number of professions. DESY cooperates with industry and business to promote new technologies that will benefit society and encourage innovations. This also benefits the metropolitan regions of the two DESY locations, Hamburg and Zeuthen near Berlin.

     
  • richardmitnick 6:38 am on April 23, 2015 Permalink | Reply
    Tags: Applied Research & Technology, Vision

    From New Scientist: “These neon-lit cells reveal new ways of preventing blindness”

    New Scientist

    22 April 2015
    Andy Coghlan

    (Image: Alain Chédotal/INSERM)

    These neon cells may be blinding, but targeting them could also help preserve sight. In this close-up image of blood vessels – shown in blue – that supply blood to the retina of a one-week-old mouse, the nuclei of cells lining their walls appear in fluorescent colours. The bright-yellow cells are the ones of interest: they could be targeted to help prevent blindness in ageing eyes.

    Age-related macular degeneration, or AMD, often strikes in middle age, causing a person’s vision to deteriorate.

    Picture of the fundus showing intermediate age-related macular degeneration

    A key driver of the disease is excessive growth of obtrusive blood vessels in the retina. A team led by Alain Chédotal of the Institute of Vision in Paris has now discovered that a protein called Slit2 contributes to the rapid increase in offending blood vessels.

    The yellow cells in the picture are the ones that are dividing. When this activity occurs in middle age, it triggers the excessive increase in blood vessels that results in AMD. By blocking Slit2, it might be possible to reduce this effect, says Chédotal.

    When the team genetically altered mice so that they couldn’t produce Slit2, the animals no longer overproduced the blood vessels that lead to blindness. The researchers think that drugs targeting Slit2 could generate new treatments for AMD.

    Pioneering treatments for AMD currently rely on replacing epithelial pigment cells in the retina that are damaged by the disease. A team in the US has used pigment cells made from human embryonic stem cells to reverse damaged sight, in one case allowing a blind man to ride his horse again.

    Journal reference: Nature Medicine, DOI: 10.1038/nm.3849

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 1:40 pm on April 22, 2015 Permalink | Reply
    Tags: Applied Research & Technology

    From Princeton: “Decoding the Cell’s Genetic Filing System (Nature Chemistry)” 

    Princeton University

    April 22, 2015
    Tien Nguyen

    Source: Nature Chemistry

    A fully extended strand of human DNA measures about five feet in length. Yet it occupies a space just one-tenth of a cell by wrapping itself around histones—spool-like proteins—to form a dense hub of information called chromatin.

    Access to these meticulously packed genes is regulated by post-translational modifications, chemical changes to the structure of histones that act as on-off signals for gene transcription. Mistakes or mutations in histones can cause diseases such as glioblastoma, a devastating pediatric brain cancer.

    Researchers at Princeton University have developed a facile method to introduce non-native chromatin into cells to interrogate these signaling pathways. Published on April 6 in the journal Nature Chemistry, this work is the latest chemical contribution from the Muir lab towards understanding nature’s remarkable information indexing system.

    Tom Muir, the Van Zandt Williams, Jr. Class of ’65 Professor of Chemistry, began investigating transcriptional pathways in the so-called field of epigenetics almost a decade earlier. Deciphering such a complex and dynamic system posed a formidable challenge, but his research lab was undeterred. “It’s better to fail at something important than to succeed at something trivial,” he said.

    Muir recognized the value of introducing chemical approaches to epigenetics to complement early contributions that came mainly from molecular biologists and geneticists. If epigenetics was like a play, he said, molecular biology and genetics could identify the characters but chemistry was needed to understand the subplots.

    These subplots, or post-translational modifications of histones, of which there are more than 100, can occur cooperatively and simultaneously. Traditional methods to probe post-translational modifications involved synthesizing modified histones one at a time, which was a very slow process that required large amounts of biological material.

    Last year, the Muir group introduced a method that would massively accelerate this process. The researchers generated a library of 54 nucleosomes—single units of chromatin, like pearls on a necklace—encoded with DNA-barcodes, unique genetic tags that can be easily identified. Published in the journal Nature Methods, the high throughput method required only microgram amounts of each nucleosome to run approximately 4,500 biochemical assays.

    “The speed and sensitivity of the assay was shocking,” Muir said. Each biochemical assay involved treatment of the DNA-barcoded nucleosome with a writer, reader or nuclear extract, to reveal a particular binding preference of the histone. The products were then isolated using a technique called chromatin immunoprecipitation and characterized by DNA sequencing, essentially an ordered readout of the nucleotides.
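
    In spirit, the readout reduces to counting how often each nucleosome’s barcode appears in the sequencing reads before and after pulldown; enrichment of a barcode indicates that its modified histone was preferred. A schematic sketch with made-up barcodes and reads (not the Muir lab’s actual pipeline):

    ```python
    from collections import Counter

    # Hypothetical 6-nt barcodes identifying three library members (illustrative only).
    BARCODES = {"AACGTT": "nucleosome_01", "GGATCC": "nucleosome_02", "TTGCAA": "nucleosome_03"}

    def barcode_counts(reads):
        """Count reads whose first six bases match a library barcode."""
        counts = Counter()
        for read in reads:
            member = BARCODES.get(read[:6])
            if member:
                counts[member] += 1
        return counts

    input_reads    = ["AACGTTACGT", "GGATCCTTAG", "TTGCAAGCTA", "GGATCCGGCA"]  # before pulldown
    pulldown_reads = ["GGATCCAATC", "GGATCCCGTT", "AACGTTTGAC"]                # after immunoprecipitation

    inp, pull = barcode_counts(input_reads), barcode_counts(pulldown_reads)
    for member in BARCODES.values():
        enrichment = (pull[member] / sum(pull.values())) / (inp[member] / sum(inp.values()))
        print(f"{member}: enrichment of about {enrichment:.2f}")
    ```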

    “There have been incredible advances in genetic sequencing over the last 10 years that have made this work possible,” said Manuel Müller, a postdoctoral researcher in the Muir lab and co-author on the Nature Methods article.

    Schematic of approach using split inteins

    With this method, researchers could systematically interrogate the signaling system to propose mechanistic pathways. But these mechanistic insights would remain hypotheses unless they could be validated in vivo, meaning inside the cellular environment.

    The only method for modifying histones in vivo was extremely complicated and specific, said Yael David, a postdoctoral researcher in the Muir lab and lead author on the recent Nature Chemistry study that demonstrated a new and easily customizable approach.

    The method relied on using ultra-fast split inteins, protein fragments that have a great affinity for one another. First, one intein fragment was attached to a modified histone, by encoding it into a cell. Then, the other intein fragment was synthetically fused to a label, which could be a small protein tag, fluorophore or even an entire protein like ubiquitin.

    Within minutes of being introduced into the cell, the labeled intein fragment bound to the histone intein fragment. Then like efficient and courteous matchmakers, the inteins excised themselves and created a new bond between the label and modified histone. “It’s really a beautiful way to engineer proteins in a cell,” David said.

    Regions of the histone may be loosely or tightly packed, depending on signals from the cell indicating whether or not to transcribe a gene. By gradually lowering the amount of labeled intein introduced, the researchers could learn about the structure of chromatin and tease out which areas were more accessible than others.

    Future plans in the Muir lab will employ these methods to ask specific biological questions, such as whether disease outcomes can be altered by manipulating signaling pathways. “Ultimately, we’re developing methods at the service of biological questions,” Muir said.

    Read the articles:

    Nguyen, U.T.T.; Bittova, L.; Müller, M.; Fierz, B.; David, Y.; Houck-Loomis, B.; Feng, V.; Dann, G.P.; Muir, T.W. Accelerated chromatin biochemistry using DNA-barcoded nucleosome libraries. Nature Methods, 2014, 11, 834.

    David, Y.; Vila-Perelló, M; Verma, S.; Muir, T.W. Chemical tagging and customizing of cellular chromatin states using ultrafast trans-splicing inteins. Nature Chemistry, Advance online publication, April 6, 2015.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    Princeton University Campus

    About Princeton: Overview

    Princeton University is a vibrant community of scholarship and learning that stands in the nation’s service and in the service of all nations. Chartered in 1746, Princeton is the fourth-oldest college in the United States. Princeton is an independent, coeducational, nondenominational institution that provides undergraduate and graduate instruction in the humanities, social sciences, natural sciences and engineering.

    As a world-renowned research university, Princeton seeks to achieve the highest levels of distinction in the discovery and transmission of knowledge and understanding. At the same time, Princeton is distinctive among research universities in its commitment to undergraduate teaching.

    Today, more than 1,100 faculty members instruct approximately 5,200 undergraduate students and 2,600 graduate students. The University’s generous financial aid program ensures that talented students from all economic backgrounds can afford a Princeton education.

    Princeton Shield

     
  • richardmitnick 1:09 pm on April 22, 2015 Permalink | Reply
    Tags: Applied Research & Technology

    From U Washington: “Study explores new avenues of breast cancer therapy” 

    University of Washington

    04.16.2015
    Michael McCarthy

    Molecular and cell biology researcher Robert Lawrence and genome scientist Judit Villén examine the protein landscape of aggressive forms of breast cancer.

    Researchers have conducted an exhaustive analysis of more than 12,000 distinct proteins present in triple-negative breast cancer, an often aggressive and difficult-to-treat form of the disease.

    The results may help explain why these cancers often fail to respond to current drug treatments and may provide researchers with new targets for drug therapy.

    The researchers’ findings appear in this week’s issue of the journal Cell Reports. Robert Lawrence, a University of Washington graduate student in molecular and cellular biology, is the lead author of the article, The proteomic landscape of triple-negative breast cancer. Dr. Judit Villén, UW assistant professor of genome sciences, is the paper’s senior author.

    Triple-negative breast cancer cells have low levels of three receptors found in many breast cancers. Two receptors are for the hormones estrogen and progesterone and one receptor is for human epidermal growth factor receptor 2 or HER2.

    About one in five breast cancers are triple-negative. They tend to be more aggressive and grow and spread more rapidly. They are also less likely to respond to many standard treatments. Triple-negative breast cancer occurs more often in women under age 40 and in African American women.

    In the new study, the researchers used a technique called mass spectrometry to identify and quantify the proteins being produced in twenty breast cell lines and four breast tumor samples. This process is called proteomic analysis. The study was performed in collaboration with the labs of Dr. C Anthony Blau, director of the UW’s Center for Cancer Innovation and Su-In Lee, assistant professor of genome sciences and of computer science and engineering, both co-authors on the study.

    Analysis of the mass spectrometry data revealed that subtypes within these cancer samples could be identified on the basis of the types of proteins they expressed and the quantity of those proteins.

    “In terms of the protein expression, even within these subtypes, the cells are very different from one another, which suggests they will behave differently and respond to treatment differently,” Lawrence said.

    To further explore the relationship between genes, proteins and drug response, the researchers correlated their proteomic findings with existing genomic databases and conducted drug sensitivity tests on 16 of the cell lines.
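
    A minimal sketch of one such correlation, relating a single protein’s abundance to a drug-sensitivity readout across cell lines (the numbers are hypothetical, not data from the study or its website):

    ```python
    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length samples."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
        sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
        return cov / (sd_x * sd_y)

    # Hypothetical values across six cell lines (illustrative only).
    protein_abundance    = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7]  # e.g. log-intensity from mass spectrometry
    viability_after_drug = [0.9, 0.4, 1.1, 0.2, 0.6, 0.3]  # lower viability = more drug-sensitive

    r = pearson(protein_abundance, viability_after_drug)
    print(f"r = {r:.2f}")  # strongly negative here: higher abundance tracks with greater sensitivity
    ```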

    Their findings suggested why one drug that might work in one case of triple-negative cancers might fail to work on another. For example, the researchers found that some of the triple-negative breast cancer cells produced low levels of proteins involved in cell proliferation while producing high levels of proteins that allow cells to spread. Because most conventional chemotherapy targets pathways that promote proliferation, these cells would likely be able to resist standard treatments. Treatments that target the means by which the cancer spreads may, therefore, prove more effective against triple-negative breast cancers that resist conventional therapy.

    To make their findings available to other researchers, the UW team has created a website (https://zucchini.gs.washington.edu/BreastCancerProteome/) where the new proteomic and drug sensitivity findings as well as genomic data can be accessed.

    “We want this to be a resource for researchers everywhere,” Villén said. “Investigators will be able to go to the site and type in the name of their protein of interest and see how it is expressed in these cells. Or they can type in the name of a drug and see which genes and proteins are associated with the tumor’s sensitivity or resistance to the drug.”

    Villén expects in the coming year that proteomic analysis will be used in clinical trials, such as the Center for Cancer Innovation’s clinical trial in metastatic triple-negative breast cancer. Currently, using mass spectrometry, her lab can analyze a tumor sample in 24 hours. She expects that once researchers determine the 100 or so most important proteins, a tissue sample could be tested in just an hour. This analysis will provide more detailed diagnostics of breast tumors and may offer novel therapeutic avenues for resistant tumors.

    In addition to Blau, Lawrence, Lee and Villén, co-authors were Elizabeth M. Perez, Daniel Hernández and Kelsey M Haas, of the UW Department of Genome Sciences, Chris P. Miller of the UW Center for Cancer Innovation and Hanna Y. Irie of the Icahn School of Medicine, Mount Sinai, in New York City.

    The research was supported by Howard Temin Pathway to Independence Award K99/R00 from the National Cancer Institute at the National Institutes of Health (R00CA140789); a National Science Foundation grant (DBI-1355899); and funds from the South Sound CARE Foundation, the Washington Research Foundation, and the Gary E. Milgard Family Foundation.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The University of Washington is one of the world’s preeminent public universities. Our impact on individuals, on our region, and on the world is profound — whether we are launching young people into a boundless future or confronting the grand challenges of our time through undaunted research and scholarship. Ranked number 10 in the world in Shanghai Jiao Tong University rankings and educating more than 54,000 students annually, our students and faculty work together to turn ideas into impact and in the process transform lives and our world.

    So what defines us — the students, faculty and community members at the University of Washington? Above all, it’s our belief in possibility and our unshakable optimism. It’s a connection to others, both near and far. It’s a hunger that pushes us to tackle challenges and pursue progress. It’s the conviction that together we can create a world of good. Join us on the journey.

     
  • richardmitnick 9:21 am on April 22, 2015 Permalink | Reply
    Tags: Applied Research & Technology

    From UCSD: “‘Holey’ graphene for energy storage” 

    UC San Diego

    April 21, 2015
    Liezel Labios

    Rajaram Narayanan, a nanoengineering graduate student at UC San Diego Jacobs School of Engineering and lead author of the Nano Letters paper.

    Zigzag and armchair defects in graphene.

    Engineers at the University of California, San Diego have discovered a method to increase the amount of electric charge that can be stored in graphene, a two-dimensional form of carbon. The research, published recently online in the journal Nano Letters, may provide a better understanding of how to improve the energy storage ability of capacitors for potential applications in cars, wind turbines, and solar power.

    Capacitors charge and discharge very fast, and are more useful for quick large bursts of energy, such as in camera flashes and power plants. Their ability to rapidly charge and discharge is an advantage over the long charge time of batteries. However, the problem with capacitors is that they store less energy than batteries.

    How can the energy storage of a capacitor be improved? One approach by researchers in the lab of mechanical engineering professor Prabhakar Bandaru at the Jacobs School of Engineering at UC San Diego was to introduce more charge into a capacitor electrode using graphene as a model material for their tests. The principle is that increased charge leads to increased capacitance, which translates to increased energy storage.

    How it’s made

    Making a perfect carbon nanotube structure ― one without defects, which are holes corresponding to missing carbon atoms ― is next to impossible. Rather than avoiding defects, the researchers in Bandaru’s lab figured out a practical way to use them instead.

    “I was motivated from the point of view that charged defects may be useful for energy storage,” said Bandaru.

    The team used a method called argon-ion based plasma processing, in which graphene samples are bombarded with positively-charged argon ions. During this process, carbon atoms are knocked out of the graphene layers and leave behind holes containing positive charges ― these are the charged defects. Exposing the graphene samples to argon plasma increased the capacitance of the materials three-fold.
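
    The payoff of a three-fold capacitance increase follows from the standard relation for stored energy, E = ½CV². A generic sketch with placeholder numbers (not device values from the paper):

    ```python
    def stored_energy_j(capacitance_f, voltage_v):
        """Energy stored in a capacitor, E = 1/2 * C * V^2, in joules."""
        return 0.5 * capacitance_f * voltage_v ** 2

    C_before = 1e-3  # 1 mF, placeholder electrode capacitance
    V = 2.5          # volts, placeholder operating voltage

    E_before = stored_energy_j(C_before, V)
    E_after  = stored_energy_j(3 * C_before, V)  # three-fold capacitance after plasma treatment
    print(f"{E_before * 1e3:.2f} mJ -> {E_after * 1e3:.2f} mJ at the same voltage "
          f"(x{E_after / E_before:.0f})")
    ```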

    “It was exciting to show that we can introduce extra capacitance by introducing charged defects, and that we could control what kind of charged defect we could introduce into a material,” said Rajaram Narayanan, a graduate student in professor Bandaru’s research group and first author of the study.

    Using Raman spectroscopy and electrochemical measurements, the team was able to characterize the types of defects that argon plasma processing introduced into the graphene lattices. The results revealed the formation of extended defects known as “armchair” and “zigzag” defects, which are named based on the configurations of the missing carbon atoms.

    Additionally, electrochemical studies helped the team discover a new length scale that measures the distance between charges. “This new length scale will be important for electrical applications, since it can provide a basis for how small we can make electrical devices,” said Bandaru.

    Journal reference:

    R. Narayanan, H. Yamada, M. Karakaya, R. Podila, A. M. Rao, and P. R. Bandaru. Modulation of the Electrostatic and Quantum Capacitances of Few Layered Graphenes through Plasma Processing. Nano Letters 2015. DOI: 10.1021/acs.nanolett.5b00055

    This work was supported by a grant from the National Science Foundation.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    UC San Diego Campus

    The University of California, San Diego (also referred to as UC San Diego or UCSD), is a public research university located in the La Jolla area of San Diego, California, in the United States. The university occupies 2,141 acres (866 ha) near the coast of the Pacific Ocean with the main campus resting on approximately 1,152 acres (466 ha). Established in 1960 near the pre-existing Scripps Institution of Oceanography, UC San Diego is the seventh oldest of the 10 University of California campuses and offers over 200 undergraduate and graduate degree programs, enrolling about 22,700 undergraduate and 6,300 graduate students. UC San Diego is one of America’s Public Ivy universities, which recognizes top public research universities in the United States. UC San Diego was ranked 8th among public universities and 37th among all universities in the United States, and rated the 18th Top World University by U.S. News & World Report’s 2015 rankings.

     
  • richardmitnick 9:06 am on April 22, 2015 Permalink | Reply
    Tags: Applied Research & Technology

    From NOVA: “The EPA’s Natural Gas Problem” 

    NOVA

    11 Feb 2015
    Phil McKenna

    When U.S. President Barack Obama recently announced plans to rein in greenhouse gas emissions from oil and gas production, the opposing drum beats from industry and environmental groups were as fast as they were relentless. The industry group America’s Natural Gas Alliance bombarded Twitter with paid advertisements stating how little their industry actually emits. Press releases from leading environmental organizations deploring the plan’s reliance on largely voluntary actions flooded email inboxes.

    Opposition to any new regulation by industry, however, isn’t as lockstep as its lobbying groups would have us believe. At the same time, environmentalists’ focus on voluntary versus mandatory measures misses a much graver concern.

    The White House and EPA are seeking to regulate methane emissions from the oil and gas industry.

    The joint White House and U.S. Environmental Protection Agency proposal would reduce emissions of methane, the primary component of natural gas, by 40–45% from 2012 levels in the coming decade. It’s a laudable goal. While natural gas is relatively clean burning—emitting roughly half the amount of carbon dioxide per unit of energy as coal—it is an incredibly potent greenhouse gas if it escapes into the atmosphere unburned.

    Methane emissions from the oil and gas sector are estimated to be equivalent to the pollution from 180 coal-fired power plants, according to studies done by the Environmental Defense Fund (EDF), an environmental organization. Yet there is a problem: despite that estimate, no one, including EDF, knows for certain how much methane the oil and gas industry actually emits.

    The EPA publishes an annual inventory of U.S. Greenhouse Gas emissions, which it describes as “the most comprehensive accounting of total greenhouse gas emissions for all man-made sources in the United States.” But their estimates for the natural gas industry are, by their own admission, outdated, based on limited data, and likely significantly lower than actual emissions.

    The Baseline

    Getting the number right is extremely important as it will serve as the baseline for any future reductions. “The smaller the number they start with, the smaller the amount they have to reduce in coming years by regulation,” says Anthony Ingraffea, a professor of engineering at Cornell University in Ithaca, New York. “A 45% reduction on a rate that is too low will be a very small reduction. From a scientific perspective, this doesn’t amount to a hill of beans.”

    Ingraffea says methane emissions are likely several times higher than what the EPA estimates. (Currently, the EPA says that up to 1.8% of the natural gas distributed and produced in the U.S. escapes to the atmosphere.) Even if Ingraffea is right, it’s still a small percentage, but methane’s potency as a greenhouse gas makes even a small release incredibly significant. Over 100 years, methane traps 34 times more heat in the atmosphere than carbon dioxide. If you are only looking 20 years into the future, a time frame given equal weight by the United Nations’ Intergovernmental Panel on Climate Change, methane is 86 times more potent than carbon dioxide.
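
    Why the leak rate matters so much can be sketched with a simple CO2-equivalence calculation, using the warming factors quoted above and the molar masses of CH4 and CO2 (a rough illustration, not figures from the EPA inventory; the 5.4% case is a hypothetical “several times higher” rate):

    ```python
    CO2_PER_KG_CH4_BURNED = 44.0 / 16.0    # kg of CO2 from fully burning 1 kg of CH4 (molar-mass ratio)
    GWP = {"100-year": 34, "20-year": 86}  # CO2-equivalence factors for methane quoted in the article

    def leaked_share(leak_fraction, horizon):
        """CO2-equivalent of leaked CH4, relative to the CO2 from burning the gas that is delivered."""
        kg_leaked_per_kg_burned = leak_fraction / (1 - leak_fraction)
        return kg_leaked_per_kg_burned * GWP[horizon] / CO2_PER_KG_CH4_BURNED

    for leak_rate in (0.018, 0.054):       # EPA's 1.8% vs. a hypothetical rate three times higher
        for horizon in ("100-year", "20-year"):
            print(f"leak rate {leak_rate:.1%}, {horizon} horizon: leaked methane adds "
                  f"about {leaked_share(leak_rate, horizon):.0%} on top of the combustion CO2")
    ```

    Since gas combustion emits roughly half the CO2 of coal per unit of energy, results approaching or exceeding 100% on the 20-year horizon are what drive claims that a heavily leaking gas system can rival coal’s near-term climate impact.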

    After being damaged during Hurricane Ike in September 2008, a natural gas tank spews methane near Sabine Pass, Texas.

    If Ingraffea is right, the amount of methane released into the atmosphere from oil and gas wells, pipelines, processing and storage facilities has a warming effect approaching that of the country’s 557 coal-fired power plants. Reducing such a high rate of emissions by 40–45% would certainly help stall climate change. It would also likely be much more difficult to achieve than the cuts industry and environmental groups are currently debating.

    Ingraffea first called attention to what he and others believe are EPA underestimates in 2011 when he published a highly controversial paper along with fellow Cornell professor Robert Howarth. Their research suggested the amount of methane emitted by the natural gas industry was so great that relying on natural gas was actually worse for the climate than burning coal.

    Following the recent White House and EPA announcement, industry group America’s Natural Gas Alliance (ANGA) stated that they have reduced emissions by 17% since 1990 while increasing production by 37%. “We question why the administration would single out our sector for regulation, given our demonstrated reductions,” the organization wrote in a press release following the White House’s proposed policies. ANGA bases its emissions reduction on the EPA’s own figures and stands by the data. “We like to have independent third party verification, and we use the EPA’s figures for that,” says ANGA spokesman Daniel Whitten.

    Shifting Estimates

    But are the EPA estimates correct, and are they sufficiently independent? To come up with its annual estimate, the EPA doesn’t make direct measurements of methane emissions each year. Rather they multiply emission factors, the volume of a gas thought to be emitted by a particular source—like a mile of pipeline or a belching cow—by the number of such sources in a given area. For the natural gas sector, emission factors are based on a limited number of measurements conducted in the early 1990s in industry-funded studies.

    In 2010 the EPA increased its emissions factors for methane from the oil and natural gas sector, citing “outdated and potentially understated” emissions. The end result was a more than doubling of its annual emissions estimate from the prior year. In 2013, however, the EPA reversed course, lowering estimates for key emissions factors for methane at wells and processing facilities by 25–30%. When reached for comment, the EPA pointed me to their existing reports.

    The change was not driven by better scientific understanding but by political pressure, Howarth says. “The EPA got huge pushback from industry and decreased their emissions again, and not by collecting new data.” The EPA states that the reduction in emissions factors was based on “a significant amount of new information” that the agency received about the natural gas industry.

    However, a 2013 study published in the journal Geophysical Research Letters concludes that “the main driver for the 2013 reduction in production emissions was a report prepared by the oil and gas industry.” The report was a non-peer reviewed survey of oil and gas companies conducted by ANGA and the American Petroleum Institute.

    The EPA’s own inspector general released a report that same year that was highly critical of the agency’s estimates of methane and other harmful gasses. “Many of EPA’s existing oil and gas production emission factors are of questionable quality because they are based on limited and/or low quality data.” The report concluded that the agency likely underestimates emissions, which “hampers [the] EPA’s ability to accurately assess risks and air quality impacts from oil and gas production activities.”

    Underestimated

    Soon after the EPA lowered its emissions estimates, a number of independent studies based on direct measurements found higher methane emissions. In November 2013, a study based on direct measurements of atmospheric methane concentrations across the United States concluded actual emissions from the oil and gas sector were 1.5 times higher than EPA estimates. The study authors noted, “the US EPA recently decreased its methane emission factors for fossil fuel extraction and processing by 25–30% but we find that [methane] data from across North America instead indicate the need for a larger adjustment of the opposite sign.”

    In February 2014, a study published in the journal Science reviewed 20 years of technical literature on natural gas emissions in the U.S. and Canada and concluded that “official inventories consistently underestimate actual CH4 emissions.”

    “When you actually go out and measure methane emissions directly, you tend to come back with measurements that are higher than the official inventory,” says Adam Brandt, lead author of the study and an assistant professor of energy resources engineering at Stanford University. Brandt and his colleagues did not attempt to make an estimate of their own, but stated that in a worst-case scenario total methane emissions from the oil and gas sector could be three times higher than the EPA’s estimate.

    On January 22, eight days after the White House’s announcement, another study found similarly high emissions from a sector of the natural gas industry that is often overlooked. The study made direct measurements of methane emissions from natural gas pipelines and storage facilities in and around Boston, Massachusetts, and found that they were 3.9 times higher than the EPA’s estimate for the “downstream” sector, the parts of the system that transmit, distribute, and store natural gas.

    Most natural gas leaks are small, but large ones can have catastrophic consequences. The wreckage above was caused by a leak in San Bruno, California, in 2010.

    Boston’s aging, leak-prone, cast-iron pipelines likely make the city leakier than most, but the volume of emissions is nonetheless surprising: losses around the city total roughly $1 billion worth of natural gas per decade. The majority of methane emissions were previously believed to occur “upstream” at wells and processing facilities, and efforts to curb emissions, including the recent goals set by the White House, have overlooked the smaller pipelines that deliver gas to end users.

    “Emissions from end users have been only a very small part of conversation on emissions from natural gas,” says lead author Kathryn McKain, an atmospheric scientist at Harvard University. “Our findings suggest that we don’t understand the underlying emission processes which is essential for creating effective policy for reducing emissions.”

    The Boston study was one of 16 recent or ongoing studies coordinated by EDF to try to determine just how much methane is actually being emitted from the industry as a whole. Seven studies, focusing on different aspects of oil and gas industry infrastructure, have been published thus far. Two of the studies, including the recent Boston study, have found significantly higher emission rates. One study, conducted in close collaboration with industry, found lower emissions. EDF says it hopes to have all studies completed by the end of 2015. The EPA told me it will take the studies into account for possible changes in its current methane emission factors.

    Fraction of a Percent

    EDF is simultaneously working with industry to try to reduce methane emissions. A recent study commissioned by the environmental organization concluded the US oil and gas industry could cut methane emissions by 40% from projected 2018 levels at a cost of less than one cent per thousand cubic feet of natural gas, which today sells for about $5. The reductions could be achieved with existing emissions-control technologies and policies.

    “We are talking about one third or one fourth of a percent of the price of gas to meet these goals,” says Steven Hamburg, chief scientist for EDF. The 40–45% reduction goal recently announced by the White House is nearly identical to the level of cuts analyzed by EDF. To achieve the reduction, the White House proposes mandatory changes for new oil and gas infrastructure as well as voluntary measures for existing infrastructure.

    Thomas Pyle, president of the Institute for Energy Research, an industry organization, says industry is already reducing its methane emissions and doesn’t need additional rules. “It’s like regulating ice cream producers not to spill any ice cream during the ice cream making process,” he says. “It is self-evident for producers to want to capture this product with little or no emissions and make money from it.”

    Unlike ice cream makers, however, natural gas producers often vent their product intentionally as part of the production process. One of the biggest sources of methane emissions in natural gas production is gas purposely vented from pneumatic devices, which use pressurized methane to open and close valves and operate pumps. These devices typically release, or “bleed,” small amounts of gas during operation.

    Such equipment is used widely throughout natural gas extraction, processing, and transmission. A recent study by the Natural Resources Defense Council (NRDC) estimates that gas-driven pneumatic equipment vents 1.6–1.9 million metric tons of methane each year, a figure that accounts for nearly one-third of all methane lost by the natural gas industry, as estimated by the EPA.
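    Taken together, those two quoted figures imply an industry-wide total; the back-of-the-envelope arithmetic below is my own check on scale, not a number from NRDC or the EPA.

```python
# Implied industry-wide methane loss, using only the figures quoted above.
pneumatic_vented_mt = (1.6, 1.9)  # million metric tons CH4 per year (NRDC estimate)
share_of_total = 1 / 3            # pneumatic venting is "nearly one-third" of all losses

implied_total_mt = tuple(round(v / share_of_total, 1) for v in pneumatic_vented_mt)
print(implied_total_mt)  # roughly (4.8, 5.7) million metric tons CH4 per year
```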

    A natural gas distribution facility

    “Low-bleed” or “zero-bleed” controllers are available, though they are more expensive. The latter use compressed air or electricity instead of pressurized natural gas, or they capture methane that would otherwise be vented and reuse it. “Time and time again we see that we can operate this equipment without emissions or with very low emissions,” Hamburg says. According to the NRDC study, increased monitoring and repair of unintended leaks at natural gas facilities could eliminate an additional third of the industry’s methane emissions.

    Environmental organizations have come out in strong opposition to the lack of mandatory regulations for existing infrastructure, which, according to a recent EDF report, will account for nearly 90% of methane emissions in 2018.

    While industry groups oppose mandatory regulations on new infrastructure, at least one industry leader isn’t concerned. “I don’t believe the new regulations will hurt us at all,” says Mark Boling, an executive vice president at Houston-based Southwestern Energy Company, the nation’s fourth-largest producer of natural gas.

    Boling says the leak monitoring and repair programs his company began in late 2013 will pay for themselves in 12 to 18 months through reduced methane losses. The company has also replaced a number of pneumatic devices with zero-bleed, solar-powered electric pumps, and it is now testing fuel-cell-powered air compressors to replace additional methane-bleeding equipment, Boling says. In November, Southwestern Energy launched ONE Future, a coalition of companies from across the natural gas industry whose goal is to lower the industry’s methane leak rate to below one percent.

    Based on the EPA’s estimated leak rate of 1.8% and the fixes identified by EDF and NRDC, that goal seems attainable. But what if the actual rate is significantly higher, as Howarth and Ingraffea have long argued and recent studies seem to suggest? “We can sit here and debate whose numbers are right, ‘Is it 4%? 8%? Whatever,’ ” Boling says. “But there are cost-effective opportunities out there to reduce emissions, and we need to step up and do it.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 8:25 am on April 22, 2015 Permalink | Reply
    Tags: Applied Research & Technology, , ,   

    From Harvard: “A leap for ‘artificial leaf’” 

    Harvard University


    April 21, 2015
    Peter Reuell

    New technique could open door to producing alternative-energy devices more cheaply

    With support from Harvard President Drew Faust’s Climate Change Solutions Fund, Patterson Rockwood Professor of Energy Daniel Nocera and colleagues created an efficient method to harness the power of light to generate two powerful fuels. File photo by Rose Lincoln/Harvard Staff Photographer

    As an idea, the notion of an “artificial leaf” was always meant to be simple: Could scientists, using a handful of relatively cheap materials, harness the power of light to generate two powerful fuels — hydrogen and oxygen — by breaking apart water molecules?

    In practice, however, the idea faced a number of hurdles, including how to pattern the catalysts on silicon that would power the reaction. But that could soon change, says Patterson Rockwood Professor of Energy Daniel Nocera.

    Using an electro-chemical process similar to etching, Nocera and colleagues have developed a system of patterning that works in just minutes, as opposed to the weeks other techniques need.

    Dubbed reactive interface patterning promoted by lithographic electrochemistry, or RIPPLE, the process can be so tightly controlled that researchers can build photonic structures that control the light hitting the device and greatly increase its efficiency. The new system is described in two papers that appeared in recent weeks in the Journal of the American Chemical Society and the Proceedings of the National Academy of Sciences.

    “This is what I call frugal innovation,” Nocera said. “We called it RIPPLE because you can think of it like dropping a pebble in water that makes a pattern of ripples. This is really the simplest patterning technique that I know of. We take silicon, coat it with our catalyst, and within minutes we can pattern it using a standard electro-chemical technique we use in the lab.”

    The project was one of seven research efforts supported in the inaugural year of Harvard President Drew Faust’s Climate Change Solutions Fund. The $20 million fund was created to spur the development of renewable energy solutions and speed the transition from fossil fuels.

    “It’s already working,” Nocera said of the project. “We already have a home run, and that makes me very happy, because the idea we proposed actually works.”

    The ability to pattern catalysts — using cobalt phosphate to spur the creation of oxygen and a nickel-zinc alloy for hydrogen — on the silicon substrate is particularly important, Nocera said.

    “In our current system, we just have flat silicon and the catalyst is covering it, so the light has to come through the catalyst, and we have some energy loss,” he explained. “Using this, we are able to pattern the catalyst, so we have bare silicon in one location and the catalyst in another, so the light doesn’t have to go through the catalyst, making the system more efficient.”

    Equally important, Nocera said, the system allows for fast patterning of relatively large areas — far larger than other systems that use nano-scale patterning techniques.

    Ironically, the discovery of the technique came about almost by accident.

    “What we were trying to do was generate intense electrical fields to deposit the catalyst selectively on silicon,” Nocera explained. “It was during what was basically a control experiment that we noticed we didn’t need an intense electric field and could pattern the silicon quite easily.”

    While the mechanism at work in the patterning isn’t fully understood, Nocera and colleagues can maintain precise control over the process and produce everything from patterns of lines to rings to squares on silicon substrates.

    “It’s phenomenological. We don’t understand the mechanism yet,” Nocera said. “But we do understand how to control it, so we can fine-tune the spacing of the patterns, and what we’ve already produced can work for energy applications — with the catalyst and the artificial leaf, it’s remarkable.”

    See the full article here.

    Harvard is the oldest institution of higher education in the United States, established in 1636 by vote of the Great and General Court of the Massachusetts Bay Colony. It was named after the College’s first benefactor, the young minister John Harvard of Charlestown, who upon his death in 1638 left his library and half his estate to the institution. A statue of John Harvard stands today in front of University Hall in Harvard Yard, and is perhaps the University’s best known landmark.

    Harvard University has 12 degree-granting Schools in addition to the Radcliffe Institute for Advanced Study. The University has grown from nine students with a single master to an enrollment of more than 20,000 degree candidates including undergraduate, graduate, and professional students. There are more than 360,000 living alumni in the U.S. and over 190 other countries.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     