Tagged: NOVA

  • richardmitnick 8:28 am on April 27, 2015
    Tags: , , , NOVA   

    From NOVA: “Fracking’s Hidden Hazards” 


    22 Apr 2015
    Terri Cook

    Late on a Saturday evening in November 2011, Sandra Ladra was reclining in a chair in her living room in Prague, Oklahoma, watching television with her family. Suddenly, the house started to shake, and rocks began to fall off her stone-faced fireplace, tumbling onto the floor, into Ladra’s lap, and onto her legs, causing significant injuries that required immediate medical treatment.

    The first tremor that shook Ladra’s home was a magnitude-5.0 earthquake, an unusual event in what used to be a relatively calm state, seismically speaking. Two more struck the area over the next two days. More noteworthy, though, are her claims that the events were manmade. In a petition filed in the Lincoln County District Court, she alleges that the earthquake was the direct result of the actions of two energy companies, New Dominion and Spress Oil Company, that had injected wastewater fluids deep underground in the area.

    House damage in central Oklahoma from a magnitude 5.7 earthquake on November 6, 2011.

    Ladra’s claim is not as preposterous as it may seem. Scientists have recognized since the 1960s that humans can cause earthquakes by injecting fluids at high pressure into the ground. This was first established near Denver, Colorado, at the federal chemical weapons manufacturing facility known as the Rocky Mountain Arsenal. Faced with the thorny issue of how to get rid of the arsenal’s chemical waste, the U.S. Army drilled a 12,044-foot-deep disposal well and began routinely injecting wastewater into it in March 1962.

    Less than seven weeks later, earthquakes were reported in the area, a region that had last felt an earthquake in 1882. Although the Army initially denied any link, when geologist David Evans demonstrated a strong correlation between the Arsenal’s average injection rate and the frequency of earthquakes, the Army agreed to halt its injections.

    Since then, direct measurements, hydrologic modeling, and other studies have shown that earthquakes like those at the Rocky Mountain Arsenal occur when injection increases the fluid pressure in the pores and fractures of rocks or soil. By reducing the frictional force that resists fault slip, the increased pore pressure can lubricate preexisting faults, altering the ambient stress balance and potentially triggering earthquakes on favorably oriented faults.
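    One common way to express that mechanism, implicit in the description above but not written out in the article, is the Coulomb failure criterion: a fault slips once the shear stress on it exceeds the frictional resistance set by the effective normal stress (the normal stress minus the pore pressure). The sketch below illustrates the idea with entirely made-up numbers.

```python
# Illustration of the Coulomb failure criterion described above: a fault slips
# when shear stress exceeds friction times the effective normal stress
# (normal stress minus pore pressure), plus any cohesion. Injection raises the
# pore pressure and so lowers the clamping force. All numbers are hypothetical.

def fault_slips(shear_mpa, normal_mpa, pore_pressure_mpa, friction=0.6, cohesion=0.0):
    """Return True if the Coulomb criterion predicts slip."""
    effective_normal = normal_mpa - pore_pressure_mpa
    return shear_mpa >= cohesion + friction * effective_normal

# A fault that is stable at ambient pore pressure...
print(fault_slips(shear_mpa=28.0, normal_mpa=60.0, pore_pressure_mpa=10.0))  # False
# ...can be pushed past its failure threshold by a modest pressure increase.
print(fault_slips(shear_mpa=28.0, normal_mpa=60.0, pore_pressure_mpa=15.0))  # True
```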

    Although injection-induced earthquakes have become commonplace across broad swaths of the central and eastern U.S. over the last few years, building codes—and the national seismic hazard maps used to update them—don’t currently take this increased hazard into account. Meanwhile, nagging questions—such as how to definitively diagnose an induced earthquake, whether manmade quakes will continue to increase in size, and how to judge whether mitigation measures are effective—have regulators, industry, and the public on shaky ground.

    Surge in Seismicity

    The quake that shook Ladra’s home is one example of the dramatic increase in seismicity that began across the central and eastern U.S. in 2001. Once considered geologically stable, the midcontinent has grown increasingly feisty, recording an 11-fold increase in the number of quakes between 2008 and 2011 compared with the previous 31 years, according to a study published in Geology in 2013.

    The increase has been especially dramatic in Oklahoma, which in 2014 recorded 585 earthquakes of magnitude 3.0 or greater—more than in the previous 35 years combined. “The increase in seismicity is huge relative to the past,” says Randy Keller, who retired in December after serving for seven years as the director of the Oklahoma Geological Survey (OGS).

    Yesterday, Oklahoma finally acknowledged that the uptick in earthquakes is likely due to wastewater disposal. “The Oklahoma Geological Survey has determined that the majority of recent earthquakes in central and north-central Oklahoma are very likely triggered by the injection of produced water in disposal wells,” the state reported on a new website. While the admission is an about-face for the government, which had previously questioned any link between the two, it doesn’t coincide with any new regulations intended to stop the earthquakes or improve building codes to cope with the tremors. For now, residents of Oklahoma may be just as vulnerable as they have been.

    This surge in seismicity has been accompanied by a spike in the number of injection wells and the corresponding amount of wastewater disposed via those wells. According to the Railroad Commission of Texas, underground wastewater injection in Texas increased from 46 million barrels in 2005 to nearly 3.5 billion barrels in 2011. Much of that fluid has been injected in the Dallas area, where prior to 2008, only one possible earthquake large enough to be noticed by people had occurred in recorded history. Since 2008, the U.S. Geological Survey (USGS) has documented over 120 quakes in the area.

    The increase in injection wells is due in large part to the rapid expansion of the shale-gas industry, which has unlocked vast new supplies of natural gas and oil that would otherwise be trapped in impermeable shale formations. The oil and gas is released by a process known as fracking, which injects a mix of water, chemicals, and sand at high enough pressure to fracture the surrounding rock, forming cracks through which the hydrocarbons, mixed with large volumes of fluid, can flow. The resulting mixture is pumped to the surface, where the hydrocarbons are separated out, leaving behind billions of gallons of wastewater, much of which is injected back underground.

    Many scientists, including Keller, believe there is a correlation between the two increases. “It’s hard to look at where the earthquakes are, and where the injection wells are, and not conclude there’s got to be some connection,” he says. Rex Buchanan, interim director of the Kansas Geological Survey (KGS), agrees there’s a correlation for most of the recent tremors in his state. “Certainly we’re seeing a huge spike in earthquakes in an area where we’ve also got big disposal wells,” he says. But there have been other earthquakes whose cause “we’re just not sure about,” Buchanan says.

    Diagnosing an Earthquake

    Buchanan’s uncertainty stems in part from the fact that determining whether a specific earthquake was natural or induced by human activity is highly controversial. Yet this is the fundamental scientific question at the core of Ladra’s lawsuit and dozens of similar cases that have been filed across the heartland over the last few years. Beyond assessing legal liability, this determination is also important for assessing potential seismic hazard as well as for developing effective methods of mitigation.

    One reason it’s difficult to assess whether a given earthquake was human-induced is that both types of earthquakes look similar on seismograms; they can’t be distinguished by casual observation. A second is that manmade earthquakes are unusual events; only about 0.1 percent of injection wells in the U.S. have been linked to induced earthquakes large enough to be felt, according to Arthur McGarr, a geologist at the USGS Earthquake Science Center. Finally, scientists have comparatively few unambiguous examples of induced earthquakes. That makes it difficult to create a yardstick against which potential “suspects” can be compared. Like a team of doctors attempting to diagnose a rare disease, scientists must examine all the “symptoms” of an earthquake to make the best possible pronouncement.

    To accomplish this, two University of Texas seismologists developed a checklist of seven “yes” and “no” questions that focus on four key characteristics: the area’s background seismicity, the proximity of an earthquake to an active injection well, the timing of the seismicity relative to the onset of injection, and the injection practices. Ultimately, “if an injection activity and an earthquake sequence correlate in space and time, with no known previous earthquake activity in the area, the earthquakes were likely induced,” wrote McGarr and co-authors in Science earlier this year.
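    As a purely illustrative sketch of how such a screening checklist can be applied, the snippet below tallies yes/no answers grouped under the four characteristics named above. The question wording and the simple "all yes, all no, otherwise ambiguous" rule are assumptions for illustration, not the published University of Texas checklist.

```python
# Illustrative sketch of a seven-question yes/no screen for induced seismicity,
# grouped by the four characteristics described above. The question wording and
# the scoring rule are assumptions for illustration, not the published checklist.

QUESTIONS = {
    "background seismicity": ["Is the area historically almost free of earthquakes?"],
    "proximity":             ["Are the earthquakes close to an active injection well?"],
    "timing":                ["Did seismicity begin or change after injection started?",
                              "Do seismicity rates track changes in injection?"],
    "injection practices":   ["Are injected volumes or pressures unusually large?",
                              "Is injection into or near faulted basement rock?",
                              "Is there a plausible hydraulic path from well to fault?"],
}

def screen(answers):
    """answers maps each question to True ('yes') or False ('no')."""
    asked = [q for group in QUESTIONS.values() for q in group]
    yes = sum(1 for q in asked if answers.get(q))
    if yes == len(asked):
        return "likely induced"
    if yes == 0:
        return "likely natural"
    return "ambiguous: needs case-by-case study"

# Example: every answer is "yes".
print(screen({q: True for group in QUESTIONS.values() for q in group}))  # likely induced
```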

    Oilfield waste arrives by tanker truck at a wastewater disposal facility near Platteville, Colorado.

    These criteria, however, remain open to interpretation, as the Prague example illustrates. Ladra’s petition cites three scientific studies that have linked the increase in seismicity in central Oklahoma to wastewater injection operations. A Cornell University-led study, which specifically examined the earthquake in which Ladra claims she was injured, concluded that the event began within about 200 meters of active injection wells—closely correlating in space—and was therefore induced.

    In a March 2013 written statement, the OGS had concluded that this earthquake was the result of natural causes, as were two subsequent tremors that shook Prague over the next few days. The second earthquake, a magnitude-5.7 event that struck less than 24 hours later, was the largest earthquake ever recorded in Oklahoma.

    The controversy hinged on several of the “symptoms,” including the timing of the seismicity. Prior to the Prague sequence, scientists believed that a lag time of weeks to months between the initiation of injection and the onset of seismicity was typical. But in Prague, the fluid injection had been occurring for nearly 20 years. The OGS therefore concluded that there was no clear temporal correlation. By contrast, the Cornell researchers decided that the diagnostic time scale of induced seismicity needed to be reconsidered.

    Another key issue that has been raised by the OGS is that of background seismicity. Oklahoma has experienced relatively large earthquakes in the past, including a magnitude-5.0 event that occurred in 1952 and more than 10 earthquakes of magnitude 4.0 or greater since then, so the Prague sequence was hardly the first bout of shaking in the region.

    The uncertainty associated with both these characteristics places the Prague earthquakes in an uncomfortable middle ground between earthquakes that are “clearly not induced” and “clearly induced” on the University of Texas checklist, making a definitive diagnosis unlikely. Meanwhile, the increasing frequency of earthquakes across the midcontinent and the significant size of the Prague earthquakes are causing scientists to rethink the region’s potential seismic hazard.

    Is the Public at Risk?

    Earthquake hazard is a function of multiple factors, including event magnitude and depth, recurrence interval, and the material through which the seismic waves propagate. These data are incorporated into calculations the USGS uses to generate the National Seismic Hazard Maps.

    Updated every six years, these maps indicate the potential for severe ground shaking across the country over a 50-year period and are used to set design standards for earthquake-resistant construction. The maps influence decisions about building codes, insurance rates, and disaster management strategies, with a combined estimated economic impact totaling hundreds of billions of dollars per year.
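    One detail worth spelling out: maps of this kind state the chance that a given level of shaking will be exceeded during the 50-year window. Under the standard assumption of a steady, memoryless (Poisson) event rate, which the article does not discuss explicitly, that probability follows directly from an annual rate, as in the sketch below.

```python
import math

# Convert an annual exceedance rate into the probability of exceedance over a
# 50-year window, assuming a Poisson (memoryless) process. This is a standard
# hazard-map convention, assumed here rather than taken from the article.

def exceedance_probability(annual_rate, years=50):
    return 1 - math.exp(-annual_rate * years)

# Shaking with a 475-year average return period, a common benchmark, has
# roughly a 10% chance of being exceeded in any 50-year window.
print(round(exceedance_probability(1 / 475.0), 3))  # ~0.1
```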

    When the latest version of the maps was released in July, the USGS intentionally excluded the hazard from manmade earthquakes. Part of the reason was the timing, according to Nicolas Luco, a research structural engineer at the USGS. The maps are released on a schedule that dovetails with building code revisions, so the agency couldn’t delay them even though the induced-seismicity update wasn’t ready, he says.

    Such changes, however, may take years to implement. Luco notes that the building code revisions based on the previous version of the USGS hazard maps, released in 2008, just became law in California in 2014, a six-year lag in one of the most seismically threatened states in the country.

    Instead, the USGS is currently developing a separate procedure, which they call a hazard model, to account for the hazard associated with induced seismicity. The new model may raise the earthquake hazard level substantially in some parts of the U.S. where it has previously been quite low, according to McGarr. But there are still open questions about how to account for induced seismicity in maps of earthquake shaking and in building codes, Luco says.

    McGarr believes that the new hazard calculations will result in more rigorous building codes for earthquake-resistant construction and that adhering to these changes will affect the construction as well as the oil, gas, and wastewater injection industries. “Unlike natural earthquakes, induced earthquakes are caused by man, not nature, and so the oil and gas industry may be required to provide at least some of the funds needed to accommodate the revised building codes,” he says.

    But Luco says it may not make sense to incorporate the induced seismicity hazard, which can change from year to year, into building codes that are updated every six years. Over-engineering is also a concern due to the transient nature of induced seismicity. “Engineering to a standard of earthquake hazard that could go away, that drives up cost,” says Justin Rubinstein, a seismologist with the USGS Earthquake Science Center. A further complication, according to Luco, is that building code changes only govern new construction, so they don’t upgrade vulnerable existing structures, for which retrofit is generally not mandatory.

    The occurrence of induced earthquakes clearly compounds the risk to the public. “The risk is higher. The question is, how much higher?” Luco asks. Building codes are designed to limit the risk of casualties associated with building collapse—“and that usually means bigger earthquakes,” he says. So the critical question, according to Luco, is, “Can we get a really large induced earthquake that could cause building collapses?”

    Others are wondering the same thing. “Is it all leading up to a bigger one?” asks Keller, former director of the OGS. “I don’t think it’s clear that it is, but it’s not clear that it isn’t, either,” he says. Recalling a magnitude-4.8 tremor that shook southern Kansas in November, KGS’ Buchanan agrees. “I don’t think there’s any reason to believe that these things are going to magically stop at that magnitude,” he says.

    Coping with Quakes

    After assessing how much the risk to the public has increased, our society must decide upon the best way to cope with human-induced earthquakes. A common regulatory approach, one which Oklahoma has adopted, has been to implement “traffic light” control systems. Normal injection can proceed under a green light, but if induced earthquakes begin to occur, the light changes to yellow, at which point the operator must reduce the volume of injection, the rate, or both to avoid triggering larger events. If larger earthquakes strike, the light turns red, and further injection is prohibited. Such systems have recently been implemented in Oklahoma, Colorado, and Texas.
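    A traffic-light protocol boils down to a simple rule keyed to the largest earthquake observed near a well. The sketch below shows the logic; the magnitude thresholds are placeholders, since actual trigger levels vary by regulator and are not given in the article.

```python
# Minimal sketch of a "traffic light" rule for an injection well. The magnitude
# thresholds are placeholders; real trigger levels vary by state regulator.

YELLOW_MAG = 2.5   # hypothetical: reduce injection volume and/or rate above this
RED_MAG = 4.0      # hypothetical: halt injection above this

def traffic_light(max_local_magnitude):
    if max_local_magnitude >= RED_MAG:
        return "red: stop injection"
    if max_local_magnitude >= YELLOW_MAG:
        return "yellow: reduce injection volume and/or rate"
    return "green: normal operations"

print(traffic_light(1.8))  # green: normal operations
print(traffic_light(3.1))  # yellow: reduce injection volume and/or rate
print(traffic_light(4.4))  # red: stop injection
```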

    But how will we know if these systems are effective? The largest Rocky Mountain Arsenal-related earthquakes, three events between magnitudes 5.0 and 5.5, all occurred more than a year after injection had ceased, so it’s unclear for how long the systems should be evaluated. Their long-term effectiveness is also uncertain because the ability to control the seismic hazard decreases over time as the pore pressure effects move away from the well, according to Shemin Ge, a hydrogeologist at the University of Colorado, Boulder.

    Traffic light systems also rely on robust seismic monitoring networks that can detect the initial, very small injection-induced earthquakes, according to Ge. To identify hazards while there is still sufficient time to take corrective action, it’s ideal to identify events of magnitude 2.0 or less, wrote McGarr and his co-authors in Science. However, the current detection threshold across much of the contiguous U.S. is magnitude 3.0, he says.

    Kansas is about to implement a mitigation approach that focuses on reducing injection in multiple wells across areas believed to be underlain by faults, rather than focusing on individual wells, according to Buchanan. He already acknowledges that it will be difficult to assess the success of this new approach because in the past, the KGS has observed reductions in earthquake activity when no action has been taken. “How do you tease apart what works and what doesn’t when you get all this variability in the system?” he asks.

    This climate of uncertainty leaves regulators, industry, and the public on shaky ground. As Ladra’s case progresses, the judicial system will decide if two energy companies are to blame for the quake that damaged her home. But it’s our society that must ultimately decide how, and even if, we should cope with manmade quakes, and what level of risk we’re willing to accept.

    See the full article here.

    Please help promote STEM in your local schools.


    STEM Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 9:06 am on April 22, 2015
    Tags: , , , NOVA   

    From NOVA: “The EPA’s Natural Gas Problem” 


    11 Feb 2015
    Phil McKenna

    When U.S. President Barack Obama recently announced plans to rein in greenhouse gas emissions from oil and gas production, the opposing drumbeats from industry and environmental groups were as fast as they were relentless. The industry group America’s Natural Gas Alliance bombarded Twitter with paid advertisements stating how little their industry actually emits. Press releases from leading environmental organizations deploring the plan’s reliance on largely voluntary actions flooded email inboxes.

    Opposition to any new regulation by industry, however, isn’t as lockstep as its lobbying groups would have us believe. At the same time, environmentalists’ focus on voluntary versus mandatory measures misses a much graver concern.

    The White House and EPA are seeking to regulate methane emissions from the oil and gas industry.

    The joint White House and U.S. Environmental Protection Agency proposal would reduce emissions of methane, the primary component of natural gas, by 40–45% from 2012 levels in the coming decade. It’s a laudable goal. While natural gas is relatively clean burning—emitting roughly half the amount of carbon dioxide per unit of energy as coal—it is an incredibly potent greenhouse gas if it escapes into the atmosphere unburned.

    Methane emissions from the oil and gas sector are estimated to be equivalent to the pollution from 180 coal-fired power plants, according to studies done by the Environmental Defense Fund (EDF), an environmental organization. Yet there is a problem: despite that estimate, no one, including EDF, knows for certain how much methane the oil and gas industry actually emits.

    The EPA publishes an annual inventory of U.S. greenhouse gas emissions, which it describes as “the most comprehensive accounting of total greenhouse gas emissions for all man-made sources in the United States.” But its estimates for the natural gas industry are, by the agency’s own admission, outdated, based on limited data, and likely significantly lower than actual emissions.

    The Baseline

    Getting the number right is extremely important as it will serve as the baseline for any future reductions. “The smaller the number they start with, the smaller the amount they have to reduce in coming years by regulation,” says Anthony Ingraffea, a professor of engineering at Cornell University in Ithaca, New York. “A 45% reduction on a rate that is too low will be a very small reduction. From a scientific perspective, this doesn’t amount to a hill of beans.”

    Ingraffea says methane emissions are likely several times higher than what the EPA estimates. (Currently, the EPA says that up to 1.8% of the natural gas distributed and produced in the U.S. escapes to the atmosphere.) Even if Ingraffea is right, it’s still a small percentage, but methane’s potency as a greenhouse gas makes even a small release incredibly significant. Over 100 years, methane traps 34 times more heat in the atmosphere than carbon dioxide. If you are only looking 20 years into the future, a time frame given equal weight by the United Nations’ Intergovernmental Panel on Climate Change, methane is 86 times more potent than carbon dioxide.
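    To see what those multipliers mean in practice, the short calculation below converts a tonne of leaked methane into its carbon dioxide equivalent over the two time horizons quoted above.

```python
# Convert leaked methane into CO2-equivalent using the global warming
# potentials quoted above: 34x over 100 years, 86x over 20 years.

GWP_100YR = 34
GWP_20YR = 86

def co2_equivalent(methane_tonnes, gwp):
    return methane_tonnes * gwp

leak = 1.0  # one tonne of leaked methane
print(co2_equivalent(leak, GWP_100YR))  # 34.0 tonnes CO2e over a century
print(co2_equivalent(leak, GWP_20YR))   # 86.0 tonnes CO2e over two decades
```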

    After being damaged during Hurricane Ike in September 2008, a natural gas tank spews methane near Sabine Pass, Texas.

    If Ingraffea is right, the amount of methane released into the atmosphere from oil and gas wells, pipelines, processing and storage facilities has a warming effect approaching that of the country’s 557 coal-fired power plants. Reducing such a high rate of emissions by 40–45% would certainly help stall climate change. It would also likely be much more difficult to achieve than the cuts industry and environmental groups are currently debating.

    Ingraffea first called attention to what he and others believe are EPA underestimates in 2011 when he published a highly controversial paper along with fellow Cornell professor Robert Howarth. Their research suggested the amount of methane emitted by the natural gas industry was so great that relying on natural gas was actually worse for the climate than burning coal.

    Following the recent White House and EPA announcement, industry group America’s Natural Gas Alliance (ANGA) stated that they have reduced emissions by 17% since 1990 while increasing production by 37%. “We question why the administration would single out our sector for regulation, given our demonstrated reductions,” the organization wrote in a press release following the White House’s proposed policies. ANGA bases its emissions reduction on the EPA’s own figures and stands by the data. “We like to have independent third party verification, and we use the EPA’s figures for that,” says ANGA spokesman Daniel Whitten.

    Shifting Estimates

    But are the EPA estimates correct, and are they sufficiently independent? To come up with its annual estimate, the EPA doesn’t make direct measurements of methane emissions each year. Rather, they multiply emission factors, the volume of a gas thought to be emitted by a particular source—like a mile of pipeline or a belching cow—by the number of such sources in a given area. For the natural gas sector, emission factors are based on a limited number of measurements conducted in the early 1990s in industry-funded studies.
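    In other words, the inventory is a bottom-up product of emission factors and activity counts. The miniature example below shows that bookkeeping; the factor values and source counts are invented purely for illustration.

```python
# Miniature version of the bottom-up inventory arithmetic described above:
# total emissions = sum over source types of (emission factor x activity count).
# The factors and counts below are invented for illustration only.

emission_factors = {          # tonnes of methane per source per year (hypothetical)
    "pipeline mile": 0.4,
    "wellhead": 2.0,
    "compressor station": 150.0,
}

activity_counts = {           # number of each source in the inventory (hypothetical)
    "pipeline mile": 300_000,
    "wellhead": 500_000,
    "compressor station": 1_800,
}

total = sum(emission_factors[s] * activity_counts[s] for s in emission_factors)
print(f"Estimated methane emissions: {total:,.0f} tonnes per year")
```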

    In 2010 the EPA increased its emissions factors for methane from the oil and natural gas sector, citing “outdated and potentially understated” emissions. The end result was a more than doubling of its annual emissions estimate from the prior year. In 2013, however, the EPA reversed course, lowering estimates for key emissions factors for methane at wells and processing facilities by 25–30%. When reached for comment, the EPA pointed me to their existing reports.

    The change was not driven by better scientific understanding but by political pressure, Howarth says. “The EPA got huge pushback from industry and decreased their emissions again, and not by collecting new data.” The EPA states that the reduction in emissions factors was based on “a significant amount of new information” that the agency received about the natural gas industry.

    However, a 2013 study published in the journal Geophysical Research Letters concludes that “the main driver for the 2013 reduction in production emissions was a report prepared by the oil and gas industry.” The report was a non-peer reviewed survey of oil and gas companies conducted by ANGA and the American Petroleum Institute.

    The EPA’s own inspector general released a report that same year that was highly critical of the agency’s estimates of methane and other harmful gases: “Many of EPA’s existing oil and gas production emission factors are of questionable quality because they are based on limited and/or low quality data.” The report concluded that the agency likely underestimates emissions, which “hampers [the] EPA’s ability to accurately assess risks and air quality impacts from oil and gas production activities.”

    Underestimated

    Soon after the EPA lowered its emissions estimates, a number of independent studies based on direct measurements found higher methane emissions. In November 2013, a study based on direct measurements of atmospheric methane concentrations across the United States concluded actual emissions from the oil and gas sector were 1.5 times higher than EPA estimates. The study authors noted, “the US EPA recently decreased its methane emission factors for fossil fuel extraction and processing by 25–30% but we find that [methane] data from across North America instead indicate the need for a larger adjustment of the opposite sign.”

    In February 2014, a study published in the journal Science reviewed 20 years of technical literature on natural gas emissions in the U.S. and Canada and concluded that “official inventories consistently underestimate actual CH4 emissions.”

    “When you actually go out and measure methane emissions directly, you tend to come back with measurements that are higher than the official inventory,” says Adam Brandt, lead author of the study and an assistant professor of energy resources engineering at Stanford University. Brandt and his colleagues did not attempt to make an estimate of their own, but stated that in a worst-case scenario total methane emissions from the oil and gas sector could be three times higher than the EPA’s estimate.

    On January 22, eight days after the White House’s announcement, another study found similarly high emissions from a sector of the natural gas industry that is often overlooked. The study made direct measurements of methane emissions from natural gas pipelines and storage facilities in and around Boston, Massachusetts, and found that they were 3.9 times higher than the EPA’s estimate for the “downstream” sector, or the parts of the system that transmit, distribute, and store natural gas.

    Most natural gas leaks are small, but large ones can have catastrophic consequences. The wreckage pictured here was caused by a leak in San Bruno, California, in 2010.

    Boston’s aging, leak-prone, cast-iron pipelines likely make the city leakier than most, but the high volume of emissions—losses around the city total roughly $1 billion worth of natural gas per decade—is nonetheless surprising. The majority of methane emissions were previously believed to occur “upstream” at wells and processing facilities. Efforts to curb emissions, including the recent goals set by the White House, have overlooked the smaller pipelines that deliver gas to end users.

    “Emissions from end users have been only a very small part of conversation on emissions from natural gas,” says lead author Kathryn McKain, an atmospheric scientist at Harvard University. “Our findings suggest that we don’t understand the underlying emission processes which is essential for creating effective policy for reducing emissions.”

    The Boston study was one of 16 recent or ongoing studies coordinated by EDF to try to determine just how much methane is actually being emitted from the industry as a whole. Seven studies, focusing on different aspects of oil and gas industry infrastructure, have been published thus far. Two of the studies, including the recent Boston study, have found significantly higher emission rates. One study, conducted in close collaboration with industry, found lower emissions. EDF says it hopes to have all studies completed by the end of 2015. The EPA told me it will take the studies into account for possible changes in its current methane emission factors.

    Fraction of a Percent

    EDF is simultaneously working with industry to try to reduce methane emissions. A recent study commissioned by the environmental organization concluded the US oil and gas industry could cut methane emissions by 40% from projected 2018 levels at a cost of less than one cent per thousand cubic feet of natural gas, which today sells for about $5. The reductions could be achieved with existing emissions-control technologies and policies.

    “We are talking about one third or one fourth of a percent of the price of gas to meet these goals,” says Steven Hamburg, chief scientist for EDF. The 40–45% reduction goal recently announced by the White House is nearly identical to the level of cuts analyzed by EDF. To achieve the reduction, the White House proposes mandatory changes in new oil and gas infrastructure as well as voluntary measures for existing infrastructure.

    Thomas Pyle, president of the Institute for Energy Research, an industry organization, says industry is already reducing its methane emissions and doesn’t need additional rules. “It’s like regulating ice cream producers not to spill any ice cream during the ice cream making process,” he says. “It is self-evident for producers to want to capture this product with little or no emissions and make money from it.”

    Unlike making ice cream, however, natural gas producers often vent their product intentionally as part of the production process. One of the biggest sources of methane emissions in natural gas production is gas that is purposely vented from pneumatic devices which use pressurized methane to open and close valves and operate pumps. They typically release or “bleed” small amounts of gas during their operation.

    Such equipment is widely used throughout the natural gas extraction, processing, and transmission process. A recent study by the Natural Resources Defense Council (NRDC) estimates that natural gas-driven pneumatic equipment vents 1.6–1.9 million metric tons of methane each year. The figure accounts for nearly one-third of all methane lost by the natural gas industry, as estimated by the EPA.
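    Taking those figures at face value, a couple of lines of arithmetic show the implied size of the EPA's total estimate and the century-scale CO2 equivalent of the vented gas, using the 34-fold potency figure quoted earlier in the article.

```python
# Rough arithmetic on the NRDC figures above: if pneumatic devices vent
# 1.6-1.9 million metric tons of methane a year and that is roughly one third
# of all methane lost by the industry (per the EPA estimate), the implied total
# and the 100-year CO2-equivalent of the vented gas follow directly.

GWP_100YR = 34  # heat trapped relative to CO2 over a century, quoted earlier

for vented_mt in (1.6, 1.9):
    implied_total_mt = vented_mt * 3        # "nearly one-third" of the total
    vented_co2e_mt = vented_mt * GWP_100YR  # CO2-equivalent of the vented methane
    print(f"{vented_mt} Mt CH4 vented -> implied industry total ~{implied_total_mt:.1f} Mt; "
          f"vented gas ~{vented_co2e_mt:.0f} Mt CO2e over 100 years")
```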

    A natural gas distribution facility

    “Low-bleed” or “zero-bleed” controllers are available, though they are more expensive. The latter use compressed air or electricity to operate instead of pressurized natural gas, or they capture methane that would otherwise be vented and reuse it. “Time and time again we see that we can operate this equipment without emissions or with very low emissions,” Hamburg says. Increased monitoring and repair of unintended leaks at natural gas facilities could reduce an additional third of the industry’s methane emissions according to the NRDC study.

    Environmentalist organizations have come out in strong opposition to the lack of mandatory regulations for existing infrastructure, which will account for nearly 90% of methane emissions in 2018 according to a recent EDF report.

    While industry groups oppose mandatory regulations on new infrastructure, at least one industry leader isn’t concerned. “I don’t believe the new regulations will hurt us at all,” says Mark Boling, an executive vice president at Houston-based Southwestern Energy Company, the nation’s fourth largest producer of natural gas.

    Boling says leak monitoring and repair programs his company initiated starting in late 2013 will pay for themselves in 12 to 18 months through reduced methane emissions. He says the company has also replaced a number of pneumatic devices with zero-bleed, solar-powered electric pumps. Southwestern Energy is now testing air compressors powered by fuel cells to replace additional methane-bleeding equipment, Boling says. In November, Southwestern Energy launched ONE Future, a coalition of companies from across the natural gas industry. Their goal is to lower the industry’s methane emissions below one percent.

    Based on the EPA emissions rate of 1.8% and fixes identified by EDF and NRDC, their goal seems attainable. But what if the actual emissions rate is significantly higher, as Howarth and Ingraffea have long argued and recent studies seem to suggest? “We can sit here and debate whose numbers are right, ‘Is it 4%? 8%? Whatever,’ ” Boling says. “But there are cost effective opportunities out there to reduce emissions, and we need to step up and do it.”

    See the full article here.


     
  • richardmitnick 9:13 am on April 9, 2015
    Tags: , , NOVA   

    From NOVA: “Quick Test That Measures a Patient’s Own Proteins Could Slash Antibiotic Overuse” 


    19 Mar 2015
    R.A. Becker

    If you’ve ever been prescribed antibiotics to fight the flu, you’ve experienced first-hand how difficult it is for doctors to distinguish between bacterial infections and viral infections (the flu is the latter). Oftentimes, doctors will prescribe antibiotics just in case it’s a bacterial infection so the patient will recover sooner. Early administration of antibiotics can halt bacterial infections before they spiral out of control, but the practice has led to the overuse of our most precious drugs.

    Fortunately, a team of researchers announced yesterday that they may have solved this problem in the form of a blood test. It works by detecting the proteins produced by a patient’s own body in response to infection to quickly determine whether they have been sickened by a bacterial strain or a virus. It returns a result within minutes rather than the hours or days required with typical clinical tests.

    The new test could lengthen the useful life of antibiotics such as clindamycin, one of the most essential drugs, according to the World Health Organization.

    Today’s tests aren’t just slow; they also require that the infectious agent has multiplied inside the patient’s body to levels high enough to be detected, and they can misidentify the root cause when a person has concurrent infections. To overcome these hurdles, scientists from the Israeli biotech company MeMed looked to the patient’s own body to see which molecules the immune system produces when fighting off different kinds of infections.

    The test performed well, properly identifying the type of infection most of the time. The researchers even report that it is more accurate than typical clinical diagnostics. Here’s Smitha Mundasad, reporting for BBC News on the new test:

    It relies on the fact that bacteria and viruses can trigger different protein pathways once they infect the body.

    A novel one, called TRAIL, was particularly high in viral infections and depleted during bacterial ones. They combined this with two other proteins – one is already used in routine practice.
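    As a purely illustrative sketch of how a host-protein signature like this could be turned into a bacterial-versus-viral call, the snippet below combines a "high in viral infection" marker such as TRAIL with two other, here unnamed, markers using made-up thresholds and a simple vote. It is not MeMed's actual algorithm, whose details the article does not give.

```python
# Purely illustrative: combine host-response protein levels into a
# bacterial-vs-viral call. TRAIL is described above as elevated in viral
# infection and depleted in bacterial infection; the other two markers, the
# thresholds, and the voting rule are all hypothetical.

def classify_infection(trail, marker_b, marker_c):
    """Each argument is a measured protein level in arbitrary units."""
    score = 0
    score += 1 if trail > 100 else -1     # high TRAIL points toward viral
    score += 1 if marker_b < 20 else -1   # hypothetical marker, low in viral cases
    score += 1 if marker_c < 40 else -1   # hypothetical marker, high in bacterial cases
    if score >= 2:
        return "likely viral"
    if score <= -2:
        return "likely bacterial"
    return "indeterminate: fall back on standard tests"

print(classify_infection(trail=150, marker_b=10, marker_c=25))   # likely viral
print(classify_infection(trail=30, marker_b=60, marker_c=120))   # likely bacterial
```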

    The rapid test could slow the rampant spread of antibiotic resistance in bacteria. Inappropriately prescribing antibiotics to combat a virus like the flu, or using too low a dose of antibiotics, encourages bacteria to evolve traits that protect them from commonly used drugs.

    Antibiotic misuse is not a small problem. The CDC estimates that nearly half of all antibiotics should never have been prescribed in the first place, and antibiotic resistant bacteria infect around 2 million people each year in the United States, killing over 23,000 of those infected.

    Virus expert Jonathan Ball from the University of Nottingham is cautiously optimistic, telling the BBC’s Mundasad that while MeMed’s new test might reduce inappropriate antibiotic use, “It will be important to see how it performs in the long term.”

    See the full article here.


     
  • richardmitnick 7:51 am on April 6, 2015
    Tags: , , , NOVA   

    From NOVA: “Silver Nanoparticles Could Give Millions Microbe-free Drinking Water” 


    24 Mar 2015
    Cara Giaimo

    Microbe-free drinking water is hard to come by in many areas of India.

    Chemists at the Indian Institute of Technology Madras have developed a portable, inexpensive water filtration system that is twice as efficient as existing filters. The filter doubles the well-known and oft-exploited antimicrobial effects of silver by employing nanotechnology. The team, led by Professor Thalappil Pradeep, plans to use it to bring clean water to underserved populations in India and beyond.

    Left alone, most water is teeming with scary things. A recent study showed that your average glass of West Bengali drinking water might contain E. coli, rotavirus, cryptosporidium, and arsenic. According to the World Health Organization, nearly a billion people worldwide lack access to clean water, and about 80% of illnesses in the developing world are water-related. India in particular has 16% of the world’s population and less than 3% of its fresh water supply. Ten percent of India’s population lacks water access, and every day about 1,600 people die of diarrhea, much of it caused by waterborne microbes.

    Pradeep has spent over a decade using nanomaterials to chemically sift these pollutants out. He started by tackling endosulfan, a pesticide that was hugely popular until scientists determined that it destroyed ozone and brain cells in addition to its intended insect targets. Endosulfan is now banned in most places, but leftovers persist in dangerous amounts. After a bout of endosulfan poisoning in the southwest region of Kerala, Pradeep and his colleagues developed a drinking water filter that breaks the toxin down into harmless components. They licensed the design to a filtration company, which took it to market in 2007. It was “the first nano-chemistry based water product in the world,” he says.

    But Pradeep wanted to go bigger. “If pesticides can be removed by nanomaterials,” he remembers thinking, “can you also remove microbes without causing additional toxicity?” For this, Pradeep’s team put a new twist on a tried-and-true element: silver.

    Silver’s microbe-killing properties aren’t news—in fact, people have known about them for centuries, says Dr. David Barillo, a trauma surgeon and the editor of a recent silver-themed supplement of the journal Burns.

    “Alexander the Great stored and drank water in silver vessels when going on campaigns” in 335 BC, he says, and 19th century frontier-storming Americans dropped silver coins into their water barrels to suppress algae growth. During the space race, America and the Soviet Union both developed silver-based water purification techniques (NASA’s was “basically a silver wire sticking in the middle of a pipe that they were passing electricity through,” Barillo says). And new applications keep popping up: Barillo himself pioneered the use of silver-infused dressings to treat wounded soldiers in Afghanistan. “We’ve really run the gamut—we’ve gone from 300 BC to present day, and we’re still using it for the same stuff,” he says.

    No one knows exactly how small amounts of silver are able to kill huge swaths of microbes. According to Barillo, it’s probably a combination of attacks on the microbe’s enzymes, cell wall, and DNA, along with the buildup of silver free radicals, which are studded with unpaired electrons that gum up cellular systems. These microbe-mutilating strategies are so effective that they obscure our ability to study them, because we have nothing to compare them to. “It’s difficult to make something silver-resistant, even in the lab where you’re doing it intentionally,” Barillo says.

    But unlike equal-opportunity killers like endosulfan, silver knocks out the monsters and leaves the good guys alone. In low concentrations, it’s virtually harmless to humans. “It’s not a carcinogen, it’s not a mutagen, it’s not an allergen,” Barillo says. “It seems to have no purpose in human physiology—it’s not a metal that we need to have in our bodies like copper or magnesium. But it doesn’t seem to do anything bad either.”

    Though silver’s mysterious germ-killing properties are old news, Pradeep is taking advantage of them in new ways. The particles his team works with are less than 50 nanometers long on any one side—about four times smaller than the smallest bacteria. Working at this level allows him greater control over desired chemical reactions, and the ability to fine-tune his filters to improve efficiency or add specific effects. Two years ago, his team developed their biggest hit yet—a combination filter that kills microbes with silver and breaks down chemical toxins with other nanoparticles. It’s portable, works at room temperature, and doesn’t require electricity. Pradeep is working with the government to make these filters available to underserved communities. Currently 100,000 households have them; “by next year’s end,” he hopes, “it will reach 600,000 people.”

    The latest filter goes one better: it “tunes” the silver with carbonate, a negatively charged ion that strips protective proteins from microbe cell membranes. This leaves the microbes even more vulnerable to silver’s attack. “In the presence of carbonate, silver is even more effective,” he explains, so he can use less of it: “Fifty parts per billion can be brought down to [25].” Unlike the earlier filter, this one kills viruses, too—good news, since according to the National Institute of Virology, most filters do not.

    Going from 50 parts per billion of silver to 25 may not seem like a huge leap. But for Pradeep—who aims to help a lot of people for a long time—every little bit counts. Filters that contain less silver are less expensive to produce. This is vital if you want to keep costs low enough for those who need them most to buy them, or to entice the government into giving them away. He estimates that one of his new filter units will cost about $2 per year, proportionately less than what the average American pays for water.

    Using less silver also improves sustainability. “Globally, silver is the most heavily used nanomaterial,” Pradeep says, and it’s not renewable: anything we use “is lost for the world.” If all filters used his carbonate trick, he points out, we could make twice as many of them before we run out of raw materials—and even more if, as he hopes, his future tunings bring the necessary amount down further. This will become especially important if his filters catch on in other places with no infrastructure and needy populations. “Ultimately, I want to use the very minimum quantity of silver,” he says.
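    As a back-of-the-envelope illustration of why halving the silver concentration matters, the sketch below estimates the silver a single household's filter would release per year at 50 versus 25 parts per billion, assuming a hypothetical 20 liters of treated drinking water per day.

```python
# Back-of-the-envelope estimate of silver released per household per year at
# the two concentrations discussed above. The 20 liters/day of treated water
# is a hypothetical figure, not from the article.

LITERS_PER_DAY = 20
DAYS_PER_YEAR = 365

def silver_grams_per_year(ppb):
    micrograms_per_liter = ppb  # for water, 1 part per billion ~= 1 microgram per liter
    total_micrograms = micrograms_per_liter * LITERS_PER_DAY * DAYS_PER_YEAR
    return total_micrograms / 1e6  # micrograms to grams

print(silver_grams_per_year(50))  # ~0.37 g of silver per household per year
print(silver_grams_per_year(25))  # ~0.18 g: half the silver, twice the filters
```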

    “Pradeep’s work shows enormous potential,” says Dr. Theresa Dankovich, a water filtration expert at the University of Virginia’s Center for Global Health. But, she points out, “carbonate anions are naturally occurring in groundwater and surface waters,” so “it warrants further study to determine how they are already enhancing the effect of silver ions and silver nanoparticles,” even without purposeful manipulation by chemists. Others see potential shortcomings. James Smith, a professor of environmental engineering at the University of Virginia and the inventor of a nanoparticle-coated clay filtering pot, worries that the nanotech-heavy production process “would not allow for manufacturing in a developing world setting,” especially if Pradeep’s continuous tweaking of the model deters large-scale companies from actually producing it.

    Nevertheless, Pradeep plans to continue scaling up. “If you can provide clean water, you have provided a solution for almost everything,” he says. When you have the lessons of history and the technology of the future, why settle for anything less?

    See the full article here.


     
  • richardmitnick 1:43 pm on March 25, 2015
    Tags: , , NOVA,   

    From NOVA: “Stem Cells Finally Deliver, But Not on Their Original Promise”


    25 Mar 2015
    Carrie Arnold

    To scientists, stem cells represent the potential of the human body to heal itself. The cells are our body’s wide-eyed kindergarteners—they have the potential to do pretty much anything, from helping us obtain oxygen to digesting our food or pumping our blood. That flexibility has given scientists hope that they can coax stem cells to differentiate into new cells that replace those damaged by illness.

    Almost immediately after scientists learned how to isolate stem cells from human embryos, the excitement was palpable. In the lab, they had already been coaxed into becoming heart muscle, bone marrow, and kidney cells. Entire companies were founded to translate therapies into clinical trials. Nearly 20 years on, though, only a handful of therapies using stem cells have been approved. Not quite the revolution we had envisioned back in 1998.

    But stem cells have delivered on another promise, one that is already having a broad impact on medical science. In their investigations into the potential therapeutic functions of stem cells, scientists have discovered another way to help those suffering from neurodegenerative and other incurable diseases. With stem cells, researchers can study how these diseases begin and even test the efficacy of drugs on cells from the very people they’re intended to treat.

    Getting to this point hasn’t been easy. Research into pluripotent stem cells, the most promising type, has faced a number of scientific and ethical hurdles. They were most readily found in developing embryos, but in 1995, Congress passed a bill that eliminated federal funding for research on human embryos, and with it the development of new embryonic stem cell lines. Since adult humans don’t have pluripotent stem cells, researchers were stuck.

    That changed in 2006, when Japanese scientist Shinya Yamanaka developed a way to create stem cells from a skin biopsy. Yamanaka’s process for creating induced pluripotent stem cells (iPS cells) earned him a Nobel Prize in 2012, shared with John Gurdon. After years of setbacks, the stem cell revolution was back on.

    A cluster of iPS cells has been induced to express neural proteins, which have been tagged with fluorescent antibodies.

    Biomedical scientists in fields from cancer to heart disease have turned to iPS cells in their research. But the technique has been especially popular among scientists studying neurodegenerative diseases like Alzheimer’s disease, Parkinson’s disease, and amyotrophic lateral sclerosis (ALS) for two main reasons. One, since symptoms of these diseases don’t develop until rather late in the disease process, scientists haven’t had much knowledge about the early stages; iPS cells changed that by allowing scientists to study the very early stages of the disorder. And two, they provide novel ways of testing new drugs and potentially even personalizing treatment options.

    “It’s creating a sea change,” says Jeanne Loring, a stem cell biologist at the Scripps Research Institute in San Diego. “There will be tools available that have never been available before, and it will completely change drug development.”

    Beyond Animal Models

    Long before scientists knew that stem cells existed, they relied on animals to model diseases. Through careful breeding and, later, genetic engineering, researchers have developed rats, mice, fruit flies, roundworms, and other animals that display symptoms of the illness in question. Animal models remain useful, but they’re not perfect. While the biology of these animals often mimics humans’, they aren’t identical, and although some animals might share many of the overt symptoms of human illness, scientists can’t be sure that they experience the disease in the same way humans do.

    “Mouse models are useful research tools, but they rarely capture the disease process,” says Rick Livesey, a biologist at the University of Cambridge in the U.K. Many neurodegenerative diseases, like Alzheimer’s, he says, are perfect examples of the shortcomings of animal models. “No other species of animal actually gets Alzheimer’s disease, so any animal model is a compromise.”

    As a result, many drugs that seemed to be effective in animal models showed no benefit in humans. A study published in Alzheimer’s Research and Therapy in June 2014 estimated that 99.9% of Alzheimer’s clinical trials ended in failure, costing both money and lives. Scientists like Ole Isacson, a neuroscientist at Harvard University who studies Parkinson’s disease, were eager for a method that would let them investigate illnesses in a patient’s own cells, eliminating the need for expensive and imperfect animal models.

    Stem cells appeared to offer that potential, but when Congress banned federal funding in 1995 for research on embryos—and thus the development of new stem cell lines—scientists found their work had ground to a halt. As many researchers in the U.S. fretted over the future of stem cell research, scientists in Japan were developing a technique which would eliminate the need for embryonic stem cells. What’s more, it would allow researchers to create stem cells from the individuals who were suffering from the diseases they were studying.

    Cells in the body are able to specialize by turning on some sets of genes and switching off others. Every cell has a complete copy of the DNA; it’s just packed away in deep storage where the cell can’t easily access it. Yamanaka, the Nobel laureate, knew that finding the key to this storage locker and unpacking it could potentially turn any specialized cell back into a pluripotent stem cell. He focused on a group of 24 genes that were active only in embryonic stem cells. If he could get adult, specialized cells to translate these genes into proteins, then they should revert to stem cells. Yamanaka settled on fibroblast cells as the source of iPS cells since these are easily obtained with a skin biopsy.

    Rather than trying to switch these genes back on, a difficult and time-consuming task, Yamanaka instead engineered a retrovirus to carry copies of these 24 genes to mouse fibroblast cells. Since many retroviruses insert their own genetic material into the genomes of the cells they infect, Yamanaka only had to deliver the virus once. All successive generations of cells inherited those 24 genes. Yamanaka first grew the fibroblasts in a dish, then infected them with his engineered retrovirus. Over repeated experiments, Yamanaka was able to narrow the suite of required genes from 24 down to just four.

    The process was far from perfect—it took several weeks to create the stem cells, and only around 0.01%–0.1% of the fibroblasts were actually converted to stem cells. But after Yamanaka published his results in Cell in 2006, scientists quickly began perfecting the procedure and developing other techniques. To say they have been successful would be an understatement. “The technology is so good now that I have the undergraduates in my lab doing the reprogramming,” Loring says.

    Accelerating Disease

    When he heard of Yamanaka’s discovery, Isacson, the Harvard neuroscientist studying Parkinson’s disease, had been using fetal neurons to try to replace diseased and dying neurons. Isacson realized “very quickly” that iPS cells could yield new discoveries about Parkinson’s. At the time, scientists were trying to determine exactly when the disease process started. It wasn’t easy. A person has to lose around 70% of their dopamine neurons before the first sign of movement disorder appears and Parkinson’s can be diagnosed. By that point, it’s too late to reverse that damage, a problem that is found in many if not all neurodegenerative diseases. Isacson wanted to know what was causing the neurons to die.

    Together with the National Institute of Neurological Disorders and Stroke consortium on iPS cells, Isacson obtained fibroblasts from patients with genetic mutations linked to Parkinson’s. Then, he reprogrammed these cells to become the specific type of neurons affected by Parkinson’s disease. “To our surprise, in the very strong hereditary forms of disease, we found that cells showed very strong signs of distress in the dish, even though they were newborn cells,” Isacson says.

    These experiments, published in Science Translational Medicine in 2012, showed that the disease process in Parkinson’s started far earlier than scientists expected. The distressed, differentiated neurons Isacson saw under the microscope were still just a few weeks old. People generally don’t start showing symptoms of Parkinson’s disease until middle age or beyond.

    A clump of stem cells, seen here in green

    Isacson and his colleagues then tried to determine how cells carrying different mutations differed from one another. The cells showed the most distress in their mitochondria, the parts of the cell that act as power plants by creating energy from oxygen and glucose. How that distress manifested, though, varied slightly depending on which mutation the patient carried. Neurons derived from an individual with a mutation in the LRRK2 gene consumed lower than expected amounts of oxygen, whereas the neurons derived from those carrying a mutation in PINK1 had much higher oxygen consumption. Neurons with these mutations were also more susceptible to a type of cellular damage known as oxidative stress.

    After exposing both groups of cells to a variety of environmental toxins, such as oligomycin and valinomycin, both of which affect mitochondria, Isacson and colleagues attempted to rescue the cells using several compounds that had been found effective in animal models. Both the LRRK2 and the PINK1 cells responded well to the antioxidant coenzyme Q10, but had very different responses to the immunosuppressant drug rapamycin. Whereas the LRRK2 cells showed beneficial responses to rapamycin, the PINK1 cells did not.

    To Isacson, the different responses were profoundly important. “Most drugs don’t become blockbusters because they don’t work for everyone. Trials start too late, and they don’t know the genetic background of the patient,” Isacson says. He believes that iPS cells will one day help researchers match specific treatments with specific genotypes. There may not be a single blockbuster that can treat Parkinson’s, but there may be several drugs that make meaningful differences in patients’ lives.

    Cancer biologists have already begun culturing tumor cells and testing anti-cancer drugs before giving these medications to patients, and biologists studying neurodegenerative disease hope that iPS cells will one day allow them to do something similar for their patients. Scientists studying ALS have recently taken a step in that direction, using iPS cells to create motor neurons from fibroblasts of people carrying a mutation in the C9orf72 gene, the most common genetic cause of ALS. In a recent paper in Neuron, the scientists identified a small molecule which blocked the formation of toxic proteins caused by this mutation in cultured motor neurons.

    Adding More Dimensions

    It’s one thing to identify early disease in iPS cells, but these cells are generally obtained from people who have been diagnosed. At that point, it’s too late, in a way; drugs may be much less likely to work in later stages of the disease. To make many potential drugs more effective, the disease has to be diagnosed much, much earlier. Recent work by Harvard University stem cell biologist Rudolph Tanzi and colleagues may have taken a step in that direction, also using iPS cells.

    Doo Yeon Kim, Tanzi’s co-author, had grown frustrated with iPS cell models of neurodegenerative disease. The cell cultures were liquid, and the cells could only grow in a thin, two-dimensional layer. The brain, however, was more gel-like, and existed in three dimensions. So Kim created a 3D gel matrix on which the researchers grew human neural stem cells that carried extra copies of two genes—one which codes for amyloid precursor protein and another for presenilin 1, both of which were previously discovered in Tanzi’s lab—which are linked to familial forms of Alzheimer’s disease.

    After six weeks, the cells contained high levels of the harmful beta-amyloid protein as well as large numbers of the toxic neurofibrillary tangles that damage and kill neurons. Both of these hallmarks had been found at high levels in the neurons of individuals who had died from Alzheimer's disease, but researchers didn't know for certain which built up first and which was more central to the disease process. Further experiments revealed that drugs preventing the formation of amyloid proteins also prevented the formation of neurofibrillary tangles, indicating that amyloid proteins likely formed first during Alzheimer's disease.

    “When you stop amyloid, you stop cell death,” Tanzi says. Amyloid begins to build up long before people show signs of altered cognition, and Tanzi believes that drugs which stop amyloid or prevent the buildup of neurofibrillary tangles could prevent Alzheimer’s before it starts.

    The results were hailed in the media as a “major breakthrough,” although Larry Goldstein, a neuroscientist at the University of California, San Diego, takes a more nuanced perspective. “It’s a nice paper and an important step forward, but things got overblown. I don’t know that I would use the word ‘breakthrough’ because these, like all results, often have a very long history to them,” Goldstein says.

    The scientists who spoke with NOVA Next about iPS cells noted that the field is moving forward at a remarkable clip, but they all talked at length about the issues that still remain. One of the largest revolves around the difference between the age of the iPS cells and the age of the humans who develop these neurodegenerative diseases. Although scientists are working with neurons that are technically “mature,” they are nonetheless only weeks or months old—far short of the decades of aging behind the brains of people who actually suffer from neurodegenerative diseases. Since aging remains the strongest risk factor for developing these diseases, neuroscientists worry that some disease pathology might be missed in such young cells. “Is it possible to study a disease that takes 70 years to develop in a person using cells that have grown for just a few months in a dish?” Livesey asks.

    So far, the answer has been a tentative yes. Some scientists have begun to devise different strategies to accelerate the aging process in the lab so researchers don’t have to wait several decades before they develop their answers. Lorenz Studer, director of the Center for Stem Cell Biology at the Sloan-Kettering Institute, uses the protein that causes progeria, a disorder of extreme premature aging, to successfully age neurons derived from iPS cells from Parkinson’s disease patients.

    Robert Lanza, a stem cell biologist at Advanced Cell Technology, takes another approach, aging cells by taking small amounts of mature neurons and growing them up in a new dish. “Each time you do this, you are forcing the cells to divide,” Lanza says. “And cells can only divide so many times before they reach senescence and die.” This process, Lanza believes, will mimic aging. He has also been experimenting with stressing the cells to promote premature aging.

    All of these techniques, Livesey believes, will allow scientists to study which aspects of the aging process—such as number of cell divisions and different types of environmental stressors—affect neurodegenerative diseases and how they do so. Adding to the complexity of the experimental system will improve the results that come out at the end. “You can only capture as much biology in iPS cells as you plug into it in the beginning,” Livesey says.

    But as Isacson and Loring's work has shown, even very young cells can show hallmarks of neurodegenerative diseases. “If a disease has a genetic cause, if there's an actual change in DNA, you should be able to find something in those iPS cells that is different,” Loring says.

    For these experiments and others, scientists have been relying on iPS cells derived from individuals with hereditary or familial forms of neurodegenerative disease. These individuals, however, represent only about 5–15% of people with neurodegenerative disease; the vast majority of cases are sporadic and have no known genetic cause. Scientists believe that environmental factors may play a much larger role in the onset of these forms of neurodegenerative disease.

    That heterogeneity means it's not yet clear whether iPS cells from individuals with hereditary forms of disease are a good model for what happens in sporadic disease. Although the resulting symptoms may be the same, different forms of the disease may arrive at the same place through different biological pathways. Isacson is in the process of identifying the range of genes and proteins that are altered in iPS cells that carry Parkinson's disease mutations. He intends to determine whether any of these pathways are also disturbed in sporadically occurring Parkinson's disease, to pinpoint any similarities between the two forms of the disease.

    Livesey’s lab just received a large grant to study people with an early onset, sporadic form of Alzheimer’s. “Although sporadic Alzheimer’s disease isn’t caused by a mutation in a single gene, the condition is still strongly heritable. The environment, obviously, has an important role, but so does genetics,” Livesey says.

    Because the disease starts earlier in these individuals, researchers believe that it has a larger genetic link than other forms of sporadic Alzheimer’s disease, which will make it easier to identify any genetic or biological abnormalities. Livesey hopes that bridging sporadic and hereditary forms of Alzheimer’s disease will allow researchers to reach stronger conclusions using iPS cells.

    Though it will be years before any new drugs come out of Livesey’s stem cell studies—or any other stem cell study for that matter—the technology has nonetheless allowed scientists to refine their understanding of these and other diseases. And, scientists believe, this is just the start. “There are an endless series of discoveries that can be made in the next few decades,” Isacson says.

    Image credit: Ole Isacson, McLean Hospital and Harvard Medical School/NINDS

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 10:10 am on March 22, 2015 Permalink | Reply
    Tags: , , NOVA,   

    From S and T: “Nova in Sagittarius Now 4th Magnitude!” 

    SKY&Telescope bloc

    Sky & Telescope

    March 22, 2015
    Alan MacRobert

    The nova that erupted in the Sagittarius Teapot on March 15th continues to brighten at a steady rate. As of the morning of March 22nd it’s about magnitude 4.3, plain as can be in binoculars before dawn, looking yellowish, and naked-eye in a moderately good sky.

    Update Sunday March 22: It’s still brightening — to about magnitude 4.3 this morning! That’s almost 2 magnitudes brighter than at its discovery a week ago. It’s now the brightest star inside the main body of the Sagittarius Teapot, and it continues to gain 0.3 magnitude per day. This seems to be the brightest nova in Sagittarius since at least 1898. And, Sagittarius is getting a little higher before dawn every morning.
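    To put those numbers in perspective: the astronomical magnitude scale is logarithmic, with a difference of 5 magnitudes corresponding to a brightness factor of exactly 100. Below is a minimal sketch of that conversion, plugging in the figures quoted above (roughly 6th magnitude at discovery, about 4.3 a week later, gaining 0.3 magnitude per day); it is an illustration of the arithmetic, not something from the original article.

```python
def brightness_ratio(m_faint, m_bright):
    """Flux ratio implied by two apparent magnitudes.

    A difference of 5 magnitudes is a factor of 100 in brightness,
    so the ratio is 10 ** (0.4 * (m_faint - m_bright)).
    """
    return 10 ** (0.4 * (m_faint - m_bright))

# Figures quoted in the article: ~6th magnitude at discovery, ~4.3 a week later.
print(round(brightness_ratio(6.0, 4.3), 1))   # ~4.8x brighter than at discovery
# A gain of 0.3 magnitude per day is a daily brightening factor of:
print(round(brightness_ratio(0.3, 0.0), 2))   # ~1.32x per day
```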

    1
    The nova is right on the midline of the Sagittarius Teapot. The horizon here is drawn for the beginning of astronomical twilight in mid-March for a viewer near 40° north latitude. The nova is about 15° above this horizon. Stars are plotted to magnitude 6.5. For a more detailed chart with comparison-star magnitudes, see the bottom of this page. Sky & Telescope diagram.

    You never know. On Sunday March 15th, nova hunter John Seach of Chatsworth Island, NSW, Australia, found a new 6th-magnitude star shining in three search images taken by his DSLR patrol camera. The time of the photos was March 15.634 UT. One night earlier, the camera recorded nothing there to a limiting magnitude of 10.5.

    2
    Before and after. Adriano Valvasori imaged the nova at March 16.71, using the iTelescope robotic telescope “T9” — a 0.32-m (12.5-inch) reflector in Australia. His shot is blinked here with a similarly deep earlier image. One of the tiny dots at the right spot might be the progenitor star. The frames are 1⁄3° wide.

    A spectrum taken a day after the discovery confirmed that this is a bright classical nova — a white dwarf whose thin surface layer underwent a hydrogen-fusion explosion — of the type rich in ionized iron. The spectrum showed emission lines from debris expanding at about 2,800 km per second.

    The nova has been named Nova Sagittarii 2015 No. 2, after receiving the preliminary designation PNV J18365700-2855420. Here’s its up-to-date preliminary light curve from the American Association of Variable Star Observers (AAVSO). Here is the AAVSO’s list of recent observations.

    Although the nova is fairly far south (at declination –28° 55′ 40″, right ascension 18h 36m 56.8s), and although Sagittarius only recently emerged from the glow of sunrise, it’s still a good 15° above the horizon just before the beginning of dawn for observers near 40° north latitude. If you’re south of there it’ll be higher; if you’re north it’ll be lower. Binoculars are all you’ll need.
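    A rough way to see how your own latitude changes things is to compute the nova's altitude when it crosses the meridian (its highest point), which is simply 90° minus the difference between your latitude and the star's declination. The sketch below does that for a few latitudes using the declination quoted above; it is a back-of-the-envelope illustration only, and the numbers come out higher than the 15° quoted in the article because dawn arrives before the nova culminates.

```python
def transit_altitude(latitude_deg, declination_deg):
    """Altitude (degrees above the horizon) of a star as it crosses the
    meridian, i.e. its highest point of the night. Refraction ignored."""
    return 90.0 - abs(latitude_deg - declination_deg)

NOVA_DEC = -(28 + 55/60 + 40/3600)   # declination -28 deg 55' 40", as quoted above

for lat in (50, 40, 30, 20):
    print(f"{lat} deg N: culminates about {transit_altitude(lat, NOVA_DEC):.0f} deg up")
# 50 N: ~11 deg, 40 N: ~21 deg, 30 N: ~31 deg, 20 N: ~41 deg -- roughly a degree
# higher for every degree of latitude you move south, as the article says.
```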

    It looks yellowish. Here’s a color image of its spectrum taken March 17th, by Jerome Jooste in South Africa using a Star Analyser spectrograph on an 8-inch reflector. Note the wide, bright emission lines. They’re flanked on their short-wavelength ends by blueshifted dark absorption lines: the classic P Cygni profile of a star with a thick, fast-expanding cooler shell or wind.

    To find when morning astronomical twilight begins at your location, you can use our online almanac. (If you’re on daylight time like most of North America, be sure to check the Daylight-Saving Time box.)

    Below is a comparison-star chart from the AAVSO. Stars’ visual magnitudes are given to the nearest tenth with the decimal points omitted.

    3
    The cross at center is Nova Sagittarii 2015 No. 2. Magnitudes of comparison stars are given to the nearest tenth with the decimal points omitted. The frame is 15° wide, two or three times the width of a typical binocular’s field of view. Courtesy AAVSO.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Sky & Telescope magazine, founded in 1941 by Charles A. Federer Jr. and Helen Spence Federer, has the largest, most experienced staff of any astronomy magazine in the world. Its editors are virtually all amateur or professional astronomers, and every one has built a telescope, written a book, done original research, developed a new product, or otherwise distinguished him or herself.

    Sky & Telescope magazine, now in its eighth decade, came about because of some happy accidents. Its earliest known ancestor was a four-page bulletin called The Amateur Astronomer, which was begun in 1929 by the Amateur Astronomers Association in New York City. Then, in 1935, the American Museum of Natural History opened its Hayden Planetarium and began to issue a monthly bulletin that became a full-size magazine called The Sky within a year. Under the editorship of Hans Christian Adamson, The Sky featured large illustrations and articles from astronomers all over the globe. It immediately absorbed The Amateur Astronomer.

    Despite initial success, by 1939 the planetarium found itself unable to continue financial support of The Sky. Charles A. Federer, who would become the dominant force behind Sky & Telescope, was then working as a lecturer at the planetarium. He was asked to take over publishing The Sky. Federer agreed and started an independent publishing corporation in New York.

    “Our first issue came out in January 1940,” he noted. “We dropped from 32 to 24 pages, used cheaper quality paper…but editorially we further defined the departments and tried to squeeze as much information as possible between the covers.” Federer was The Sky’s editor, and his wife, Helen, served as managing editor. In that January 1940 issue, they stated their goal: “We shall try to make the magazine meet the needs of amateur astronomy, so that amateur astronomers will come to regard it as essential to their pursuit, and professionals to consider it a worthwhile medium in which to bring their work before the public.”

     
  • richardmitnick 4:01 am on March 21, 2015 Permalink | Reply
    Tags: , , , NOVA   

    From NOVA: “Genetically Engineering Almost Anything” 2014 and Very Important 

    PBS NOVA

    NOVA

    17 Jul 2014
    Tim De Chant and Eleanor Nelsen

    When it comes to genetic engineering, we’re amateurs. Sure, we’ve known about DNA’s structure for more than 60 years, we first sequenced every A, T, C, and G in our bodies more than a decade ago, and we’re becoming increasingly adept at modifying the genes of a growing number of organisms.

    But compared with what’s coming next, all that will seem like child’s play. A new technology just announced today has the potential to wipe out diseases, turn back evolutionary clocks, and reengineer entire ecosystems, for better or worse. Because of how deeply this could affect us all, the scientists behind it want to start a discussion now, before all the pieces come together over the next few months or years. This is a scientific discovery being played out in real time.

    1
    Scientists have figured out how to use a cell’s DNA repair mechanisms to spread traits throughout a population.

    Today, researchers aren't just dropping in new genes; they're deftly adding, subtracting, and rewriting them using a series of tools that have become ever more versatile and easier to use. In the last few years, our ability to edit genomes has improved at a shockingly rapid clip. So rapid, in fact, that one of the easiest and most popular tools, known as CRISPR-Cas9, is just two years old. Researchers once spent months, even years, attempting to rewrite an organism's DNA. Now they spend days.

    Soon, though, scientists will begin combining gene editing with gene drives, so-called selfish genes that appear more frequently in offspring than normal genes, which have about a 50-50 chance of being passed on. With gene drives—so named because they drive a gene through a population—researchers just have to slip a new gene into a drive system and let nature take care of the rest. Subsequent generations of whatever species we choose to modify—frogs, weeds, mosquitoes—will have more and more individuals with that gene until, eventually, it’s everywhere.

    Cas9-based gene drives could be one of the most powerful technologies ever discovered by humankind. “This is one of the most exciting confluences of different theoretical approaches in science I’ve ever seen,” says Arthur Caplan, a bioethicist at New York University. “It merges population genetics, genetic engineering, molecular genetics, into an unbelievably powerful tool.”

    We're not there yet, but we're extraordinarily close. “Essentially, we have done all of the pieces, sometimes in the same relevant species,” says Kevin Esvelt, a postdoc at Harvard University and the wunderkind behind the new technology. “It's just no one has put it all together.”

    It’s only a matter of time, though. The field is progressing rapidly. “We could easily have laboratory tests within the next few months and then field tests not long after that,” says George Church, a professor at Harvard University and Esvelt’s advisor. “That’s if everybody thinks it’s a good idea.”

    It’s likely not everyone will think this is a good idea. “There are clearly people who will object,” Caplan says. “I think the technique will be incredibly controversial.” Which is why Esvelt, Church, and their collaborators are publishing papers now, before the different parts of the puzzle have been assembled into a working whole.

    “If we’re going to talk about it at all in advance, rather than in the past tense,” Church says, “now is the time.”

    “Deleterious Genes”

    The first organism Esvelt wants to modify is the malaria-carrying mosquito Anopheles gambiae. While his approach is novel, the idea of controlling mosquito populations through genetic modification has actually been around since the late 1970s. Then, Edward F. Knipling, an entomologist with the U.S. Department of Agriculture, published a substantial handbook with a chapter titled “Use of Insects for Their Own Destruction.” One technique, he wrote, would be to modify certain individuals to carry “deleterious genes” that could be passed on generation after generation until they pervaded the entire population. It was an idea before its time. Knipling was on the right track, but he and his contemporaries lacked the tools to see it through.

    The concept surfaced a few more times before being picked up by Austin Burt, an evolutionary biologist and population geneticist at Imperial College London. It was the late 1990s, and Burt was busy with his yeast cells, studying their so-called homing endonucleases, enzymes that facilitate the copying of genes that code for themselves. Self-perpetuating genes, if you will. “Through those studies, gradually, I became more and more familiar with endonucleases, and I came across the idea that you might be able to change them to recognize new sequences,” Burt recalls.

    Other scientists were investigating endonucleases, too, but not in the way Burt was. “The people who were thinking along those lines, molecular biologists, were thinking about using these things for gene therapy,” Burt says. “My background in population biology led me to think about how they could be used to control populations that were particularly harmful.”

    In 2003, Burt penned an influential article that set the course for an entire field: We should be using homing endonucleases, a type of gene drive, to modify malaria-carrying mosquitoes, he said, not ourselves. Burt saw two ways of going about it—one, modify a mosquito’s genome to make it less hospitable to malaria, and two, skew the sex ratio of mosquito populations so there are no females for the males to reproduce with. In the following years, Burt and his collaborators tested both in the lab and with computer models before they settled on sex ratio distortion. (Making mosquitoes less hospitable to malaria would likely be a stopgap measure at best; the Plasmodium protozoans could evolve to cope with the genetic changes, just like they have evolved resistance to drugs.)

    Burt has spent the last 11 years refining various endonucleases, playing with different scenarios of inheritance, and surveying people in malaria-infested regions. Now, he finally feels like he is closing in on his ultimate goal. “There’s a lot to be done still,” he says. “But on the scale of years, not months or decades.”

    Cheating Natural Selection

    Cas9-based gene drives could compress that timeline even further. One half of the equation—gene drives—are the literal driving force behind proposed population-scale genetic engineering projects. They essentially let us exploit evolution to force a desired gene into every individual of a species. “To anthropomorphize horribly, the goal of a gene is to spread itself as much as possible,” Esvelt says. “And in order to do that, it wants to cheat inheritance as thoroughly as it can.” Gene drives are that cheat.

    Without gene drives, traits in genetically-engineered organisms released into the wild are vulnerable to dilution through natural selection. For organisms that have two parents and two sets of chromosomes (which includes humans, many plants, and most animals), traits typically have only a 50-50 chance of being inherited, give or take a few percent. Genes inserted by humans face those odds when it comes time to being passed on. But when it comes to survival in the wild, a genetically modified organism’s odds are often less than 50-50. Engineered traits may be beneficial to humans, but ultimately they tend to be detrimental to the organism without human assistance. Even some of the most painstakingly engineered transgenes will be gradually but inexorably eroded by natural selection.

    Some naturally occurring genes, though, have over millions of years learned how to cheat the system, inflating their odds of being inherited. Burt’s “selfish” endonucleases are one example. They take advantage of the cell’s own repair machinery to ensure that they show up on both chromosomes in a pair, giving them better than 50-50 odds when it comes time to reproduce.

    2
    A gene drive (blue) always ends up in all offspring, even if only one parent has it. That means that, given enough generations, it will eventually spread through the entire population.

    Here's how it generally works. The term “gene drive” is fairly generic, describing a number of different systems, but one example involves genes that code for an endonuclease—an enzyme which acts like a pair of molecular scissors—sitting in the middle of a longer sequence of DNA that the endonuclease is programmed to recognize. If one chromosome in a pair contains a gene drive but the other doesn't, the endonuclease cuts the second chromosome's DNA where the endonuclease code appears in the first.

    The broken strands of DNA trigger the cell’s repair mechanisms. In certain species and circumstances, the cell unwittingly uses the first chromosome as a template to repair the second. The repair machinery, seeing the loose ends that bookend the gene drive sequence, thinks the middle part—the code for the endonuclease—is missing and copies it onto the broken chromosome. Now both chromosomes have the complete gene drive. The next time the cell divides, splitting its chromosomes between the two new cells, both new cells will end up with a copy of the gene drive, too. If the entire process works properly, the gene drive’s odds of inheritance aren’t 50%, but 100%.

    3
    Here, a mosquito with a gene drive (blue) mates with a mosquito without one (grey). In the offspring, one chromosome will have the drive. The endonuclease then slices into the drive-free DNA. When the strand gets repaired, the cell’s machinery uses the drive chromosome as a template, unwittingly copying the drive into the break.
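    To make the inheritance arithmetic concrete, here is a deliberately toy simulation contrasting ordinary 50-50 inheritance with an idealized drive that converts every heterozygote. It is a minimal sketch written for this post, not a model from the article and not a realistic picture of mosquito genetics: mating is random, the gene carries no fitness cost, and homing never fails.

```python
import random

def simulate(generations, pop_size, start_freq, drive=True):
    """Track the fraction of individuals carrying an allele over time.

    Genotypes are encoded as the number of copies (0, 1, or 2). With
    drive=True, heterozygotes are 'homed' so both chromosomes carry the
    drive before reproduction; with drive=False inheritance is ordinary
    Mendelian 50-50.
    """
    pop = [2] * int(pop_size * start_freq) + [0] * int(pop_size * (1 - start_freq))
    history = []
    for _ in range(generations):
        if drive:
            pop = [2 if copies >= 1 else 0 for copies in pop]  # homing step
        next_gen = []
        for _ in range(pop_size):
            mom, dad = random.choice(pop), random.choice(pop)
            child = (1 if random.random() < mom / 2 else 0) + \
                    (1 if random.random() < dad / 2 else 0)
            next_gen.append(child)
        pop = next_gen
        history.append(sum(1 for copies in pop if copies >= 1) / pop_size)
    return history

random.seed(1)
print("with drive:   ", [round(f, 2) for f in simulate(10, 2000, 0.01, drive=True)])
print("without drive:", [round(f, 2) for f in simulate(10, 2000, 0.01, drive=False)])
```

    Starting from 1% of the population, the driven allele roughly doubles its reach each generation and ends up in essentially every individual within about ten generations, while the ordinary allele just drifts near its starting frequency, which is the dilution problem described above.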

    Most natural gene drives are picky about where on a strand of DNA they’ll cut, so they need to be modified if they’re to be useful for genetic engineering. For the last few years, geneticists have tried using genome-editing tools to build custom gene drives, but the process was laborious and expensive. With the discovery of CRISPR-Cas9 as a genome editing tool in 2012, though, that barrier evaporated. CRISPR is an ancient bacterial immune system which identifies the DNA of invading viruses and sends in an endonuclease, like Cas9, to chew it up. Researchers quickly realized that Cas9 could easily be reprogrammed to recognize nearly any sequence of DNA. All that’s needed is the right RNA sequence—easily ordered and shipped overnight—which Cas9 uses to search a strand of DNA for where to cut. This flexibility, Esvelt says, “lets us target, and therefore edit, pretty much anything we want.” And quickly.
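    As a rough illustration of that search step, the sketch below scans a DNA string for a 20-letter match to a guide sequence followed by an “NGG” motif, the protospacer-adjacent motif (PAM) that the commonly used S. pyogenes Cas9 requires next to its target; Cas9 then cuts roughly 3 base pairs upstream of the PAM. The PAM detail is standard background rather than something stated in the article, and the sequences below are made up purely for the example; a real search would also check the reverse strand and worry about near-matches elsewhere in the genome.

```python
import re

def find_cas9_sites(genome, guide):
    """Return sites where the 20-nt guide matches the DNA and is
    immediately followed by an NGG PAM (N = any base). Only the given
    strand is searched in this simplified sketch."""
    sites = []
    pattern = re.compile(re.escape(guide) + r"[ACGT]GG")
    for match in pattern.finditer(genome):
        pam_start = match.start() + len(guide)
        sites.append({"protospacer_at": match.start(),
                      "pam": genome[pam_start:pam_start + 3],
                      "approx_cut_site": pam_start - 3})  # ~3 bp upstream of the PAM
    return sites

# Made-up sequences, purely for illustration.
guide = "GATTACAGATTACAGATTAC"                      # 20-nt spacer carried by the RNA
genome = "TTT" + guide + "TGG" + "AAAA" + guide + "ACC"

print(find_cas9_sites(genome, guide))
# Only the first occurrence is reported: it is followed by TGG (a valid NGG
# PAM), while the second copy is followed by ACC and is ignored.
```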

    Gene drives and Cas9 are each powerful on their own, but together they could significantly change biology. CRISPR-Cas9 allows researchers to edit genomes with unprecedented speed, and gene drives allow engineered genes to cheat the system, even if the altered gene weakens the organism. Simply by being coupled to a gene drive, an engineered gene can race throughout a population before it is weeded out. “Eventually, natural selection will win,” Esvelt says, but “gene drives just let us get ahead of the game.”

    Beyond Mosquitoes

    If there’s anywhere we could use a jump start, it’s in the fight against malaria. Each year, the disease kills over 200,000 people and sickens over 200 million more, most of whom are in Africa. The best new drugs we have to fight it are losing ground; the Plasmodium parasite is evolving resistance too quickly.

    3
    False-colored electron micrograph of a Plasmodium sp. sporozoite.

    And we're nowhere close to releasing an effective vaccine. The direct costs of treating the disease are estimated at $12 billion, and the economies of affected countries are estimated to grow 1.3% less per year as a result, a substantial amount.

    Which is why Esvelt and Burt are both so intently focused on the disease. “If we target the mosquito, we don’t have to face resistance on the parasite itself. The idea is, we can just take out the vector and stop all transmission. It might even lead to eradication,” Esvelt says.

    Esvelt initially mulled over the idea of building Cas9-based gene drives in mosquitoes to do just that. He took the idea to Flaminia Catteruccia, a professor who studies malaria at the Harvard School of Public Health, and the two grew increasingly certain that such a system would not only work, but work well. As their discussions progressed, though, Esvelt realized they were “missing the forest for the trees.” Controlling malaria-carrying mosquitoes was just the start. Cas9-based gene drives were the real breakthrough. “If it lets us do this for mosquitos, what is to stop us from potentially doing it for almost anything that is sexually reproducing?” he realized.

    In theory, nothing. But in reality, the system works best on fast-reproducing species, Esvelt says. Short generation times allow the trait to spread throughout a population more quickly. Mosquitoes are a perfect test case. If everything were to work perfectly, deleterious traits could sweep through populations of malaria-carrying mosquitoes in as few as five years, wiping them off the map.

    Other noxious species could be candidates, too. Certain invasive species, like mosquitoes in Hawaii or Asian carp in the Great Lakes, could be targeted with Cas9-based gene drives to either reduce their numbers or eliminate them completely. Agricultural weeds like horseweed that have evolved resistance to glyphosate, a herbicide that is broken down quickly in the soil, could have their susceptibility to the compound reintroduced, enabling more farmers to adopt no-till practices, which help conserve topsoil. And in the more distant future, Esvelt says, weeds could even be engineered to introduce vulnerabilities to completely benign substances, eliminating the need for toxic pesticides. The possibilities seem endless.

    The Decision

    Before any of that can happen, though, Esvelt and Church are adamant that the public help decide whether the research should move forward. “What we have here is potentially a general tool for altering wild populations,” Esvelt says. “We really want to make sure that we proceed down this path—if we decide to proceed down this path—as safely and responsibly as possible.”

    To kickstart the conversation, they partnered with the MIT political scientist Kenneth Oye and others to convene a series of workshops on the technology. “I thought it might be useful to get into the room people with slightly different material interests,” Oye says, so they invited regulators, nonprofits, companies, and environmental groups. The idea, he says, was to get people to meet several times and build trust before “decisions harden.” Despite the diverse viewpoints, Oye says there was surprising agreement among participants about what the important outstanding questions were.

    As the discussion enters the public sphere, tensions are certain to intensify. “I don’t care if it’s a weed or a blight, people still are going to say this is way too massive a genetic engineering project,” Caplan says. “Secondly, it’s altering things that are inherited, and that’s always been a bright line for genetic engineering.” Safety, too, will undoubtedly be a concern. As the power of a tool increases, so does its potential for catastrophe, and Cas9-based gene drives could be extraordinarily powerful.

    There’s also little in the way of precedent that we can use as a guide. Our experience with genetically modified foods would seem to be a good place to start, but they are relatively niche organisms that are heavily dependent on water and fertilizer. It’s pretty easy to keep them contained to a field. Not so with wild organisms; their potential to spread isn’t as limited.

    Aware of this, Esvelt and his colleagues are proposing a number of safeguards, including reversal drives that can undo earlier engineered genes. “We need to really make sure those work if we’re proposing to build a drive that is intended to modify a wild population,” Esvelt says.

    There are still other possible hurdles to surmount—lab-grown mosquitoes may not interbreed with wild ones, for example—but given how close this technology is to prime time, Caplan suggests researchers hew to a few initial ethical guidelines. One, use species that are detrimental to human health and don't appear to fill a unique niche in the wild. (Malaria-carrying mosquitoes seem to fit that description.) Two, do as much work as possible using computer models. And three, researchers should continue to be transparent about their progress, as they have been. “I think the whole thing is hugely exciting,” Caplan says. “But the time to really get cracking on the legal/ethical infrastructure for this technology is right now.”

    Church agrees, though he’s also optimistic about the potential for Cas9-based gene drives. “I think we need to be cautious with all new technologies, especially all new technologies that are messing with nature in some way or another. But there’s also a risk of doing nothing,” Church says. “We have a population of 7 billion people. You have to deal with the environmental consequences of that.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 9:11 am on March 17, 2015 Permalink | Reply
    Tags: , , NOVA   

    From NOVA: “An Inflammatory Theory of Brain Disease” 

    PBS NOVA

    NOVA

    25 Feb 2015
    Lauren Aguirre

    Beginning in March 2010, 882 men and women who had suffered traumatic brain injury were enrolled in a clinical trial to test whether administering the human hormone progesterone within four hours of injury would improve their outcome. While it's often thought of as a one-time event, traumatic brain injury is better described as a disease: it's irreversible, sometimes progressive, and often affects people for the rest of their lives. More than 5 million Americans—ranging from professional football players to Iraq war veterans and victims of car accidents—live with disabilities caused by traumatic brain injury.

    1
    Microglia, seen here stained green, are part of the brain’s specialized immune system.

    One striking hallmark of traumatic brain injury is inflammation in the brain, which occurs shortly after the trauma and can cause swelling, tissue breakdown, and cell death. Because it can be so debilitating, a lot of research has gone into finding ways to limit the damage in the hours immediately following injury. Progesterone can interfere with inflammation and is also thought to stimulate repair, so it was considered a promising candidate for reducing brain damage. Plus, the hormone is cheap and widely available.

    Experimental animal models and two early, small clinical trials had all shown positive results. After years of failing to find effective medications, hopes were high for this new approach.

    The Role of Inflammation

    While inflammation is harmful in the case of traumatic brain injury, it is also critical for our survival. When our immune system encounters a microbe or when we bruise our knees, the inflammation that results rushes key cells and proteins to the site to fight the infection or to encourage healing. But there are times when inflammation doesn’t know when to quit, and many doctors and researchers believe it plays a role in many chronic diseases. The growing list goes beyond autoimmune diseases, such as arthritis, diabetes, or multiple sclerosis, to include cardiovascular disease and possibly even brain diseases such as Alzheimer’s, Parkinson’s, epilepsy, or depression.

    Here’s how the immune system is supposed to work. Let’s say you slam a car door on your finger. That causes tissue damage and possibly infection—stuff that doesn’t belong there and looks foreign to the body. White blood cells and other molecules swarm in, wall off the damaged area, and attack the invaders and the damaged tissue. The area gets hot, red, swollen, and painful. Clean-up cells like macrophages—which means “big eaters” in Greek—gobble up the garbage. Once the damage has been contained, other immune molecules begin the repair process and the inflammation subsides.

    But inflammation also causes collateral damage, a sort of friendly fire. The same processes that get rid of foreign agents can damage good cells as well. The death of those cells can in turn trigger further inflammation. For reasons that remain unclear, sometimes this creates a vicious cycle that becomes self-sustaining. Steven Meier, a neuroscientist at the University of Colorado who researches how the brain regulates immune responses, points out that, “like many, many other adaptive mechanisms that are adaptive when they're activated briefly, they may not be so adaptive when they're activated chronically.”

    For decades, researchers have noticed a link between ongoing inflammation and cardiovascular disease. Today it’s widely accepted that the immune system’s response to plaques of low-density lipoproteins, or LDL, on blood vessel walls plays a pivotal role in the progression of the disease. Sensing these plaques as foreign invaders, white blood cells and other molecules that are meant to protect the body turn into its own worst enemy. Instead of healing the body, white blood cells become trapped inside the plaques, provoking further inflammation and allowing the plaques to continue to grow. Eventually one of those plaques can break off and cause a clot, with potentially disastrous results.

    Though it may be going too far to call inflammation a grand unifying theory of chronic disease, the link between the two is a focus of labs around the world. “I do think inflammation is an important element, and maybe at the heart of a variety of disorders,” Meier says, “and does account for a lot of the comorbidity that occurs between disorders. Why on earth is there comorbidity between depression and heart disease? But once you start thinking about inflammation, you realize they may be both inflammatory disorders or at least involve an inflammatory element.”

    In the last decade, interest in the relationship between inflammation and brain disease in particular has exploded. Tantalizing associations abound. For example, some population-based studies of Alzheimer’s patients suggested that people who took non-steroidal anti-inflammatories—so-called NSAIDs like aspirin or ibuprofen—for long periods have a reduced risk of developing Alzheimer’s. Low-grade systemic inflammation, as measured by higher than normal levels of certain inflammatory molecules in the blood, have been found in people with depression. And in children with severe epilepsy, techniques to reduce inflammation have succeeded in stopping their seizures in cases where all other attempts had failed.

    The Brain’s Immune System

    The key is the brain’s unique immune system, which is slightly different from the rest of the body. For starters, it’s less heavy-handed. “The immune system, during evolution, learned that, ‘This is the brain, this is the nervous system. I cannot really live without it, so I have to be very, very, careful,’” says Bibiana Bielekova, chief of the Neuroimmunological Diseases Unit at the NIH.

    The first line of defense for the central nervous system is the blood brain barrier, which lines the thousands of miles of blood vessels in your brain. It is largely impermeable, for the most part letting in only glucose, oxygen, and other nutrients that brain cells need to function. This prevents most of the toxins and infectious agents we encounter daily from coming into contact with our brain’s delicate neurons and fragile microenvironment, preserving the brain’s balance of electrolytes—such as potassium—which if disturbed can wreak havoc on the electrical signaling required for normal brain function. Normally the blood brain barrier is very selective about what it invites inside the brain, but when the barrier gets damaged, for example because of a traumatic brain injury, dangerous molecules and immune cells that aren’t supposed to be there can slip inside.

    The second line of defense comes from microglia, the brain's specialized macrophages, which migrate into the brain and take up permanent residence. Typically, microglia have a spindly, tree-like structure. Their branches are in constant motion, which allows them to scan the environment, but they are also delicate enough to do so without damaging neural circuits. However, when they're activated by injury or infection, microglia multiply, shape-shift into blobby, amoeba-like structures, release inflammatory chemicals, and engulf damaged cells, tissue debris, or microbes.

    Mounting Failures in Clinical Trials

    Late last year, the results of the progesterone-traumatic brain injury study with 882 patients were announced. Despite the apparent promise, patients who took progesterone following the initial brain injury fared no better than those on placebo. In fact, those who took placebo had slightly better outcomes. Women who took progesterone fared slightly worse. And episodes of phlebitis, or blood clots, were significantly more frequent in patients taking progesterone. A second study that enrolled 1,195 patients was also shut down when it showed no benefit.

    These two efforts are far from the only disappointing clinical trials that have tested anti-inflammatory treatments to intervene in brain diseases. A trial that used the antibiotic minocycline in ALS patients to reduce inflammation and cell death wound up harming more than helping. Alzheimer’s trials that attempted to reproduce the population effect that had been seen using NSAIDs also failed. In fact, in older patients the drugs appeared to make their symptoms worse.

    Another trial that attempted to circumvent inflammation by “vaccinating” with amyloid beta, the plaques that are one of the hallmarks of the disease, had to be discontinued after it caused inflammation of the brain and the membranes surrounding it in some patients. “Any time you intervene in any of these complicated biological processes that involve multiple proteins, multiple pathways, multiple loops, it’s going to be very complicated,” Meier notes.

    One reason why ongoing inflammation is assumed to be driving—if not instigating—brain diseases is that activated microglia are seen in the brains of these patients. However, activated microglia are not always bad. They also help protect the brain by shielding damaged areas from healthy regions, clearing debris from the brain, and initiating other complex anti-inflammatory processes that are far from fully understood. Bielekova points out that, “if you just see immune cells in the tissue, it's very hard to say if they are playing bad guys or good guys.” In fact, a recent study published by the Yale School of Medicine in Nature Communications shows that microglia, at least in mice, appear to protect the brain by walling off amyloid plaques from the surrounding environment. It's possible, then, that tamping down microglia activation could actually make things worse.

    The difficulty of figuring out how to intervene in an immune response without turning off necessary functions may be just one reason why we haven’t seen major successes yet. Experts have pointed out a number of other reasons why so many trials have failed, from animal models that don’t translate well to humans and clinical trials that some would argue were poorly designed to the fact that once an inflammatory immune response has been well established, it becomes much harder to resolve.

    “Once you have this fully established chronic inflammation, it's much, much more difficult to deliver effective treatments to those areas,” Bielekova says. “In multiple sclerosis, it is very clear that whichever drug you take that is efficacious, the efficacy decreases as you delay the treatment. So if you use the treatment very, very early on, every drug looks like a winner. But if you wait just a couple of years and you take patients that are now three, four years longer into the disease duration, you may lose 50% of the efficacy of your drug.”

    Glimmers of Hope

    Disappointing results aside, there are hints that intervening early to tamp down inflammation can be helpful. The same data analysis that showed NSAIDs can actually speed up the progression of Alzheimer’s in patients in the advanced stages of the disease also revealed that those who started taking NSAIDs regularly in midlife, when their brains were healthier, had slower cognitive decline.

    Other approaches built around intervening early have yielded similar results. For example, last summer Genentech announced the results of a phase II trial testing the efficacy of crenezumab to treat Alzheimer’s disease. Crenezumab is an antibody that binds to amyloid beta, the protein that makes up the plaques scattered throughout the brain that are one of the main visible features of Alzheimer’s. The theory behind this choice of antibody was that it would stimulate microglia just enough to begin clearing the plaques, but not so much so that these immune cells would launch a major inflammatory response. While this phase II trial failed to meet its targets, patients in the early stages of the disease who had been given large doses showed slower cognitive decline.

    Damir Janigro, a blood brain barrier researcher at Cleveland Clinic who studies traumatic brain injury and epilepsy, has a very different take on how to approach brain diseases linked with inflammation. He considers both of these diseases to be “blood brain barrier” diseases because repeated seizures and traumatic brain injury can damage the blood brain barrier, making it leakier. That means that not only can substances that don’t belong inside the brain slip through, materials from inside the brain can travel to the rest of the body.

    “The blood brain barrier shields your brain, which is good for you. But then it’s bad for you if you leak a piece of your brain and this is considered an enemy” by the rest of your immune system, he says. Janigro is part of what he calls a “vocal minority” of researchers who look at inflammation outside the brain as being another cause of inflammation inside the brain—and potentially even a better target for treatment. “Neuroinflammation is probably bad for you. But it’s a very hard target to go after. Everybody who does is surprised that it fails, like the Alzheimer’s trial in pulling amyloid from the brain.”

    We’re still early in our understanding of how the brain’s immune system works, when it is damaging, and when it is protective. If inflammation is the common element in brain diseases, it may turn out that understanding how to intervene successfully in one disease will make it possible to use similar therapeutic approaches across many. But, because we don’t fully understand how the unfathomably complex immune system works, it is likely to be a long and difficult journey before we find ways to intervene safely and effectively.

    “If you look at the range of disorders and diseases there’s probably a continuum where in some it plays little or no role, with some it’s in between,” Meier says. “You can go too far with any of these unifying themes. There’s a natural tendency to want to do so. But I do think the inflammation story is not going away. I think it’s real.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 12:20 pm on March 14, 2015 Permalink | Reply
    Tags: , Cosmic dust, NOVA   

    From NOVA: “In the Past 24 Hours, 60 Tons of Cosmic Dust Have Fallen to Earth” 

    PBS NOVA

    NOVA

    13 Mar 2015
    Allison Eck

    1
    Sunlight reflecting off cosmic dust particles creates an effect known as “zodiacal light.”

    Every day, bits of outer space rain down on the Earth.

    Leftover from our solar system’s birth 4.6 billion years ago, cosmic dust is pulled into our atmosphere as the planet passes through decayed comet tails and other regions of chunky space rock. Occasionally, it arrives on Earth in the form of visible shooting stars.

    But the amount of space dust that Earth accumulates is maddeningly difficult to determine. Estimates based on spacecraft solar panels, polar ice cores, and meteoric smoke have attempted an answer, but they vary widely, from 0.4 to 110 tons per day.

    But a new paper claims to have narrowed that range. Here’s Mary Beth Griggs, writing for Popular Science:

    [A] recent paper took a closer look at the levels of sodium and iron in the atmosphere using Doppler Lidar, an instrument that can measure changes in the composition of the atmosphere. Because the amount of sodium in the atmosphere is proportional to the amount of cosmic dust in the atmosphere, the researchers figured out that the actual amount of dust falling to the earth is along the lines of 60 tons per day.

    The scientists published their results in the Journal of Geophysical Research.

    It may sound like an academic exercise, but determining how much cosmic dust falls on the Earth can help us better understand a number of critical processes, such as cloud formation in the upper atmosphere and the fertilization of plankton in Antarctica. It also suggests that we may gain a better answer as to whether the Earth is gaining mass each year or losing it. (Our planet constantly leaks gases into space.)
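    For a sense of scale, here is a back-of-the-envelope conversion of that headline figure, assuming (purely for illustration) that the 60-ton daily rate is in metric tons, holds steady, and is spread evenly over the planet.

```python
DUST_TONS_PER_DAY = 60            # the new estimate discussed above
EARTH_SURFACE_KM2 = 5.1e8         # ~510 million square kilometers

tons_per_year = DUST_TONS_PER_DAY * 365
grams_per_km2_per_year = tons_per_year * 1e6 / EARTH_SURFACE_KM2  # metric tons -> grams

print(f"~{tons_per_year:,.0f} tons of cosmic dust per year")            # ~21,900
print(f"~{grams_per_km2_per_year:.0f} g per square kilometer per year") # ~43
```

    Spread that thinly, on the order of a few tens of grams per square kilometer per year, it is little wonder the infall is so hard to measure directly.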

    Some of that cosmic dust is probably in you and me. While many of the elements that rain down from the heavens settle to the ground, it’s likely that we consume it in our food or inhale a tiny fraction of it. “We are made of star stuff”—Carl Sagan’s famous quote—rings truer than ever.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 3:42 am on March 6, 2015 Permalink | Reply
    Tags: , , NOVA   

    From NOVA: “Powerful, Promising New Molecule May Snuff Antibiotic Resistant Bacteria” 

    PBS NOVA

    NOVA

    09 Jan 2015
    R.A. Becker

    1
    Methicillin-resistant staph bacteria surround a human immune cell.

    Antibiotic resistant bacteria pose one of the greatest threats to public health. Without new weapons in our arsenal, these bugs could cause 10 million deaths a year and cost nearly $100 trillion worldwide by the year 2050, according to a recent study commissioned by the British government.

    But just this week, scientists announced that they have discovered a potent new weapon hiding in the ground beneath our feet—a molecule that kills drug resistant bacteria and might itself be resistant to resistance. The team published their results Wednesday in the journal Nature.

    Scientists have been coopting the arsenal of soil-dwelling microorganisms for some time, said Kim Lewis, professor at Northeastern University and senior investigator of the study. Earth-bound bacteria live tightly packed in an intensely competitive environment, which has led to a bacterial arms race. “The ones that can kill their neighbors are going to have an advantage,” Lewis said. “So they go to war with each other with antibiotics, and then we borrow their weapons to fight our own pathogens.”

    However, by the 1960s, the returns from these efforts were dwindling. Not all bacteria that grow in the soil are easy to culture in the lab, and so antibiotic discovery slowed. Lewis attributes this to the interdependence of many soil-dwelling microbes, which makes it difficult to grow only one strain in the lab when it has been separated from its neighbors. “They kill some, and then they depend on some others. It’s very complex, just like in the human community,” he said.

    But a new device called the iChip, developed by Lewis’s team in collaboration with NovoBiotic Pharmaceuticals and colleagues at the University of Bonn, enables researchers to isolate bacteria reluctant to grow in the lab and cultivate them instead where they’re comfortable—in the soil.

    Carl Nathan, chairman of microbiology and immunology at Weill Cornell Medical School and co-author of a recent New England Journal of Medicine commentary about the growing threat of antibiotic resistance, called the team’s discovery “welcome,” adding that it illustrates a point that Lewis has been making for several years, that soil’s well of antibiotic-producing microorganisms “is not tapped out.”

    The researchers began by growing colonies of formerly un-culturable bacteria on their home turf and then evaluating their antimicrobial defenses. They discovered that one bacterium in particular, which they named Eleftheria terrae, makes a molecule known as teixobactin which kills several different kinds of bacteria, including the ones that cause tuberculosis, anthrax, and even drug resistant staph infections.

    Teixobactin isn’t the first promising new antibiotic candidate, but it does have one quality that sets it apart from others. In many cases, even if a new antibiotic is able to kill bacteria resistant to our current roster of drugs, it may eventually succumb to the same resistance that felled its predecessors. (Resistance occurs when the few bacteria strong enough to evade a drug’s killing effects multiply and pass on their genes.)

    Unlike current antibiotic options, though, teixobactin attacks two lipid building blocks of the cell wall, which many bacterial strains can't live without. Because the drug attacks such a key part of the cell, it is much harder for a bacterium to escape being killed by mutating.

    “This is very hopeful,” Nathan said. “It makes sense that the frequency of resistance would be very low because there’s more than one essential target.” He added, however, that given the many ways in which bacteria can avoid being killed by pharmaceuticals, “Is this drug one against which no resistance will arise? I don’t think that’s actually proved.”

    Teixobactin has not yet been tested in humans. Lewis said the next steps will be to conduct detailed preclinical studies as well as work on improving teixobactin’s molecular structure to solve several practical problems. One they hope to address, for example, is its poor solubility; another is that it isn’t readily absorbed when given orally—as is, it will have to be administered via injection.

    While Lewis predicts that the drug will not be available for at least five years, this new method offers a promising new avenue of drug discovery. Nathan agrees, though he cautions it’s too soon to claim victory. The message of this recent finding, he said, “is not that the problem of antibiotic resistance has been solved and we can stop worrying about it. Instead it’s to say that there’s hope.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     