Tagged: NOVA

  • richardmitnick 7:59 am on June 19, 2015
    Tags: NOVA

    From NOVA: “Do We Need to Rewrite General Relativity?” 



    18 Jun 2015
    Matthew Francis

    A cosmological computer simulation shows dark matter density overlaid with a gas velocity field. Credit: Illustris Collaboration/Illustris Simulation

    General relativity, the theory of gravity Albert Einstein published 100 years ago, is one of the most successful theories we have. It has passed every experimental test; every observation from astronomy is consistent with its predictions. Physicists and astronomers have used the theory to understand the behavior of binary pulsars, predict the black holes we now know pepper every galaxy, and obtain deep insights into the structure of the entire universe.

    Yet most researchers think general relativity is wrong.

    To be more precise: most believe it is incomplete. After all, the other forces of nature are governed by quantum physics; gravity alone has stubbornly resisted a quantum description. Meanwhile, a small but vocal group of researchers thinks that phenomena such as dark matter are actually failures of general relativity, requiring us to look at alternative ideas.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 7:09 am on June 16, 2015
    Tags: NOVA

    From NOVA: “A Window into New Physics” 



    10 Jun 2015
    Kate Becker

    In 2007, David Narkevic was using a new algorithm to chug through 480 hours of archived data collected by the Parkes radio telescope in Australia. The data was already six years old and had been thoroughly combed for the repeating drumbeat signals that come from rapidly rotating dead stars called pulsars.

    But Narkevic, a West Virginia University undergrad working under the supervision of astrophysicist Duncan Lorimer, was scouring these leftovers for a different animal: single pulses of unusually bright radio waves that are known to punctuate the rhythm of the most energetic pulsars.

    The Parkes Observatory hosts a large radio telescope in central New South Wales, Australia.

    Radio astronomers have an arsenal of well-honed tricks for teasing out faint signals, including correcting for “dispersion.” Dispersion occurs because signals traveling through space arrive slightly earlier at high frequencies than at low frequencies, following a precise formula that describes how electromagnetic radiation is delayed by free-floating electrons. The more interstellar stuff the signals have to traverse, the more dispersed they are, so “dispersion measure” functions as a rough proxy for distance.

    Distant, and therefore highly dispersed, signals are difficult to pick up because their energy is smeared out across frequency and time. So astrophysicists design search algorithms that apply one correction factor after another, in the hope that, by trial and error, they will hit on the right one and pluck a signal out of the noise. The process requires a lot of computing time, so astronomers typically use only common-sense dispersion corrections. But with all the common-sense results already wrung out of the data set, Narkevic was trying correction factors corresponding to distances far beyond the Milky Way and its neighboring galaxies.
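    The brute-force search described above can be sketched with the standard cold-plasma dispersion formula. The frequencies and trial DM values below are illustrative assumptions, not the actual Parkes pipeline:

```python
# Sketch of a trial-DM search, assuming illustrative numbers rather than
# the actual Parkes pipeline. The 4.15 ms coefficient is the standard
# cold-plasma dispersion constant (DM in pc cm^-3, frequencies in GHz).

def dispersion_delay_ms(dm, f_low_ghz, f_high_ghz):
    """Arrival delay (ms) of the band's low-frequency edge behind its
    high-frequency edge, for a given dispersion measure."""
    return 4.15 * dm * (f_low_ghz ** -2 - f_high_ghz ** -2)

# The search simply tries one correction factor (trial DM) after another;
# extragalactic bursts call for DMs well beyond "common-sense" values.
for dm in range(100, 1001, 300):
    sweep = dispersion_delay_ms(dm, f_low_ghz=1.2, f_high_ghz=1.5)
    # In a real pipeline, each frequency channel would be shifted by its
    # delay and summed, then the summed series tested for a pulse.
    print(f"trial DM = {dm:4d} pc/cm^3 -> sweep of {sweep:6.1f} ms")
```

    Because the delay grows linearly with DM, each extra trial costs another full pass over the data, which is why the common-sense corrections were exhausted first.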

    To his surprise, it worked: He discovered a bright burst of radio waves, lasting less than five milliseconds, coming from a point on the sky a few degrees from the Small Magellanic Cloud, yet seemingly originating far beyond it.

    NASA/ESA Hubble and Digitized Sky Survey 2

    It was impossible to pin down its precise location and distance but, based on the dispersion, Lorimer and his team calculated that it had to be far: billions of light years beyond the Milky Way.

    Lorimer’s team trained the Parkes telescope on the site for 90 more hours but never picked up another burst. Whatever Narkevic had found, it didn’t look like one of the pulsar pulses Lorimer had originally set out to find.

    That left plenty of other possibilities. It could be some human-made interference masquerading as a mysterious cosmic object: military radar, microwave ovens, bug zappers, and even electric blankets all produce electromagnetic radiation that can confuse readings from radio telescopes. But the “Lorimer burst” didn’t look like it was coming from one of these sources. For one thing, the dispersion was by-the-book: that is, the signal “swept” in at high frequencies first, and low frequencies later. For another, it was picked up by just three of the telescope’s 13 “beams,” each of which corresponds to a single pixel on a sky map, suggesting that it was localized out there, somewhere in the sky, rather than coming from a nearby source of interference, which would swamp the whole telescope.

    “We couldn’t think of any radio-frequency interference that would mimic those characteristics,” says astrophysicist Maura McLaughlin, also of West Virginia University, who was part of the discovery team. The researchers also ruled out some of the usual cosmic suspects: The burst was too bright to be a spasmodic eruption from a pulsar and too high-frequency to be the radio counterpart to a gamma-ray burst. Magnetars, highly magnetized neutron stars that sizzle with X-rays and gamma-rays, remained a strong possibility. “I tend to go with the least exotic things,” McLaughlin says, citing Occam’s razor: “The simplest thing is always the best. But I wouldn’t be surprised if it was something really strange and exotic, too.”

    Such observational puzzles are candy for theorists, and fast radio bursts, or FRBs as they are called, present a particularly sweet mystery: Their extreme properties hint that they might be able to reveal phenomena that push the boundaries of known physics, perhaps probing the properties of dark matter or quantum gravity theories beyond the Standard Model.

    Standard Model of Particle Physics. The diagram shows the elementary particles of the Standard Model (the Higgs boson, the three generations of quarks and leptons, and the gauge bosons), including their names, masses, spins, charges, chiralities, and interactions with the strong, weak and electromagnetic forces. It also depicts the crucial role of the Higgs boson in electroweak symmetry breaking, and shows how the properties of the various particles differ in the (high-energy) symmetric phase (top) and the (low-energy) broken-symmetry phase (bottom).

    So while observational astronomers kept searching for more FRBs, theorists began speculating about what they might be.

    Imploding Neutron Stars

    There were three clues: The burst was short, powerful, and distant. To astrophysicists, a short signal points to a small source—in this case, one so small that a light beam could cross it in the duration of the burst, just a few milliseconds. That means that FRB “progenitors,” whatever they are, probably measure less than one thousandth the width of the sun. What could pack such a huge amount of energy into that tiny package? “The only things that can produce that much energy are neutron stars and black holes,” says Jim Fuller, a theorist at Caltech.
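    The size argument here is simple arithmetic: a source cannot vary coherently faster than light can cross it. A quick sanity check, taking the few-millisecond duration from the text:

```python
# Light-crossing-time argument from the text: a burst lasting ~5 ms
# can come from a region no wider than light travels in that time.
C_KM_PER_S = 299_792.458       # speed of light in km/s
SUN_DIAMETER_KM = 1_391_000    # roughly 1.39 million km

burst_duration_s = 0.005       # "just a few milliseconds"
max_size_km = C_KM_PER_S * burst_duration_s
fraction_of_sun = max_size_km / SUN_DIAMETER_KM

print(f"maximum source size ~ {max_size_km:.0f} km")   # about 1,500 km
print(f"roughly 1/{1 / fraction_of_sun:.0f} the width of the sun")
```

    A 5 ms burst caps the source at about 1,500 km across, roughly a thousandth of the sun's width, which is why only compact objects like neutron stars and black holes make the shortlist.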

    Fuller started thinking seriously about fast radio bursts in 2014, just as they were enjoying a scientific comeback. Studies of the Lorimer burst had languished for years after a group led by Sarah Burke-Spolaor, then a postdoc at the Commonwealth Scientific and Industrial Research Organisation in Australia, detected 16 similar bursts and was able to unambiguously chalk them up to earthly interference. But then, in 2013, Burke-Spolaor found a Lorimer burst of her own. A handful more followed. FRBs were back from the dead.

    Meanwhile, Fuller had a different astronomical mystery on his mind: the apparent scarcity of pulsars near the center of the Milky Way. There should be plenty of pulsars within a few light years of the galactic center, Fuller says, but despite years of searching, astronomers have found just one. What happened to the rest of them? Astrophysicists call this the “missing pulsar problem.”

    The FRBs seemed to be coming from a few degrees away from the Small Magellanic Cloud.

    Last year, a pair of astronomers proposed an unconventional answer: those missing pulsars might have “imploded” under the weight of dark matter, which is abundant in the center of the galaxy. Though dark matter passes easily through planets and stars, it could get trapped in the dense meat of a neutron star, they argued. Once there, it would slowly sink down to the star’s center. Over time, dark matter would pile up in the core, eventually collapsing into a tiny black hole that would eat away at the neutron star from the inside out. The star would gradually erode over thousands or millions of years until, in one great slurp, the black hole would devour nearly the whole mass of the neutron star in a matter of milliseconds.

    “Probably, it will be a very violent event, where the magnetic field is totally expelled from the black hole and reconnects with itself,” Fuller says. Some of the energy of the ravaged magnetic field would be turned into electromagnetic radiation: a blast of radio waves that might look a lot like an FRB.

    “It’s a pretty crazy idea,” Fuller admits. But it does make some predictions that we can observe. If Fuller’s model is right, neutron star implosions should have left behind lots of small black holes near the center of the galaxy, each holding about one-and-a-half times the mass of our sun. Though astronomers can’t see a black hole directly, if the black hole happens to be drawing matter from a companion star, as is relatively common, it will give off characteristic bursts of X-rays. A different kind of X-ray burst, on the other hand, could signal the presence of a neutron star, not a black hole. If there are lots of neutron stars hanging out around the galactic center, that would challenge Fuller’s scenario. (Some recent X-ray observations point toward the existence of those neutron stars, though the evidence is not yet definitive, Fuller says.)

    Fuller’s argument also predicts that FRBs should be coming from very close to the center of other galaxies. So far, astronomers haven’t pinpointed the location of a single FRB, and localizing one within a galaxy is an added challenge.

    If Fuller’s predictions hold up, they will yield fresh insight into the nature of dark matter, which is still almost a total blank. First, it will mean that dark matter particles don’t annihilate each other, as some recent observations have hinted. It would also reveal dark matter’s “cross section”—that is, the likelihood that a particle of dark matter will interact with normal matter, as opposed to passing straight through it. For the neutron star implosion scenario to hold up, dark matter’s cross section must fall within a very narrow range of possibilities, Fuller says.

    Bouncing Black Holes

    Another possibility for what’s causing FRBs comes from the leading edge of black hole physics, where theorists are puzzling over the difficult answer to an apparently simple question: What happens to the stuff that falls into a black hole? Physicists once thought that it was inevitably compressed into an infinitely small, infinitely dense point called a singularity. But because the known laws of physics break down at this point, the singularity has always been a raw nerve for physicists.

    Many physicists would like to find a way to sidestep the singularity, and theorists working on a theory called loop quantum gravity think they have found a way to do so. Loop quantum gravity proposes that the fabric of spacetime is woven of tiny—you guessed it—loops. These loops can’t be compressed indefinitely—push them too far, and they push back. In the universe of loop quantum gravity, a would-be black hole can collapse only until gravity is overcome by the outward pressure generated by the loops, which then hurls the black hole’s innards back out into space, transforming it into its mathematical opposite, a white hole.

    Abruptly, the contents of the black hole would be converted into a tremendous blast of energy concentrated at a wavelength of a few millimeters, according to Carlo Rovelli, a theorist at Aix-Marseille University, and his colleagues in France and the Netherlands. We might be able to pick up the first of these cosmic kabooms today, coming from some of the universe’s earliest black holes, Rovelli says, and they might look a lot like fast radio bursts. It’s not a perfect match: fast radio bursts emit at a lower frequency, corresponding to a wavelength of about 20 centimeters, and they don’t give off as much energy as the theorists predict for a “quantum bounce.” But, Rovelli says, the model’s predictions are still very crude and don’t account for the black hole’s motion, interactions between the matter it contains, or even the fact that the black hole has mass.

    Rovelli says the model does make one clear, testable prediction: a peculiar correlation between the wavelength at which the signal is received and the distance to the black hole. That’s because the wavelength of the emitted energy depends on two things: the size of the black hole and its distance from Earth. The most distant explosions should be coming from the youngest, and therefore smallest, black holes, meaning that their energy will be skewed toward shorter wavelengths. But as the radiation travels across the expanding universe, it will be stretched out, or “redshifted,” so that the signals we pick up on Earth register at a longer wavelength than they were emitted. Add up the effects and you should see the specific curve that Rovelli and his colleagues predict. As astronomers find more fast radio bursts, they will be able to test whether they match the predicted curve.

    It may sound like a long shot. But, if it’s right, the payoff would be huge: “If the observed Fast Radio Bursts are connected to this phenomenon, they represent the first known direct observation of a quantum gravity effect,” wrote Rovelli and his colleagues.

    It could also get physicists out of a theoretical jam called the black hole information paradox, which pits two unshakable tenets of physics against each other. On one side, the principle of unitarity holds that information can never be lost; on the other, according to the rules of black hole thermodynamics, the only thing that ever escapes from a black hole, Hawking radiation, is randomly scrambled and preserves no information. To solve the paradox, some physicists have proposed that the entanglement between incoming particles and those radiated out as Hawking radiation could be spontaneously broken, putting up a “firewall” of energy at the black hole’s horizon. But the concept is still controversial: plenty of ideas in modern physics violate our intuition about how the world is supposed to work, but a sizzling wall of energy floating around a black hole? Really?

    The quantum bounce effect could resolve the information paradox and neutralize the need for a firewall, argue Rovelli and his colleagues. The information inside the black hole isn’t lost: it just comes out later.

    Superconducting Cosmic Strings

    Fast radio bursts could also be a modern manifestation of something that happened 13.7 billion years ago, just after the Big Bang, when the baby universe was roiling with so much energy that all the fundamental physical forces acted as one. At this moment, the Higgs field had not yet switched on and nothing in the universe had mass. Then, on came the Higgs field, unfurling through space and pinging every particle it encountered with its magic wand, bestowing the gift of mass.

    Some theorists think that the field associated with the Higgs boson, discovered in 2012 at the Large Hadron Collider [LHC], is just one of many similar fields, each of which plays a role in giving particles mass.


    But many models predict that these fields would not diffuse perfectly through all of space. Instead, they would miss a few spots. These gaps, the thinking goes, would become defects called cosmic strings, skinny tubes of space that, like springy rubber bands, are tense with stored energy. Extending over millions of light years and traveling close to the speed of light, these hypothetical strings would be so massive that a single centimeter-long snippet would contain a mountain’s-worth of mass, says Tanmay Vachaspati, a physicist at Arizona State University who, along with Alexander Vilenkin at Tufts University, did early work on the formation and evolution of cosmic strings.

    Invisible to most telescopes, cosmic strings could be detected via the gravitational waves they emit as they shimmy through space and crash into other cosmic strings. So far, astronomers haven’t made any affirmative detection of these gravitational waves, though the fact that they haven’t shown up yet allows physicists to put some limits on the maximum mass of the strings.

    A still-more-exotic breed of cosmic strings called superconducting cosmic strings, which carry an electrical current, could turn out to be easier for astronomers to observe. First proposed by theorist Edward Witten, these electrified strings should give off detectable electromagnetic radiation as they move through space, Vachaspati says. The emission would look like a constant hum of very-low-frequency radio waves, occasionally spiked with brief, higher-frequency bursts from dramatic events called kinks and cusps. Kinks happen when two strings meet and reconnect at their point of intersection, Vachaspati says. Cusps are like the end of a whip, lashing out into space at close to the speed of light. What, exactly, their radio emission might look like depends on many still-unknown parameters of the strings, Vachaspati says. But it is possible that they would look very much like fast radio bursts.

    There is one problem, though. Vachaspati and his colleagues predict that the radio emission from superconducting cosmic strings should be linearly polarized: that is, it should oscillate in a plane. So far, polarization has been measured for only one fast radio burst, and that burst was circularly polarized, meaning that its electric field traces out a spiral around the direction it’s traveling.

    Some theorists, including Vilenkin, think it might be possible for a superconducting cosmic string to produce a circularly polarized signal under certain conditions. And with polarization measured for just one FRB so far, it’s too soon to discount the hypothesis entirely.

    Future Observations

    Today, astronomers have detected about a dozen fast radio bursts. (A group of apparently similar signals, curiously clustered around lunchtime, was recently traced to a more mundane source: the Parkes Observatory’s microwave oven.) But observers and theorists in every camp agree on this: to figure out what is causing FRBs, they need to find more of them.

    “Right now, there are far more theories about what’s causing FRBs than FRBs themselves,” says Burke-Spolaor, who is now leading a search for FRBs with the Very Large Array (VLA), a network of radio telescopes in New Mexico.


    With more bursts in their catalog, astronomers will be able to draw more meaningful conclusions about how common FRBs are and how they are distributed across the sky. They will also be able to answer two critical questions: where the bursts are coming from, and what they look like in other parts of the electromagnetic spectrum.

    So far, astronomers have localized each Parkes burst to a disc of sky about a half-degree across—about the size of the full moon. To astronomers, that’s an enormous region: extend your vision out to the distance at which FRBs are expected to be going off, and that little patch of sky could contain hundreds of galaxies. Using the VLA, Burke-Spolaor should be able to pin down a burst’s location to a single galaxy. But first, she has to find one. Based on the number of FRBs seen so far, she estimates that it will take about 600 hours of skywatching to have a solid chance of observing one. So far, she has a little under 200 hours down.

    Unlike the archival search that turned up the first FRB, Burke-Spolaor’s search campaign is attempting to catch FRBs in the act. That will give astronomers a chance to quickly swivel other telescopes to the same spot and potentially see the bursts giving off energy at other wavelengths. So far, only three FRBs have been caught in real time, including a May 14, 2014, burst observed at Parkes by a team of astronomers including Emily Petroff, a PhD student in astrophysics at Swinburne University of Technology in Melbourne, Australia. Within a few hours, a dozen other telescopes were watching the source of the burst at wavelengths ranging from X-rays to radio waves. But not one of them saw anything unusual. Papers on two more bursts, observed in February and April of this year, are currently being prepared for publication; astronomers followed up on those bursts with observations at multiple wavelengths, too, but haven’t yet announced the result of those studies.

    Meanwhile, Jayanth Chennamangalam, a former student of Lorimer’s who is now a postdoc at Oxford, is putting the finishing touches on a system that will scan every 100 microseconds of incoming radio data at the Arecibo dish in Puerto Rico for sudden, short pulses.


    The system, called ALFABURST, will piggyback on the latest iteration of SERENDIP, a spectrometer that has been tapping Arecibo’s feed for years, listening for signals from extraterrestrial civilizations. Ultimately, it will be able to alert astronomers to unusual bursts within seconds—fast enough for rapid follow-up at other wavelengths.

    Will fast radio bursts turn out to be a window into new physics or just a new perspective on something more familiar? It’s too early to say. But for now, researchers can relish the moment of being maybe, just possibly, on the verge of finding something genuinely new to science.


  • richardmitnick 4:29 pm on May 28, 2015
    Tags: Classical Mechanics, NOVA

    From NOVA: “Ultracold Experiment Could Solve One of Physics’s Biggest Contradictions” 



    28 May 2015
    Allison Eck

    A vortex structure emerges within a rotating Bose-Einstein condensate.

    There’s a mysterious threshold that’s predicted to exist beyond the limits of what we can see. It’s called the quantum-classical transition.

    If scientists were to find it, they’d be able to solve one of the most baffling questions in physics: why do a soccer ball and a ballet dancer obey Newtonian laws while the subatomic particles they’re made of behave according to quantum rules? Finding the bridge between the two could usher in a new era in physics.

    We don’t yet know how the transition from the quantum world to the classical one occurs, but a new experiment, detailed in Physical Review Letters, might give us the opportunity to learn more.

    The experiment involves cooling a cloud of rubidium atoms to the point that they become virtually motionless. Theoretically, if a cloud of atoms becomes cold enough, the wave-like (quantum) nature of the individual atoms will start to expand and overlap with one another. It’s sort of like circular ripples in a pond that, as they get bigger, merge to form one large ring. This phenomenon is more commonly known as a Bose-Einstein condensate, a state of matter in which subatomic particles are chilled to near absolute zero (0 kelvin, or −273.15 °C) and coalesce into a single quantum object. That quantum object is so big (compared to the individual atoms) that it’s almost macroscopic—in other words, it’s encroaching on the classical world.
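    The "overlapping ripples" picture corresponds to each atom's thermal de Broglie wavelength growing as the gas cools; condensation sets in once it reaches roughly the spacing between atoms. A rough sketch, with illustrative temperatures not taken from the paper:

```python
# Thermal de Broglie wavelength: when it grows to roughly the spacing
# between atoms, their matter waves overlap and a Bose-Einstein
# condensate can form. Temperatures here are illustrative assumptions.
import math

H = 6.626e-34              # Planck constant (J s)
K_B = 1.381e-23            # Boltzmann constant (J/K)
M_RB87 = 87 * 1.6605e-27   # mass of a rubidium-87 atom (kg)

def thermal_de_broglie_m(temp_k):
    """lambda_dB = h / sqrt(2 pi m k_B T): grows as the gas cools."""
    return H / math.sqrt(2 * math.pi * M_RB87 * K_B * temp_k)

print(f"{thermal_de_broglie_m(300):.2e} m")     # room temp: ~0.01 nm
print(f"{thermal_de_broglie_m(100e-9):.2e} m")  # 100 nK: ~0.6 micron
```

    Cooling from room temperature to 100 nanokelvin stretches the wavelength by a factor of tens of thousands, from far smaller than an atom to comparable with the spacing in a dilute cloud.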

    The team of physicists cooled their cloud of atoms down to the nano-Kelvin range by trapping them in a magnetic “bowl.” To attempt further cooling, they then shot the cloud of atoms upward in a 10-meter-long pipe and let them free-fall from there, during which time the atom cloud expanded thermally. Then the scientists contained that expansion by sending another laser down onto the atoms, creating an electromagnetic field that kept the cloud from expanding further as it dropped. It created a kind of “cooling” effect, but not in the traditional way you might think—rather, the atoms have a lowered “effective temperature,” which is a measure of how quickly the atom cloud is spreading outward. At this point, then, the atom cloud can be described in terms of two separate temperatures: one in the direction of downward travel, and another in the transverse direction (perpendicular to the direction of travel).

    Here’s Chris Lee, writing for Ars Technica:

    “This is only the start though. Like all lenses, a magnetic lens has an intrinsic limit to how well it can focus (or, in this case, collimate) the atoms. Ultimately, this limitation is given by the quantum uncertainty in the atom’s momentum and position. If the lensing technique performed at these physical limits, then the cloud’s transverse temperature would end up at a few femtokelvin (10⁻¹⁵ K). That would be absolutely incredible.

    A really nice side effect is that combinations of lenses can be used like telescopes to compress or expand the cloud while leaving the transverse temperature very cold. It may then be possible to tune how strongly the atoms’ waves overlap and control the speed at which the transition from quantum to classical occurs. This would allow the researchers to explore the transition over a large range of conditions and make their findings more general.”

    Jason Hogan, assistant professor of physics at Stanford University and one of the study’s authors, told NOVA Next that you can understand this last part by using the Heisenberg Uncertainty Principle. As a quantum object’s uncertainty in momentum goes down, its uncertainty in position goes up. Hogan and his colleagues are essentially fine-tuning these parameters along two dimensions. If they can find a minimum uncertainty in the momentum (by cooling the particles as much as they can), then they could find the point at which the quantum-to-classical transition occurs. And that would be a spectacular discovery for the field of particle physics.
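    The femtokelvin figure in the quoted passage can be roughed out directly from the uncertainty principle; the millimeter-scale cloud size below is an assumed illustrative number:

```python
# Back-of-envelope quantum limit: Heisenberg sets a minimum momentum
# spread for a cloud of a given size, and that minimum spread maps to
# an effective temperature. The 1 mm cloud size is an assumption.
HBAR = 1.0546e-34          # reduced Planck constant (J s)
K_B = 1.381e-23            # Boltzmann constant (J/K)
M_RB87 = 87 * 1.6605e-27   # mass of a rubidium-87 atom (kg)

dx = 1e-3                             # cloud size ~1 mm (assumed)
dp_min = HBAR / (2 * dx)              # Heisenberg: dx * dp >= hbar / 2
t_eff = dp_min ** 2 / (M_RB87 * K_B)  # from dp^2 / (2m) = (1/2) k_B T

print(f"effective temperature ~ {t_eff:.1e} K")  # ~1e-15 K: femtokelvin
```

    A millimeter-scale cloud thus bottoms out near 10⁻¹⁵ K, consistent with the "few femtokelvin" limit quoted above; a larger cloud would allow an even lower effective temperature.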


  • richardmitnick 6:28 am on May 8, 2015
    Tags: NOVA

    From NOVA: “Sneaking into the Brain with Nanoparticles” 



    12 Mar 2015
    Teal Burrell

    About a decade ago, Beverly Rzigalinski, a molecular biologist now at Virginia College of Osteopathic Medicine, was asked by a colleague to look into nanoparticles. “Nanoparticles? Yuck,” she thought. She off-handedly told a student to throw them on some neurons growing in the lab and take notes on what happened. She had no hope for the experiment, sure the nanoparticles would kill all the neurons, but at least she could say she tried.

    Rzigalinski was given cerium oxide nanoparticles to work with, ten-nanometer-wide particles derived from a rare earth metal. (A human hair, by comparison, is 100,000 nanometers wide.) No one had looked at their biological applications, and Rzigalinski was not particularly impressed with their résumé. Cerium oxide nanoparticles’ listed industrial uses included glass polishing and fuel combustion, nothing that seemed promising for neuroscience.

    A month and a half later, Rzigalinski noticed the dishes still sitting in the lab’s incubator. She marched straight over to the student, launching into a lecture about not wasting expensive resources on cells that were surely long dead. (Neurons in the lab typically stayed alive for only three weeks.) But the student assured her the cells treated with nanoparticles were still alive. Skeptically, she peered into the microscope and was surprised to find living, flourishing neurons. Rzigalinski has been studying nanoparticles ever since.

    Other neuroscientists are joining her, taking advantage of nanoparticles’ unique properties to identify new therapies, shuttle existing therapies into the brain, and examine the brain on a level and depth never before possible.

    Recyclable Antioxidants

    When treated with cerium oxide particles, Rzigalinski’s neurons survived for up to six months, nine times longer than usual. Cerium oxide nanoparticles may extend life in this way by neutralizing free radicals: highly reactive molecules with unpaired electrons that can damage DNA. The body’s defenses against free radicals may wear down with time; aging may be due in part to free radicals slowly accumulating unchecked.

    Damage induced by free radicals also contributes to a number of neurological diseases. Rzigalinski’s work is revealing how cerium oxide nanoparticles can prevent or reverse this destruction as well. Treating mouse models of Parkinson’s disease with cerium oxide particles prevented the loss of dopaminergic cells, whose death leads to the disease’s characteristic tremors and slow, shuffling gait. Cerium oxide nanoparticles also seemed to halt the free radical-triggered cascade of damage that typically follows traumatic brain injury; after injury, nanoparticle-treated mice had fewer signs of free radical damage and better memories compared to control-treated mice. Finally, when flies were administered nanoparticles following a stroke (in a timeframe analogous to receiving treatment upon arrival to a hospital), the treated flies not only lived longer but also had improved performance on fly-specific tasks, like quickly buzzing to the top of the cage.

    Antioxidants like vitamins C and E also sop up free radicals, but each antioxidant molecule only destroys one free radical. As Rzigalinski points out, the advantage of cerium oxide nanoparticles is that, “These nanoparticles are regenerative, so they’ll scavenge thousands, or hundreds of thousands, of free radicals.” Cerium oxide nanoparticles neutralize free radicals by snatching the electrons, shuffling them around, and eventually depositing them as water, restoring the nanoparticles to their original state, ready to abolish more free radicals. This recycling means the nanoparticles will continue working after a single dose. Rzigalinski found nanoparticles present as long as six months after injection in mice and, crucially, toxicity has not been an issue, since the dosage is so low. Single doses, or even low doses, can both prevent harmful side effects and keep costs down.

    Cerium oxide nanoparticles are effective because, after injection, they immediately get coated with proteins that help carry them into the heart, lungs, and brain—where they need to be to slash disease-causing free radicals. Not all drugs are so lucky.

    Trojan Horses

    The trouble with treating brain diseases is that the brain exists in a separate world, sealed off from the rest of the body. Cells are tightly packed around the brain’s blood vessels, forming the blood-brain barrier, a heavily guarded barricade separating the blood and its contents from the brain and spinal cord. This security system works to keep any bacterial infections and toxins in the blood from getting into the ultrasensitive brain. If small or fat-soluble enough, certain approved entities—like water, gases, alcohol, and some hormones—can leak across the border. Larger molecules require exclusive receptors to allow them through, a unique key that unlocks a particular door and grants them access.

    While it adds an extra level of protection against diseases that originate outside the brain, the blood-brain barrier complicates treating diseases within the brain. It’s a notorious nemesis of drug development, preventing an estimated 98% of potential treatments from getting in. Tomas Skrinskas of Precision NanoSystems—a biotechnology company specializing in delivering materials to cells—lamented, “The blood-brain barrier is probably the trickiest [challenge] there is.”

    In this image of the blood-brain barrier, green-stained glial cells surround the blood vessels (seen here in black), providing support for red-stained neurons. No image credit.

    To overcome this hurdle, one current solution involves flooding the blood with drugs, hoping a small proportion passes through by sheer force of will or strength in numbers. But ingesting lots of drugs can trigger nasty side effects elsewhere in the body. Another way to crack through the defenses is to hack into systems already in place for transporting small molecules. Enter nanoparticles.

    While some nanoparticles act as treatments, others play the role of Trojan horse: they pretend to be ordinary, recognized molecules, gain access through special receptors, and sneak the drugs with them as they pass through the restricted entry gates. Nate Vinzant, an undergraduate in Gina Forster’s lab at the University of South Dakota, is using iron oxide to smuggle anti-anxiety drugs into the brain.

    When injected directly into the brain, antisauvagine decreases anxiety in rats. However, direct injection into the brain isn’t a feasible treatment option for humans, and antisauvagine is incapable of passing from the blood to the brain on its own. To sneak it in, Vinzant attached antisauvagine to iron oxide nanoparticles, which are regularly taken into the brain via specific receptors. When hitched to iron, antisauvagine goes along for the ride because “the brain thinks it’s iron,” Vinzant says. Indeed, typically anxious rats given iron-bound antisauvagine displayed fewer signs of stress than untreated rats, confirming that the drug made its way from the injection site in the abdomen, through the blood, and across the barrier.

    More than just a boon for anxiety treatment, this research is a proof of principle. Other drugs could be tethered to nanoparticles like iron oxide and smuggled in through the same uptake mechanism.

    Remote Controls

    In addition to improving treatments, nanoparticles can also help researchers understand diseases and the brain in general. President Obama’s BRAIN Initiative, a program aiming to map the neurons and connections within the human brain, is initially focused on the development of novel technologies that may lead to future breakthroughs. This fall, Sarah Stanley, a post-doctoral researcher in Jeffrey Friedman’s lab at Rockefeller University, received one of the initiative’s first grants to develop technology that uses nanoparticles to control neurons.

    Stanley’s goal is to examine a diffuse network of neurons distributed throughout the brain. “We were really looking for a way of remotely modulating cells,” Stanley explains, but existing tools weren’t able to go deep or dispersed enough. For example, one popular new technique known as optogenetics, which uses light to activate neurons, wouldn’t work for Stanley’s project because light can’t penetrate very far into tissue. Another method involving uniquely designed drugs and receptors can’t be quickly turned on and off. So Stanley turned to nanoparticles.

    Ferritin nanoparticles bind and store iron, and Stanley genetically tweaked the nanoparticles to also associate with a temperature-sensitive channel. When the channel is heated, it opens, leading to the activation of certain genes.

    To generate heat, she used radio waves. Unlike light, radio waves freely penetrate tissue. They hit the ferritin nanoparticle, heating the iron core. The hot iron then heats the associated channel, causing it to open. Stanley tested the system by linking it to the production of insulin; when the radio waves heated the iron, the channel opened and the insulin gene was turned on, leading to a measurable increase in insulin. The nanoparticle is “basically acting as a sensor for radio waves,” says Stanley. It’s “transducing what would be entirely innocuous signals into enough energy to open the channel.”

    To optimize the system, Stanley first tested it in liver and stem cells of mice, but she is now moving into mouse neurons, intending to turn them on and off with her nanoparticle remote control. The radio waves’ penetration should allow researchers to use this technique to manipulate cells that are both deep and spread throughout the brain. “This tool will allow us to be able to modulate any cells in any [central nervous system] region at the same time in a freely moving mouse,” Stanley notes.

    For now, remotely controlling neurons in this way will only be used in research to discover more about these deep or dispersed networks. But eventually, it could potentially be combined with gene therapy to fine-tune protein levels. For example, in diseases with a mutated or dysfunctional gene, like Rett Syndrome, a developmental disorder causing movement and communication difficulties, gene therapy aims to replace the defective gene. Adding a functional gene isn’t always enough, however, as it must be adjusted to produce the appropriate amount of protein. Controlling the gene with radio waves and nanoparticles would allow doctors to carefully tweak the protein production.

    Although that’s a long way off, nanoparticles are claiming their spot in the future of neuroscience. In a press conference on innovative technologies at November’s Society for Neuroscience Conference in Washington, D.C., David Van Essen, a neuroscientist at Washington University in St. Louis, indirectly praised Stanley’s project. “It was really exciting to see earlier this fall that the [National Institutes of Health] has awarded about 50 new grants for some amazing, innovative ideas.” He then went on to introduce Rzigalinski’s research on Parkinson’s disease, mentioning how nanotechnology is a new tool providing hope for reversing devastating diseases.

    Neuroscientists may need to temper their excitement, however. Clinical trials for cancer treatments have stalled as some nanoparticles—including iron—have been found to generate free radicals, which can trigger cell death. But a compromise may be possible: iron nanoparticles are also being studied to enhance magnetic resonance imaging (MRI) signals and toxicity doesn’t seem to be an issue so long as the doses are kept low. If the drugs the nanoparticles carry with them are powerful enough, lower doses can be used and harmful side effects prevented.

    So far, cerium oxide nanoparticles have avoided this issue, but their relentless crusade against free radicals may lead to a different problem: free radicals are crucial to certain cellular processes, including the regulation of blood pressure and function of the immune system. The question of how much free radical scavenging is too much remains to be answered. But, considering the elevated levels of free radicals in disease, perhaps the nanoparticles will have their hands full lowering levels to a healthy range, let alone reducing them enough to cause trouble.

    It’s still too early to know whether nanoparticles will usher in a new wave of clinical treatments, but to many researchers, it’s clear that they show great promise. Rzigalinski, for example, has long since apologized to her student for her disbelieving rant. Small as they may be, nanoparticles have the ability to take on Goliath-sized tasks, bringing researchers deep inside the brain, past its defenses, ready to fight destructive forces in new ways.

    See the full article here.


  • richardmitnick 8:28 am on April 27, 2015 Permalink | Reply
    Tags: , , , NOVA   

    From NOVA: “Fracking’s Hidden Hazards” 



    22 Apr 2015
    Terri Cook

    Late on a Saturday evening in November 2011, Sandra Ladra was reclining in a chair in her living room in Prague, Oklahoma, watching television with her family. Suddenly, the house started to shake, and rocks began to fall off her stone-faced fireplace into Ladra’s lap and onto her legs, causing significant injuries that required immediate medical treatment.

    The first tremor that shook Ladra’s home was a magnitude-5.0 earthquake, an unusual event in what used to be a relatively calm state, seismically speaking. Two more struck the area over the next two days. More noteworthy, though, are her claims that the events were manmade. In a petition filed in the Lincoln County District Court, she alleges that the earthquake was the direct result of the actions of two energy companies, New Dominion and Spress Oil Company, that had injected wastewater fluids deep underground in the area.

    House damage in central Oklahoma from a magnitude 5.7 earthquake on November 6, 2011. No image credit

    Ladra’s claim is not as preposterous as it may seem. Scientists have recognized since the 1960s that humans can cause earthquakes by injecting fluids at high pressure into the ground. This was first established near Denver, Colorado, at the federal chemical weapons manufacturing facility known as the Rocky Mountain Arsenal. Faced with the thorny issue of how to get rid of the arsenal’s chemical waste, the U.S. Army drilled a 12,044-foot-deep disposal well and began routinely injecting wastewater into it in March 1962.

    Less than seven weeks later, earthquakes were reported in the area, a region that had last felt an earthquake in 1882. Although the Army initially denied any link, when geologist David Evans demonstrated a strong correlation between the Arsenal’s average injection rate and the frequency of earthquakes, the Army agreed to halt its injections.

    Since then, direct measurements, hydrologic modeling, and other studies have shown that earthquakes like those at the Rocky Mountain Arsenal occur when injection increases the fluid pressure in the pores and fractures of rocks or soil. The increased pore pressure reduces the frictional force that resists fault slip, effectively lubricating preexisting faults and potentially triggering earthquakes on favorably oriented ones.
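The mechanism can be summarized with the Coulomb failure criterion: a fault slips when shear stress overcomes frictional resistance, and that resistance scales with the normal stress clamping the fault minus the pore pressure. This sketch is illustrative only; the stress values and friction coefficient below are hypothetical, not measurements from any real well.

```python
# Illustrative sketch of the Coulomb failure criterion: injection raises
# pore pressure p, lowering the effective normal stress (sigma_n - p)
# that clamps a fault, so less shear stress is needed to trigger slip.
# All values are hypothetical, chosen only to show the effect.

def fault_slips(tau, sigma_n, pore_pressure, mu=0.6, cohesion=0.0):
    """Return True if shear stress tau overcomes frictional resistance (MPa)."""
    resistance = cohesion + mu * (sigma_n - pore_pressure)
    return tau >= resistance

tau = 35.0       # shear stress on the fault, MPa (hypothetical)
sigma_n = 70.0   # normal stress clamping the fault, MPa (hypothetical)

print(fault_slips(tau, sigma_n, pore_pressure=5.0))   # False: stable at ambient pressure
print(fault_slips(tau, sigma_n, pore_pressure=15.0))  # True: slips after injection
```

Note that the shear stress never changes here; raising the pore pressure alone is enough to push the fault past failure, which is why injection can trigger quakes on faults that were already close to slipping.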

    Although injection-induced earthquakes have become commonplace across broad swaths of the central and eastern U.S over the last few years, building codes—and the national seismic hazard maps used to update them—don’t currently take this increased hazard into account. Meanwhile, nagging questions—such as how to definitively diagnose an induced earthquake, whether manmade quakes will continue to increase in size, and how to judge whether mitigation measures are effective—have regulators, industry, and the public on shaky ground.

    Surge in Seismicity

    The quake that shook Ladra’s home is one example of the dramatic increase in seismicity that began across the central and eastern U.S. in 2001. Once considered geologically stable, the midcontinent has grown increasingly feisty, recording an 11-fold increase in the number of quakes between 2008 and 2011 compared with the previous 31 years, according to a study published in Geology in 2013.

    The increase has been especially dramatic in Oklahoma, which in 2014 recorded 585 earthquakes of magnitude 3.0 or greater—more than in the previous 35 years combined. “The increase in seismicity is huge relative to the past,” says Randy Keller, who retired in December after serving for seven years as the director of the Oklahoma Geological Survey (OGS).

    Yesterday, Oklahoma finally acknowledged that the uptick in earthquakes is likely due to wastewater disposal. “The Oklahoma Geological Survey has determined that the majority of recent earthquakes in central and north-central Oklahoma are very likely triggered by the injection of produced water in disposal wells,” the state reported on a new website. While the admission is an about-face for the government, which had previously questioned any link between the two, it doesn’t coincide with any new regulations intended to stop the earthquakes or improve building codes to cope with the tremors. For now, residents of Oklahoma may be just as vulnerable as they have been.

    This surge in seismicity has been accompanied by a spike in the number of injection wells and the corresponding amount of wastewater disposed via those wells. According to the Railroad Commission of Texas, underground wastewater injection in Texas increased from 46 million barrels in 2005 to nearly 3.5 billion barrels in 2011. Much of that fluid has been injected in the Dallas area, where prior to 2008, only one possible earthquake large enough to be noticed by people had occurred in recorded history. Since 2008, the U.S. Geological Survey (USGS) has documented over 120 quakes in the area.

    The increase in injection wells is due in large part to the rapid expansion of the shale-gas industry, which has unlocked vast new supplies of natural gas and oil that would otherwise be trapped in impermeable shale formations. The oil and gas is released by a process known as fracking, which injects a mix of water, chemicals, and sand at high enough pressure to fracture the surrounding rock, forming cracks through which the hydrocarbons, mixed with large volumes of fluid, can flow. The resulting mixture is pumped to the surface, where the hydrocarbons are separated out, leaving behind billions of gallons of wastewater, much of which is injected back underground.

    Many scientists, including Keller, believe there is a correlation between the two increases. “It’s hard to look at where the earthquakes are, and where the injection wells are, and not conclude there’s got to be some connection,” he says. Rex Buchanan, interim director of the Kansas Geological Survey (KGS), agrees there’s a correlation for most of the recent tremors in his state. “Certainly we’re seeing a huge spike in earthquakes in an area where we’ve also got big disposal wells,” he says. But there have been other earthquakes whose cause “we’re just not sure about,” Buchanan says.

    Diagnosing an Earthquake

    Buchanan’s uncertainty stems in part from the fact that determining whether a specific earthquake was natural or induced by human activity is highly controversial. Yet this is the fundamental scientific question at the core of Ladra’s lawsuit and dozens of similar cases that have been filed across the heartland over the last few years. Beyond assessing legal liability, this determination is also important for assessing potential seismic hazard as well as for developing effective methods of mitigation.

    One reason it’s difficult to assess whether a given earthquake was human-induced is that both types of earthquakes look similar on seismograms; they can’t be distinguished by casual observation. A second is that manmade earthquakes are unusual events; only about 0.1 percent of injection wells in the U.S. have been linked to induced earthquakes large enough to be felt, according to Arthur McGarr, a geologist at the USGS Earthquake Science Center. Finally, scientists have comparatively few unambiguous examples of induced earthquakes. That makes it difficult to create a yardstick against which potential “suspects” can be compared. Like a team of doctors attempting to diagnose a rare disease, scientists must examine all the “symptoms” of an earthquake to make the best possible pronouncement.

    To accomplish this, two University of Texas seismologists developed a checklist of seven “yes” and “no” questions that focus on four key characteristics: the area’s background seismicity, the proximity of an earthquake to an active injection well, the timing of the seismicity relative to the onset of injection, and the injection practices. Ultimately, “if an injection activity and an earthquake sequence correlate in space and time, with no known previous earthquake activity in the area, the earthquakes were likely induced,” wrote McGarr and co-authors in Science earlier this year.
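The checklist logic can be sketched in a few lines of code. The question wordings below are paraphrases of the four key characteristics the article describes, not quotations from the University of Texas checklist, and the scoring thresholds are a simplification of how such a checklist is applied in practice.

```python
# Toy sketch of a seven-question "induced or natural?" checklist, with
# paraphrased (not quoted) questions covering the four characteristics:
# background seismicity, proximity, timing, and injection practices.
# The scoring rule (more "yes" answers -> more likely induced) is a
# simplification for illustration.

QUESTIONS = [
    "Are these the first known earthquakes of this character in the region?",
    "Is there a clear correlation between injection activity and seismicity?",
    "Are the epicenters close to an active injection well?",
    "Do the earthquakes occur at or near the injection depth?",
    "If not, is there a known structure that could channel fluid flow there?",
    "Are injected volumes and pressures large enough to change stress at depth?",
    "Did seismicity rates change after injection began?",
]

def classify(answers):
    """answers: list of 7 booleans, one per question. Returns a rough verdict."""
    yes = sum(answers)
    if yes >= 6:
        return "likely induced"
    if yes <= 1:
        return "likely natural"
    return "ambiguous"

# A Prague-like case: correlates in space, but timing and background
# seismicity are disputed, leaving it in the uncomfortable middle ground.
print(classify([False, False, True, True, True, True, False]))  # ambiguous
```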

    Oilfield waste arrives by tanker truck at a wastewater disposal facility near Platteville, Colorado.

    These criteria, however, remain open to interpretation, as the Prague example illustrates. Ladra’s petition cites three scientific studies that have linked the increase in seismicity in central Oklahoma to wastewater injection operations. A Cornell University-led study, which specifically examined the earthquake in which Ladra claims she was injured, concluded that the event began within about 200 meters of active injection wells—closely correlating in space—and was therefore induced.

    In a March 2013 written statement, the OGS had concluded that this earthquake was the result of natural causes, as were two subsequent tremors that shook Prague over the next few days. The second earthquake, a magnitude-5.7 event that struck less than 24 hours later, was the largest earthquake ever recorded in Oklahoma.

    The controversy hinged on several of the “symptoms,” including the timing of the seismicity. Prior to the Prague sequence, scientists believed that a lag time of weeks to months between the initiation of injection and the onset of seismicity was typical. But in Prague, the fluid injection had been occurring for nearly 20 years. The OGS therefore concluded that there was no clear temporal correlation. By contrast, the Cornell researchers decided that the diagnostic time scale of induced seismicity needs to be reconsidered.

    Another key issue that has been raised by the OGS is that of background seismicity. Oklahoma has experienced relatively large earthquakes in the past, including a magnitude-5.0 event that occurred in 1952 and more than 10 earthquakes of magnitude 4.0 or greater since then, so the Prague sequence was hardly the first bout of shaking in the region.

    The uncertainty associated with both these characteristics places the Prague earthquakes in an uncomfortable middle ground between earthquakes that are “clearly not induced” and “clearly induced” on the University of Texas checklist, making a definitive diagnosis unlikely. Meanwhile, the increasing frequency of earthquakes across the midcontinent and the significant size of the Prague earthquakes are causing scientists to rethink the region’s potential seismic hazard.

    Is the Public at Risk?

    Earthquake hazard is a function of multiple factors, including event magnitude and depth, recurrence interval, and the material through which the seismic waves propagate. These data are incorporated into calculations the USGS uses to generate the National Seismic Hazard Maps.

    Updated every six years, these maps indicate the potential for severe ground shaking across the country over a 50-year period and are used to set design standards for earthquake-resistant construction. The maps influence decisions about building codes, insurance rates, and disaster management strategies, with a combined estimated economic impact totaling hundreds of billions of dollars per year.

    When the latest version of the maps was released in July, the USGS intentionally excluded the hazard from manmade earthquakes. Part of the reason was the timing, according to Nicolas Luco, a research structural engineer at the USGS. The maps are released on a schedule that dovetails with building code revisions, so they couldn’t delay the maps even though the induced seismicity update wasn’t ready, he says.

    Such changes, however, may take years to implement. Luco notes that the building code revisions based upon the previous version of the USGS hazard maps, released in 2008, just became law in California in 2014, a six-year lag in one of the most seismically-threatened states in the country.

    Instead, the USGS is currently developing a separate procedure, which they call a hazard model, to account for the hazard associated with induced seismicity. The new model may raise the earthquake hazard level substantially in some parts of the U.S. where it has previously been quite low, according to McGarr. But there are still open questions about how to account for induced seismicity in maps of earthquake shaking and in building codes, Luco says.

    McGarr believes that the new hazard calculations will result in more rigorous building codes for earthquake-resistant construction and that adhering to these changes will affect the construction as well as the oil, gas, and wastewater injection industries. “Unlike natural earthquakes, induced earthquakes are caused by man, not nature, and so the oil and gas industry may be required to provide at least some of the funds needed to accommodate the revised building codes,” he says.

    But Luco says it may not make sense to incorporate the induced seismicity hazard, which can change from year to year, into building codes that are updated every six years. Over-engineering is also a concern due to the transient nature of induced seismicity. “Engineering to a standard of earthquake hazard that could go away, that drives up cost,” says Justin Rubinstein, a seismologist with the USGS Earthquake Science Center. A further complication, according to Luco, is that building code changes only govern new construction, so they don’t upgrade vulnerable existing structures, for which retrofit is generally not mandatory.

    The occurrence of induced earthquakes clearly compounds the risk to the public. “The risk is higher. The question is, how much higher?” Luco asks. Building codes are designed to limit the risk of casualties associated with building collapse—“and that usually means bigger earthquakes,” he says. So the critical question, according to Luco, is, “Can we get a really large induced earthquake that could cause building collapses?”

    Others are wondering the same thing. “Is it all leading up to a bigger one?” asks Keller, former director of the OGS. “I don’t think it’s clear that it is, but it’s not clear that it isn’t, either,” he says. Recalling a magnitude-4.8 tremor that shook southern Kansas in November, KGS’ Buchanan agrees. “I don’t think there’s any reason to believe that these things are going to magically stop at that magnitude,” he says.

    Coping with Quakes

    After assessing how much the risk to the public has increased, our society must decide upon the best way to cope with human-induced earthquakes. A common regulatory approach, one which Oklahoma has adopted, has been to implement “traffic light” control systems. Normal injection can proceed under a green light, but if induced earthquakes begin to occur, the light changes to yellow, at which point the operator must reduce the volume, rate of injection, or both to avoid triggering larger events. If larger earthquakes strike, the light turns red, and further injection is prohibited. Such systems have recently been implemented in Oklahoma, Colorado, and Texas.
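The traffic-light protocol is essentially a simple state machine keyed to the largest recent earthquake. A minimal sketch follows; the magnitude thresholds here are hypothetical, since real systems set them by regulation and local conditions, and actual rules may also weigh event counts and trends rather than a single magnitude.

```python
# Minimal sketch of a "traffic light" control protocol for injection wells.
# Thresholds are hypothetical placeholders, not values from any state's rules.

GREEN_MAX = 2.0   # below this magnitude: normal operation (hypothetical)
RED_MIN = 4.0     # at or above this magnitude: injection halted (hypothetical)

def traffic_light(largest_recent_magnitude):
    """Map the largest recent induced-event magnitude to an operator action."""
    if largest_recent_magnitude < GREEN_MAX:
        return "green: normal injection may proceed"
    if largest_recent_magnitude < RED_MIN:
        return "yellow: reduce injection volume and/or rate"
    return "red: further injection prohibited"

print(traffic_light(1.2))  # green: normal injection may proceed
print(traffic_light(3.1))  # yellow: reduce injection volume and/or rate
print(traffic_light(4.8))  # red: further injection prohibited
```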

    But how will we know if these systems are effective? The largest Rocky Mountain Arsenal-related earthquakes, three events between magnitudes 5.0 and 5.5, all occurred more than a year after injection had ceased, so it’s unclear for how long the systems should be evaluated. Their long-term effectiveness is also uncertain because the ability to control the seismic hazard decreases over time as the pore pressure effects move away from the well, according to Shemin Ge, a hydrogeologist at the University of Colorado, Boulder.

    Traffic light systems also rely on robust seismic monitoring networks that can detect the initial, very small injection-induced earthquakes, according to Ge. To identify hazards while there is still sufficient time to take corrective action, it’s ideal to identify events of magnitude 2.0 or less, wrote McGarr and his co-authors in Science. However, the current detection threshold across much of the contiguous U.S. is magnitude 3.0, he says.

    Kansas is about to implement a mitigation approach that focuses on reducing injection in multiple wells across areas believed to be underlain by faults, rather than focusing on individual wells, according to Buchanan. He already acknowledges that it will be difficult to assess the success of this new approach because in the past, the KGS has observed reductions in earthquake activity when no action has been taken. “How do you tease apart what works and what doesn’t when you get all this variability in the system?” he asks.

    This climate of uncertainty leaves regulators, industry, and the public on shaky ground. As Ladra’s case progresses, the judicial system will decide if two energy companies are to blame for the quake that damaged her home. But it’s our society that must ultimately decide how, and even if, we should cope with manmade quakes, and what level of risk we’re willing to accept.

    See the full article here.


  • richardmitnick 9:06 am on April 22, 2015 Permalink | Reply
    Tags: , , , NOVA   

    From NOVA: “The EPA’s Natural Gas Problem” 



    11 Feb 2015
    Phil McKenna

    When U.S. President Barack Obama recently announced plans to rein in greenhouse gas emissions from oil and gas production, the opposing drumbeats from industry and environmental groups were as fast as they were relentless. The industry group America’s Natural Gas Alliance bombarded Twitter with paid advertisements stating how little their industry actually emits. Press releases from leading environmental organizations deploring the plan’s reliance on largely voluntary actions flooded email inboxes.

    Opposition to any new regulation by industry, however, isn’t as lockstep as its lobbying groups would have us believe. At the same time, environmentalists’ focus on voluntary versus mandatory measures misses a much graver concern.

    The White House and EPA are seeking to regulate methane emissions from the oil and gas industry.

    The joint White House and U.S. Environmental Protection Agency proposal would reduce emissions of methane, the primary component of natural gas, by 40–45% from 2012 levels in the coming decade. It’s a laudable goal. While natural gas is relatively clean burning—emitting roughly half the amount of carbon dioxide per unit of energy as coal—it is an incredibly potent greenhouse gas if it escapes into the atmosphere unburned.

    Methane emissions from the oil and gas sector are estimated to be equivalent to the pollution from 180 coal-fired power plants, according to studies done by the Environmental Defense Fund (EDF), an environmental organization. Yet there is a problem: despite that estimate, no one, including EDF, knows for certain how much methane the oil and gas industry actually emits.

    The EPA publishes an annual inventory of U.S. Greenhouse Gas emissions, which it describes as “the most comprehensive accounting of total greenhouse gas emissions for all man-made sources in the United States.” But their estimates for the natural gas industry are, by their own admission, outdated, based on limited data, and likely significantly lower than actual emissions.

    The Baseline

    Getting the number right is extremely important as it will serve as the baseline for any future reductions. “The smaller the number they start with, the smaller the amount they have to reduce in coming years by regulation,” says Anthony Ingraffea, a professor of engineering at Cornell University in Ithaca, New York. “A 45% reduction on a rate that is too low will be a very small reduction. From a scientific perspective, this doesn’t amount to a hill of beans.”

    Ingraffea says methane emissions are likely several times higher than what the EPA estimates. (Currently, the EPA says that up to 1.8% of the natural gas distributed and produced in the U.S. escapes to the atmosphere.) Even if Ingraffea is right, it’s still a small percentage, but methane’s potency as a greenhouse gas makes even a small release incredibly significant. Over 100 years, methane traps 34 times more heat in the atmosphere than carbon dioxide. If you are only looking 20 years into the future, a time frame given equal weight by the United Nations’ Intergovernmental Panel on Climate Change, methane is 86 times more potent than carbon dioxide.
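The arithmetic behind those comparisons is a simple multiplication by a global warming potential (GWP), using the 34x and 86x figures cited above; the leaked tonnage in the example is hypothetical.

```python
# CO2-equivalent emissions depend heavily on the chosen time horizon,
# because methane's global warming potential (GWP) relative to CO2 is
# 86 over 20 years but 34 over 100 years (the values cited in the article).

GWP_CH4 = {20: 86, 100: 34}

def co2_equivalent(tonnes_ch4, horizon_years):
    """Tonnes of CO2 with the same warming effect over the given horizon."""
    return tonnes_ch4 * GWP_CH4[horizon_years]

leak = 1000  # tonnes of methane leaked (hypothetical)
print(co2_equivalent(leak, 100))  # 34000 tonnes CO2-equivalent
print(co2_equivalent(leak, 20))   # 86000 tonnes CO2-equivalent
```

The same leak looks two and a half times worse on the 20-year horizon, which is why the choice of time frame matters so much in the natural-gas-versus-coal debate.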

    After being damaged during Hurricane Ike in September 2008, a natural gas tank spews methane near Sabine Pass, Texas.

    If Ingraffea is right, the amount of methane released into the atmosphere from oil and gas wells, pipelines, processing and storage facilities has a warming effect approaching that of the country’s 557 coal-fired power plants. Reducing such a high rate of emissions by 40–45% would certainly help stall climate change. It would also likely be much more difficult to achieve than the cuts industry and environmental groups are currently debating.

    Ingraffea first called attention to what he and others believe are EPA underestimates in 2011 when he published a highly controversial paper along with fellow Cornell professor Robert Howarth. Their research suggested the amount of methane emitted by the natural gas industry was so great that relying on natural gas was actually worse for the climate than burning coal.

    Following the recent White House and EPA announcement, industry group America’s Natural Gas Alliance (ANGA) stated that they have reduced emissions by 17% since 1990 while increasing production by 37%. “We question why the administration would single out our sector for regulation, given our demonstrated reductions,” the organization wrote in a press release following the White House’s proposed policies. ANGA bases its emissions reduction on the EPA’s own figures and stands by the data. “We like to have independent third party verification, and we use the EPA’s figures for that,” says ANGA spokesman Daniel Whitten.

    Shifting Estimates

    But are the EPA estimates correct, and are they sufficiently independent? To come up with its annual estimate, the EPA doesn’t make direct measurements of methane emissions each year. Rather, they multiply emission factors—the volume of a gas thought to be emitted by a particular source, like a mile of pipeline or a belching cow—by the number of such sources in a given area. For the natural gas sector, emission factors are based on a limited number of measurements conducted in the early 1990s in industry-funded studies.
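This bottom-up method amounts to a sum of factor-times-count products. A sketch of the calculation follows; the emission factors and source counts are invented for illustration and are not EPA figures.

```python
# Sketch of a bottom-up emissions inventory as described above: multiply an
# emission factor (volume emitted per source) by the count of such sources,
# then sum across source categories. All numbers are made up for illustration.

def inventory(sources):
    """sources: list of (emission_factor, source_count) pairs.
    Returns total estimated emissions in the factor's units."""
    return sum(factor * count for factor, count in sources)

sources = [
    (0.8, 500_000),  # tonnes CH4 per well-year x number of wells (hypothetical)
    (2.5, 40_000),   # tonnes CH4 per pipeline-mile x miles (hypothetical)
]
print(inventory(sources))  # total estimated tonnes of CH4
```

The structure makes the vulnerability obvious: if the per-source factors are outdated or understated, every downstream total inherits the error, no matter how carefully the source counts are tallied.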

    In 2010 the EPA increased its emissions factors for methane from the oil and natural gas sector, citing “outdated and potentially understated” emissions. The end result was a more than doubling of its annual emissions estimate from the prior year. In 2013, however, the EPA reversed course, lowering estimates for key emissions factors for methane at wells and processing facilities by 25–30%. When reached for comment, the EPA pointed me to their existing reports.

    The change was not driven by better scientific understanding but by political pressure, Howarth says. “The EPA got huge pushback from industry and decreased their emissions again, and not by collecting new data.” The EPA states that the reduction in emissions factors was based on “a significant amount of new information” that the agency received about the natural gas industry.

    However, a 2013 study published in the journal Geophysical Research Letters concludes that “the main driver for the 2013 reduction in production emissions was a report prepared by the oil and gas industry.” The report was a non-peer reviewed survey of oil and gas companies conducted by ANGA and the American Petroleum Institute.

    The EPA’s own inspector general released a report that same year that was highly critical of the agency’s estimates of methane and other harmful gases. “Many of EPA’s existing oil and gas production emission factors are of questionable quality because they are based on limited and/or low quality data,” the report stated. It concluded that the agency likely underestimates emissions, which “hampers [the] EPA’s ability to accurately assess risks and air quality impacts from oil and gas production activities.”


    Soon after the EPA lowered its emissions estimates, a number of independent studies based on direct measurements found higher methane emissions. In November 2013, a study based on direct measurements of atmospheric methane concentrations across the United States concluded actual emissions from the oil and gas sector were 1.5 times higher than EPA estimates. The study authors noted, “the US EPA recently decreased its methane emission factors for fossil fuel extraction and processing by 25–30% but we find that [methane] data from across North America instead indicate the need for a larger adjustment of the opposite sign.”

    In February 2014, a study published in the journal Science reviewed 20 years of technical literature on natural gas emissions in the U.S. and Canada and concluded that “official inventories consistently underestimate actual CH4 emissions.”

    “When you actually go out and measure methane emissions directly, you tend to come back with measurements that are higher than the official inventory,” says Adam Brandt, lead author of the study and an assistant professor of energy resources engineering at Stanford University. Brandt and his colleagues did not attempt to make an estimate of their own, but stated that in a worst-case scenario total methane emissions from the oil and gas sector could be three times higher than the EPA’s estimate.

    On January 22, eight days after the White House’s announcement, another study found similarly high emissions from a sector of the natural gas industry that is often overlooked. The study made direct measurements of methane emissions from natural gas pipelines and storage facilities in and around Boston, Massachusetts, and found that they were 3.9 times higher than the EPA’s estimate for the “downstream” sector, or the parts of the system that transmit, distribute, and store natural gas.

    Most natural gas leaks are small, but large ones can have catastrophic consequences. The wreckage above was caused by a leak in San Bruno, California, in 2010.

    Boston’s aging, leak-prone, cast-iron pipelines likely make the city leakier than most, but the high volume of emissions—losses around the city total roughly $1 billion worth of natural gas per decade—is nonetheless surprising. The majority of methane emissions were previously believed to occur “upstream” at wells and processing facilities. Efforts to curb emissions, including the recent goals set by the White House, have overlooked the smaller pipelines that deliver gas to end users.

    “Emissions from end users have been only a very small part of the conversation on emissions from natural gas,” says lead author Kathryn McKain, an atmospheric scientist at Harvard University. “Our findings suggest that we don’t understand the underlying emission processes, which is essential for creating effective policy for reducing emissions.”

    The Boston study was one of 16 recent or ongoing studies coordinated by EDF to try to determine just how much methane is actually being emitted from the industry as a whole. Seven studies, focusing on different aspects of oil and gas industry infrastructure, have been published thus far. Two of the studies, including the recent Boston study, have found significantly higher emission rates. One study, conducted in close collaboration with industry, found lower emissions. EDF says it hopes to have all studies completed by the end of 2015. The EPA told me it will take the studies into account for possible changes in its current methane emission factors.

    Fraction of a Percent

    EDF is simultaneously working with industry to try to reduce methane emissions. A recent study commissioned by the environmental organization concluded the US oil and gas industry could cut methane emissions by 40% from projected 2018 levels at a cost of less than one cent per thousand cubic feet of natural gas, which today sells for about $5. The reductions could be achieved with existing emissions-control technologies and policies.
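    As a back-of-envelope check on the figures above (a cost of less than one cent per thousand cubic feet against a roughly $5 gas price), the implied cost share can be computed directly. The numbers are the article’s; the arithmetic is ours.

```python
# Abatement cost as a fraction of the price of natural gas.
abatement_cost_per_mcf = 0.01   # dollars per Mcf, upper bound ("less than one cent")
gas_price_per_mcf = 5.00        # dollars per Mcf, approximate price cited above

share = abatement_cost_per_mcf / gas_price_per_mcf
print(f"Cost share of gas price: {share:.1%}")  # prints "Cost share of gas price: 0.2%"
```

    At these numbers the share works out to about a fifth of a percent, in the same ballpark as the fraction-of-a-percent framing used by EDF.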

    “We are talking about one third or one fourth of a percent of the price of gas to meet these goals,” says Steven Hamburg, chief scientist for EDF. The 40–45% reduction goal recently announced by the White House is nearly identical to the level of cuts analyzed by EDF. To achieve the reduction, the White House proposes mandatory changes to new oil and gas infrastructure as well as voluntary measures for existing infrastructure.

    Thomas Pyle, president of the Institute for Energy Research, an industry organization, says industry is already reducing its methane emissions and doesn’t need additional rules. “It’s like regulating ice cream producers not to spill any ice cream during the ice cream making process,” he says. “It is self-evident for producers to want to capture this product with little or no emissions and make money from it.”

    Unlike making ice cream, however, natural gas producers often vent their product intentionally as part of the production process. One of the biggest sources of methane emissions in natural gas production is gas that is purposely vented from pneumatic devices, which use pressurized methane to open and close valves and operate pumps. They typically release or “bleed” small amounts of gas during their operation.

    Such equipment is widely used throughout the natural gas extraction, processing, and transmission process. A recent study by the Natural Resources Defense Council (NRDC) estimates that natural gas-driven pneumatic equipment vents 1.6–1.9 million metric tons of methane each year. The figure accounts for nearly one-third of all methane lost by the natural gas industry, as estimated by the EPA.
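    The pneumatic-venting figure can be cross-checked against the EPA total it implies: if 1.6–1.9 million metric tons is roughly one-third of the industry’s methane losses, the total is about three times that range. The numbers are the article’s; the arithmetic is ours.

```python
# Implied EPA total if pneumatic venting is roughly one-third of the whole.
pneumatic_low, pneumatic_high = 1.6e6, 1.9e6  # metric tons of methane per year

implied_total_low = pneumatic_low * 3
implied_total_high = pneumatic_high * 3
print(f"Implied EPA total: {implied_total_low / 1e6:.1f}"
      f"-{implied_total_high / 1e6:.1f} million metric tons/year")
```

    The rough 4.8–5.7 million metric ton total is a consistency check, not an independent estimate; it simply restates the NRDC share in absolute terms.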

    A natural gas distribution facility

    “Low-bleed” or “zero-bleed” controllers are available, though they are more expensive. The latter use compressed air or electricity to operate instead of pressurized natural gas, or they capture methane that would otherwise be vented and reuse it. “Time and time again we see that we can operate this equipment without emissions or with very low emissions,” Hamburg says. Increased monitoring and repair of unintended leaks at natural gas facilities could reduce an additional third of the industry’s methane emissions, according to the NRDC study.

    Environmental organizations have come out in strong opposition to the lack of mandatory regulations for existing infrastructure, which will account for nearly 90% of methane emissions in 2018, according to a recent EDF report.

    While industry groups oppose mandatory regulations on new infrastructure, at least one industry leader isn’t concerned. “I don’t believe the new regulations will hurt us at all,” says Mark Boling, an executive vice president at Houston-based Southwestern Energy Company, the nation’s fourth-largest producer of natural gas.

    Boling says leak monitoring and repair programs his company initiated in late 2013 will pay for themselves in 12 to 18 months through reduced methane losses. He says the company has also replaced a number of pneumatic devices with zero-bleed, solar-powered electric pumps, and it is now testing air compressors powered by fuel cells to replace additional methane-bleeding equipment. In November, Southwestern Energy launched ONE Future, a coalition of companies from across the natural gas industry whose goal is to lower the industry’s methane emissions below one percent.

    Based on the EPA emissions rate of 1.8% and fixes identified by EDF and NRDC, their goal seems attainable. But what if the actual emissions rate is significantly higher, as Howarth and Ingraffea have long argued and recent studies seem to suggest? “We can sit here and debate whose numbers are right, ‘Is it 4%? 8%? Whatever,’ ” Boling says. “But there are cost effective opportunities out there to reduce emissions, and we need to step up and do it.”

    See the full article here.


  • richardmitnick 9:13 am on April 9, 2015 Permalink | Reply
    Tags: , , NOVA   

    From NOVA: “Quick Test That Measures a Patient’s Own Proteins Could Slash Antibiotic Overuse” 



    19 Mar 2015
    R.A. Becker

    If you’ve ever been prescribed antibiotics to fight the flu, you’ve experienced first-hand how difficult it is for doctors to distinguish between bacterial infections and viral infections (the flu is the latter). Oftentimes, doctors will prescribe antibiotics just in case it’s a bacterial infection so the patient will recover sooner. Early administration of antibiotics can halt bacterial infections before they spiral out of control, but the practice has led to the overuse of our most precious drugs.

    Fortunately, a team of researchers announced yesterday that they may have solved this problem in the form of a blood test. It works by detecting the proteins produced by a patient’s own body in response to infection to quickly determine whether they have been sickened by a bacterial strain or a virus. It returns a result within minutes rather than the hours or days required with typical clinical tests.

    The new test could lengthen the useful life of antibiotics such as clindamycin, one of the most essential drugs, according to the World Health Organization.

    Today’s tests aren’t just slow; they also require that the infectious agent has multiplied inside the patient’s body to levels high enough to be detected, and they can misidentify the root cause when a person has concurrent infections. To overcome these hurdles, scientists from Israeli biotech company MeMed looked to the patient’s own body to see which molecules the immune system produces when fighting off different kinds of infections.

    The test performed well, properly identifying the type of infection most of the time. The researchers even report that it is more accurate than typical clinical diagnostics. Here’s Smitha Mundasad, reporting for BBC News on the new test:

    It relies on the fact that bacteria and viruses can trigger different protein pathways once they infect the body.

    A novel one, called TRAIL, was particularly high in viral infections and depleted during bacterial ones. They combined this with two other proteins – one is already used in routine practice.

    The rapid test could slow the rampant spread of antibiotic resistance in bacteria. Inappropriately prescribing antibiotics to combat a virus like the flu or using too low a dose of antibiotics encourages bacteria to evolve traits that protect them from commonly used drugs.

    Antibiotic misuse is not a small problem. The CDC estimates that nearly half of all antibiotics should never have been prescribed in the first place, and antibiotic resistant bacteria infect around 2 million people each year in the United States, killing over 23,000 of those infected.

    Virus expert Jonathan Ball from the University of Nottingham is cautiously optimistic, telling the BBC’s Mundasad that while MeMed’s new test might reduce inappropriate antibiotic use, “It will be important to see how it performs in the long term.”

    See the full article here.


  • richardmitnick 7:51 am on April 6, 2015 Permalink | Reply
    Tags: , , , NOVA   

    From NOVA: “Silver Nanoparticles Could Give Millions Microbe-free Drinking Water” 



    24 Mar 2015
    Cara Giaimo

    Microbe-free drinking water is hard to come by in many areas of India.

    Chemists at the Indian Institute of Technology Madras have developed a portable, inexpensive water filtration system that is twice as efficient as existing filters. The filter doubles the well-known and oft-exploited antimicrobial effects of silver by employing nanotechnology. The team, led by Professor Thalappil Pradeep, plans to use it to bring clean water to underserved populations in India and beyond.

    Left alone, most water is teeming with scary things. A recent study showed that your average glass of West Bengali drinking water might contain E. coli, rotavirus, cryptosporidium, and arsenic. According to the World Health Organization, nearly a billion people worldwide lack access to clean water, and about 80% of illnesses in the developing world are water-related. India in particular has 16% of the world’s population and less than 3% of its fresh water supply. Ten percent of India’s population lacks water access, and every day about 1,600 people die of diarrhea, which is caused by waterborne microbes.

    Pradeep has spent over a decade using nanomaterials to chemically sift these pollutants out. He started by tackling endosulfan, a pesticide that was hugely popular until scientists determined that it destroyed ozone and brain cells in addition to its intended insect targets. Endosulfan is now banned in most places, but leftovers persist in dangerous amounts. After a bout of endosulfan poisoning in the southwest region of Kerala, Pradeep and his colleagues developed a drinking water filter that breaks the toxin down into harmless components. They licensed the design to a filtration company, which took it to market in 2007. It was “the first nano-chemistry based water product in the world,” he says.

    But Pradeep wanted to go bigger. “If pesticides can be removed by nanomaterials,” he remembers thinking, “can you also remove microbes without causing additional toxicity?” For this, Pradeep’s team put a new twist on a tried-and-true element: silver.

    Silver’s microbe-killing properties aren’t news—in fact, people have known about them for centuries, says Dr. David Barillo, a trauma surgeon and the editor of a recent silver-themed supplement of the journal Burns.

    “Alexander the Great stored and drank water in silver vessels when going on campaigns” in 335 BC, he says, and 19th century frontier-storming Americans dropped silver coins into their water barrels to suppress algae growth. During the space race, America and the Soviet Union both developed silver-based water purification techniques (NASA’s was “basically a silver wire sticking in the middle of a pipe that they were passing electricity through,” Barillo says). And new applications keep popping up: Barillo himself pioneered the use of silver-infused dressings to treat wounded soldiers in Afghanistan. “We’ve really run the gamut—we’ve gone from 300 BC to present day, and we’re still using it for the same stuff,” he says.

    No one knows exactly how small amounts of silver are able to kill huge swaths of microbes. According to Barillo, it’s probably a combination of attacks on the microbe’s enzymes, cell wall, and DNA, along with the buildup of silver free radicals, which are studded with unpaired electrons that gum up cellular systems. These microbe-mutilating strategies are so effective that they obscure our ability to study them, because we have nothing to compare them to. “It’s difficult to make something silver-resistant, even in the lab where you’re doing it intentionally,” Barillo says.

    But unlike equal-opportunity killers like endosulfan, silver knocks out the monsters and leaves the good guys alone. In low concentrations, it’s virtually harmless to humans. “It’s not a carcinogen, it’s not a mutagen, it’s not an allergen,” Barillo says. “It seems to have no purpose in human physiology—it’s not a metal that we need to have in our bodies like copper or magnesium. But it doesn’t seem to do anything bad either.”

    Though silver’s mysterious germ-killing properties are old news, Pradeep is taking advantage of them in new ways. The particles his team works with are less than 50 nanometers long on any one side—about four times smaller than the smallest bacteria. Working at this level allows him greater control over desired chemical reactions, and the ability to fine-tune his filters to improve efficiency or add specific effects. Two years ago, his team developed their biggest hit yet—a combination filter that kills microbes with silver and breaks down chemical toxins with other nanoparticles. It’s portable, works at room temperature, and doesn’t require electricity. Pradeep is working with the government to make these filters available to underserved communities. Currently 100,000 households have them; “by next year’s end,” he hopes, “it will reach 600,000 people.”

    The latest filter goes one better: it “tunes” the silver with carbonate, a negatively-charged ion that strips protective proteins from microbe cell membranes. This leaves the microbes even more vulnerable to silver’s attack. “In the presence of carbonate, silver is even more effective,” he explains, so he can use less of it: “Fifty parts per billion can be brought down to [25].” Unlike the earlier filter, this one kills viruses, too—good news, since according to the National Institute of Virology, most filters do not.

    Going from 50 parts per billion of silver to 25 may not seem like a huge leap. But for Pradeep—who aims to help a lot of people for a long time—every little bit counts. Filters that contain less silver are less expensive to produce. This is vital if you want to keep costs low enough for those who need them most to buy them, or to entice the government into giving them away. He estimates that one of his new filter units will cost about $2 per year, proportionately less than what the average American pays for water.

    Using less silver also improves sustainability. “Globally, silver is the most heavily used nanomaterial,” Pradeep says, and it’s not renewable: anything we use “is lost for the world.” If all filters used his carbonate trick, he points out, we could make twice as many of them before we run out of raw materials—and even more if, as he hopes, his future tunings bring the necessary amount down further. This will become especially important if his filters catch on in other places with no infrastructure and needy populations. “Ultimately, I want to use the very minimum quantity of silver,” he says.

    “Pradeep’s work shows enormous potential,” says Dr. Theresa Dankovich, a water filtration expert at the University of Virginia’s Center for Global Health. But, she points out, “carbonate anions are naturally occurring in groundwater and surface waters,” so “it warrants further study to determine how they are already enhancing the effect of silver ions and silver nanoparticles,” even without purposeful manipulation by chemists. Others see potential shortcomings. James Smith, a professor of environmental engineering at the University of Virginia and the inventor of a nanoparticle-coated clay filtering pot, worries that the nanotech-heavy production process “would not allow for manufacturing in a developing world setting,” especially if Pradeep’s continuous tweaking of the model deters large-scale companies from actually producing it.

    Nevertheless, Pradeep plans to continue scaling up. “If you can provide clean water, you have provided a solution for almost everything,” he says. When you have the lessons of history and the technology of the future, why settle for anything less?

    See the full article here.


  • richardmitnick 1:43 pm on March 25, 2015 Permalink | Reply
    Tags: , , NOVA,   

    From NOVA: “Stem Cells Finally Deliver, But Not on Their Original Promise” 



    25 Mar 2015
    Carrie Arnold

    To scientists, stem cells represent the potential of the human body to heal itself. The cells are our body’s wide-eyed kindergarteners—they have the potential to become pretty much anything, whether that means helping us obtain oxygen, digest food, or pump blood. That flexibility has given scientists hope that they can coax stem cells to differentiate into replacements for cells damaged by illness.

    Almost immediately after scientists learned how to isolate stem cells from human embryos, the excitement was palpable. In the lab, they had already been coaxed into becoming heart muscle, bone marrow, and kidney cells. Entire companies were founded to translate therapies into clinical trials. Nearly 20 years on, though, only a handful of therapies using stem cells have been approved. Not quite the revolution we had envisioned back in 1998.

    But stem cells have delivered on another promise, one that is already having a broad impact on medical science. In their investigations into the potential therapeutic functions of stem cells, scientists have discovered another way to help those suffering from neurodegenerative and other incurable diseases. With stem cells, researchers can study how these diseases begin and even test the efficacy of drugs on cells from the very people they’re intended to treat.

    Getting to this point hasn’t been easy. Research into pluripotent stem cells, the most promising type, has faced a number of scientific and ethical hurdles. They were most readily found in developing embryos, but in 1995, Congress passed a bill that eliminated federal funding for research on embryonic stem cells. Since adult humans don’t have pluripotent stem cells, researchers were stuck.

    That changed in 2006, when Japanese scientist Shinya Yamanaka developed a way to create stem cells from a skin biopsy. Yamanaka’s process to create induced pluripotent stem cells (iPS cells) won him and his colleague John Gurdon a Nobel Prize in 2012. After years of setbacks, the stem cell revolution was back on.

    A cluster of iPS cells has been induced to express neural proteins, which have been tagged with fluorescent antibodies.

    Biomedical scientists in fields from cancer to heart disease have turned to iPS cells in their research. But the technique has been especially popular among scientists studying neurodegenerative diseases like Alzheimer’s disease, Parkinson’s disease, and amyotrophic lateral sclerosis (ALS) for two main reasons: One, since symptoms of these diseases don’t develop until rather late in the disease process, scientists haven’t had much knowledge about the early stages. iPS cells changed that by allowing scientists to study the very early stages of the disorder. And two, they provide novel ways of testing new drugs and potentially even personalizing treatment options.

    “It’s creating a sea change,” says Jeanne Loring, a stem cell biologist at the Scripps Research Institute in San Diego. “There will be tools available that have never been available before, and it will completely change drug development.”

    Beyond Animal Models

    Long before scientists knew that stem cells existed, they relied on animals to model diseases. Through careful breeding and, later, genetic engineering, researchers have developed rats, mice, fruit flies, roundworms, and other animals that display symptoms of the illness in question. Animal models remain useful, but they’re not perfect. While the biology of these animals often mimics humans’, they aren’t identical, and although some animals might share many of the overt symptoms of human illness, scientists can’t be sure that they experience the disease in the same way humans do.

    “Mouse models are useful research tools, but they rarely capture the disease process,” says Rick Livesey, a biologist at the University of Cambridge in the U.K. Many neurodegenerative diseases, like Alzheimer’s, he says, are perfect examples of the shortcomings of animal models. “No other species of animal actually gets Alzheimer’s disease, so any animal model is a compromise.”

    As a result, many drugs that seemed to be effective in animal models showed no benefit in humans. A study published in Alzheimer’s Research and Therapy in June 2014 estimated that 99.9% of Alzheimer’s clinical trials ended in failure, costing both money and lives. Scientists like Ole Isacson, a neuroscientist at Harvard University who studies Parkinson’s disease, were eager for a method that would let them investigate illnesses in a patient’s own cells, eliminating the need for expensive and imperfect animal models.

    Stem cells appeared to offer that potential, but when Congress banned federal funding in 1995 for research on embryos—and thus the development of new stem cell lines—scientists found their work had ground to a halt. As many researchers in the U.S. fretted over the future of stem cell research, scientists in Japan were developing a technique which would eliminate the need for embryonic stem cells. What’s more, it would allow researchers to create stem cells from the individuals who were suffering from the diseases they were studying.

    Cells in the body are able to specialize by turning on some sets of genes and switching off others. Every cell has a complete copy of the DNA; it’s just packed away in deep storage where the cell can’t easily access it. Yamanaka, the Nobel laureate, knew that finding the key to this storage locker and unpacking it could potentially turn any specialized cell back into a pluripotent stem cell. He focused on a group of 24 genes that were active only in embryonic stem cells. If he could get adult, specialized cells to translate these genes into proteins, then they should revert to stem cells. Yamanaka settled on fibroblast cells as the source of iPS cells since these are easily obtained with a skin biopsy.

    Rather than trying to switch these genes back on, a difficult and time-consuming task, Yamanaka instead engineered a retrovirus to carry copies of these 24 genes to mouse fibroblast cells. Since many retroviruses insert their own genetic material into the genomes of the cells they infect, Yamanaka only had to deliver the virus once. All successive generations of cells inherited those 24 genes. Yamanaka first grew the fibroblasts in a dish, then infected them with his engineered retrovirus. Over repeated experiments, Yamanaka was able to narrow the suite of required genes from 24 down to just four.

    The process was far from perfect—it took several weeks to create the stem cells, and only around 0.01%–0.1% of the fibroblasts were actually converted to stem cells. But after Yamanaka published his results in Cell in 2006, scientists quickly began perfecting the procedure and developing other techniques. To say they have been successful would be an understatement. “The technology is so good now that I have the undergraduates in my lab doing the reprogramming,” Loring says.
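    The quoted conversion efficiency translates into a concrete yield. A minimal sketch, assuming a hypothetical starting dish of one million fibroblasts (the starting cell count is our example, not a figure from the article):

```python
# Expected iPS colony counts at the 0.01%-0.1% efficiency quoted above.
fibroblasts = 1_000_000            # hypothetical number of starting cells
eff_low, eff_high = 0.0001, 0.001  # 0.01% and 0.1% expressed as fractions

colonies_low = fibroblasts * eff_low
colonies_high = fibroblasts * eff_high
print(f"Expected iPS colonies: {colonies_low:.0f}-{colonies_high:.0f}")
```

    Even at the high end, only about one starting cell in a thousand converts, which underscores why the subsequent improvements to the procedure mattered.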

    Accelerating Disease

    When he heard of Yamanaka’s discovery, Isacson, the Harvard neuroscientist studying Parkinson’s disease, had been using fetal neurons to try to replace diseased and dying neurons. Isacson realized “very quickly” that iPS cells could yield new discoveries about Parkinson’s. At the time, scientists were trying to determine exactly when the disease process started. It wasn’t easy. A person has to lose around 70% of their dopamine neurons before the first sign of movement disorder appears and Parkinson’s can be diagnosed. By that point, it’s too late to reverse the damage, a problem found in many, if not all, neurodegenerative diseases. Isacson wanted to know what was causing the neurons to die.

    Together with the National Institute of Neurological Disorders and Stroke consortium on iPS cells, Isacson obtained fibroblasts from patients with genetic mutations linked to Parkinson’s. Then, he reprogrammed these cells to become the specific type of neurons affected by Parkinson’s disease. “To our surprise, in the very strong hereditary forms of disease, we found that cells showed very strong signs of distress in the dish, even though they were newborn cells,” Isacson says.

    These experiments, published in Science Translational Medicine in 2012, showed that the disease process in Parkinson’s started far earlier than scientists expected. The distressed, differentiated neurons Isacson saw under the microscope were still just a few weeks old. People generally didn’t start showing symptoms for Parkinson’s disease until middle age or beyond.

    A clump of stem cells, seen here in green

    Isacson and his colleagues then tried to determine how cells carrying different mutations differed. The cells showed the most distress in their mitochondria, the parts of the cell that act as power plants by creating energy from oxygen and glucose. How that distress manifested, though, varied slightly depending on which mutation the patient carried. Neurons derived from an individual with a mutation in the LRRK2 gene consumed lower than expected amounts of oxygen, whereas neurons derived from those carrying a mutation in PINK1 had much higher oxygen consumption. Neurons with these mutations were also more susceptible to a type of cellular damage known as oxidative stress.

    After exposing both groups of cells to a variety of environmental toxins, such as oligomycin and valinomycin, both of which affect mitochondria, Isacson and colleagues attempted to rescue the cells by using several compounds that had been found effective in animal models. Both the LRRK2 and the PINK1 cells responded well to the antioxidant coenzyme Q10, but had very different responses to the immunosuppressant drug rapamycin. Whereas LRRK2 showed beneficial responses to rapamycin, the PINK1 cells did not.

    To Isacson, the different responses were profoundly important. “Most drugs don’t become blockbusters because they don’t work for everyone. Trials start too late, and they don’t know the genetic background of the patient,” Isacson says. He believes that iPS cells will one day help researchers match specific treatments with specific genotypes. There may not be a single blockbuster that can treat Parkinson’s, but there may be several drugs that make meaningful differences in patients’ lives.

    Cancer biologists have already begun culturing tumor cells and testing anti-cancer drugs before giving these medications to patients, and biologists studying neurodegenerative disease hope that iPS cells will one day allow them to do something similar for their patients. Scientists studying ALS have recently taken a step in that direction, using iPS cells to create motor neurons from fibroblasts of people carrying a mutation in the C9orf72 gene, the most common genetic cause of ALS. In a recent paper in Neuron, the scientists identified a small molecule which blocked the formation of toxic proteins caused by this mutation in cultured motor neurons.

    Adding More Dimensions

    It’s one thing to identify early disease in iPS cells, but these cells are generally obtained from people who have been diagnosed. At that point, it’s too late, in a way; drugs may be much less likely to work in later stages of the disease. To make many potential drugs more effective, the disease has to be diagnosed much, much earlier. Recent work by Harvard University stem cell biologist Rudolph Tanzi and colleagues may have taken a step in that direction, also using iPS cells.

    Doo Yeon Kim, Tanzi’s co-author, had grown frustrated with iPS cell models of neurodegenerative disease. The cells were grown in liquid culture and could only form a thin, two-dimensional layer, whereas the brain is gel-like and three-dimensional. So Kim created a 3D gel matrix in which the researchers grew human neural stem cells that carried extra copies of two genes—one which codes for amyloid precursor protein and another for presenilin 1, both of which were previously discovered in Tanzi’s lab—which are linked to familial forms of Alzheimer’s disease.

    After six weeks, the cells contained high levels of the harmful beta-amyloid protein as well as large numbers of toxic neurofibrillary tangles that damage and kill neurons. Both of these hallmarks had been found at high levels in the neurons of individuals who had died from Alzheimer’s disease, but researchers didn’t know for certain which built up first and which was more central to the disease process. Further experiments revealed that drugs preventing the formation of amyloid proteins also prevented the formation of neurofibrillary tangles, indicating that amyloid proteins likely formed first during Alzheimer’s disease.

    “When you stop amyloid, you stop cell death,” Tanzi says. Amyloid begins to build up long before people show signs of altered cognition, and Tanzi believes that drugs which stop amyloid or prevent the buildup of neurofibrillary tangles could prevent Alzheimer’s before it starts.

    The results were hailed in the media as a “major breakthrough,” although Larry Goldstein, a neuroscientist at the University of California, San Diego, takes a more nuanced perspective. “It’s a nice paper and an important step forward, but things got overblown. I don’t know that I would use the word ‘breakthrough’ because these, like all results, often have a very long history to them,” Goldstein says.

    The scientists who spoke with NOVA Next about iPS cells noted that the field is moving forward at a remarkable clip, but they all talked at length about the issues that still remain. One of the largest revolves around the gap between the age of the iPS cells and the age of the people who develop these neurodegenerative diseases. Although scientists are working with neurons that are technically “mature,” the cells are nonetheless only weeks or months old—far younger than patients, who have typically lived for several decades before disease appears. Since aging remains the strongest risk factor for developing these diseases, neuroscientists worry that some disease pathology might be missed in such young cells. “Is it possible to study a disease that takes 70 years to develop in a person using cells that have grown for just a few months in a dish?” Livesey asks.

    So far, the answer has been a tentative yes. Some scientists have begun to devise strategies to accelerate the aging process in the lab so that researchers don’t have to wait decades for answers. Lorenz Studer, director of the Center for Stem Cell Biology at the Sloan-Kettering Institute, uses the protein that causes progeria, a disorder of extreme premature aging, to age neurons derived from the iPS cells of Parkinson’s disease patients.

    Robert Lanza, a stem cell biologist at Advanced Cell Technology, takes another approach, aging cells by taking small amounts of mature neurons and growing them up in a new dish. “Each time you do this, you are forcing the cells to divide,” Lanza says. “And cells can only divide so many times before they reach senescence and die.” This process, Lanza believes, will mimic aging. He has also been experimenting with stressing the cells to promote premature aging.

    All of these techniques, Livesey believes, will allow scientists to study which aspects of the aging process—such as number of cell divisions and different types of environmental stressors—affect neurodegenerative diseases and how they do so. Adding to the complexity of the experimental system will improve the results that come out at the end. “You can only capture as much biology in iPS cells as you plug into it in the beginning,” Livesey says.

    But as Isacson’s and Loring’s work has shown, even very young cells can show hallmarks of neurodegenerative diseases. “If a disease has a genetic cause, if there’s an actual change in DNA, you should be able to find something in those iPS cells that is different,” Loring says.

    For these experiments and others, scientists have been relying on iPS cells derived from individuals with hereditary or familial forms of neurodegenerative disease. These individuals, however, represent only about 5–15% of people with neurodegenerative disease; the vast majority of cases are sporadic, with no known genetic cause. Scientists believe that environmental factors may play a much larger role in the onset of these forms of neurodegenerative disease.

    That heterogeneity means it’s not yet clear whether iPS cells from individuals with hereditary forms of disease are a good model for what happens in sporadic disease. Although the resulting symptoms may be the same, different forms of disease may not travel the same biological pathways to end up in the same place. Isacson is in the process of identifying the range of genes and proteins that are altered in iPS cells that carry Parkinson’s disease mutations. He then intends to determine whether any of these pathways are also disturbed in sporadic Parkinson’s disease, to pinpoint similarities between the two forms.

    Livesey’s lab just received a large grant to study people with an early onset, sporadic form of Alzheimer’s. “Although sporadic Alzheimer’s disease isn’t caused by a mutation in a single gene, the condition is still strongly heritable. The environment, obviously, has an important role, but so does genetics,” Livesey says.

    Because the disease starts earlier in these individuals, researchers believe that it has a larger genetic link than other forms of sporadic Alzheimer’s disease, which will make it easier to identify any genetic or biological abnormalities. Livesey hopes that bridging sporadic and hereditary forms of Alzheimer’s disease will allow researchers to reach stronger conclusions using iPS cells.

    Though it will be years before any new drugs come out of Livesey’s stem cell studies—or any other stem cell study, for that matter—the technology has nonetheless allowed scientists to refine their understanding of these and other diseases. And, scientists believe, this is just the start. “There is an endless series of discoveries that can be made in the next few decades,” Isacson says.

    Image credit: Ole Isacson, McLean Hospital and Harvard Medical School/NINDS

    See the full article here.


  • richardmitnick 10:10 am on March 22, 2015 Permalink | Reply
    Tags: NOVA

    From S and T: “Nova in Sagittarius Now 4th Magnitude!” 

    Sky & Telescope

    March 22, 2015
    Alan MacRobert

    The nova that erupted in the Sagittarius Teapot on March 15th continues to brighten at a steady rate. As of the morning of March 22nd it’s about magnitude 4.3, plain as can be in binoculars before dawn, looking yellowish, and naked-eye in a moderately good sky.

    Update Sunday March 22: It’s still brightening — to about magnitude 4.3 this morning! That’s almost 2 magnitudes brighter than at its discovery a week ago. It’s now the brightest star inside the main body of the Sagittarius Teapot, and it continues to gain 0.3 magnitude per day. This seems to be the brightest nova in Sagittarius since at least 1898. And, Sagittarius is getting a little higher before dawn every morning.
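
    The jump from 6th magnitude at discovery to 4.3 is easy to translate into brightness, because the magnitude scale is logarithmic: a difference of m magnitudes corresponds to a flux ratio of 10^(0.4·m). A minimal sketch of that conversion (the 6.3 discovery magnitude used below is an approximation for illustration):

    ```python
    def flux_ratio(delta_mag):
        """Brightness ratio corresponding to a magnitude difference.

        The astronomical magnitude scale is logarithmic: a difference of
        delta_mag magnitudes corresponds to a flux ratio of 10**(0.4 * delta_mag).
        """
        return 10 ** (0.4 * delta_mag)

    # Discovered at roughly magnitude 6.3, now about 4.3: ~2 magnitudes brighter,
    # i.e. roughly 6.3 times the flux.
    print(flux_ratio(6.3 - 4.3))

    # Gaining 0.3 magnitude per day means brightening by about 32% each day.
    print(flux_ratio(0.3))
    ```

    So "almost 2 magnitudes brighter than at its discovery" means the nova is pouring out more than six times as much light as when Seach first caught it.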

    The nova is right on the midline of the Sagittarius Teapot. The horizon here is drawn for the beginning of astronomical twilight in mid-March for a viewer near 40° north latitude. The nova is about 15° above this horizon. Stars are plotted to magnitude 6.5. For a more detailed chart with comparison-star magnitudes, see the bottom of this page. Sky & Telescope diagram.

    You never know. On Sunday March 15th, nova hunter John Seach of Chatsworth Island, NSW, Australia, found a new 6th-magnitude star shining in three search images taken by his DSLR patrol camera. The time of the photos was March 15.634 UT. One night earlier, the camera recorded nothing there to a limiting magnitude of 10.5.

    Before and after. Adriano Valvasori imaged the nova at March 16.71, using the iTelescope robotic telescope “T9” — a 0.32-m (12.5-inch) reflector in Australia. His shot is blinked here with a similarly deep earlier image. One of the tiny dots at the right spot might be the progenitor star. The frames are 1⁄3° wide.

    A spectrum taken a day after the discovery confirmed that this is a bright classical nova — a white dwarf whose thin surface layer underwent a hydrogen-fusion explosion — of the type rich in ionized iron. The spectrum showed emission lines from debris expanding at about 2,800 km per second.
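
    The 2,800 km per second figure comes from the Doppler shift of the spectral lines: a feature displaced by Δλ from its rest wavelength λ implies a line-of-sight speed of roughly v = c·Δλ/λ. A minimal sketch of that conversion (the H-alpha wavelengths below are illustrative values chosen to match the quoted speed, not measurements from this spectrum):

    ```python
    C_KM_S = 299_792.458  # speed of light in km/s

    def expansion_velocity(rest_nm, observed_nm):
        """Non-relativistic Doppler velocity implied by a wavelength shift.

        A blueshifted component (observed < rest), as in the absorption side
        of a P Cygni profile, returns a positive approach speed in km/s.
        """
        return C_KM_S * (rest_nm - observed_nm) / rest_nm

    # Illustrative: H-alpha (rest 656.3 nm) blueshifted to about 650.2 nm
    # corresponds to material approaching at roughly 2,800 km/s.
    print(expansion_velocity(656.3, 650.17))
    ```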

    The nova has been named Nova Sagittarii 2015 No. 2, after receiving the preliminary designation PNV J18365700-2855420. Here’s its up-to-date preliminary light curve from the American Association of Variable Star Observers (AAVSO). Here is the AAVSO’s list of recent observations.

    Although the nova is fairly far south (at declination –28° 55′ 40″, right ascension 18h 36m 56.8s), and although Sagittarius only recently emerged from the glow of sunrise, it’s still a good 15° above the horizon just before the beginning of dawn for observers near 40° north latitude. If you’re south of there it’ll be higher; if you’re north it’ll be lower. Binoculars are all you’ll need.
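
    The altitude figures quoted above follow from the standard coordinate transformation: for an object at declination δ seen from latitude φ at hour angle H, sin(alt) = sin φ · sin δ + cos φ · cos δ · cos H. A minimal sketch (the hour angle below is an assumed value standing in for the start of astronomical dawn, not a number from the article):

    ```python
    import math

    def altitude_deg(lat_deg, dec_deg, hour_angle_deg):
        """Altitude above the horizon from latitude, declination, and hour angle.

        Uses sin(alt) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(HA).
        """
        lat = math.radians(lat_deg)
        dec = math.radians(dec_deg)
        ha = math.radians(hour_angle_deg)
        sin_alt = (math.sin(lat) * math.sin(dec)
                   + math.cos(lat) * math.cos(dec) * math.cos(ha))
        return math.degrees(math.asin(sin_alt))

    # Nova Sagittarii 2015 No. 2 (declination -28.93 deg) from 40 deg N,
    # a couple of hours before transit (hour angle -31.8 deg, assumed):
    print(altitude_deg(40.0, -28.93, -31.8))  # ~15 degrees, as quoted

    # From 30 deg N the same geometry puts the nova noticeably higher,
    # matching "if you're south of there it'll be higher."
    print(altitude_deg(30.0, -28.93, -31.8))
    ```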

    It looks yellowish. Here’s a color image of its spectrum taken March 17th, by Jerome Jooste in South Africa using a Star Analyser spectrograph on an 8-inch reflector. Note the wide, bright emission lines. They’re flanked on their short-wavelength ends by blueshifted dark absorption lines: the classic P Cygni profile of a star with a thick, fast-expanding cooler shell or wind.

    To find when morning astronomical twilight begins at your location, you can use our online almanac. (If you’re on daylight time like most of North America, be sure to check the Daylight-Saving Time box.)

    Below is a comparison-star chart from the AAVSO. Stars’ visual magnitudes are given to the nearest tenth with the decimal points omitted.

    The cross at center is Nova Sagittarii 2015 No. 2. The frame is 15° wide, two or three times the width of a typical binocular’s field of view. Courtesy AAVSO.

    See the full article here.


    Sky & Telescope magazine, founded in 1941 by Charles A. Federer Jr. and Helen Spence Federer, has the largest, most experienced staff of any astronomy magazine in the world. Its editors are virtually all amateur or professional astronomers, and every one has built a telescope, written a book, done original research, developed a new product, or otherwise distinguished him or herself.

    Sky & Telescope magazine, now in its eighth decade, came about because of some happy accidents. Its earliest known ancestor was a four-page bulletin called The Amateur Astronomer, which was begun in 1929 by the Amateur Astronomers Association in New York City. Then, in 1935, the American Museum of Natural History opened its Hayden Planetarium and began to issue a monthly bulletin that became a full-size magazine called The Sky within a year. Under the editorship of Hans Christian Adamson, The Sky featured large illustrations and articles from astronomers all over the globe. It immediately absorbed The Amateur Astronomer.

    Despite initial success, by 1939 the planetarium found itself unable to continue financial support of The Sky. Charles A. Federer, who would become the dominant force behind Sky & Telescope, was then working as a lecturer at the planetarium. He was asked to take over publishing The Sky. Federer agreed and started an independent publishing corporation in New York.

    “Our first issue came out in January 1940,” he noted. “We dropped from 32 to 24 pages, used cheaper quality paper…but editorially we further defined the departments and tried to squeeze as much information as possible between the covers.” Federer was The Sky’s editor, and his wife, Helen, served as managing editor. In that January 1940 issue, they stated their goal: “We shall try to make the magazine meet the needs of amateur astronomy, so that amateur astronomers will come to regard it as essential to their pursuit, and professionals to consider it a worthwhile medium in which to bring their work before the public.”
