Tagged: NOVA

  • richardmitnick 1:43 pm on March 25, 2015
    Tags: NOVA

    From NOVA: “Stem Cells Finally Deliver, But Not on Their Original Promise” 

    PBS NOVA

    25 Mar 2015
    Carrie Arnold

    To scientists, stem cells represent the potential of the human body to heal itself. The cells are our body’s wide-eyed kindergarteners—they have the potential to do pretty much anything, from helping us take in oxygen to digesting our food to pumping our blood. That flexibility has given scientists hope that they can coax stem cells to differentiate into, and replace, cells damaged by illness.

    Almost immediately after scientists learned how to isolate stem cells from human embryos, the excitement was palpable. In the lab, they had already been coaxed into becoming heart muscle, bone marrow, and kidney cells. Entire companies were founded to translate therapies into clinical trials. Nearly 20 years on, though, only a handful of therapies using stem cells have been approved. Not quite the revolution we had envisioned back in 1998.

    But stem cells have delivered on another promise, one that is already having a broad impact on medical science. In their investigations into the potential therapeutic functions of stem cells, scientists have discovered another way to help those suffering from neurodegenerative and other incurable diseases. With stem cells, researchers can study how these diseases begin and even test the efficacy of drugs on cells from the very people they’re intended to treat.

    Getting to this point hasn’t been easy. Research into pluripotent stem cells, the most promising type, has faced a number of scientific and ethical hurdles. The cells were most readily found in developing embryos, but in 1995, Congress passed a bill that eliminated federal funding for research on embryonic stem cells. Since adult humans don’t have pluripotent stem cells, researchers were stuck.

    That changed in 2006, when Japanese scientist Shinya Yamanaka developed a way to create stem cells from a skin biopsy. Yamanaka’s process to create induced pluripotent stem cells (iPS cells) won him and fellow researcher John Gurdon a Nobel Prize in 2012. After years of setbacks, the stem cell revolution was back on.

    A cluster of iPS cells has been induced to express neural proteins, which have been tagged with fluorescent antibodies.

    Biomedical scientists in fields from cancer to heart disease have turned to iPS cells in their research. But the technique has been especially popular among scientists studying neurodegenerative diseases like Alzheimer’s disease, Parkinson’s disease, and amyotrophic lateral sclerosis (ALS) for two main reasons: One, since symptoms of these diseases don’t develop until rather late in the disease process, scientists haven’t had much knowledge about the early stages. iPS cells changed that by allowing scientists to study the very early stages of the disorder. And two, they provide novel ways of testing new drugs and potentially even personalizing treatment options.

    “It’s creating a sea change,” says Jeanne Loring, a stem cell biologist at the Scripps Research Institute in San Diego. “There will be tools available that have never been available before, and it will completely change drug development.”

    Beyond Animal Models

    Long before scientists knew that stem cells existed, they relied on animals to model diseases. Through careful breeding and, later, genetic engineering, researchers have developed rats, mice, fruit flies, roundworms, and other animals that display symptoms of the illness in question. Animal models remain useful, but they’re not perfect. While the biology of these animals often mimics humans’, they aren’t identical, and although some animals might share many of the overt symptoms of human illness, scientists can’t be sure that they experience the disease in the same way humans do.

    “Mouse models are useful research tools, but they rarely capture the disease process,” says Rick Livesey, a biologist at the University of Cambridge in the U.K. Many neurodegenerative diseases, like Alzheimer’s, he says, are perfect examples of the shortcomings of animal models. “No other species of animal actually gets Alzheimer’s disease, so any animal model is a compromise.”

    As a result, many drugs that seemed to be effective in animal models showed no benefit in humans. A study published in Alzheimer’s Research and Therapy in June 2014 estimated that 99.9% of Alzheimer’s clinical trials ended in failure, costing both money and lives. Scientists like Ole Isacson, a neuroscientist at Harvard University who studies Parkinson’s disease, were eager for a method that would let them investigate illnesses in a patient’s own cells, eliminating the need for expensive and imperfect animal models.

    Stem cells appeared to offer that potential, but when Congress banned federal funding in 1995 for research on embryos—and thus the development of new stem cell lines—scientists found their work had ground to a halt. As many researchers in the U.S. fretted over the future of stem cell research, scientists in Japan were developing a technique which would eliminate the need for embryonic stem cells. What’s more, it would allow researchers to create stem cells from the individuals who were suffering from the diseases they were studying.

    Cells in the body are able to specialize by turning on some sets of genes and switching off others. Every cell has a complete copy of the DNA; it’s just packed away in deep storage where the cell can’t easily access it. Yamanaka, the Nobel laureate, knew that finding the key to this storage locker and unpacking it could potentially turn any specialized cell back into a pluripotent stem cell. He focused on a group of 24 genes that were active only in embryonic stem cells. If he could get adult, specialized cells to translate these genes into proteins, then they should revert to stem cells. Yamanaka settled on fibroblast cells as the source of iPS cells since these are easily obtained with a skin biopsy.

    Rather than trying to switch these genes back on, a difficult and time-consuming task, Yamanaka instead engineered a retrovirus to carry copies of these 24 genes to mouse fibroblast cells. Since many retroviruses insert their own genetic material into the genomes of the cells they infect, Yamanaka only had to deliver the virus once. All successive generations of cells inherited those 24 genes. Yamanaka first grew the fibroblasts in a dish, then infected them with his engineered retrovirus. Over repeated experiments, Yamanaka was able to narrow the suite of required genes from 24 down to just four.
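
    The logic of that narrowing step can be sketched in a few lines of code. Below is a minimal, illustrative leave-one-out screen in Python; makes_stem_cells is a hypothetical stand-in for the real assay (infecting fibroblasts with a candidate gene set and waiting weeks to see whether reprogrammed colonies appear), so this is a sketch of the strategy, not Yamanaka’s actual protocol.

    # Drop any gene whose removal doesn't stop reprogramming; keep the rest.
    def winnow(candidates, makes_stem_cells):
        required = list(candidates)
        trimmed = True
        while trimmed:
            trimmed = False
            for gene in list(required):
                subset = [g for g in required if g != gene]
                if makes_stem_cells(subset):  # colonies still appear without it,
                    required = subset         # so this gene isn't essential
                    trimmed = True
        return required

    # Mock demo: pretend only the four factors Yamanaka ultimately kept are
    # truly required (the other gene names here are purely illustrative).
    ESSENTIAL = {"Oct4", "Sox2", "Klf4", "c-Myc"}
    mock_assay = lambda genes: ESSENTIAL <= set(genes)
    print(winnow(["Oct4", "Sox2", "Klf4", "c-Myc", "Nanog", "Lin28"], mock_assay))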

    The process was far from perfect—it took several weeks to create the stem cells, and only around 0.01%–0.1% of the fibroblasts were actually converted to stem cells. But after Yamanaka published his results in Cell in 2006, scientists quickly began perfecting the procedure and developing other techniques. To say they have been successful would be an understatement. “The technology is so good now that I have the undergraduates in my lab doing the reprogramming,” Loring says.

    Accelerating Disease

    When he heard of Yamanaka’s discovery, Isacson, the Harvard neuroscientist studying Parkinson’s disease, had been using fetal neurons to try to replace diseased and dying neurons. Isacson realized “very quickly” that iPS cells could yield new discoveries about Parkinson’s. At the time, scientists were trying to determine exactly when the disease process started. It wasn’t easy. A person has to lose around 70% of their dopamine neurons before the first sign of movement disorder appears and Parkinson’s can be diagnosed. By that point, it’s too late to reverse that damage, a problem that is found in many if not all neurodegenerative diseases. Isacson wanted to know what was causing the neurons to die.

    Together with the National Institute of Neurological Disorders and Stroke consortium on iPS cells, Isacson obtained fibroblasts from patients with genetic mutations linked to Parkinson’s. Then, he reprogrammed these cells to become the specific type of neurons affected by Parkinson’s disease. “To our surprise, in the very strong hereditary forms of disease, we found that cells showed very strong signs of distress in the dish, even though they were newborn cells,” Isacson says.

    These experiments, published in Science Translational Medicine in 2012, showed that the disease process in Parkinson’s started far earlier than scientists expected. The distressed, differentiated neurons Isacson saw under the microscope were still just a few weeks old. People generally don’t start showing symptoms of Parkinson’s disease until middle age or beyond.

    A clump of stem cells, seen here in green.

    Isacson and his colleagues then tried to determine how cells carrying different mutations differed. The cells showed the most distress in their mitochondria, the parts of the cell that act as power plants, creating energy from oxygen and glucose. How that distress manifested, though, varied slightly depending on which mutation the patient carried. Neurons derived from an individual with a mutation in the LRRK2 gene consumed lower than expected amounts of oxygen, whereas neurons derived from those carrying a mutation in PINK1 had much higher oxygen consumption. Neurons with these mutations were also more susceptible to a type of cellular damage known as oxidative stress.

    After exposing both groups of cells to a variety of environmental toxins, such as oligomycin and valinomycin, both of which affect mitochondria, Isacson and colleagues attempted to rescue the cells using several compounds that had been found effective in animal models. Both the LRRK2 and the PINK1 cells responded well to the antioxidant coenzyme Q10, but they had very different responses to the immunosuppressant drug rapamycin: whereas the LRRK2 cells benefited, the PINK1 cells did not.

    To Isacson, the different responses were profoundly important. “Most drugs don’t become blockbusters because they don’t work for everyone. Trials start too late, and they don’t know the genetic background of the patient,” Isacson says. He believes that iPS cells will one day help researchers match specific treatments with specific genotypes. There may not be a single blockbuster that can treat Parkinson’s, but there may be several drugs that make meaningful differences in patients’ lives.

    Cancer biologists have already begun culturing tumor cells and testing anti-cancer drugs before giving these medications to patients, and biologists studying neurodegenerative disease hope that iPS cells will one day allow them to do something similar for their patients. Scientists studying ALS have recently taken a step in that direction, using iPS cells to create motor neurons from fibroblasts of people carrying a mutation in the C9orf72 gene, the most common genetic cause of ALS. In a recent paper in Neuron, the scientists identified a small molecule which blocked the formation of toxic proteins caused by this mutation in cultured motor neurons.

    Adding More Dimensions

    It’s one thing to identify early disease in iPS cells, but these cells are generally obtained from people who have been diagnosed. At that point, it’s too late, in a way; drugs may be much less likely to work in later stages of the disease. To make many potential drugs more effective, the disease has to be diagnosed much, much earlier. Recent work by Harvard University stem cell biologist Rudolph Tanzi and colleagues may have taken a step in that direction, also using iPS cells.

    Doo Yeon Kim, Tanzi’s co-author, had grown frustrated with iPS cell models of neurodegenerative disease. The cell cultures were liquid, and the cells could only grow in a thin, two-dimensional layer. The brain, however, is gel-like and exists in three dimensions. So Kim created a 3D gel matrix in which the researchers grew human neural stem cells carrying extra copies of two genes linked to familial forms of Alzheimer’s disease—one coding for amyloid precursor protein and another for presenilin 1, both previously discovered in Tanzi’s lab.

    After six weeks, the cells contained high levels of the harmful beta-amyloid protein as well as large numbers of the toxic neurofibrillary tangles that damage and kill neurons. Both of these hallmarks had been found at high levels in the neurons of individuals who had died from Alzheimer’s disease, but researchers didn’t know for certain which built up first and which was more central to the disease process. Further experiments revealed that drugs preventing the formation of amyloid proteins also prevented the formation of neurofibrillary tangles, indicating that amyloid likely forms first during Alzheimer’s disease.

    “When you stop amyloid, you stop cell death,” Tanzi says. Amyloid begins to build up long before people show signs of altered cognition, and Tanzi believes that drugs which stop amyloid or prevent the buildup of neurofibrillary tangles could prevent Alzheimer’s before it starts.

    The results were hailed in the media as a “major breakthrough,” although Larry Goldstein, a neuroscientist at the University of California, San Diego, takes a more nuanced perspective. “It’s a nice paper and an important step forward, but things got overblown. I don’t know that I would use the word ‘breakthrough’ because these, like all results, often have a very long history to them,” Goldstein says.

    The scientists who spoke with NOVA Next about iPS cells noted that the field is moving forward at a remarkable clip, but they all talked at length about the issues that still remain. One of the largest revolves around the difference between the age of the iPS cells and the age of the humans who develop these neurodegenerative diseases. Although scientists are working with neurons that are technically “mature,” they are nonetheless only weeks or months old—decades younger than the neurons of the people who actually develop these diseases. Since aging remains the strongest risk factor for developing these diseases, neuroscientists worry that some disease pathology might be missed in such young cells. “Is it possible to study a disease that takes 70 years to develop in a person using cells that have grown for just a few months in a dish?” Livesey asks.

    So far, the answer has been a tentative yes. Some scientists have begun to devise different strategies to accelerate the aging process in the lab so researchers don’t have to wait several decades before they develop their answers. Lorenz Studer, director of the Center for Stem Cell Biology at the Sloan-Kettering Institute, uses the protein that causes progeria, a disorder of extreme premature aging, to successfully age neurons derived from iPS cells from Parkinson’s disease patients.

    Robert Lanza, a stem cell biologist at Advanced Cell Technology, takes another approach, aging cells by taking small amounts of mature neurons and growing them up in a new dish. “Each time you do this, you are forcing the cells to divide,” Lanza says. “And cells can only divide so many times before they reach senescence and die.” This process, Lanza believes, will mimic aging. He has also been experimenting with stressing the cells to promote premature aging.

    All of these techniques, Livesey believes, will allow scientists to study which aspects of the aging process—such as number of cell divisions and different types of environmental stressors—affect neurodegenerative diseases and how they do so. Adding to the complexity of the experimental system will improve the results that come out at the end. “You can only capture as much biology in iPS cells as you plug into it in the beginning,” Livesey says.

    But as Isacson’s and Loring’s work has shown, even very young cells can show hallmarks of neurodegenerative diseases. “If a disease has a genetic cause, if there’s an actual change in DNA, you should be able to find something in those iPS cells that is different,” Loring says.

    For these experiments and others, scientists have been relying on iPS cells derived from individuals with hereditary or familial forms of neurodegenerative disease. These individuals, however, represent only about 5–15% of those with neurodegenerative disease; the vast majority of cases are sporadic, with no known genetic cause. Scientists believe that environmental factors may play a much larger role in the onset of these forms of neurodegenerative disease.

    That heterogeneity means it’s not yet clear whether the iPS cells from individuals with hereditary forms of disease are a good model for what happens in sporadic disease. Although the resulting symptoms may be the same, different forms of the disease may not use the same biological pathways to end up in the same place. Isacson is in the process of identifying the range of genes and proteins that are altered in iPS cells that carry Parkinson’s disease mutations. He intends to determine whether any of these pathways are also disturbed in sporadically occurring Parkinson’s disease, to pinpoint similarities between the two forms.

    Livesey’s lab just received a large grant to study people with an early onset, sporadic form of Alzheimer’s. “Although sporadic Alzheimer’s disease isn’t caused by a mutation in a single gene, the condition is still strongly heritable. The environment, obviously, has an important role, but so does genetics,” Livesey says.

    Because the disease starts earlier in these individuals, researchers believe that it has a larger genetic link than other forms of sporadic Alzheimer’s disease, which will make it easier to identify any genetic or biological abnormalities. Livesey hopes that bridging sporadic and hereditary forms of Alzheimer’s disease will allow researchers to reach stronger conclusions using iPS cells.

    Though it will be years before any new drugs come out of Livesey’s stem cell studies—or any other stem cell study for that matter—the technology has nonetheless allowed scientists to refine their understanding of these and other diseases. And, scientists believe, this is just the start. “There are an endless series of discoveries that can be made in the next few decades,” Isacson says.

    Image credit: Ole Isacson, McLean Hospital and Harvard Medical School/NINDS

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 10:10 am on March 22, 2015
    Tags: NOVA

    From S and T: “Nova in Sagittarius Now 4th Magnitude!” 

    Sky & Telescope

    March 22, 2015
    Alan MacRobert

    The nova that erupted in the Sagittarius Teapot on March 15th continues to brighten at a steady rate. As of the morning of March 22nd it’s about magnitude 4.3, plain as can be in binoculars before dawn, looking yellowish, and naked-eye in a moderately good sky.

    Update Sunday March 22: It’s still brightening — to about magnitude 4.3 this morning! That’s almost 2 magnitudes brighter than at its discovery a week ago. It’s now the brightest star inside the main body of the Sagittarius Teapot, and it continues to gain 0.3 magnitude per day. This seems to be the brightest nova in Sagittarius since at least 1898. And, Sagittarius is getting a little higher before dawn every morning.
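
    For readers unused to the magnitude scale, the arithmetic is easy to check: magnitudes are logarithmic and run backwards, so smaller numbers are brighter, and a difference of Δm magnitudes corresponds to a brightness ratio of 10^(0.4·Δm). A minimal Python sketch (this is standard astronomy, nothing specific to this nova):

    def brightness_ratio(dm):
        # a 5-magnitude difference is defined as exactly a factor of 100
        return 10 ** (0.4 * dm)

    print(brightness_ratio(2.0))  # ~6.3: a 2-magnitude rise is a 6.3x jump in brightness
    print(brightness_ratio(0.3))  # ~1.32: gaining 0.3 mag/day is ~32% brighter each day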

    The nova is right on the midline of the Sagittarius Teapot. The horizon here is drawn for the beginning of astronomical twilight in mid-March for a viewer near 40° north latitude. The nova is about 15° above this horizon. Stars are plotted to magnitude 6.5. For a more detailed chart with comparison-star magnitudes, see the bottom of this page. Sky & Telescope diagram.

    You never know. On Sunday March 15th, nova hunter John Seach of Chatsworth Island, NSW, Australia, found a new 6th-magnitude star shining in three search images taken by his DSLR patrol camera. The time of the photos was March 15.634 UT. One night earlier, the camera recorded nothing there to a limiting magnitude of 10.5.

    Before and after. Adriano Valvasori imaged the nova at March 16.71, using the iTelescope robotic telescope “T9” — a 0.32-m (12.5-inch) reflector in Australia. His shot is blinked here with a similarly deep earlier image. One of the tiny dots at the right spot might be the progenitor star. The frames are 1⁄3° wide.

    A spectrum taken a day after the discovery confirmed that this is a bright classical nova — a white dwarf whose thin surface layer underwent a hydrogen-fusion explosion — of the type rich in ionized iron. The spectrum showed emission lines from debris expanding at about 2,800 km per second.
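
    That 2,800 km per second figure comes from the Doppler broadening of the emission lines, v ≈ c·Δλ/λ. A short illustration in Python; the hydrogen-alpha wavelength below is a typical example, not a value taken from this particular spectrum:

    C_KM_S = 299_792.458  # speed of light in km/s

    def doppler_width_nm(rest_nm, v_km_s):
        # wavelength shift at the edge of a line from gas expanding at v_km_s
        return rest_nm * v_km_s / C_KM_S

    print(doppler_width_nm(656.3, 2800))  # ~6.1 nm smearing of hydrogen-alpha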

    The nova has been named Nova Sagittarii 2015 No. 2, after receiving the preliminary designation PNV J18365700-2855420. Here’s its up-to-date preliminary light curve from the American Association of Variable Star Observers (AAVSO). Here is the AAVSO’s list of recent observations.

    Although the nova is fairly far south (at declination –28° 55′ 40″, right ascension 18h 36m 56.8s), and although Sagittarius only recently emerged from the glow of sunrise, it’s still a good 15° above the horizon just before the beginning of dawn for observers near 40° north latitude. If you’re south of there it’ll be higher; if you’re north it’ll be lower. Binoculars are all you’ll need.
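
    The higher-if-south, lower-if-north rule is simple spherical geometry: an object’s greatest possible altitude, reached as it crosses the meridian, is 90° minus the difference between your latitude and its declination. A minimal sketch (before dawn the nova sits below this ceiling, but the latitude trend is the same):

    def max_altitude_deg(latitude_deg, declination_deg):
        # altitude of an object as it crosses the meridian
        return 90.0 - abs(latitude_deg - declination_deg)

    dec = -28.93  # the nova's declination, -28 deg 55' 40"
    for lat in (40, 30, 20):
        print(lat, max_altitude_deg(lat, dec))  # 21.1, 31.1, 41.1 degrees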

    It looks yellowish. Here’s a color image of its spectrum taken March 17th, by Jerome Jooste in South Africa using a Star Analyser spectrograph on an 8-inch reflector. Note the wide, bright emission lines. They’re flanked on their short-wavelength ends by blueshifted dark absorption lines: the classic P Cygni profile of a star with a thick, fast-expanding cooler shell or wind.

    To find when morning astronomical twilight begins at your location, you can use our online almanac. (If you’re on daylight time like most of North America, be sure to check the Daylight-Saving Time box.)

    Below is a comparison-star chart from the AAVSO. Stars’ visual magnitudes are given to the nearest tenth with the decimal points omitted (so a star labeled 45 is magnitude 4.5; the convention keeps decimal points from being mistaken for faint stars).

    The cross at center is Nova Sagittarii 2015 No. 2. Magnitudes of comparison stars are given to the nearest tenth with the decimal points omitted. The frame is 15° wide, two or three times the width of a typical binocular’s field of view. Courtesy AAVSO.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Sky & Telescope magazine, founded in 1941 by Charles A. Federer Jr. and Helen Spence Federer, has the largest, most experienced staff of any astronomy magazine in the world. Its editors are virtually all amateur or professional astronomers, and every one has built a telescope, written a book, done original research, developed a new product, or otherwise distinguished him or herself.

    Sky & Telescope magazine, now in its eighth decade, came about because of some happy accidents. Its earliest known ancestor was a four-page bulletin called The Amateur Astronomer, which was begun in 1929 by the Amateur Astronomers Association in New York City. Then, in 1935, the American Museum of Natural History opened its Hayden Planetarium and began to issue a monthly bulletin that became a full-size magazine called The Sky within a year. Under the editorship of Hans Christian Adamson, The Sky featured large illustrations and articles from astronomers all over the globe. It immediately absorbed The Amateur Astronomer.

    Despite initial success, by 1939 the planetarium found itself unable to continue financial support of The Sky. Charles A. Federer, who would become the dominant force behind Sky & Telescope, was then working as a lecturer at the planetarium. He was asked to take over publishing The Sky. Federer agreed and started an independent publishing corporation in New York.

    “Our first issue came out in January 1940,” he noted. “We dropped from 32 to 24 pages, used cheaper quality paper…but editorially we further defined the departments and tried to squeeze as much information as possible between the covers.” Federer was The Sky’s editor, and his wife, Helen, served as managing editor. In that January 1940 issue, they stated their goal: “We shall try to make the magazine meet the needs of amateur astronomy, so that amateur astronomers will come to regard it as essential to their pursuit, and professionals to consider it a worthwhile medium in which to bring their work before the public.”

  • richardmitnick 4:01 am on March 21, 2015
    Tags: NOVA

    From NOVA: “Genetically Engineering Almost Anything” 2014 and Very Important 

    PBS NOVA

    17 Jul 2014
    Tim De Chant and Eleanor Nelsen

    When it comes to genetic engineering, we’re amateurs. Sure, we’ve known about DNA’s structure for more than 60 years, we first sequenced every A, T, C, and G in our bodies more than a decade ago, and we’re becoming increasingly adept at modifying the genes of a growing number of organisms.

    But compared with what’s coming next, all that will seem like child’s play. A new technology just announced today has the potential to wipe out diseases, turn back evolutionary clocks, and reengineer entire ecosystems, for better or worse. Because of how deeply this could affect us all, the scientists behind it want to start a discussion now, before all the pieces come together over the next few months or years. This is a scientific discovery being played out in real time.

    Scientists have figured out how to use a cell’s DNA repair mechanisms to spread traits throughout a population.

    Today, researchers aren’t just dropping in new genes, they’re deftly adding, subtracting, and rewriting them using a series of tools that have become ever more versatile and easier to use. In the last few years, our ability to edit genomes has improved at a shockingly rapid clip. So rapid, in fact, that one of the easiest and most popular tools, known as CRISPR-Cas9, is just two years old. Researchers once spent months, even years, attempting to rewrite an organism’s DNA. Now they spend days.

    Soon, though, scientists will begin combining gene editing with gene drives, so-called selfish genes that appear more frequently in offspring than normal genes, which have about a 50-50 chance of being passed on. With gene drives—so named because they drive a gene through a population—researchers just have to slip a new gene into a drive system and let nature take care of the rest. Subsequent generations of whatever species we choose to modify—frogs, weeds, mosquitoes—will have more and more individuals with that gene until, eventually, it’s everywhere.

    Cas9-based gene drives could be one of the most powerful technologies ever discovered by humankind. “This is one of the most exciting confluences of different theoretical approaches in science I’ve ever seen,” says Arthur Caplan, a bioethicist at New York University. “It merges population genetics, genetic engineering, molecular genetics, into an unbelievably powerful tool.”

    We’re not there yet, but we’re extraordinarily close. “Essentially, we have done all of the pieces, sometimes in the same relevant species,” says Kevin Esvelt, a postdoc at Harvard University and the wunderkind behind the new technology. “It’s just no one has put it all together.”

    It’s only a matter of time, though. The field is progressing rapidly. “We could easily have laboratory tests within the next few months and then field tests not long after that,” says George Church, a professor at Harvard University and Esvelt’s advisor. “That’s if everybody thinks it’s a good idea.”

    It’s likely not everyone will think this is a good idea. “There are clearly people who will object,” Caplan says. “I think the technique will be incredibly controversial.” Which is why Esvelt, Church, and their collaborators are publishing papers now, before the different parts of the puzzle have been assembled into a working whole.

    “If we’re going to talk about it at all in advance, rather than in the past tense,” Church says, “now is the time.”

    “Deleterious Genes”

    The first organism Esvelt wants to modify is the malaria-carrying mosquito Anopheles gambiae. While his approach is novel, the idea of controlling mosquito populations through genetic modification has actually been around since the late 1970s. Then, Edward F. Knipling, an entomologist with the U.S. Department of Agriculture, published a substantial handbook with a chapter titled “Use of Insects for Their Own Destruction.” One technique, he wrote, would be to modify certain individuals to carry “deleterious genes” that could be passed on generation after generation until they pervaded the entire population. It was an idea before its time. Knipling was on the right track, but he and his contemporaries lacked the tools to see it through.

    The concept surfaced a few more times before being picked up by Austin Burt, an evolutionary biologist and population geneticist at Imperial College London. It was the late 1990s, and Burt was busy with his yeast cells, studying their so-called homing endonucleases, enzymes that facilitate the copying of genes that code for themselves. Self-perpetuating genes, if you will. “Through those studies, gradually, I became more and more familiar with endonucleases, and I came across the idea that you might be able to change them to recognize new sequences,” Burt recalls.

    Other scientists were investigating endonucleases, too, but not in the way Burt was. “The people who were thinking along those lines, molecular biologists, were thinking about using these things for gene therapy,” Burt says. “My background in population biology led me to think about how they could be used to control populations that were particularly harmful.”

    In 2003, Burt penned an influential article that set the course for an entire field: We should be using homing endonucleases, a type of gene drive, to modify malaria-carrying mosquitoes, he said, not ourselves. Burt saw two ways of going about it—one, modify a mosquito’s genome to make it less hospitable to malaria, and two, skew the sex ratio of mosquito populations so there are no females for the males to reproduce with. In the following years, Burt and his collaborators tested both in the lab and with computer models before they settled on sex ratio distortion. (Making mosquitoes less hospitable to malaria would likely be a stopgap measure at best; the Plasmodium protozoans could evolve to cope with the genetic changes, just like they have evolved resistance to drugs.)

    Burt has spent the last 11 years refining various endonucleases, playing with different scenarios of inheritance, and surveying people in malaria-infested regions. Now, he finally feels like he is closing in on his ultimate goal. “There’s a lot to be done still,” he says. “But on the scale of years, not months or decades.”

    Cheating Natural Selection

    Cas9-based gene drives could compress that timeline even further. One half of the equation—gene drives—is the literal driving force behind proposed population-scale genetic engineering projects. They essentially let us exploit evolution to force a desired gene into every individual of a species. “To anthropomorphize horribly, the goal of a gene is to spread itself as much as possible,” Esvelt says. “And in order to do that, it wants to cheat inheritance as thoroughly as it can.” Gene drives are that cheat.

    Without gene drives, traits in genetically engineered organisms released into the wild are vulnerable to dilution through natural selection. For organisms that have two parents and two sets of chromosomes (which includes humans, many plants, and most animals), traits typically have only a 50-50 chance of being inherited, give or take a few percent. Genes inserted by humans face those odds when it comes time to be passed on. But when it comes to survival in the wild, a genetically modified organism’s odds are often less than 50-50. Engineered traits may be beneficial to humans, but ultimately they tend to be detrimental to the organism without human assistance. Even some of the most painstakingly engineered transgenes will be gradually but inexorably eroded by natural selection.

    Some naturally occurring genes, though, have over millions of years learned how to cheat the system, inflating their odds of being inherited. Burt’s “selfish” endonucleases are one example. They take advantage of the cell’s own repair machinery to ensure that they show up on both chromosomes in a pair, giving them better than 50-50 odds when it comes time to reproduce.

    A gene drive (blue) always ends up in all offspring, even if only one parent has it. That means that, given enough generations, it will eventually spread through the entire population.

    Here’s how it generally works. The term “gene drive” is fairly generic, describing a number of different systems, but one example involves genes that code for an endonuclease—an enzyme which acts like a pair of molecular scissors—sitting in the middle of a longer sequence of DNA that the endonuclease is programmed to recognize. If one chromosome in a pair contains a gene drive but the other doesn’t, the endonuclease cuts the second chromosome’s DNA where the endonuclease code appears in the first.

    The broken strands of DNA trigger the cell’s repair mechanisms. In certain species and circumstances, the cell unwittingly uses the first chromosome as a template to repair the second. The repair machinery, seeing the loose ends that bookend the gene drive sequence, thinks the middle part—the code for the endonuclease—is missing and copies it onto the broken chromosome. Now both chromosomes have the complete gene drive. The next time the cell divides, splitting its chromosomes between the two new cells, both new cells will end up with a copy of the gene drive, too. If the entire process works properly, the gene drive’s odds of inheritance aren’t 50%, but 100%.

    Here, a mosquito with a gene drive (blue) mates with a mosquito without one (grey). In the offspring, one chromosome will have the drive. The endonuclease then slices into the drive-free DNA. When the strand gets repaired, the cell’s machinery uses the drive chromosome as a template, unwittingly copying the drive into the break.
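
    The arithmetic behind that spread is worth making concrete. Here is a toy Python model under idealized assumptions (random mating, and a perfectly efficient drive that converts every one-copy individual into a two-copy one), so it sketches the principle rather than predicting any real population:

    # With perfect homing, the wild-type allele survives only when both of an
    # offspring's inherited chromosomes lack the drive, so the drive allele's
    # frequency q obeys q -> 1 - (1 - q)**2 each generation. An ordinary
    # allele starting at 1% would simply stay near 1% (Hardy-Weinberg).
    def drive_spread(q=0.01, generations=10):
        for g in range(1, generations + 1):
            q = 1 - (1 - q) ** 2  # heterozygotes converted to homozygotes
            print(f"generation {g}: drive allele at {100 * q:.1f}%")

    drive_spread()  # ~2%, 4%, 8%, 15%, 27%, 47%, 72%, 92%, 99%, 100%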

    Most natural gene drives are picky about where on a strand of DNA they’ll cut, so they need to be modified if they’re to be useful for genetic engineering. For the last few years, geneticists have tried using genome-editing tools to build custom gene drives, but the process was laborious and expensive. With the discovery of CRISPR-Cas9 as a genome editing tool in 2012, though, that barrier evaporated. CRISPR is an ancient bacterial immune system which identifies the DNA of invading viruses and sends in an endonuclease, like Cas9, to chew it up. Researchers quickly realized that Cas9 could easily be reprogrammed to recognize nearly any sequence of DNA. All that’s needed is the right RNA sequence—easily ordered and shipped overnight—which Cas9 uses to search a strand of DNA for where to cut. This flexibility, Esvelt says, “lets us target, and therefore edit, pretty much anything we want.” And quickly.
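
    In software terms, retargeting is a string-matching problem: Cas9 cuts wherever its guide RNA finds a roughly 20-letter match followed by an “NGG” motif (the PAM). A minimal Python sketch, with made-up sequences for illustration:

    import re

    def cas9_sites(genome, guide):
        # positions where the 20-nt guide is immediately followed by an NGG PAM
        pattern = re.escape(guide) + r"[ACGT]GG"
        return [m.start() for m in re.finditer(pattern, genome)]

    genome = "TTACGTACGTACGTACGTACGTAGGCCTT"  # toy sequence
    guide = "ACGTACGTACGTACGTACGT"           # 20 nt, followed above by "AGG"
    print(cas9_sites(genome, guide))         # [2]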

    Gene drives and Cas9 are each powerful on their own, but together they could significantly change biology. CRISPR-Cas9 allows researchers to edit genomes with unprecedented speed, and gene drives allow engineered genes to cheat the system, even if the altered gene weakens the organism. Simply by being coupled to a gene drive, an engineered gene can race throughout a population before it is weeded out. “Eventually, natural selection will win,” Esvelt says, but “gene drives just let us get ahead of the game.”

    Beyond Mosquitoes

    If there’s anywhere we could use a jump start, it’s in the fight against malaria. Each year, the disease kills over 200,000 people and sickens over 200 million more, most of whom are in Africa. The best new drugs we have to fight it are losing ground; the Plasmodium parasite is evolving resistance too quickly.

    False-colored electron micrograph of a Plasmodium sp. sporozoite.

    And we’re nowhere close to releasing an effective vaccine. The direct costs of treating the disease are estimated at $12 billion per year, and the economies of affected countries grow an estimated 1.3% less per year because of it, a substantial amount.

    Which is why Esvelt and Burt are both so intently focused on the disease. “If we target the mosquito, we don’t have to face resistance on the parasite itself. The idea is, we can just take out the vector and stop all transmission. It might even lead to eradication,” Esvelt says.

    Esvelt initially mulled over the idea of building Cas9-based gene drives in mosquitoes to do just that. He took the idea to Flaminia Catteruccia, a professor who studies malaria at the Harvard School of Public Health, and the two grew increasingly certain that such a system would not only work, but work well. As their discussions progressed, though, Esvelt realized they were “missing the forest for the trees.” Controlling malaria-carrying mosquitoes was just the start. Cas9-based gene drives were the real breakthrough. “If it lets us do this for mosquitoes, what is to stop us from potentially doing it for almost anything that is sexually reproducing?” he realized.

    In theory, nothing. But in reality, the system works best on fast-reproducing species, Esvelt says. Short generation times allow the trait to spread throughout a population more quickly. Mosquitoes are a perfect test case. If everything were to work perfectly, deleterious traits could sweep through populations of malaria-carrying mosquitoes in as few as five years, wiping them off the map.

    Other noxious species could be candidates, too. Certain invasive species, like mosquitoes in Hawaii or Asian carp in the Great Lakes, could be targeted with Cas9-based gene drives to either reduce their numbers or eliminate them completely. Agricultural weeds like horseweed that have evolved resistance to glyphosate, a herbicide that is broken down quickly in the soil, could have their susceptibility to the compound reintroduced, enabling more farmers to adopt no-till practices, which help conserve topsoil. And in the more distant future, Esvelt says, weeds could even be engineered to introduce vulnerabilities to completely benign substances, eliminating the need for toxic pesticides. The possibilities seem endless.

    The Decision

    Before any of that can happen, though, Esvelt and Church are adamant that the public help decide whether the research should move forward. “What we have here is potentially a general tool for altering wild populations,” Esvelt says. “We really want to make sure that we proceed down this path—if we decide to proceed down this path—as safely and responsibly as possible.”

    To kickstart the conversation, they partnered with the MIT political scientist Kenneth Oye and others to convene a series of workshops on the technology. “I thought it might be useful to get into the room people with slightly different material interests,” Oye says, so they invited regulators, nonprofits, companies, and environmental groups. The idea, he says, was to get people to meet several times, to gain trust before “decisions harden.” Despite the diverse viewpoints, Oye says there was surprising agreement among participants about what the important outstanding questions were.

    As the discussion enters the public sphere, tensions are certain to intensify. “I don’t care if it’s a weed or a blight, people still are going to say this is way too massive a genetic engineering project,” Caplan says. “Secondly, it’s altering things that are inherited, and that’s always been a bright line for genetic engineering.” Safety, too, will undoubtedly be a concern. As the power of a tool increases, so does its potential for catastrophe, and Cas9-based gene drives could be extraordinarily powerful.

    There’s also little in the way of precedent that we can use as a guide. Our experience with genetically modified foods would seem to be a good place to start, but engineered crops are relatively niche organisms, heavily dependent on water and fertilizer. It’s pretty easy to keep them contained to a field. Not so with wild organisms; their potential to spread isn’t as limited.

    Aware of this, Esvelt and his colleagues are proposing a number of safeguards, including reversal drives that can undo earlier engineered genes. “We need to really make sure those work if we’re proposing to build a drive that is intended to modify a wild population,” Esvelt says.

    There are still other possible hurdles to surmount—lab-grown mosquitoes may not interbreed with wild ones, for example—but given how close this technology is to prime time, Caplan suggests researchers hew to a few initial ethical guidelines. One, use species that are detrimental to human health and don’t appear to fill a unique niche in the wild. (Malaria-carrying mosquitoes seem to fit that description.) Two, do as much work as possible using computer models. And three, researchers should continue to be transparent about their progress, as they have been. “I think the whole thing is hugely exciting,” Caplan says. “But the time to really get cracking on the legal/ethical infrastructure for this technology is right now.”

    Church agrees, though he’s also optimistic about the potential for Cas9-based gene drives. “I think we need to be cautious with all new technologies, especially all new technologies that are messing with nature in some way or another. But there’s also a risk of doing nothing,” Church says. “We have a population of 7 billion people. You have to deal with the environmental consequences of that.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 9:11 am on March 17, 2015
    Tags: NOVA

    From NOVA: “An Inflammatory Theory of Brain Disease” 

    PBS NOVA

    25 Feb 2015
    Lauren Aguirre

    Beginning in March 2010, 882 men and women who had suffered traumatic brain injury were enrolled in a clinical trial to test whether administering the human hormone progesterone within four hours of injury would improve their outcome. While it’s often thought of as a one-time event, traumatic brain injury is better described as a disease: it’s irreversible, sometimes progressive, and often affects people for the rest of their lives. More than 5 million Americans—ranging from professional football players to Iraq war veterans and victims of car accidents—live with disabilities caused by traumatic brain injury.

    Microglia, seen here stained green, are part of the brain’s specialized immune system.

    One striking hallmark of traumatic brain injury is inflammation in the brain, which occurs shortly after the trauma and can cause swelling, tissue breakdown, and cell death. Because it can be so debilitating, a lot of research has gone into finding ways to limit the damage in the hours immediately following injury. Progesterone can interfere with inflammation and is also thought to stimulate repair, so it was considered a promising candidate for reducing brain damage. Plus, the hormone is cheap and widely available.

    Experimental animal models and two early, small clinical trials had all shown positive results. After years of failing to find effective medications, hopes were high for this new approach.

    The Role of Inflammation

    While inflammation is harmful in the case of traumatic brain injury, it is also critical for our survival. When our immune system encounters a microbe or when we bruise our knees, the resulting inflammation rushes key cells and proteins to the site to fight the infection or encourage healing. But there are times when inflammation doesn’t know when to quit, and many doctors and researchers believe it plays a role in a range of chronic diseases. The growing list goes beyond autoimmune diseases, such as arthritis, diabetes, or multiple sclerosis, to include cardiovascular disease and possibly even brain diseases such as Alzheimer’s, Parkinson’s, epilepsy, or depression.

    Here’s how the immune system is supposed to work. Let’s say you slam a car door on your finger. That causes tissue damage and possibly infection—stuff that doesn’t belong there and looks foreign to the body. White blood cells and other molecules swarm in, wall off the damaged area, and attack the invaders and the damaged tissue. The area gets hot, red, swollen, and painful. Clean-up cells like macrophages—which means “big eaters” in Greek—gobble up the garbage. Once the damage has been contained, other immune molecules begin the repair process and the inflammation subsides.

    But inflammation also causes collateral damage, a sort of friendly fire. The same processes that get rid of foreign agents can damage good cells as well. The death of those cells can in turn trigger further inflammation. For reasons that remain unclear, sometimes this creates a vicious cycle that becomes self-sustaining. Steven Meier, a neuroscientist at the University of Colorado who researches how the brain regulates immune responses, points out that, “like many, many other adaptive mechanisms that are adaptive when they’re activated briefly, they may not be so adaptive when they’re activated chronically.”

    For decades, researchers have noticed a link between ongoing inflammation and cardiovascular disease. Today it’s widely accepted that the immune system’s response to plaques of low-density lipoproteins, or LDL, on blood vessel walls plays a pivotal role in the progression of the disease. Sensing these plaques as foreign invaders, white blood cells and other molecules that are meant to protect the body turn into its own worst enemy. Instead of healing the body, white blood cells become trapped inside the plaques, provoking further inflammation and allowing the plaques to continue to grow. Eventually one of those plaques can break off and cause a clot, with potentially disastrous results.

    Though it may be going too far to call inflammation a grand unifying theory of chronic disease, the link between the two is a focus of labs around the world. “I do think inflammation is an important element, and maybe at the heart of a variety of disorders,” Meier says, “and does account for a lot of the comorbidity that occurs between disorders. Why on earth is there comorbidity between depression and heart disease? But once you start thinking about inflammation, you realize they may be both inflammatory disorders or at least involve an inflammatory element.”

    In the last decade, interest in the relationship between inflammation and brain disease in particular has exploded. Tantalizing associations abound. For example, some population-based studies of Alzheimer’s patients suggested that people who took non-steroidal anti-inflammatories—so-called NSAIDs like aspirin or ibuprofen—for long periods have a reduced risk of developing Alzheimer’s. Low-grade systemic inflammation, as measured by higher than normal levels of certain inflammatory molecules in the blood, has been found in people with depression. And in children with severe epilepsy, techniques to reduce inflammation have succeeded in stopping their seizures in cases where all other attempts had failed.

    The Brain’s Immune System

    The key is the brain’s unique immune system, which works slightly differently from the immune system in the rest of the body. For starters, it’s less heavy-handed. “The immune system, during evolution, learned that, ‘This is the brain, this is the nervous system. I cannot really live without it, so I have to be very, very, careful,’” says Bibiana Bielekova, chief of the Neuroimmunological Diseases Unit at the NIH.

    The first line of defense for the central nervous system is the blood brain barrier, which lines the thousands of miles of blood vessels in your brain. It is largely impermeable, for the most part letting in only glucose, oxygen, and other nutrients that brain cells need to function. This prevents most of the toxins and infectious agents we encounter daily from coming into contact with our brain’s delicate neurons and fragile microenvironment, preserving the brain’s balance of electrolytes—such as potassium—which if disturbed can wreak havoc on the electrical signaling required for normal brain function. Normally the blood brain barrier is very selective about what it invites inside the brain, but when the barrier gets damaged, for example because of a traumatic brain injury, dangerous molecules and immune cells that aren’t supposed to be there can slip inside.

    The second line of defense consists of microglia, the brain’s specialized macrophages, which migrate into the brain and take up permanent residence. Typically, microglia have a spindly, tree-like structure. Their branches are in constant motion, which allows them to scan the environment, but also delicate enough to do so without damaging neural circuits. However, when they’re activated by injury or infection, microglia multiply, shape-shift into blobby, amoeba-like structures, release inflammatory chemicals, and engulf damaged cells, tissue debris, or microbes.

    Mounting Failures in Clinical Trials

    Late last year, the results of the progesterone-traumatic brain injury study with 882 patients were announced. Despite the apparent promise, patients who took progesterone following the initial brain injury fared no better than those on placebo. In fact, those who took placebo had slightly better outcomes. Women who took progesterone fared slightly worse. And episodes of phlebitis, or blood clots, were significantly more frequent in patients taking progesterone. A second study that enrolled 1,195 patients was also shut down when it showed no benefit.

    These two efforts are far from the only disappointing clinical trials that have tested anti-inflammatory treatments to intervene in brain diseases. A trial that used the antibiotic minocycline in ALS patients to reduce inflammation and cell death wound up harming more than helping. Alzheimer’s trials that attempted to reproduce the population effect that had been seen using NSAIDs also failed. In fact, in older patients the drugs appeared to make their symptoms worse.

    Another trial that attempted to circumvent inflammation by “vaccinating” with amyloid beta, the plaques that are one of the hallmarks of the disease, had to be discontinued after it caused inflammation of the brain and the membranes surrounding it in some patients. “Any time you intervene in any of these complicated biological processes that involve multiple proteins, multiple pathways, multiple loops, it’s going to be very complicated,” Meier notes.

    One reason why ongoing inflammation is assumed to be driving—if not instigating—brain diseases is that activated microglia are seen in the brains of these patients. However, activated microglia are not always bad. They also help protect the brain by shielding damaged areas from healthy regions, clearing debris, and initiating other complex anti-inflammatory processes that are far from fully understood. Bielekova points out that, “if you just see immune cells in the tissue, it’s very hard to say if they are playing bad guys or good guys.” In fact, a recent study published by the Yale School of Medicine in Nature Communications shows that microglia, at least in mice, appear to protect the brain by walling off the plaques from the surrounding environment. It’s possible, then, that tamping down microglia activation could actually make things worse.

    The difficulty of figuring out how to intervene in an immune response without turning off necessary functions may be just one reason why we haven’t seen major successes yet. Experts have pointed out a number of other reasons why so many trials have failed, from animal models that don’t translate well to humans and clinical trials that some would argue were poorly designed to the fact that once an inflammatory immune response has been well established, it becomes much harder to resolve.

    “Once you have this fully established chronic inflammation, it’s much, much more difficult to deliver effective treatments to those areas,” Bielekova says. “In multiple sclerosis, it is very clear that whichever drug you take that is efficacious, the efficacy decreases as you delay the treatment. So if you use the treatment very, very early on, every drug looks like a winner. But if you wait just a couple of years and take patients that are now three, four years longer into the disease duration, you may lose 50% of your drug’s efficacy.”

    Glimmers of Hope

    Disappointing results aside, there are hints that intervening early to tamp down inflammation can be helpful. The same data analysis that showed NSAIDs can actually speed up the progression of Alzheimer’s in patients in the advanced stages of the disease also revealed that those who started taking NSAIDs regularly in midlife, when their brains were healthier, had slower cognitive decline.

    Other approaches built around intervening early have yielded similar results. For example, last summer Genentech announced the results of a phase II trial testing the efficacy of crenezumab to treat Alzheimer’s disease. Crenezumab is an antibody that binds to amyloid beta, the protein that makes up the plaques scattered throughout the brain that are one of the main visible features of Alzheimer’s. The theory behind this choice of antibody was that it would stimulate microglia just enough to begin clearing the plaques, but not so much so that these immune cells would launch a major inflammatory response. While this phase II trial failed to meet its targets, patients in the early stages of the disease who had been given large doses showed slower cognitive decline.

    Damir Janigro, a blood brain barrier researcher at Cleveland Clinic who studies traumatic brain injury and epilepsy, has a very different take on how to approach brain diseases linked with inflammation. He considers both of these diseases to be “blood brain barrier” diseases because repeated seizures and traumatic brain injury can damage the blood brain barrier, making it leakier. That means that not only can substances that don’t belong inside the brain slip through, materials from inside the brain can travel to the rest of the body.

    “The blood brain barrier shields your brain, which is good for you. But then it’s bad for you if you leak a piece of your brain and this is considered an enemy” by the rest of your immune system, he says. Janigro is part of what he calls a “vocal minority” of researchers who look at inflammation outside the brain as being another cause of inflammation inside the brain—and potentially even a better target for treatment. “Neuroinflammation is probably bad for you. But it’s a very hard target to go after. Everybody who does is surprised that it fails, like the Alzheimer’s trial in pulling amyloid from the brain.”

    We’re still early in our understanding of how the brain’s immune system works, when it is damaging, and when it is protective. If inflammation is the common element in brain diseases, it may turn out that understanding how to intervene successfully in one disease will make it possible to use similar therapeutic approaches across many. But, because we don’t fully understand how the unfathomably complex immune system works, it is likely to be a long and difficult journey before we find ways to intervene safely and effectively.

    “If you look at the range of disorders and diseases there’s probably a continuum where in some it plays little or no role, with some it’s in between,” Meier says. “You can go too far with any of these unifying themes. There’s a natural tendency to want to do so. But I do think the inflammation story is not going away. I think it’s real.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 12:20 pm on March 14, 2015 Permalink | Reply
    Tags: , Cosmic dust, NOVA   

    From NOVA: “In the Past 24 Hours, 60 Tons of Cosmic Dust Have Fallen to Earth” 

    PBS NOVA

    NOVA

    13 Mar 2015
    Allison Eck

    1
    Sunlight reflecting off cosmic dust particles creates an effect known as “zodiacal light.”

    Every day, bits of outer space rain down on the Earth.

    Left over from our solar system’s birth 4.6 billion years ago, cosmic dust is pulled into our atmosphere as the planet passes through decayed comet tails and other regions of chunky space rock. Occasionally, it arrives on Earth in the form of visible shooting stars.

    But the amount of space dust that Earth accumulates is maddeningly difficult to determine. Measurements from spacecraft solar panels, polar ice cores, and meteoric smoke have all been used to attempt an answer, but the estimates vary widely, from 0.4 to 110 tons per day.

    Now, a new paper claims to have narrowed that range. Here’s Mary Beth Griggs, writing for Popular Science:

    [A] recent paper took a closer look at the levels of sodium and iron in the atmosphere using Doppler Lidar, an instrument that can measure changes in the composition of the atmosphere. Because the amount of sodium in the atmosphere is proportional to the amount of cosmic dust in the atmosphere, the researchers figured out that the actual amount of dust falling to the earth is along the lines of 60 tons per day.

    The scientists published their results in the Journal of Geophysical Research.

    It may sound like an academic exercise, but determining how much cosmic dust falls on the Earth can help us better understand a number of critical processes, such as cloud formation in the upper atmosphere and the fertilization of plankton in Antarctica. It could also help settle whether the Earth is gaining mass each year or losing it. (Our planet constantly leaks gases into space.)
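
    The scaling at work in both claims is simple enough to sketch. Below is a back-of-the-envelope illustration only, not the paper’s model: the sodium mass fraction of chondritic dust (roughly half a percent) and the often-quoted hydrogen escape figure are assumptions I am supplying, not numbers from the study.

    ```python
    # Back-of-the-envelope sketch of the reasoning above, not the paper's model.
    # Assumed value: the sodium mass fraction of chondritic dust (~0.5%).

    NA_MASS_FRACTION = 0.005   # assumed sodium fraction of cosmic dust, by mass

    # The method scales a measured sodium influx up to a total dust influx.
    # Working backwards from the paper's headline figure of 60 tonnes/day,
    # the sodium signal the lidar tracks corresponds to only:
    dust_per_day = 60.0                               # tonnes/day (from the paper)
    sodium_per_day = dust_per_day * NA_MASS_FRACTION  # ~0.3 tonnes/day

    # Annual mass balance: dust gained versus gas lost to space. The escape
    # figure is an often-quoted external estimate, not from this paper.
    dust_gain = dust_per_day * 365        # ~21,900 tonnes/year gained
    hydrogen_loss = 95_000                # tonnes/year lost (assumed estimate)

    print(f"Implied sodium influx: {sodium_per_day:.2f} tonnes/day")
    print(f"Dust gained per year:  {dust_gain:,.0f} tonnes")
    print(f"Net change:            {dust_gain - hydrogen_loss:+,.0f} tonnes/year")
    ```

    On those assumed numbers the planet comes out a net loser of mass; with a different escape estimate the balance could shift, which is why pinning down the dust figure matters.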

    Some of that cosmic dust is probably in you and me. While most of the dust that rains down from the heavens settles to the ground, we likely consume a tiny fraction of it in our food or inhale it. “We are made of star stuff”—Carl Sagan’s famous quote—rings truer than ever.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 3:42 am on March 6, 2015 Permalink | Reply
    Tags: , , NOVA   

    From NOVA: “Powerful, Promising New Molecule May Snuff Antibiotic Resistant Bacteria” 

    PBS NOVA

    NOVA

    09 Jan 2015
    R.A. Becker

    1
    Methicillin-resistant staph bacteria surround a human immune cell.

    Antibiotic-resistant bacteria pose one of the greatest threats to public health. Without new weapons in our arsenal, these bugs could cause 10 million deaths a year and cost nearly $100 trillion worldwide by 2050, according to a recent study commissioned by the British government.

    But just this week, scientists announced that they have discovered a potent new weapon hiding in the ground beneath our feet—a molecule that kills drug-resistant bacteria and might itself be resistant to resistance. The team published their results Wednesday in the journal Nature.

    Scientists have been co-opting the arsenal of soil-dwelling microorganisms for some time, said Kim Lewis, professor at Northeastern University and senior investigator of the study. Earth-bound bacteria live tightly packed in an intensely competitive environment, which has led to a bacterial arms race. “The ones that can kill their neighbors are going to have an advantage,” Lewis said. “So they go to war with each other with antibiotics, and then we borrow their weapons to fight our own pathogens.”

    However, by the 1960s, the returns from these efforts were dwindling. Not all bacteria that grow in the soil are easy to culture in the lab, and so antibiotic discovery slowed. Lewis attributes this to the interdependence of many soil-dwelling microbes, which makes it difficult to grow only one strain in the lab when it has been separated from its neighbors. “They kill some, and then they depend on some others. It’s very complex, just like in the human community,” he said.

    But a new device called the iChip, developed by Lewis’s team in collaboration with NovoBiotic Pharmaceuticals and colleagues at the University of Bonn, enables researchers to isolate bacteria reluctant to grow in the lab and cultivate them instead where they’re comfortable—in the soil.

    Carl Nathan, chairman of microbiology and immunology at Weill Cornell Medical School and co-author of a recent New England Journal of Medicine commentary about the growing threat of antibiotic resistance, called the team’s discovery “welcome,” adding that it illustrates a point that Lewis has been making for several years, that soil’s well of antibiotic-producing microorganisms “is not tapped out.”

    The researchers began by growing colonies of formerly un-culturable bacteria on their home turf and then evaluating their antimicrobial defenses. They discovered that one bacterium in particular, which they named Eleftheria terrae, makes a molecule known as teixobactin, which kills several different kinds of bacteria, including the ones that cause tuberculosis, anthrax, and even drug-resistant staph infections.

    Teixobactin isn’t the first promising new antibiotic candidate, but it does have one quality that sets it apart from others. In many cases, even if a new antibiotic is able to kill bacteria resistant to our current roster of drugs, it may eventually succumb to the same resistance that felled its predecessors. (Resistance occurs when the few bacteria strong enough to evade a drug’s killing effects multiply and pass on their genes.)

    Unlike current antibiotic options, though, teixobactin attacks two lipid building blocks of the cell wall, which many bacterial strains can’t live without. Because the drug attacks such key parts of the cell, it is much harder for a bacterium to mutate its way out of being killed.
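
    The arithmetic behind that logic is worth making explicit. In the minimal sketch below, the per-target resistance frequency and infection size are generic order-of-magnitude assumptions chosen for illustration; they are not measurements from the Nature paper.

    ```python
    # Minimal sketch of why a multi-target antibiotic resists resistance.
    # Both numbers below are generic order-of-magnitude assumptions.

    single_target_freq = 1e-9   # assumed chance a given cell is already resistant
    infection_size = 1e10       # assumed bacterial population in a serious infection

    # Resistance to two independent essential targets must arise in the
    # same cell, so the per-cell frequencies multiply.
    dual_target_freq = single_target_freq ** 2   # ~1e-18

    print(f"Expected pre-existing mutants, one target:  {single_target_freq * infection_size:.0f}")
    print(f"Expected pre-existing mutants, two targets: {dual_target_freq * infection_size:.0e}")
    ```

    On those assumptions, a single-target drug confronts an infection that already contains a handful of resistant cells, while a dual-target drug faces essentially none, which is the arithmetic behind Nathan’s remark below that the frequency of resistance should be very low.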

    “This is very hopeful,” Nathan said. “It makes sense that the frequency of resistance would be very low because there’s more than one essential target.” He added, however, that given the many ways in which bacteria can avoid being killed by pharmaceuticals, “Is this drug one against which no resistance will arise? I don’t think that’s actually proved.”

    Teixobactin has not yet been tested in humans. Lewis said the next steps will be to conduct detailed preclinical studies as well as work on improving teixobactin’s molecular structure to solve several practical problems. One they hope to address, for example, is its poor solubility; another is that it isn’t readily absorbed when given orally—as is, it will have to be administered via injection.

    While Lewis predicts that the drug will not be available for at least five years, this new method offers a promising new avenue of drug discovery. Nathan agrees, though he cautions it’s too soon to claim victory. The message of this recent finding, he said, “is not that the problem of antibiotic resistance has been solved and we can stop worrying about it. Instead it’s to say that there’s hope.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 5:33 am on February 28, 2015 Permalink | Reply
    Tags: , NOVA,   

    From NOVA: “A Brief History of the Speed of Light” 

    PBS NOVA

    NOVA

    27 Feb 2015
    Jennifer Ouellette

    One night over drinks at a conference in San Jose, Miles Padgett, a physicist at Glasgow University in Scotland, was chatting with a colleague about whether or not they could make light go slower than its “lawful” speed in a vacuum. “It’s just one of those big, fundamental questions you may want to ask yourself at some point in the pub one night,” he told BBC News. Though light slows down when it passes through a medium, like water or air, the speed of light in a vacuum is usually regarded as an absolute.

    1
    Image: Flickr user Steve Oldham, adapted under a Creative Commons license.

    This time, the pub talk proved to be a particularly fruitful exchange. Last month, Padgett and his collaborators made headlines when they revealed their surprising success: They raced two photons down a one-meter “track” and managed to slow one down just enough that it finished a few millionths of a meter behind its partner. The experiment showed that it is possible for light to travel at a slower speed even in free space—and Padgett and his colleagues did it at the scale of individual photons.

    The notion that light has a particular speed, and that that speed is measurable, is relatively new. Prior to the 17th century, most natural philosophers assumed light traveled instantaneously. Galileo was one of the first to test this notion, which he did with the help of an assistant and two shuttered lanterns. First, Galileo would lift the shutter on his lantern. When his assistant, standing some distance away, saw that light, he would lift the shutter on his lantern in response. Galileo then timed how long it took for him to see the return signal from his assistant’s lantern, most likely using a water clock, or possibly his pulse. “If not instantaneous, it is extraordinarily rapid,” Galileo concluded, estimating that light travels at about ten times the speed of sound.

    Over the ensuing centuries, many other scientists improved upon Galileo’s work by devising ingenious new methods for measuring the speed of light. Their results fell between 200,000 kilometers per second, recorded in 1675 by Ole Roemer, who made his measurement by studying eclipse patterns in Jupiter’s moons, and 313,000 kilometers per second, recorded in 1849 by Hippolyte Louis Fizeau, who sent light through a rotating toothed wheel and then reflected it back with a mirror. The current accepted value is 299,792.458 kilometers per second, or about 670,600,000 miles per hour. Physicists represent this value with the constant c, and it is broadly understood to be the cosmic speed limit: all observers, no matter how fast they are going, will agree on it, and nothing can go faster.
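
    For the curious, the comparison is a two-line calculation; the snippet below simply restates the figures already given.

    ```python
    # Historical measurements of c versus the modern defined value,
    # plus the km/s-to-mph conversion.

    C_KM_S = 299_792.458             # modern value, km/s
    MPH_PER_KM_S = 3600 / 1.609344   # one km/s expressed in miles per hour

    measurements = {
        "Roemer (1675, Jupiter's moons)": 200_000,
        "Fizeau (1849, toothed wheel)":   313_000,
    }

    print(f"Modern value: {C_KM_S * MPH_PER_KM_S:,.0f} mph")
    for name, km_s in measurements.items():
        error = 100 * (km_s - C_KM_S) / C_KM_S
        print(f"{name}: {km_s:,} km/s ({error:+.0f}% versus the modern value)")
    ```

    Roemer came in about a third low and Fizeau only a few percent high, which is remarkable given the tools of their eras.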

    This limit refers to the speed of light in a vacuum—empty space, with no “stuff” in it with which light can interact. Light traveling through air, water, or glass, for example, will move more slowly as it interacts with the atoms in that substance. In some cases, light will move so slowly that other particles shoot past it. This can create Cherenkov radiation, a “photonic boom” shockwave that can be seen as a flash of blue light. That telltale blue glow is common in certain types of nuclear reactors. (Doctor Manhattan, the ill-fated atomic scientist in Alan Moore’s classic “Watchmen” graphic novel, sports a Cherenkov-blue hue.) Cherenkov radiation is also useful for radiation therapy and the detection of high-energy particles such as neutrinos and cosmic rays—and perhaps one day, dark matter particles—none of which would be possible without the ability of certain materials to slow down light.

    But just how slow can light go? In his 1933 novel Master of Light, French science fiction writer Maurice Renard imagined a special kind of “slow glass” through which light would take 100 years to pass. Slow glass is very much the stuff of fiction, but it has an intriguing real-world parallel in an exotic form of matter known as a Bose-Einstein Condensate (BEC), which exploits the wave nature of matter to stop light completely. At normal temperatures atoms behave a lot like billiard balls, bouncing off one another and any containing walls. The lower the temperature, the slower they go. At billionths of a degree above absolute zero, if the atoms are densely packed enough, the matter waves associated with each atom will be able to “sense” one another and coordinate themselves as if they were one big “superatom.”

    First predicted in the 1920s by Albert Einstein and the Indian physicist Satyendra Bose, BEC wasn’t achieved in the lab until 1995. The Nobel Prize-winning research quickly launched an entirely new branch of physics, and in 1999, a group of Harvard physicists realized they could slow light all the way down to 17 meters per second (about 38 miles per hour) by passing it through a BEC made of ultracold sodium atoms. Within two years, the same group succeeded in stopping light completely in a sodium BEC.

    What was so special about the recent Glasgow experiments, then? Usually, once light exits a medium and enters a vacuum, it speeds right back up again, because the reduced velocity is due to changes in what’s known as phase velocity. Phase velocity tracks the motion of a particular point, like a peak or trough, in a light wave, and it is related to a material’s refractive index, which determines just how much that material will slow down light.
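
    To make the phase-velocity slowdown concrete: in a medium of refractive index n, the phase velocity is c/n. The quick illustration below uses common approximate indices that I am supplying; they are not figures from the article.

    ```python
    # Phase velocity in a medium: v = c / n, where n is the refractive index.
    # The indices below are common approximate values.

    C = 299_792.458   # speed of light in vacuum, km/s

    for medium, n in [("air", 1.0003), ("water", 1.33), ("glass", 1.5)]:
        print(f"Light in {medium}: n = {n}, phase velocity = {C / n:,.0f} km/s")
    ```

    This is why light snaps back to full speed the instant it leaves the glass or water: the slowdown lives entirely in the medium.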

    Padgett and his team found a way to keep the brakes on in their experiment by focusing on a property of light known as group velocity. Padgett likens the effect to a subatomic bicycle race, in which the photons are like riders grouped together in a peloton (light beam). As a group, they appear to be moving together at a constant speed. In reality, some individual riders slow down, while others speed up. The difference, he explained to BBC News, is that instead of using a light pulse made up of many photons, “We measure the speed of a single photon as it propagates, and we find it’s actually being slowed below the speed of light.”

    The Glasgow researchers used a special liquid crystal mask to impose a pattern on one of two photons in a pair. Because light can act like both a particle and a wave—the famous wave-particle duality—the researchers could use the mask to reshape the wavefront of that photon, so instead of spreading out like an ocean wave traveling to the shore, it was focused onto a point. That change in shape corresponded to a slight decrease in speed. To the researchers’ surprise, the light continued to travel at the slightly slower speed even after leaving the confines of the mask. Because the two photons were produced simultaneously from the same light source, they should have crossed the finish line simultaneously; instead, the reshaped photon lagged just a few millionths of a meter behind its partner, evidence that it continued to travel at the slower speed even after passing through the medium of the mask.
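
    A simple angular-spectrum picture gives a feel for the size of the effect. The sketch below is my own illustrative model, not the team’s analysis, and the tilt angle is an assumed value chosen to land in the few-micron range the article describes: a focused wavefront is built from plane waves tilted at small angles θ to the beam axis, each tilted component advances along the axis at c·cos θ, and so over a track of length L the structured photon falls behind by roughly L(1 − cos θ).

    ```python
    import math

    # Illustrative angular-spectrum estimate of the lag for a transversely
    # structured photon. The tilt angle is an assumed, plausible value;
    # the paper's actual beam parameters differ.

    L = 1.0        # length of the "race track", metres (from the article)
    theta = 3e-3   # assumed effective tilt of the plane-wave components, radians

    # Each tilted component advances along the axis at c*cos(theta), so the
    # structured photon falls behind by L*(1 - cos(theta)) over the track.
    lag_metres = L * (1 - math.cos(theta))

    print(f"Lag over {L:.0f} m: {lag_metres * 1e6:.1f} micrometres")
    ```

    A tilt of a few milliradians yields a lag of a few micrometres over one metre, consistent with the “few millionths of a meter” the researchers reported.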

    Padgett and his colleagues are still pondering the next step in this intriguing line of research. One possibility is looking for a similar slow-down in light randomly scattered off a rough surface.

    If so, it would be one more bit of evidence that the speed of light, so often touted as an unvarying fundamental constant, is more malleable than physicists previously thought. University of Rochester physicist Robert Boyd, while impressed with the group’s ingenuity and technical achievement, calmly took the news in stride. “I’m not surprised the effect exists,” he told Science News. “But it’s surprising that the effect is so large and robust.”

    His nonchalance might strike non-physicists as strange: Shouldn’t this be momentous news poised to revolutionize physics? As always, there are caveats. When it comes to matters of light speed, it’s important to read the fine print. In this case, one must be careful not to confuse the speed at which light travels, which is just a feature of light, with its central role in special relativity, which holds that the speed of light is constant in all frames of reference. If Galileo measures the speed of light, he gets the same answer whether he is lounging at home in Pisa or cruising in a horse-drawn carriage. The same goes for his trusty assistant. This still holds true, centuries later, despite the exciting news out of Glasgow last month.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 4:54 am on February 23, 2015 Permalink | Reply
    Tags: , , NOVA   

    From NOVA: “In Once-Mysterious Epigenome, Scientists Find What Turns Genes On” 

    PBS NOVA

    NOVA

    19 Feb 2015
    R.A. Becker

    1
    A handful of new studies provide epigenetic roadmaps to understanding the human genome in action. (No image credit)

    Over a decade ago, the Human Genome Project deciphered the “human instruction book” of our DNA, but how cells develop vastly different functions using the same genetic instructional text has remained largely a mystery.

    As of yesterday, it became a bit less mysterious. A massive NIH consortium called the Roadmap Epigenomics Program published eight papers in the journal Nature reporting on its efforts to map epigenetic modifications, or the changes to DNA that don’t alter its code. These subtle modifications make genes more or less likely to be expressed, and the collection of epigenetic modifications is called the epigenome.

    One of the eight studies mapped over 100 epigenomes, characterizing every epigenetic modification occurring in human tissue cells. “These 111 reference epigenome maps are essentially a vocabulary book that helps us decipher each DNA segment in distinct cell and tissue types,” Roadmap researcher Bing Ren, a professor of cellular and molecular medicine at the University of California, San Diego, said in a news release. “These maps are like snapshots of the human genome in action.”

    This kind of mapping has challenged the field because of the huge amount of data needed to make sense of the chaotic arrangements of genes and their regulators. “The genome hasn’t nicely arranged the regulatory elements to be cheek by jowl with the elements they regulate,” Broad Institute director Eric Lander told Gina Kolata at The New York Times. “It can be very hard to figure out which regulator lines up with which genes.”

    Here’s how Lander described to Kolata the detective process they used:

    If you knew when service on the Red Line was disrupted and when various employees were late for work, you might be able to infer which employees lived on the Red Line, he said. Likewise, when a genetic circuit was shut down, certain genes would be turned off. That would indicate that those genes were connected, like the employees who were late to work when the Red Line shut down.
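
    That detective logic can be captured in a few lines of code. As a toy illustration only (the regulatory “lines,” genes, and tallies below are all invented, not Roadmap data), you can record which regulatory elements were disrupted and which genes went quiet across many conditions, then link each gene to the regulator whose disruptions best track its silencing:

    ```python
    # Toy version of the "Red Line" inference: link genes to regulators by
    # seeing which disruptions co-occur with which genes switching off.
    # All names and data below are invented for illustration.

    conditions = [
        # (regulatory elements disrupted, genes observed off) per experiment
        ({"red"},         {"geneA", "geneB"}),
        ({"blue"},        {"geneC"}),
        ({"red", "blue"}, {"geneA", "geneB", "geneC"}),
        ({"green"},       set()),
    ]

    cooccurrence = {}  # (regulator, gene) -> number of times seen together
    for disrupted, silenced in conditions:
        for reg in disrupted:
            for gene in silenced:
                cooccurrence[(reg, gene)] = cooccurrence.get((reg, gene), 0) + 1

    # Link each gene to the regulator that most often accompanies its silencing.
    genes = {g for _, silenced in conditions for g in silenced}
    for gene in sorted(genes):
        best = max((r for (r, g) in cooccurrence if g == gene),
                   key=lambda r: cooccurrence[(r, gene)])
        print(f"{gene} most often goes dark when the {best} element is disrupted")
    ```

    Real analyses work with hundreds of epigenomes and probabilistic models rather than simple counts, but the underlying inference, which disruptions travel with which outages, is the same.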

    Diseases can be linked to epigenetic variations as well. For example, another of the eight papers published yesterday proposed that the roots of Alzheimer’s disease lie in immune cell genetic dysfunction and epigenetic alterations in brain cells.

    Creating an epigenetic road map is a huge step, but it’s just a first step. As Francis Collins, who led the Human Genome Project, wrote in 2001 when the human genome had been mostly mapped, “This is not even the beginning of the end. But it may be the end of the beginning.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 5:03 am on February 16, 2015 Permalink | Reply
    Tags: , , NOVA   

    From NOVA: “The New Power Plants That Could Actually Remove Carbon from the Atmosphere” 

    PBS NOVA

    NOVA

    12 Feb 2015
    Tim De Chant

    1
    The Kemper County Energy Facility, seen here under construction, will use CCS, one of the two technologies proposed for negative-carbon power plants.

    What’s better than a zero-carbon source of electricity like solar or wind? One that removes carbon from the atmosphere—a negative-carbon source.

    It’s entirely possible, too. By combining two existing, though still not entirely proven, technologies, researchers have devised a strategy that would allow much of western North America to go carbon negative by 2050. In just a few short decades, we could scrub carbon dioxide from the air and reverse the emissions trend that’s causing climate change.

    The trick involves pairing power plants that burn biomass with carbon capture and sequestration equipment, also known as CCS. While politicians and engineers in the U.S. have been trying—unsuccessfully—to build commercial-scale, coal-fired CCS power plants for more than a decade, the technology is well understood. Originally envisioned as a way to keep dirty coal plants in operation, CCS may be even better suited for biomass power plants, which burn plant material, essentially turning them into carbon dioxide scrubbers that also happen to produce useful amounts of electricity.

    2
    Schematic showing both terrestrial and geological sequestration of carbon dioxide emissions from a coal-fired plant

    The power plants would take excess biomass, burn it just as they would coal, and then concentrate and inject the emitted carbon dioxide deep into the earth, where it would remain sequestered for generations, if not millennia. (Technically, it’s the plants in this scenario that are scrubbing carbon from the atmosphere, but the CCS equipment ensures it doesn’t return.)

    John Timmer, writing for Ars Technica:

    The authors estimate that it would be economically viable to put up to 10GW of biomass powered plants onto the grid, depending on the level of emissions limits; that corresponds to a bit under 10 percent of the expected 2050 demand for electricity. The generating plants would be supplied with roughly 2,000 PetaJoules of energy in the form of biomass, primarily from waste and residue from agriculture, supplemented by municipal and forestry waste. In all low-emissions scenarios, over 90 percent of the available biomass supply ended up being used for electricity generation.

    Dedicated bioenergy crops are more expensive than simply capturing current waste, and they therefore account for only about seven percent of the biomass used, which helpfully ensures that the transition to biomass would come with minimal land-use changes.
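
    Those quoted figures hang together arithmetically. As a quick sanity check using only the numbers above (the round-the-clock capacity factor is my simplifying assumption, making this a rough upper bound):

    ```python
    # Sanity check on the quoted figures: 10 GW of biomass capacity fed by
    # 2,000 PJ of biomass per year. The 100% capacity factor is an
    # illustrative assumption to get an upper bound.

    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    capacity_gw = 10
    biomass_supply_pj = 2_000

    electricity_pj = capacity_gw * 1e9 * SECONDS_PER_YEAR / 1e15   # ~316 PJ
    efficiency = electricity_pj / biomass_supply_pj

    print(f"Electricity generated: {electricity_pj:.0f} PJ/year")
    print(f"Implied biomass-to-electricity efficiency: {efficiency:.0%}")
    ```

    A bit over 300 PJ of electricity from 2,000 PJ of biomass implies a conversion efficiency in the mid-teens, which is plausible for biomass combustion carrying the extra parasitic load of capture equipment.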

    The tidy proposal suggests that we could add these power plants to actively remove carbon from the atmosphere while, as Timmer points out, still allowing us to use fossil fuels like natural gas to help stabilize the grid. In fact, the biomass plants equipped with CCS could begin their lives burning coal while the market for biomass waste collection and distribution develops, smoothing the transition.

    There’s still the matter of shifting the current system, which favors fossil fuels, over to this more diverse mix. But it’s a sign that, with the right investments, we could achieve some very audacious reductions in carbon dioxide emissions in a very short time.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 11:55 am on February 12, 2015 Permalink | Reply
    Tags: , NOVA,   

    From NOVA: “Does Science Need Falsifiability?” 

    PBS NOVA

    NOVA

    11 Feb 2015
    Kate Becker

    If a theory doesn’t make a testable prediction, it isn’t science.

    It’s a basic axiom of the scientific method, dubbed “falsifiability” by the 20th-century philosopher of science Karl Popper. General relativity passes the falsifiability test because, in addition to elegantly accounting for previously observed phenomena like the precession of Mercury’s orbit, it also made predictions about as-yet-unseen effects—how light should bend around the Sun, the way clocks should seem to run slower in a strong gravitational field, and others that have since been borne out by experiment. On the other hand, theories like Marxism and Freudian psychoanalysis failed the falsifiability test—in Popper’s mind, at least—because they could be twisted to explain nearly any “data” about the world. As Wolfgang Pauli is said to have put it, skewering one student’s apparently unfalsifiable idea, “This isn’t right. It’s not even wrong.”

    1
    Some theorists propose that our universe is just one bubble in a multiverse. Will falsifiability burst the balloon? Credit: Flickr user Steve Jurvetson, adapted under a Creative Commons license.

    Now, some physicists and philosophers think it is time to reconsider the notion of falsifiability. Could a theory that provides an elegant and accurate account of the world around us—even if its predictions can’t be tested by today’s experiments, or tomorrow’s—still “count” as science?

    As theory pulls further and further ahead of the capabilities of experiment, physicists are taking this question seriously. “We are in various ways hitting the limits of what will ever be testable, unless we have misunderstood some essential point about the nature of reality,” says theoretical cosmologist George Ellis. “We have now seen all the visible universe (i.e., back to the visual horizon) and only gravitational waves remain to test further; and we are approaching the limits of what particle colliders it will ever be feasible to build, for economic and technical reasons.”

    Case in point: String theory. The darling of many theorists, string theory represents the basic building blocks of matter as vibrating strings. The strings take on different properties depending on their modes of vibration, just as the strings of a violin produce different notes depending on how they are played. To string theorists, the whole universe is a boisterous symphony performed upon these strings.

    It’s a lovely idea. Lovelier yet, string theory could unify general relativity with quantum mechanics, solving what is perhaps the most stubborn problem in fundamental physics. The trouble? To put string theory to the test, we may need experiments that operate at energies far higher than any modern collider. It’s possible that experimental tests of the predictions of string theory will never be within our reach.

    Meanwhile, cosmologists have found themselves at a similar impasse. We live in a universe that is, by some estimations, too good to be true. The fundamental constants of nature and the cosmological constant [usually denoted by the Greek capital letter lambda: Λ], which drives the accelerating expansion of the universe, seem “fine-tuned” to allow galaxies and stars to form. As Anil Ananthaswamy wrote elsewhere on this blog, “Tweak the charge on an electron, for instance, or change the strength of the gravitational force or the strong nuclear force just a smidgen, and the universe would look very different, and likely be lifeless.”

    Why do these numbers, which are essential features of the universe and cannot be derived from more fundamental quantities, appear to conspire for our comfort?

    One answer goes: If they were different, we wouldn’t be here to ask the question.

    This is called the “anthropic principle,” and if you think it feels like a cosmic punt, you’re not alone. Researchers have been trying to underpin our apparent stroke of luck with hard science for decades. String theory suggests a solution: It predicts that our universe is just one among a multitude of universes, each with its own fundamental constants. If the cosmic lottery has played out billions of times, it isn’t so remarkable that the winning numbers for life should come up at least once.

    In fact, you can reason your way to the “multiverse” in at least four different ways, according to MIT physicist Max Tegmark’s accounting. The tricky part is testing the idea. You can’t send or receive messages from neighboring universes, and most formulations of multiverse theory don’t make any testable predictions. Yet the theory provides a neat solution to the fine-tuning problem. Must we throw it out because it fails the falsifiability test?

    “It would be completely non-scientific to ignore that possibility just because it doesn’t conform with some preexisting philosophical prejudices,” says Sean Carroll, a physicist at Caltech, who called for the “retirement” of the falsifiability principle in a controversial essay for Edge last year. Falsifiability is “just a simple motto that non-philosophically-trained scientists have latched onto,” argues Carroll. He also bristles at the notion that this viewpoint can be summed up as “elegance will suffice,” as Ellis put it in a stinging Nature comment written with cosmologist Joe Silk.

    “Elegance can help us invent new theories, but does not count as empirical evidence in their favor,” says Carroll. “The criteria we use for judging theories are how good they are at accounting for the data, not how pretty or seductive or intuitive they are.”

    But Ellis and Silk worry that if physicists abandon falsifiability, they could damage the public’s trust in science and scientists at a time when that trust is critical to policymaking. “This battle for the heart and soul of physics is opening up at a time when scientific results—in topics from climate change to the theory of evolution—are being questioned by some politicians and religious fundamentalists,” Ellis and Silk wrote in Nature.

    “The fear is that it would become difficult to separate such ‘science’ from New Age thinking, or science fiction,” says Ellis. If scientists backpedal on falsifiability, Ellis fears, intellectual disputes that were once resolved by experiment will devolve into never-ending philosophical feuds, and both the progress and the reputation of science will suffer.

    But Carroll argues that he is simply calling for greater openness and honesty about the way science really happens. “I think that it’s more important than ever that scientists tell the truth. And the truth is that in practice, falsifiability is not a good criterion for telling science from non-science,” he says.

    Perhaps “falsifiability” isn’t up to shouldering the full scientific and philosophical burden that’s been placed on it. “Sean is right that ‘falsifiability’ is a crude slogan that fails to capture what science really aims at,” argues MIT computer scientist Scott Aaronson, writing on his blog Shtetl-Optimized. Yet, writes Aaronson, “falsifiability shouldn’t be ‘retired.’ Instead, falsifiability’s portfolio should be expanded, with full-time assistants (like explanatory power) hired to lighten falsifiability’s load.”

    “I think falsifiability is not a perfect criterion, but it’s much less pernicious than what’s being served up by the ‘post-empirical’ faction,” says Frank Wilczek, a physicist at MIT. “Falsifiability is too impatient, in some sense,” putting immediate demands on theories that are not yet mature enough to meet them. “It’s an important discipline, but if it is applied too rigorously and too early, it can be stifling.”

    So, where do we go from here?

    “We need to rethink these issues in a philosophically sophisticated way that also takes the best interpretations of fundamental science, and its limitations, seriously,” says Ellis. “Maybe we have to accept uncertainty as a profound aspect of our understanding of the universe in cosmology as well as particle physics.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     