Tagged: DNA

  • richardmitnick 8:06 am on June 23, 2017 Permalink | Reply
    Tags: A Single Electron’s Tiny Leap Sets Off ‘Molecular Sunscreen’ Response, DNA

    From SLAC: “A Single Electron’s Tiny Leap Sets Off ‘Molecular Sunscreen’ Response” 

    SLAC Lab

    June 22, 2017
    Glennda Chui

    Thymine – the molecule illustrated in the foreground – is one of the four basic building blocks that make up the double helix of DNA. It’s such a strong absorber of ultraviolet light that the UV in sunlight should deactivate it, yet this does not happen. Researchers used an X-ray laser at SLAC National Accelerator Laboratory to observe the infinitesimal leap of a single electron that sets off a protective response in thymine molecules, allowing them to shake off UV damage. (Greg Stewart/SLAC National Accelerator Laboratory)

    In experiments at the Department of Energy’s SLAC National Accelerator Laboratory, scientists were able to see the first step of a process that protects a DNA building block called thymine from sun damage: When it’s hit with ultraviolet light, a single electron jumps into a slightly higher orbit around the nucleus of a single oxygen atom.

    This infinitesimal leap sets off a response that stretches one of thymine’s chemical bonds and snaps it back into place, creating vibrations that harmlessly dissipate the energy of incoming ultraviolet light so it doesn’t cause mutations.

    The technique used to observe this tiny switch-flip at SLAC’s Linac Coherent Light Source (LCLS) X-ray free-electron laser can be applied to almost any organic molecule that responds to light – whether that light is a good thing, as in photosynthesis or human vision, or a bad thing, as in skin cancer, the scientists said. They described the study in Nature Communications today.


    “All of these light-sensitive organic molecules tend to absorb light in the ultraviolet. That’s not only why you get sunburn, but it’s also why your plastic eyeglass lenses offer some UV protection,” said Phil Bucksbaum, a professor at SLAC and Stanford University and director of the Stanford PULSE Institute at SLAC. “You can even see these effects in plastic lawn furniture – after a couple of seasons it can become brittle and discolored simply due to the fact that the plastic was absorbing ultraviolet light all the time, and the way it absorbs sun results in damage to its chemical bonds.”

    Catching Electrons in Action

    Thymine and the other three DNA building blocks also strongly absorb ultraviolet light, which can trigger mutations and skin cancer, yet these molecules seem to get by with minimal damage. In 2014, a team led by Markus Guehr – then a SLAC senior staff scientist and now on the faculty of the University of Potsdam in Germany – reported that they had found the answer: the stretch-snap of a single bond and the resulting energy-dissipating vibrations, which take place within 200 femtoseconds – millionths of a billionth of a second – of UV light exposure.

    But what made the bond stretch? The team knew the answer had to involve electrons, which are responsible for forming, changing and breaking bonds between atoms. So they devised an ingenious way to catch the specific electron movements that trigger the protective response.

    It relied on the fact that electrons don’t orbit an atom’s nucleus in neat concentric circles, like planets orbiting a sun, but rather in fuzzy clouds that take a different shape depending on how far they are from the nucleus. Some of these orbitals are in fact like a fuzzy sphere; others look a little like barbells or the start of a balloon animal. You can see examples here.

    No image caption or credit, but there is a comment,
    “I see the distribution in different orbitals. So if for example I take the S orbitals, they are all just a sphere. So won’t the 2S orbital overlap with the 1S orbital, making the electrons in each orbital ‘meet’ at some point? Or have I misunderstood something?”
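    On the reader’s question: the 1s and 2s clouds do interpenetrate, but the two states stay distinct because their wavefunctions are orthogonal (and the Pauli principle keeps the electrons in different states). A quick numerical sketch using the textbook hydrogen radial functions, in units of the Bohr radius, makes both points; this is an illustrative aside, not part of the SLAC study:

```python
import numpy as np

r = np.arange(1e-6, 60.0, 1e-3)   # radius in Bohr radii
dr = 1e-3

# Hydrogen radial wavefunctions R_10 (1s) and R_20 (2s), a0 = 1
R1s = 2.0 * np.exp(-r)
R2s = (2.0 - r) * np.exp(-r / 2.0) / (2.0 * np.sqrt(2.0))

# Radial probability densities P(r) = r^2 R(r)^2
P1s = r**2 * R1s**2
P2s = r**2 * R2s**2

norm1 = np.sum(P1s) * dr                # each density integrates to 1
norm2 = np.sum(P2s) * dr
overlap = np.sum(P1s * P2s) * dr        # > 0: the clouds share space
ortho = np.sum(r**2 * R1s * R2s) * dr   # = 0: the states are orthogonal

print(norm1, norm2, overlap, ortho)
```

    The overlap integral is nonzero, so the probability clouds really do occupy the same region of space, while the orthogonality integral vanishes, so the two electrons nonetheless sit in genuinely different quantum states.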

    Strong Signal Could Solve Long-Standing Debate

    For this new experiment, the scientists hit thymine molecules with a pulse of UV laser light and tuned the energy of the LCLS X-ray laser pulses so they would home in on the response of the oxygen atom that’s at one end of the stretching, snapping bond.

    The energy from the UV light excited one of the atom’s electrons to jump into a higher orbital. This left the atom in a sort of tippy state where just a little more energy would boost a second electron into a higher orbital, and that second jump is what sets off the protective response, changing the shape of the molecule just enough to stretch the bond.

    The first jump, which was previously known to happen, is difficult to detect because the electron winds up in a rather diffuse orbital cloud, Guehr said. But the second, which had never been observed before, was much easier to spot because that electron ended up in an orbital with a distinctive shape that gave off a big signal.

    “Although this was a very tiny electron movement, the signal kind of jumped out at us in the experiment,” Guehr said. “I always had a feeling this would be a strong transition, just intuitively, but when we saw this come in it was a special moment, one of the best moments an experimentalist can have.”

    Settling a Longstanding Debate

    Study lead author Thomas Wolf, an associate staff scientist at SLAC, said the results should settle a longstanding debate about how long after UV exposure the protective response kicks in: It happens 60 femtoseconds after UV light hits. This time span is important, he said, because the longer the atom spends in the tippy state between the first jump and the second, the more likely it is to undergo some sort of reaction that could damage the molecule.

    Henrik Koch, a theorist at NTNU in Norway who was a guest professor at Stanford at the time, led the study with Guehr. He led the effort to model, understand and interpret what happened in the experiment, and he participated in it to an unusual extent, Guehr said.

    “He is extremely experienced in applying theory to methodology development, and he had this curiosity to bring this to our experiment,” Guehr said. “He was so fascinated by this research that he did something completely untypical of a theorist – he came to LCLS, into the control room, and he wanted to see the data coming in. I found that completely amazing and very motivating. It turned out that some of my previous thinking was completely right but other aspects were completely wrong, and Henrik did the right theory at the right level so we could learn from it.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

  • richardmitnick 7:31 pm on June 21, 2017 Permalink | Reply
    Tags: DNA, Gene Therapy, Heterochromatin, heterochromatin protein 1a (HP1a)

    From LBNL: “Researchers Find New Mechanism for Genome Regulation” 

    Berkeley Logo

    Berkeley Lab

    June 21, 2017
    Sarah Yang
    (510) 486-4575

    Berkeley Lab study could have implications for improving gene therapy.

    The same mechanisms that quickly separate mixtures of oil and water are at play when controlling the organization in an unusual part of our DNA called heterochromatin, according to a new study by researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab).

    Researchers studying genome and cell biology provide evidence that heterochromatin organizes large parts of the genome into specific regions of the nucleus using liquid-liquid phase separation, a mechanism well known in physics but whose importance for biology has only recently been revealed.

    Liquid-like fusion of heterochromatin protein 1a droplets is shown in the embryo of a fruit fly. (Credit: Amy Strom/Berkeley Lab)

    They present their findings June 21 in the journal Nature, addressing a long-standing question about how DNA functions are organized in space and time, including how genes are regulated to be silenced or expressed.

    “The importance of DNA sequences in health and disease has been clear for decades, but we only recently have come to realize that the organization of sections of DNA into different physical domains or compartments inside the nucleus is critical to promote distinct genome functions,” said study corresponding author, Gary Karpen, senior scientist at Berkeley Lab’s Biological Systems and Engineering Division.

    The long stretches of DNA in heterochromatin contain sequences that, for the most part, need to be silenced for cells to work properly. Scientists once thought that compaction of the DNA was the primary mechanism for controlling which enzymes and molecules gain access to the sequences. It was reasoned that the more tightly wound the strands, the harder it would be to get to the genetic material inside.

    That mechanism has been questioned in recent years by the discovery that some large protein complexes could get inside the heterochromatin domain while smaller proteins remained shut out.

    Shown is purified heterochromatin protein 1a forming liquid droplets in an aqueous solution. On the right side, two drops fuse together over time. (Credit: Amy Strom/Berkeley Lab)

    In this new study of early Drosophila embryos, the researchers observed two non-mixing liquids in the cell nucleus: one that contained expressed genes, and one that contained silenced heterochromatin. They found that heterochromatic droplets fused together just like two drops of oil surrounded by water.

    In lab experiments, researchers purified heterochromatin protein 1a (HP1a), a main component of heterochromatin, and saw that this single component was able to recreate what they saw in the nucleus by forming liquid droplets.

    “We are excited about these findings because they explain a mystery that’s existed in the field for a decade,” said study lead author Amy Strom, a graduate student in Karpen’s lab. “That is, if compaction controls access to silenced sequences, how are other large proteins still able to get in? Chromatin organization by phase separation means that proteins are targeted to one liquid or the other based not on size, but on other physical traits, like charge, flexibility, and interaction partners.”

    The Berkeley Lab study, which used fruit fly and mouse cells, will be published alongside a companion paper in Nature led by UC San Francisco researchers, who showed that the human version of the HP1a protein has the same liquid droplet properties, suggesting that similar principles hold for human heterochromatin.

    Mouse fibroblast cells expressing HP1alpha, the human version of heterochromatin protein 1a. A technique that highlights edges between two liquid phases reveals the liquid droplets in the nucleus. (Credit: Amy Strom/Berkeley Lab)

    Interestingly, this type of liquid-liquid phase separation is very sensitive to changes in temperature, protein concentration, and pH levels.

    “It’s an elegant way for the cell to be able to manipulate gene expression of many sequences at once,” said Strom.

    Other cellular structures, including some involved in disease, are also organized by phase separation.

    “Problems with phase separation have been linked to diseases such as dementia and certain neurodegenerative disorders,” said Karpen.

    He noted that as we age, biological molecules lose their liquid state and become more solid, accumulating damage along the way. Karpen pointed to diseases like Alzheimer’s and Huntington’s, in which proteins misfold and aggregate, becoming less liquid and more solid over time.

    “If we can better understand what causes aggregation, and how to keep things more liquid, we might have a chance to combat these types of disease,” Strom added.

    The work is a big step forward for understanding how DNA functions, but could also help researchers improve their ability to manipulate genes.

    “Gene therapy, or any treatment that relies on tight regulation of gene expression, could be improved by precisely targeting molecules to the right place in the nucleus,” says Karpen. “It is very difficult to target genes located in heterochromatin, but this understanding of the properties linked to phase separation and liquid behaviors could help change that and open up a third of the genome that we couldn’t get to before.”

    This includes targeting gene-editing technologies like CRISPR, which has recently opened up new doors for precise genome manipulation and gene therapy.

    Karpen and Strom have joint appointments at UC Berkeley’s Department of Molecular and Cell Biology. Other study co-authors include Mustafa Mir and Xavier Darzacq at UC Berkeley, and Alexander Emelyanov and Dmitry Fyodorov at the Albert Einstein College of Medicine in New York.

    The National Institutes of Health and the California Institute for Regenerative Medicine helped support this work.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

  • richardmitnick 7:46 am on June 15, 2017 Permalink | Reply
    Tags: DNA, When Neurology Becomes Theology, Wilder Penfield

    From Nautilus: “When Neurology Becomes Theology” 



    June 15, 2017
    Robert A. Burton

    A neurologist’s perspective on research into consciousness.

    Early in my neurology residency, a 50-year-old woman insisted on being hospitalized for protection from the FBI spying on her via the TV set in her bedroom. The woman’s physical examination, lab tests, EEGs, scans, and formal neuropsychological testing revealed nothing unusual. Other than her visible terror of the TV monitor in the ward solarium, she had no psychiatric symptoms or past psychiatric history. Neither did anyone else in her family, though she had no recollection of her mother, who had died when the patient was only 2.

    The psychiatry consultant favored the early childhood loss of her mother as a potential cause of a mid-life major depressive reaction. The attending neurologist was suspicious of an as yet undetectable degenerative brain disease, though he couldn’t be more specific. We residents were equally divided between the two possibilities.

    Fortunately an intern, a super-sleuth more interested in data than speculation, was able to locate her parents’ death certificates. The patient’s mother had died in a state hospital of Huntington’s disease—a genetic degenerative brain disease. (At that time such illnesses were often kept secret from the rest of the family.) Case solved. The patient was a textbook example of psychotic behavior preceding the cognitive decline and movement disorders characteristic of Huntington’s disease.

    WHERE’S THE MIND?: Wilder Penfield spent decades studying how brains produce the experience of consciousness, but concluded “There is no good evidence, in spite of new methods, that the brain alone can carry out the work that the mind does.” Montreal Neurological Institute

    As a fledgling neurologist, I’d already seen a wide variety of strange mental states arising out of physical diseases. But on this particular day, I couldn’t wrap my mind around a gene mutation generating an isolated feeling of being spied on by the FBI. How could a localized excess of amino acids in a segment of DNA be transformed into paranoia?

    Though I didn’t know it at the time, I had run headlong into the “hard problem of consciousness,” the enigma of how physical brain mechanisms create purely subjective mental states. In the subsequent 50 years, what was once fodder for neurologists’ late night speculations has mushroomed into the pre-eminent question in the philosophy of mind. As an intellectual challenge, there is no equal to wondering how subatomic particles, mindless cells, synapses, and neurotransmitters create the experience of red, the beauty of a sunset, the euphoria of lust, the transcendence of music, or in this case, intractable paranoia.

    Neuroscientists have long known which general areas of the brain and their connections are necessary for the state of consciousness. By observing the effects of both localized and generalized brain insults, such as anoxia and anesthesia, none of us seriously doubt that consciousness arises from discrete brain mechanisms. Because these mechanisms are consistent with general biological principles, it’s likely that, with further technical advances, we will uncover how the brain generates consciousness.

    However, such knowledge doesn’t translate into an explanation for the what of consciousness—that state of awareness of one’s surroundings and self, the experience of one’s feelings and thoughts. Imagine a hypothetical where you could mix nine parts oxytocin, 17 parts serotonin, and 11 parts dopamine into a solution that would make 100 percent of people feel a sense of infatuation 100 percent of the time. Knowing the precise chemical trigger for the sensation of infatuation (the how) tells you little about the nature of the resulting feeling (the what).

    Over my career, I’ve gathered a neurologist’s working knowledge of the physiology of sensations. I realize neuroscientists have identified neural correlates for emotional responses. Yet I remain ignorant of what sensations and responses are at the level of experience. I know the brain creates a sense of self, but that tells me little about the nature of the sensation of “I-ness.” If the self is a brain-generated construct, I’m still left wondering who or what is experiencing the illusion of being me. Similarly, if the feeling of agency is an illusion, as some philosophers of mind insist, that doesn’t help me understand the essence of my experience of willfully typing this sentence.

    Slowly, and with much resistance, it’s dawned on me that the pursuit of the nature of consciousness, no matter how cleverly couched in scientific language, is more like metaphysics and theology. It is driven by the same urges that made us dream up gods and demons, souls and afterlife. The human urge to understand ourselves is eternal, and how we frame our musings always depends upon prevailing cultural mythology. In a scientific era, we should expect philosophical and theological ruminations to be couched in the language of physical processes. We argue by inference and analogy, dragging explanations from other areas of science such as quantum physics, complexity, information theory, and math into a subjective domain. Theories of consciousness are how we wish to see ourselves in the world, and how we wish the world might be.

    My first hint of the interaction between religious feelings and theories of consciousness came from Montreal Neurological Institute neurosurgeon Wilder Penfield’s 1975 book, Mystery of the Mind: A Critical Study of Consciousness and the Human Brain. One of the great men of modern neuroscience, Penfield spent several decades stimulating the brains of conscious, non-anesthetized patients and noting their descriptions of the resulting mental states, including long-lost bits of memory, dreamy states, déjà vu, feelings of strangeness, and otherworldliness. What was most startling about Penfield’s work was his demonstration that sensations that normally qualify how we feel about our thoughts can occur in the absence of any conscious thought. For example, he could elicit feelings of familiarity and strangeness without the patient thinking of anything to which the feeling might apply. His ability to spontaneously evoke pure mental states was proof positive that these states arise from basic brain mechanisms.

    And yet, here’s Penfield’s conclusion to his end-of-career magnum opus on the nature of the mind: “There is no good evidence, in spite of new methods, that the brain alone can carry out the work that the mind does.” How is this possible? How could a man who had single-handedly elicited so much of the fabric of subjective states of mind decide that there was something to the mind beyond what the brain did?

    In the last paragraph of his book, Penfield explains, “In ordinary conversation, the ‘mind’ and ‘the spirit of man’ are taken to be the same. I was brought up in a Christian family and I have always believed, since I first considered the matter … that there is a grand design in which all conscious individuals play a role … Since a final conclusion … is not likely to come before the youngest reader of this book dies, it behooves each one of us to adopt for himself a personal assumption (belief, religion), and a way of life without waiting for a final word from science on the nature of man’s mind.”

    Front and center is Penfield’s observation that, in ordinary conversation, the mind is synonymous with the spirit of man. Further, he admits that, in the absence of scientific evidence, all opinions about the mind are in the realm of belief and religion. If Penfield is even partially correct, we shouldn’t be surprised that any theory of the “what” of consciousness would be either intentionally or subliminally infused with one’s metaphysics and religious beliefs.

    To see how this might work, take a page from Penfield’s brain stimulation studies where he demonstrates that the mental sensations of consciousness can occur independently from any thought that they seem to qualify. For instance, conceptualize thought as a mental calculation and a visceral sense of the calculation. If you add 3 + 3, you compute 6, and simultaneously have the feeling that 6 is the correct answer. Thoughts feel right, wrong, strange, beautiful, wondrous, reasonable, far-fetched, brilliant, or stupid. Collectively these widely disparate mental sensations constitute much of the contents of consciousness. But we have no control over the mental sensations that color our thoughts. No one can will a sense of understanding or the joy of an a-ha! moment. We don’t tell ourselves to make an idea feel appealing; it just is. Yet these sensations determine the direction of our thoughts. If a thought feels irrelevant, we ignore it. If it feels promising, we pursue it. Our lines of reasoning are predicated upon how thoughts feel.

    No image caption or credit.

    Shortly after reading Penfield’s book, I had the good fortune to spend a weekend with theoretical physicist David Bohm. Bohm took a great deal of time arguing for a deeper and interconnected hidden reality (his theory of implicate order). Though I had difficulty following his quantum theory-based explanations, I vividly remember him advising me that the present-day scientific approach of studying parts rather than the whole could never lead to any final answers about the nature of consciousness. According to him, all is inseparable and no part can be examined in isolation.

    In an interview in which he was asked to justify his unorthodox view of scientific method, Bohm responded, “My own interest in science is not entirely separate from what is behind an interest in religion or in philosophy—that is to understand the whole of the universe, the whole of matter, and how we originate.” If we were reading Bohm’s argument as a literary text, we would factor in his Jewish upbringing, his tragic mistreatment during the McCarthy era, the lack of general acceptance of his idiosyncratic take on quantum physics, his bouts of depression, and the close relationship between his scientific and religious interests.

    Many of today’s myriad explanations for how consciousness arises are compelling. But once we enter the arena of the nature of consciousness, there are no outright winners.

    Christof Koch, the chief scientific officer of the Allen Institute for Brain Science in Seattle, explains that a “system is conscious if there’s a certain type of complexity. And we live in a universe where certain systems have consciousness. It’s inherent in the design of the universe.”

    According to Daniel Dennett, professor of philosophy at Tufts University and author of Consciousness Explained and many other books on science and philosophy, consciousness is nothing more than a “user-illusion” arising out of underlying brain mechanisms. He argues that believing consciousness plays a major role in our thoughts and actions is the biological equivalent of being duped into believing that the icons of a smartphone app are doing the work of the underlying computer programs represented by the icons. He feels no need to postulate any additional physical component to explain the intrinsic qualities of our subjective experience.

    Meanwhile, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology, tells us consciousness “is how information feels when it is being processed in certain very complex ways.” He writes that “external reality is completely described by mathematics. If everything is mathematical, then, in principle, everything is understandable.” Rudolph E. Tanzi, a professor of neurology at Harvard University, admits, “To me the primal basis of existence is awareness and everything including ourselves and our brains are products of awareness.” He adds, “As a responsible scientist, one hypothesis which should be tested is that memory is stored outside the brain in a sea of consciousness.”

    Each argument, taken in isolation, seems logical and internally consistent, yet is at odds with the others. For me, the thread that connects these disparate viewpoints isn’t logic and evidence, but their overall intent. Belief without evidence is what Richard Dawkins calls faith: “Faith is belief in spite of, even perhaps because of, the lack of evidence.” These arguments are best read as differing expressions of personal faith.

    For his part, Dennett is an outspoken atheist and fervent critic of the excesses of religion. “I have absolutely no doubt that secular and scientific vision is right and deserves to be endorsed by everybody, and as we have seen over the last few thousand years, superstitious and religious doctrines will just have to give way.” As the basic premise of atheism is to deny that for which there is no objective evidence, he is forced to avoid directly considering the nature of purely subjective phenomena. Instead he settles on describing the contents of consciousness as illusions, resulting in the circularity of using the definition of mental states (illusions) to describe the general nature of these states.

    The problem compounds itself. Dennett is fond of pointing out (correctly) that there is no physical manifestation of “I,” no ghost in the machine or little homunculus that witnesses and experiences the goings on in the brain. If so, we’re still faced with asking what/who, if anything, is experiencing consciousness? All roads lead back to the hard problem of consciousness.

    Though tacitly agreeing with those who contend that we don’t yet understand the nature of consciousness, Dennett argues that we are making progress. “We haven’t yet succeeded in fully conceiving how meaning could exist in a material world … or how consciousness works, but we’ve made progress: The questions we’re posing and addressing now are better than the questions of yesteryear. We’re hot on the trail of the answers.”

    By contrast, Koch is upfront in correlating his religious upbringing with his life-long pursuit of the nature of consciousness. Raised as a Catholic, he describes being torn between two contradictory views of the world—the Sunday view reflected by his family and church, and the weekday view as reflected in his work as a scientist (the sacred and the profane).

    In an interview with Nautilus, Koch said, “For reasons I don’t understand and don’t comprehend, I find myself in a universe that had to become conscious, reflecting upon itself.” He added, “The God I now believe in is closer to the God of Spinoza than it is to Michelangelo’s paintings or the God of the Old Testament, a god that resides in this mystical notion of all-nothingness.” Koch admitted, “I’m not a mystic. I’m a scientist, but this is a feeling I have.” In short, Koch exemplifies a truth seldom admitted—that mental states such as a mystical feeling shape how one thinks about and goes about studying the universe, including mental states such as consciousness.

    Both Dennett and Koch have spent a lifetime considering the problem of consciousness; though contradictory, each point of view has a separate appeal. And I appreciate much of Dennett and Koch’s explorations in the same way that I can mull over Aquinas and Spinoza without necessarily agreeing with them. One can enjoy the pursuit without believing in or expecting answers. After all these years without any personal progress, I remain moved by the essential nature of the quest, even if it translates into Sisyphus endlessly pushing his rock up the hill.

    The spectacular advances of modern science have generated a mindset that makes potential limits to scientific inquiry intuitively difficult to grasp. Again and again we are given examples of seemingly insurmountable problems that yield to previously unimaginable answers. Just as some physicists believe we will one day have a Theory of Everything, many cognitive scientists believe that consciousness, like any physical property, can be unraveled. Overlooked in this optimism is the ultimate barrier: The nature of consciousness is in the mind of the beholder, not in the eye of the observer.

    It is likely that science will tell us how consciousness occurs. But that’s it. Although the what of consciousness is beyond direct inquiry, the urge to explain will persist. It is who we are and what we do.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

  • richardmitnick 7:35 am on June 2, 2017 Permalink | Reply
    Tags: Alzheimer’s Parkinson’s and Huntington’s diseases as well as schizophrenia autism and depression, DNA, microglia

    From Salk: “Brain’s immune cells linked to Alzheimer’s, Parkinson’s, schizophrenia” 

    Salk Institute bloc

    Salk Institute for Biological Studies

    Salk and UC San Diego scientists conducted vast microglia survey, revealing links to neurodegenerative diseases and psychiatric illnesses.

    Scientists have, for the first time, characterized the molecular markers that make the brain’s front lines of immune defense – cells called microglia – unique. In the process, they discovered further evidence that microglia may play roles in a variety of neurodegenerative and psychiatric illnesses, including Alzheimer’s, Parkinson’s and Huntington’s diseases as well as schizophrenia, autism and depression.

    “Microglia are the immune cells of the brain, but how they function in the human brain is not well understood,” says Rusty Gage, professor in Salk’s Laboratory of Genetics, the Vi and John Adler Chair for Research on Age-Related Neurodegenerative Disease, and a senior author of the new work. “Our work not only provides links to diseases but offers a jumping off point to better understand the basic biology of these cells.”

    Genes that have previously been linked to neurological diseases are turned on at higher levels in microglia compared to other brain cells, the team reported in Science on May 25, 2017. While the link between microglia and a number of disorders has been explored in the past, the new study offers a molecular basis for this connection.

    “These studies represent the first systematic effort to molecularly decode microglia,” says Christopher Glass, a Professor of Cellular and Molecular Medicine and Professor of Medicine at University of California San Diego, also senior author of the paper. “Our findings provide the foundations for understanding the underlying mechanisms that determine beneficial or pathological functions of these cells.”

    Microglia are a type of macrophage, white blood cells found throughout the body that can destroy pathogens or other foreign materials. They are highly attuned to their surroundings and respond to changes in the brain by releasing pro-inflammatory or anti-inflammatory signals. They also prune back the connections between neurons when cells are damaged or diseased. But microglia are notoriously hard to study: they can’t be easily grown in a culture dish and quickly die outside of a living brain.

    Nicole Coufal, a pediatric critical care doctor at UC San Diego, who also works in the Gage lab at Salk, wanted to make microglia from stem cells. But she realized there wasn’t any way to identify whether the resulting cells were truly microglia.

    “There was not a unique marker that differentiated microglia from circulating macrophages in the rest of the body,” she says.

    David Gosselin and Dylan Skola in the Glass lab, together with Coufal and their collaborators, set out to characterize the molecular profile of microglia. They worked with neurosurgeons at UC San Diego to collect brain tissue from 19 patients, all of whom were undergoing brain surgery for epilepsy, a brain tumor or a stroke. They isolated microglia from areas of tissue that were unaffected by disease, as well as from mouse brains, and then studied the cells. The work was made possible by a multidisciplinary collaboration between bench scientists, bioinformaticians and clinicians.

    The team used a variety of molecular and biochemical tests–performed within hours of the cells being collected–to characterize which genes are turned on and off in microglia, how the DNA is marked up by regulatory molecules, and how these patterns change when the cells are cultured.

    Microglia, they found, have hundreds of genes that are more highly expressed than other types of macrophages, as well as distinct patterns of gene expression compared to other types of brain cells. After the cells were cultured, however, the gene patterns of the microglia began to change. Within just six hours, more than 2,000 genes had their expression turned down by at least fourfold. The results underscore how dependent microglia are on their surroundings in the brain, and why researchers have struggled to culture them.

    Next, the researchers analyzed whether any of the genes that were upregulated in microglia compared to other cells had been previously implicated in disease. Genes linked to a variety of neurodegenerative and psychiatric diseases, they found, were highly expressed in microglia.

    “A really high proportion of genes linked to multiple sclerosis, Parkinson’s and schizophrenia are much more highly expressed in microglia than the rest of the brain,” says Coufal. “That suggests there’s some kind of link between microglia and the diseases.”

    For Alzheimer’s, more than half of the genes known to affect a person’s risk of developing the disease were expressed more highly in microglia than other brain cells.

    In mice, however, many of the disease genes weren’t as highly expressed in microglia. “That tells us that maybe mice aren’t the best model organisms for some of these diseases,” Coufal says.

    More work is needed to understand exactly how microglia may be altered in people with diseases, but the new molecular profile of microglia offers a way for researchers to begin trying to better culture the cells, or coax stem cells to develop into microglia for future studies.

    Other researchers on the study were Baptiste Jaeger, Carolyn O’Connor, Conor Fitzpatrick, Monique Pena, and Amy Adair of the Salk Institute; Inge Holtman, Johannes Schlachetzki, Eniko Sajti, Martina Pasillas, David Gona, and Michael Levy of the University of California San Diego; and Richard Ransohoff of Biogen.

    The work and the researchers involved were supported by grants from the Larry L. Hillblom Foundation, National Institutes of Health, Canadian Institute of Health Research, Multiple Sclerosis Society of Canada, University of California San Diego, Dutch MS Research Foundation, the Gemmy and Mibeth Tichelaar Foundation, the DFG, the JPB Foundation, Dolby Family Ventures, The Paul G. Allen Family Foundation, the Engman Foundation, the Ben and Wanda Hildyard Chair in Hereditary Diseases.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Salk Institute Campus

    Every cure has a starting point. Like Dr. Jonas Salk when he conquered polio, Salk scientists are dedicated to innovative biological research. Exploring the molecular basis of diseases makes curing them more likely. In an outstanding and unique environment we gather the foremost scientific minds in the world and give them the freedom to work collaboratively and think creatively. For over 50 years this wide-ranging scientific inquiry has yielded life-changing discoveries impacting human health. We are home to Nobel Laureates and members of the National Academy of Sciences who train and mentor the next generation of international scientists. We lead biological research. We prize discovery. Salk is where cures begin.

  • richardmitnick 9:02 am on May 29, 2017 Permalink | Reply
    Tags: , , DNA,   

    From Nature: “DNA’s secret weapon against knots and tangles” 

    Nature Mag

    19 April 2017 [Another hidden treasure comes to social media.]
    Elie Dolgin

    DNA loops help to keep local regions of the genome together. M. Imakaev/G. Fudenberg/N. Naumova/J. Dekker/L. Mirny

    Leonid Mirny swivels in his office chair and grabs the power cord for his laptop. He practically bounces in his seat as he threads the cable through his fingers, creating a doughnut-sized loop. “It’s a dynamic process of motors constantly extruding loops!” says Mirny, a biophysicist here at the Massachusetts Institute of Technology in Cambridge.

    Mirny’s excitement isn’t about keeping computer accessories orderly. Rather, he’s talking about a central organizing principle of the genome — how roughly 2 metres of DNA can be squeezed into nearly every cell of the human body without getting tangled up like last year’s Christmas lights.

    He argues that DNA is constantly being slipped through ring-like motor proteins to make loops. This process, called loop extrusion, helps to keep local regions of DNA together, disentangling them from other parts of the genome and even giving shape and structure to the chromosomes.

    Scientists have bandied about similar hypotheses for decades, but Mirny’s model, and a similar one championed by Erez Lieberman Aiden, a geneticist at Baylor College of Medicine in Houston, Texas, add a new level of molecular detail at a time of explosive growth for research into the 3D structure of the genome. The models neatly explain the data flowing from high-profile projects on how different parts of the genome interact physically — which is why they’ve garnered so much attention.

    But these simple explanations are not without controversy. Although it has become increasingly clear that genome looping regulates gene expression, possibly contributing to cell development and diseases such as cancer, the predictions of the models go beyond what anyone has ever seen experimentally.

    For one thing, the identity of the molecular machine that forms the loops remains a mystery. If the leading protein candidate acted like a motor, as Mirny proposes, it would guzzle energy faster than it has ever been seen to do. “As a physicist friend of mine tells me, ‘This is kind of the Higgs boson of your field’,” says Mirny; it explains one of the deepest mysteries of genome biology, but could take years to prove.

    And although Mirny’s model is extremely similar to Lieberman Aiden’s — and the differences esoteric — sorting out which is right is more than a matter of tying up loose ends. If Mirny is correct, “it’s a complete revolution in DNA enzymology”, says Kim Nasmyth, a leading chromosome researcher at the University of Oxford, UK. What’s actually powering the loop formation, he adds, “has got to be the biggest problem in genome biology right now”.

    Loop back

    Geneticists have known for more than three decades that the genome forms loops, bringing regulatory elements into close proximity with genes that they control. But it was unclear how these loops formed.

    Several researchers have independently put forward versions of loop extrusion over the years. The first was Arthur Riggs, a geneticist at the Beckman Research Institute of City of Hope in Duarte, California, who first proposed what he called “DNA reeling” in an overlooked 1990 report[1]. Yet it’s Nasmyth who is most commonly credited with originating the concept.

    As he tells it, the idea came to him in 2000, after a day spent mountain climbing in the Italian Alps. He and his colleagues had recently discovered the ring-like shape of cohesin[2], a protein complex best known for helping to separate copies of chromosomes during cell division. As Nasmyth fiddled with his climbing gear, it dawned on him that chromosomes might be actively threaded through cohesin, or the related complex condensin, in much the same way as the ropes looped through his carabiners. “It appeared to explain everything,” he says.

    Nasmyth described the idea in a few paragraphs in a massive, 73-page review article [3]. “Nobody took notice whatsoever,” he says — not even John Marko, a biophysicist at Northwestern University in Evanston, Illinois, who more than a decade later developed a mathematical model that complemented Nasmyth’s verbal argument[4].

    Mirny joined this loop-modelling club around five years ago. He wanted to explain data sets compiled by biologist Job Dekker, a frequent collaborator at the University of Massachusetts Medical School in Worcester. Dekker had been looking at physical interactions between different spots on chromosomes using a technique called Hi-C, in which scientists sequence bits of DNA that are close to one another and produce a map of each chromosome, usually depicted as a fractal-like chessboard. The darkest squares along the main diagonal represent spots of closest interaction.

    The Hi-C snapshots that Dekker and his collaborators had taken revealed distinct compartmentalized loops, with interactions happening in discrete blocks of DNA between 200,000 and 1 million letters long[5].

    These ‘topologically associating domains’, or TADs, are a bit like the carriages on a crowded train. People can move about and bump into each other in the same carriage, but they can’t interact with passengers in adjacent carriages unless they slip between the end doors. The human genome may be 3 billion nucleotides long, but most interactions happen locally, within TADs.
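
    The chessboard appearance of a Hi-C map can be illustrated with a toy contact matrix (a sketch with made-up numbers, not real Hi-C data): contact frequency decays with genomic distance, and bins inside the same TAD interact far more than bins the same distance apart but in different TADs.

    ```python
    # Toy Hi-C contact map: a 30-bin chromosome divided into 3 TADs of 10 bins.
    # Numbers are illustrative; real Hi-C counts come from sequencing.
    n_bins, tad_size = 30, 10
    contacts = [[0.0] * n_bins for _ in range(n_bins)]
    for i in range(n_bins):
        for j in range(n_bins):
            distance_decay = 1.0 / (1 + abs(i - j))        # contacts fall off with distance
            same_tad = (i // tad_size) == (j // tad_size)  # same "train carriage"?
            contacts[i][j] = distance_decay * (10.0 if same_tad else 1.0)

    # At the same genomic distance (5 bins apart), the within-TAD pair
    # interacts 10x more than the cross-TAD pair:
    print(contacts[2][7], contacts[7][12])
    ```

    Plotted as a heat map, such a matrix shows the dark squares along the main diagonal that mark TADs in real Hi-C data.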

    Mirny and his team had been labouring for more than a year to explain TAD formation using computer simulations. Then, as luck would have it, Mirny happened to attend a conference at which Marko spoke about his then-unpublished model of loop extrusion. (Marko coined the term, which remains in use today.) It was the missing piece of Mirny’s puzzle. The researchers gave loop extrusion a try, and it worked. The physical act of forming the loops kept the local domains well organized. The model reproduced many of the finer-scale features of the Hi-C maps.

    When Mirny and his colleagues posted their finished manuscript on the bioRxiv preprint server in August 2015, they were careful to describe the model in terms of a generic “loop-extruding factor”. But the paper didn’t shy away from speculating as to its identity: cohesin was the driving force behind the looping process for cells not in the middle of dividing, when chromosomes are loosely packed[6]. Condensin, they argued in a later paper, served this role during cell division, when the chromosomes are tightly wound[7].

    A key clue was the protein CTCF, which was known to interact with cohesin at the base of each loop of uncondensed chromosomes. For a long time, researchers had assumed that loops form on DNA when these CTCF proteins bump into one another at random and lock together. But if any two CTCF proteins could pair, why did loops form only locally, and not between distant sites?

    Mirny’s model assumes that CTCFs act as stop signs for cohesin. If cohesin stops extruding DNA only when it hits CTCFs on each side of a growing loop, it will naturally bring the proteins together.

    But singling out cohesin was “a big leap of faith”, says biophysicist Geoff Fudenberg, who did his PhD in Mirny’s lab and is now at the University of California, San Francisco. “No one has seen these motors doing these things in living cells or even in vitro,” he says. “But we see all of these different features of the data that line up and can be unified under this principle.”

    Experiments had shown, for example, that reducing the amount of cohesin in a cell results in the formation of fewer loops[8]. Overactive cohesin creates so many loops that chromosomes smush up into structures that resemble tiny worms[9].

    The authors of these studies had trouble making sense of their results. Then came Mirny’s paper on bioRxiv. It was “the first time that a preprint has really changed the way people were thinking about stuff in this field”, says Matthias Merkenschlager, a cell biologist at the MRC London Institute of Medical Sciences. (Mirny’s team eventually published the work in May 2016, in Cell Reports [6].)

    Multiple discovery?

    Lieberman Aiden says that the idea of loop extrusion first dawned on him during a conference call in March 2015. He and his former mentor, geneticist Eric Lander of the Broad Institute in Cambridge, Massachusetts, had published some of the most detailed, high-resolution Hi-C maps of the human genome available at the time[10].

    During his conference call, Lieberman Aiden was trying to explain a curious phenomenon in his data. Almost all the CTCF landing sites that anchored loops had the same orientation. What he realized was that CTCF, as a stop sign for extrusion, had inherent directionality. And just as motorists race through intersections with stop signs facing away from them, so a loop-extruding factor goes through CTCF sites unless the stop sign is facing the right way.
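
    The stop-sign logic can be captured in a few lines (a toy simulation under assumed rules, not either group’s published model): the extruder’s two sides move outward and each halts only at a CTCF motif facing it, so loops end up anchored by convergent CTCF pairs.

    ```python
    # Toy loop extrusion with directional CTCF stop signs.
    # '>' halts the leftward-moving side; '<' halts the rightward-moving side,
    # so only convergent (inward-facing) CTCF pairs anchor a loop.
    chromosome = list(".....>.........<.....")  # CTCF sites at positions 5 and 15

    def extrude(start):
        left = right = start
        while left > 0 and chromosome[left] != '>':
            left -= 1          # extrude leftward until a right-facing stop sign
        while right < len(chromosome) - 1 and chromosome[right] != '<':
            right += 1         # extrude rightward until a left-facing stop sign
        return left, right     # the loop's anchors

    print(extrude(10))  # -> (5, 15): a convergent CTCF pair
    ```

    Wherever the extruder lands between the two signs, it converges on the same anchors, which is why flipping a CTCF site in the experiments re-drew the loops.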

    His lab tested the model by systematically deleting and flipping CTCF-binding sites, and remapping the chromosomes with Hi-C. Time and again, the data fitted the model. The team sent its paper for review in July 2015 and published the findings three months later [11].

    Mirny’s August 2015 bioRxiv paper didn’t have the same level of experimental validation, but it did include computer simulations to explain the directional bias of CTCF. In fact, both models make essentially the same predictions, leading some onlookers to speculate on whether Mirny seeded the idea. Lieberman Aiden insists that he came up with his model independently. “We submitted our paper before I ever saw their manuscript,” he says.

    There are some tiny differences. The cartoons Mirny uses to describe his model seem to suggest that one cohesin ring does the extruding, whereas Lieberman Aiden’s contains two rings, connected like a pair of handcuffs (see ‘The taming of the tangles’). Suzana Hadjur, a cell biologist at University College London, calls this mechanistic nuance “absolutely fundamental” to determining cohesin’s role in the extrusion process.

    Nik Spencer/Nature

    Neither Lieberman Aiden nor Mirny says he has a strong opinion on whether the system uses one ring or two, but they do differ on cohesin’s central contribution to loop formation. Mirny maintains that the protein is the power source for looping, whereas Lieberman Aiden summarily dismisses this idea. Cohesin “is a big doughnut”, he says. It doesn’t do that much. “It can open and close, but we are very, very confident that cohesin itself is not a motor.”

    Instead, he suspects that some other factor is pushing cohesin around, and many in the field agree. Claire Wyman, a molecular biophysicist at Erasmus University Medical Centre in Rotterdam, the Netherlands, points out that cohesin is only known to consume small amounts of energy for clasping and releasing DNA, so it’s a stretch to think of it motoring along the chromosome at the speeds required for Mirny’s model to work. “I’m willing to concede that it’s possible,” she says. “But the Magic 8-Ball would say that, ‘All signs point to no’.”

    One group of proteins that might be doing the pushing is the RNA polymerases, the enzymes that create RNA from a DNA template. In a study online in Nature this week[12], Jan-Michael Peters, a chromosome biologist at the Research Institute of Molecular Pathology in Vienna, and his colleagues show that RNA polymerases can move cohesin over long distances on the genome as they transcribe genes into RNA. “RNA polymerases are one type of motor that could contribute to loop extrusion,” Peters says. But, he adds, the data indicate that it cannot be the only force at play.

    Frank Uhlmann, a biochemist at the Francis Crick Institute in London, offers an alternative that doesn’t require a motor protein at all. In his view, a cohesin complex might slide along DNA randomly until it hits a CTCF site and creates a loop. This model requires only nearby strands of DNA to interact randomly — which is much more probable, Uhlmann says. “We do not need to make any assumptions about activities that we don’t have experimental evidence for.”

    Researchers are trying to gather experimental evidence for one model or another. At the Lawrence Livermore National Laboratory in California, for example, biophysicist Aleksandr Noy is attempting to watch loop extrusion in action in a test tube. He throws in just three ingredients: DNA, some ATP to provide energy, and the bacterial equivalent of cohesin and condensin, a protein complex known as SMC.

    “We see evidence of DNA being compacted into these kinds of flowers with loops,” says Noy, who is collaborating with Mirny on the project. That suggests that SMC — and by extension cohesin — might have a motor function. But then again, it might not. “The truth is that we just don’t know at this point,” Noy says.

    Bacterial battery

    The experiment that perhaps comes the closest to showing cohesin acting as a motor was published in February[13]. David Rudner, a bacterial cell biologist at Harvard Medical School in Boston, Massachusetts, and his colleagues made time-lapse Hi-C maps of the bacterium Bacillus subtilis that reveal SMC zipping along the chromosome and creating a loop at a rate of more than 50,000 DNA letters per minute. This tempo is on par with what researchers estimate would be necessary for Mirny’s model to work in human cells as well.
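
    A quick back-of-envelope check (my arithmetic, using the TAD sizes quoted earlier) shows why that tempo matters for human cells:

    ```python
    # At ~50,000 bp/min (the B. subtilis SMC rate), a human TAD-sized loop
    # of 200,000 to 1,000,000 letters would take only minutes to extrude.
    extrusion_rate = 50_000  # DNA letters per minute
    for loop_size in (200_000, 1_000_000):
        minutes = loop_size / extrusion_rate
        print(f"{loop_size:,} bp loop: ~{minutes:.0f} min")
    ```

    Minutes-scale looping is fast enough to organize chromosomes within a single cell cycle, which is roughly what Mirny’s model requires.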

    Rudner hasn’t yet proved that SMC uses ATP to make that happen. But, he says, he’s close — and he would be “shocked” if cohesin worked differently in human cells.

    For now, the debate rages about what cohesin is, or is not, doing inside the cell — and many researchers, including Doug Koshland, a cell biologist at the University of California, Berkeley, insist that a healthy dose of scepticism is still warranted when it comes to Mirny’s idea. “I am worried that the simplicity and elegance of the loop-extrusion model is already filling textbooks, coronated long before its time,” he says.

    And although it may seem an academic dispute among specialists, Mirny notes that if his model is correct, it will have real-world implications. In cancer, for instance, cohesin is frequently mutated and CTCF sites altered. Defective versions of cohesin have also been implicated in several rare human developmental disorders. If the loop-extruding process is to blame, says Mirny, then perhaps a better understanding of the motor could help fix the problem.

    But his main interest remains more fundamental. He just wants to understand why DNA is configured in the way it is. And although his model assumes a lot of things about cohesin, Mirny says, “The problem is that I don’t know any other way to explain the formation of these loops.”

    See the full article for 13 references with links.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

  • richardmitnick 2:33 pm on April 11, 2017 Permalink | Reply
    Tags: , DNA, , , Molecular clocks track human evolution   

    From EarthSky: “Molecular clocks track human evolution” 



    April 9, 2017
    Bridget Alex, Harvard University
    Priya Moorjani, Columbia University

    Our cells have a built-in genetic clock, tracking time… but how accurately? Image via http://www.shutterstock.com

    DNA holds the story of our ancestry – how we’re related to the familiar faces at family reunions as well as more ancient affairs: how we’re related to our closest nonhuman relatives, chimpanzees; how Homo sapiens mated with Neanderthals; and how people migrated out of Africa, adapting to new environments and lifestyles along the way. And our DNA also holds clues about the timing of these key events in human evolution. The Conversation

    When scientists say that modern humans emerged in Africa about 200,000 years ago and began their global spread about 60,000 years ago, how do they come up with those dates? Traditionally researchers built timelines of human prehistory based on fossils and artifacts, which can be directly dated with methods such as radiocarbon dating and potassium-argon dating. However, these methods require ancient remains to have certain elements or preservation conditions, and that is not always the case. Moreover, relevant fossils or artifacts have not been discovered for all milestones in human evolution.

    Analyzing DNA from present-day and ancient genomes provides a complementary approach for dating evolutionary events. Because certain genetic changes occur at a steady rate per generation, they provide an estimate of the time elapsed. These changes accrue like the ticks on a stopwatch, providing a “molecular clock.” By comparing DNA sequences, geneticists can not only reconstruct relationships between different populations or species but also infer evolutionary history over deep timescales.

    Molecular clocks are becoming more sophisticated, thanks to improved DNA sequencing, analytical tools and a better understanding of the biological processes behind genetic changes. By applying these methods to the ever-growing database of DNA from diverse populations (both present-day and ancient), geneticists are helping to build a more refined timeline of human evolution.

    How DNA accumulates changes

    Molecular clocks are based on two key biological processes that are the source of all heritable variation: mutation and recombination.

    Mutations are changes to the DNA code, such as when one nucleotide base (A, T, G or C) is incorrectly substituted for another. Image via http://www.shutterstock.com

    Mutations are changes to the letters of DNA’s genetic code – for instance, a nucleotide Guanine (G) becomes a Thymine (T). These changes will be inherited by future generations if they occur in eggs, sperm or their cellular precursors (the germline). Most result from mistakes when DNA copies itself during cell division, although other types of mutations occur spontaneously or from exposure to hazards like radiation and chemicals.

    In a single human genome, there are about 70 nucleotide changes per generation – minuscule in a genome made up of six billion letters. But in aggregate, over many generations, these changes lead to substantial evolutionary variation.

    Scientists can use mutations to estimate the timing of branches in our evolutionary tree. First they compare the DNA sequences of two individuals or species, counting the neutral differences that don’t alter one’s chances of survival and reproduction. Then, knowing the rate of these changes, they can calculate the time needed to accumulate that many differences. This tells them how long it’s been since the individuals shared ancestors.
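
    In its simplest form, the calculation is one line (a hedged sketch with illustrative numbers; real analyses model much more, such as rate variation and ancestral population sizes):

    ```python
    # Minimal mutation clock: both lineages accumulate mutations after the
    # split, hence the factor of 2 in the denominator.
    def split_time_years(differences, sites, rate_per_site_per_year):
        return differences / (2 * sites * rate_per_site_per_year)

    # Illustrative: ~36 million neutral differences across a 3-billion-site
    # genome, at 1e-9 mutations per site per year, gives a ~6-million-year
    # split, on the order of the human-chimp divergence.
    print(split_time_years(36_000_000, 3_000_000_000, 1e-9))  # -> 6000000.0
    ```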

    Comparison of DNA between you and your sibling would show relatively few mutational differences because you share ancestors – mom and dad – just one generation ago. However, there are millions of differences between humans and chimpanzees; our last common ancestor lived over six million years ago.

    Bits of the chromosomes from your mom and your dad recombine as your DNA prepares to be passed on. Chromosomes image via http://www.shutterstock.com.

    Recombination, also known as crossing-over, is the other main way DNA accumulates changes over time. It leads to shuffling of the two copies of the genome (one from each parent), which are bundled into chromosomes. During recombination, the corresponding (homologous) chromosomes line up and exchange segments, so the genome you pass on to your children is a mosaic of your parents’ DNA.

    In humans, about 36 recombination events occur per generation, one or two per chromosome. As this happens every generation, segments inherited from a particular individual get broken into smaller and smaller chunks. Based on the size of these chunks and frequency of crossovers, geneticists can estimate how long ago that individual was your ancestor.
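
    The logic behind that estimate can be sketched in miniature (an assumed simplification: roughly one crossover per 100 centimorgans per generation, ignoring variation along the genome):

    ```python
    # Recombination clock sketch: chunks inherited from one ancestor are
    # broken up each generation, so their expected length shrinks as ~1/g.
    def expected_chunk_cm(generations):
        return 100 / generations  # ~1 crossover per 100 cM per generation

    print(expected_chunk_cm(5))     # an ancestor ~5 generations back: ~20 cM chunks
    print(expected_chunk_cm(2000))  # ~2,000 generations (~50,000 years): ~0.05 cM
    ```

    Measuring chunk sizes and inverting this relationship is, in essence, how the recombination clock dates events like the Neanderthal interbreeding described below.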

    Gene flow between divergent populations leads to chromosomes with mosaic ancestry. As recombination occurs in each generation, the bits of Neanderthal ancestry in modern human genomes become smaller and smaller over time. Image via Bridget Alex.

    Building timelines based on changes

    Genetic changes from mutation and recombination provide two distinct clocks, each suited for dating different evolutionary events and timescales.

    Because mutations accumulate so slowly, this clock works better for very ancient events, like evolutionary splits between species. The recombination clock, on the other hand, ticks at a rate appropriate for dates within the last 100,000 years. These “recent” events (in evolutionary time) include gene flow between distinct human populations, the rise of beneficial adaptations or the emergence of genetic diseases.

    The case of Neanderthals illustrates how the mutation and recombination clocks can be used together to help us untangle complicated ancestral relationships. Geneticists estimate that there are 1.5-2 million mutational differences between Neanderthals and modern humans. Applying the mutation clock to this count suggests the groups initially split between 750,000 and 550,000 years ago.

    At that time, a population – the common ancestors of both human groups – separated geographically and genetically. Some individuals of the group migrated to Eurasia and over time evolved into Neanderthals. Those who stayed in Africa became anatomically modern humans.

    An evolutionary tree displays the divergence and interbreeding dates that researchers estimated with molecular clock methods for these groups. Image via Bridget Alex.

    However, their interactions were not over: Modern humans eventually spread to Eurasia and mated with Neanderthals. Applying the recombination clock to Neanderthal DNA retained in present-day humans, researchers estimate that the groups interbred between 54,000 and 40,000 years ago. When scientists analyzed a Homo sapiens fossil, known as Oase 1, who lived around 40,000 years ago, they found large regions of Neanderthal ancestry embedded in the Oase genome, suggesting that Oase had a Neanderthal ancestor just four to six generations ago. In other words, Oase’s great-great-grandparent was a Neanderthal.

    Comparing chromosome 6 from the 40,000-year-old Oase fossil to a present-day human. The blue bands represent segments of Neanderthal DNA from past interbreeding. Oase’s segments are longer because he had a Neanderthal ancestor just 4–6 generations before he lived, based on estimates using the recombination clock. Image via Bridget Alex.

    The challenges of unsteady clocks

    Molecular clocks are a mainstay of evolutionary calculations, not just for humans but for all forms of living organisms. But there are some complicating factors.

    The main challenge arises from the fact that mutation and recombination rates have not remained constant over human evolution. The rates themselves are evolving, so they vary over time and may differ between species and even across human populations, albeit fairly slowly. It’s like trying to measure time with a clock that ticks at different speeds under different conditions.

    One issue relates to a gene called Prdm9, which determines the location of those DNA crossover events. Variation in this gene in humans, chimpanzees and mice has been shown to alter recombination hotspots – short regions of high recombination rates. Due to the evolution of Prdm9 and hotspots, the fine-scale recombination rates differ between humans and chimps, and possibly also between Africans and Europeans. This implies that over different timescales and across populations, the recombination clock ticks at slightly different rates as hotspots evolve.

    Another issue is that mutation rates vary by sex and age. As fathers get older, they transmit a couple of extra mutations to their offspring per year; the sperm of older fathers has undergone more rounds of cell division, creating more opportunities for mutations. Mothers, on the other hand, transmit fewer mutations (about 0.25 per year) because a female’s eggs are mostly formed all at once, before her own birth. Mutation rates also depend on factors like onset of puberty, age at reproduction and rate of sperm production. These life history traits vary across living primates and probably also differed between extinct species of human ancestors.

    Consequently, over the course of human evolution, the average mutation rate seems to have slowed significantly. The average rate over millions of years since the split of humans and chimpanzees has been estimated as about 1×10⁻⁹ mutations per site per year – or roughly six altered DNA letters per year. This rate is determined by dividing the number of nucleotide differences between humans and other apes by the date of their evolutionary splits, as inferred from fossils. It’s like calculating your driving speed by dividing distance traveled by time passed. But when geneticists directly measure nucleotide differences between living parents and children (using human pedigrees), the mutation rate is half the other estimate: about 0.5×10⁻⁹ per site per year, or only about three mutations per year.
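
    The sensitivity to this rate choice is easy to see with the article’s own numbers (a back-of-envelope sketch; the 1.75-million figure is simply the midpoint of the 1.5–2 million range quoted earlier):

    ```python
    # Same mutation clock, two measured rates: halving the rate doubles the date.
    def split_time_years(differences, sites, rate_per_site_per_year):
        return differences / (2 * sites * rate_per_site_per_year)

    diffs, sites = 1_750_000, 3_000_000_000   # Neanderthal-human differences; genome sites
    print(split_time_years(diffs, sites, 0.5e-9))  # slower pedigree rate: ~583,000 years
    print(split_time_years(diffs, sites, 1.0e-9))  # faster fossil-calibrated rate: ~292,000 years
    ```

    The slower pedigree rate lands inside the 765,000–550,000-year window; the faster rate roughly halves it, which is exactly the discrepancy described next.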

    For the divergence between Neanderthals and modern humans, the slower rate provides an estimate between 765,000 and 550,000 years ago. The faster rate, however, would suggest half that age, or 380,000 to 275,000 years ago: a big difference.

    To resolve the question of which rates to use when and on whom, researchers have been developing new molecular clock methods, which address the challenges of evolving mutation and recombination rates.

    New approaches for better dating

    One approach is to focus on mutations that arise at a steady rate regardless of sex, age and species. This may be the case for a special type of mutation that geneticists call CpG transitions, in which the C nucleotides at CG sites spontaneously become T’s. Because CpG transitions mostly do not result from DNA copying errors during cell division, their rates should be mainly independent of life history variables – and presumably more uniform over time.
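As a concrete illustration, a CpG transition can be sketched in a few lines of Python; the sequence and the chosen site below are invented for illustration:

```python
# Sketch of a CpG transition: find the CpG sites, then mutate one C to a T.

def cpg_sites(seq):
    """Indices i where seq[i:i+2] == 'CG' (a CpG dinucleotide)."""
    return [i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"]

def apply_cpg_transition(seq, site):
    """Replace the C of a CpG with T, the spontaneous change described above."""
    assert seq[site:site + 2] == "CG", "not a CpG site"
    return seq[:site] + "T" + seq[site + 1:]

seq = "ATCGGACGTTACG"
print(cpg_sites(seq))                # [2, 6, 11]
print(apply_cpg_transition(seq, 6))  # ATCGGATGTTACG
```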

    Focusing on CpG transitions, geneticists recently estimated the split between humans and chimps to have occurred between 9.3 and 6.5 million years ago, which agrees with the age expected from fossils. While in comparisons across species, these mutations seem to happen more like clockwork than other types, they are still not completely steady.

    Another approach is to develop models that adjust molecular clock rates based on sex and other life history traits. Using this method, researchers calculated a chimp-human divergence consistent with the CpG estimate and fossil dates. The drawback here is that, when it comes to ancestral species, we can’t be sure of life history traits, like age at puberty or generation length, leading to some uncertainty in the estimates.

    The most direct solution comes from analyses of ancient DNA recovered from fossils. Because the fossil specimens are independently dated by geologic methods, geneticists can use them to calibrate the molecular clocks for a given time period or population.

    This strategy recently resolved the debate over the timing of our divergence with Neanderthals. In 2016, geneticists extracted ancient DNA from 430,000-year-old fossils that were Neanderthal ancestors, after their lineage split from Homo sapiens. Knowing where these fossils belong in the evolutionary tree, geneticists could confirm that for this period of human evolution, the slower molecular clock rate of 0.5×10⁻⁹ provides accurate dates. That puts the Neanderthal-modern human split between 765,000 and 550,000 years ago.

    As geneticists sort out the intricacies of molecular clocks and sequence more genomes, we’re poised to learn more than ever about human evolution, directly from our DNA.

    Bridget Alex, Postdoctoral College Fellow, Department of Human Evolutionary Biology, Harvard University and Priya Moorjani, Postdoctoral Research Fellow in Biological Sciences, Columbia University

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 10:13 am on March 9, 2017 Permalink | Reply
    Tags: DNA, Embryos can be repaired, in vitro fertilization, Triple helix

    From Yale: “Gene editing opens the door to a “revolution” in treating and preventing disease” 

    Yale University bloc

    Yale University

    March 8, 2017
    John Dent Curtis

    Today, in vitro fertilization provides a way for couples to avoid passing potentially disease-causing genes to their offspring. A couple will undergo genetic screening. Tests will determine whether their unborn children are at risk. If embryos created through IVF show signs of such a genetic mutation, they can be discarded.

    Flash forward a few years, and, instead of being discarded, those embryos can be repaired with new gene editing technologies. And those repairs will affect not only those children, but all their descendants.

    “This is definitely new territory,” said Pasquale Patrizio, M.D., director of the Yale Fertility Center and Fertility Preservation Program. “We are at the verge of a huge revolution in the way disease is treated.”


    In a move that seems likely to help clear the path for the use of gene editing in the clinical setting, on February 14 the Committee on Human Gene Editing, formed by the National Academy of Medicine and the National Academy of Sciences, recommended that research into human gene editing should go forward under strict ethical and safety guidelines. Among their concerns were ensuring that the technology be used to treat only serious diseases for which there is no other remedy, that there be broad oversight, and that there be equal access to the treatment. These guidelines provide a framework for discussion of technology that has been described as an “ethical minefield” and for which there is no government support in the United States.

    A main impetus for the committee’s work appears to be the discovery and widespread use of CRISPR-Cas9, a defense that bacteria use against viral infection. Scientists including former Yale faculty member Jennifer Doudna, Ph.D., now at the University of California, Berkeley, and Emmanuelle Charpentier, Ph.D., of the Max Planck Institute for Infection Biology in Berlin, discerned that the CRISPR enzyme could be harnessed to make precision cuts and repairs to genes. Faster, easier, and cheaper than previous gene editing technologies, CRISPR was declared the breakthrough of the year in 2015 by Science magazine, and has become a basic and ubiquitous laboratory research tool. The committee’s guidelines, said scientists, physicians, and ethicists at Yale, could pave the way for thoughtful and safe use of this and other human gene editing technologies. In addition to CRISPR, the committee described three commonly used gene editing techniques: zinc finger nucleases, meganucleases, and transcription activator-like effector nucleases.

    Patrizio, professor of obstetrics, gynecology, and reproductive sciences, said the guidelines are on the mark, especially because they call for editing only in circumstances where the diseases or disabilities are serious and where there are not alternative treatments. He and others cited such diseases as cystic fibrosis, sickle cell anemia, and thalassemia as targets for gene editing. Because they are caused by mutations in a single gene, repairing that one gene could prevent disease.

    Peter Glazer, M.D. ’87, Ph.D. ’87, HS ’91, FW ’91, chair and the Robert E. Hunter Professor of Therapeutic Radiology and professor of genetics, said, “The field will benefit from guidelines that are thoughtfully developed. This was a step in the right direction.”

    The panel recommended that gene editing techniques should be limited to deal with genes proven to cause or predispose to specific diseases. It should be used to convert mutated genes to versions that are already prevalent in the population. The panel also called for stringent oversight of the process and for a prohibition against use of the technology for “enhancements,” rather than to treat disease. “As physicians, we understand what serious diseases are. Many of them are very well known and well characterized on a genetic level,” Glazer said. “The slippery slope is where people start thinking about modifications in situations where people don’t have a serious disorder or disease.”

    Mark Mercurio, M.D., professor of pediatrics (neonatology), and director of the Program for Biomedical Ethics, echoed that concern. While he concurs with the panel’s recommendations, he urged a clear definition of disease prevention and treatment. “At some point we are not treating, but enhancing.” This in turn, he said, conjures up the nation’s own medical ethical history, which includes eugenics policies in the early 20th century that were later adopted in Nazi Germany. “This has the potential to help a great many people, and is a great advance. But we need to be cognizant of the history of eugenics in the United States and elsewhere, and need to be very thoughtful in how we use this technology going forward,” he said.

    The new technology, he said, can lead to uncharted ethical waters. “Pediatric ethics are more difficult,” Mercurio said. “It is one thing to decide for yourself – is this a risk I’m willing to take? – and another thing to decide for a child. It is another thing still further, which we have never had to consider, to decide for future generations.”

    Myron Genel, M.D., emeritus professor of pediatrics and senior research scientist, served on Connecticut’s stem cell commission and four years on the Health and Human Services Secretary’s Advisory Committee on Human Research Protections. He believes that Connecticut’s guidelines on stem cell research provide a framework for addressing the issues associated with human gene editing. “There is a whole regulatory process that has been evolved governing the therapeutic use of stem cells,” he said. “There are mechanisms that have been put in place for effective local oversight and national oversight for stem cell research.”

    Although CRISPR has been the subject of a bitter patent dispute between Doudna and Charpentier and The Broad Institute in Cambridge, Mass., a recent decision by the U.S. Patent Trial and Appeal Board in favor of Broad is unlikely to affect research at Yale and other institutions. Although Broad, an institute of Harvard and the Massachusetts Institute of Technology, can now claim the patent, universities do not typically enforce patent rights against other universities over research uses.

    At Yale, scientists and physicians noted that gene editing is years away from human trials, and that risks remain. The issue now, said Glazer, is “How do we do it safely? It is never going to be risk-free. Many medical therapies have side effects and we balance the risks and benefits.” Despite its effectiveness, CRISPR is also known for what’s called “off-target risk,” imprecise cutting and splicing of genes that could lead to unforeseen side effects that persist in future generations. “CRISPR is extremely potent in editing the gene it is targeting,” Glazer said. “But it is still somewhat promiscuous and will cut other places. It could damage a gene you don’t want damaged.”

    Glazer has been working with a gene editing technology called triple helix that hijacks DNA’s own repair mechanisms to fix gene mutations. Triple helix, as its name suggests, adds a third strand to the double helix of DNA. That third layer, a peptide nucleic acid, binds to DNA and provokes a natural repair process that copies a strand of DNA into a target gene. Unlike CRISPR and other editing techniques, it does not use nucleases that cut DNA. “This just recruits a process that is natural. Then you give the cell this piece of DNA, this template that has a new sequence,” Glazer said, adding that triple helix is more precise than CRISPR and leads to fewer off-target effects, but is a more complex technology that requires advanced synthetic chemistry.

    Along with several scientists across Yale, Glazer is studying triple helix as a potential treatment for cystic fibrosis, HIV/AIDS, spherocytosis, and thalassemia.

    Adele Ricciardi, a student in her sixth year of the M.D./Ph.D. program, is working with Glazer and other faculty on use of triple helix to make DNA repairs in utero. She also supports the panel’s decision, but believes that more public discussion is needed to allay fears of misuse of the technology. In a recent presentation to her lab mates, she noted that surveys show widespread public concern about such biomedical advances. One study found that most of those surveyed felt it should be illegal to change the genes of unborn babies, even to prevent disease.

    “There is, I believe, a misconception of what we are using gene editing for,” Ricciardi said. “We are using it to edit disease-causing mutations, not to improve the intelligence of our species or get favorable characteristics in babies. We can improve quality of life in kids with severe genetic disorders.”

    See the full article here.


    Yale University Campus

    Yale University comprises three major academic components: Yale College (the undergraduate program), the Graduate School of Arts and Sciences, and the professional schools. In addition, Yale encompasses a wide array of centers and programs, libraries, museums, and administrative support offices. Approximately 11,250 students attend Yale.

  • richardmitnick 2:45 pm on February 24, 2017 Permalink | Reply
    Tags: DNA, Nucleotides, Seesaw Compiler

    From Caltech: “Computing with Biochemical Circuits Made Easy” 

    Caltech Logo

    Detail from painting “What Dreams Are Made Of.” Credit: Ann Erpino

    Electronic circuits are found in almost everything from smartphones to spacecraft and are useful in a variety of computational problems from simple addition to determining the trajectories of interplanetary satellites. At Caltech, a group of researchers led by Assistant Professor of Bioengineering Lulu Qian is working to create circuits using not the usual silicon transistors but strands of DNA.

    The Qian group has made the technology of DNA circuits accessible to even novice researchers—including undergraduate students—using a software tool they developed called the Seesaw Compiler. Now, they have experimentally demonstrated that the tool can be used to quickly design DNA circuits that can then be built out of cheap “unpurified” DNA strands, following a systematic wet-lab procedure devised by Qian and colleagues.

    A paper describing the work appears in the February 23 issue of Nature Communications.

    Although DNA is best known as the molecule that encodes the genetic information of living things, it is also a useful chemical building block. This is because the smaller molecules that make up a strand of DNA, called nucleotides, bind together only according to very specific rules—an A nucleotide binds to a T, and a C nucleotide binds to a G. A strand of DNA is a sequence of nucleotides and can become a double strand if it binds with a sequence of complementary nucleotides.
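The pairing rules just described can be written down directly as code. A small sketch (the reversal reflects the antiparallel orientation of the two strands in a double helix, a detail not spelled out in the text):

```python
# The A-T / C-G pairing rules as a reverse-complement function.

PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand):
    """The strand that binds the given one, read in its own 5'-to-3' order."""
    return "".join(PAIR[base] for base in reversed(strand))

s = "ATTCG"
print(reverse_complement(s))                      # CGAAT
print(reverse_complement(reverse_complement(s)))  # ATTCG (its own inverse)
```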

    DNA circuits are good at collecting information within a biochemical environment, processing the information locally and controlling the behavior of individual molecules. Circuits built out of DNA strands instead of silicon transistors can be used in completely different ways than electronic circuits. “A DNA circuit could add ‘smarts’ to chemicals, medicines, or materials by making their functions responsive to the changes in their environments,” Qian says. “Importantly, these adaptive functions can be programmed by humans.”

    To build a DNA circuit that can, for example, compute the square root of a number between 0 and 16, researchers first have to carefully design a mixture of single and partially double-stranded DNA that can chemically recognize a set of DNA strands whose concentrations represent the value of the original number. Mixing these together triggers a cascade of zipping and unzipping reactions, each reaction releasing a specific DNA strand upon binding. Once the reactions are complete, the identities of the resulting DNA strands reveal the answer to the problem.
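Stripped of the chemistry, the function such a circuit computes is ordinary logic: the floor of the square root of a four-bit number, reported as a two-bit answer. A sketch of that input-output behavior (this models only what the circuit computes, not the strand-displacement reactions themselves):

```python
# floor(sqrt(n)) for a four-bit input n, reported as two output bits.
import math

def sqrt4bit(n):
    assert 0 <= n <= 15, "input must fit in four bits"
    root = math.isqrt(n)               # 0..3, fits in two bits
    return (root >> 1) & 1, root & 1   # (high bit, low bit)

print(sqrt4bit(9))  # (1, 1)  -> binary 11 = 3
print(sqrt4bit(4))  # (1, 0)  -> binary 10 = 2
```

In the wet lab, each of those bits is represented not by a voltage but by the concentration of a particular DNA strand.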

    With the Seesaw Compiler, a researcher could tell a computer the desired function to be calculated and the computer would design the DNA sequences and mixtures needed. However, it was not clear how well these automatically designed DNA sequences and mixtures would work for building DNA circuits with new functions; for example, computing the rules that govern how a cell evolves by sensing neighboring cells, defined in a classic computational model called “cellular automata.”

    “Constructing a circuit made of DNA has thus far been difficult for those who are not in this research area, because every circuit with a new function requires DNA strands with new sequences and there are no off-the-shelf DNA circuit components that can be purchased,” says Chris Thachuk, senior postdoctoral scholar in computing and mathematical sciences and second author on the paper. “Our circuit-design software is a step toward enabling researchers to just type in what they want to do or compute and having the software figure out all the DNA strands needed to perform the computation, together with simulations to predict the DNA circuit’s behavior in a test tube. Even though these DNA strands are still not off-the-shelf products, we have now shown that they do work well for new circuits with user-designed functions.”

    “In the 1950s, only a few research labs that understood the physics of transistors could build early versions of electronic circuits and control their functions,” says Qian. “But today many software tools are available that use simple and human-friendly languages to design complex electronic circuits embedded in smart machines. Our software is kind of like that: it translates simple and human-friendly descriptions of computation to the design of complex DNA circuits.”

    The Seesaw Compiler was put to the test in 2015 in a unique course at Caltech, taught by Qian and called “Design and Construction of Programmable Molecular Systems” (BE/CS 196 ab). “How do you evaluate the accessibility of a new technology? You give the technology to someone who is intellectually capable but has minimal prior background,” Qian says.

    “The students in this class were undergrads and first-year graduate students majoring in computer science and bioengineering,” says Anupama Thubagere, a graduate student in biology and bioengineering and first author on the paper. “I started working with them as a head teaching assistant and together we soon discovered that using the Seesaw Compiler to design a DNA circuit was easy for everyone.”

    However, building the designed circuit in the wet lab was not so simple. Thus, with continued efforts after the class, the group set out to develop a systematic wet-lab procedure that could guide researchers—even novices like undergraduate students—through the process of building DNA circuits. “Fortunately, we found a general solution to every challenge that we encountered, now making it easy for everyone to build their own DNA circuits,” Thubagere says.

    The group showed that it was possible to use cheap, “unpurified” DNA strands in these circuits using the new process. This was only possible because steps in the systematic wet-lab procedure were designed to compensate for the lower synthesis quality of the DNA strands.

    “We hope that this work will convince more computer scientists and researchers from other fields to join our community in developing increasingly powerful molecular machines and to explore a much wider range of applications that will eventually lead to the transformation of technology that has been promised by the invention of molecular computers,” Qian says.

    The paper is titled, Compiler-aided systematic construction of large-scale DNA strand displacement circuits using unpurified components. Other Caltech co-authors include graduate students Robert Johnson and Kevin Cherry, alumnus Joseph Berleant (BS ’16), and undergraduate Diana Ardelean. The work was funded by the National Science Foundation, the Banting Postdoctoral Fellowships program, the Burroughs Wellcome Fund, and Innovation in Education funds from Caltech.

    See the full article here.

    Caltech campus
    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

  • richardmitnick 1:38 pm on January 9, 2017 Permalink | Reply
    Tags: DNA, Skeletal muscle mass

    From U Aberdeen: “Gene could play role in body’s muscle mass” 

    U Aberdeen bloc

    University of Aberdeen

    09 January 2017
    Euan Wemyss

    Scientists at the University of Aberdeen identify gene which could play role in determining muscle mass. No image credit.

    “Our research suggests this gene could play a role in regulating muscle mass and the fact that drugs have already been developed to target the gene gives us an obvious focus for further research”
    Dr Arimantas Lionikas

    Scientists have identified a gene they think could play a role in determining a person’s muscle mass – which is linked to a number of health factors, including how long someone lives.

    Previous studies have shown a link between muscle mass and life expectancy in elderly people.

    Muscle is the most abundant tissue in the body and enables many functions, from moving around to breathing.

    The amount of skeletal muscle mass each person has can vary significantly.

    Skeletal muscle mass can be increased if a person undertakes strength exercise but genetic factors play an equally important role in determining how much muscle mass a person can have.

    Now, scientists at the University of Aberdeen, led by Dr Arimantas Lionikas, have identified a gene that appears to affect muscle mass in mice. The findings have been published in Nature Genetics.

    The same gene has previously been linked with the spread of cancer and drugs have been developed to target it.

    The team hope to study these drugs further to understand their effects on muscle tissue. If different drugs target the same gene, the research could uncover which one has the least negative effect on muscle mass.

    “Skeletal muscle mass is incredibly important in humans, especially as they get older,” said Dr Lionikas. “We have already seen in older adults that statistically, those with lower muscle mass are more likely to die at a younger age.

    “Our research suggests this gene could play a role in regulating muscle mass and the fact that drugs have already been developed to target the gene gives us an obvious focus for further research.”

    See the full article here.


    U Aberdeen Campus

    Founded in 1495 by William Elphinstone, Bishop of Aberdeen and Chancellor of Scotland, the University of Aberdeen is Scotland’s third oldest and the UK’s fifth oldest university.

    William Elphinstone established King’s College to train doctors, teachers and clergy for the communities of northern Scotland, and lawyers and administrators to serve the Scottish Crown. Much of the King’s College still remains today, as do the traditions which the Bishop began.

    King’s College opened with 36 staff and students, and embraced all the known branches of learning: arts, theology, canon and civil law. In 1497 it was first in the English-speaking world to create a chair of medicine. Elphinstone’s college looked outward to Europe and beyond, taking the great European universities of Paris and Bologna as its model.
    Uniting the Rivals

    In 1593, a second, Post-Reformation University, was founded in the heart of the New Town of Aberdeen by George Keith, fourth Earl Marischal. King’s College and Marischal College were united to form the modern University of Aberdeen in 1860. At first, arts and divinity were taught at King’s and law and medicine at Marischal. A separate science faculty – also at Marischal – was established in 1892. All faculties were opened to women in 1892, and in 1894 the first 20 matriculated female students began their studies. Four women graduated in arts in 1898, and by the following year, women made up a quarter of the faculty.

    Into our Sixth Century

    Throughout the 20th century Aberdeen has consistently increased student recruitment, which now stands at 14,000. In recent years picturesque and historic Old Aberdeen, home of Bishop Elphinstone’s original foundation, has again become the main campus site.

    The University has also invested heavily in medical research, where time and again University staff have demonstrated their skills as world leaders in their field. The Institute of Medical Sciences, completed in 2002, was designed to provide state-of-the-art facilities for medical researchers and their students. This was followed in 2007 by the Health Sciences Building. The Foresterhill campus is now one of Europe’s major biomedical research centres. The Suttie Centre for Teaching and Learning in Healthcare, a £20m healthcare training facility, opened in 2009.

  • richardmitnick 10:17 am on January 9, 2017 Permalink | Reply
    Tags: 16S rRNA sequencing, Archaea, DNA, Polymerase chain reaction, Prokaryotes, The Never-Ending Quest to Rewrite the Tree of Life

    From NOVA: “The Never-Ending Quest to Rewrite the Tree of Life” 



    04 Jan 2017
    Carrie Arnold

    The bottom of the ocean is one of the most mysterious places on the planet, but microbiologist Karen Lloyd of the University of Tennessee, Knoxville, wanted to go deeper than that. In 2010, as a postdoc at Aarhus University in Denmark, she wanted to see what microbes were living more than 400 feet beneath the sea floor.

    Like nearly all microbiologists doing this type of census, she relied on 16S rRNA sequencing to determine who was there. Developed by microbiologist Carl Woese in the late 1970s, the technique looks for variation in the 16S rRNA gene, one that’s common to all organisms (it’s key to turning DNA into protein, one of life’s most fundamental processes). When Lloyd compared what she had seen under the microscope to what her sequencing data said, however, she knew her DNA results were missing a huge portion of the life hidden underneath the ocean.

    “I had two problems with just 16S sequencing. One, I knew it would miss organisms, and two, it’s not good for understanding small differences between microbes,” Lloyd says.

    Scientists use heat maps like these to visualize the diversity of bacteria in various environments. Credits below.

    Technology had made gene sequencing much quicker and easier compared to when Woese first started his work back in the 1970s, but the principle remained the same. The 16S rRNA gene codes for a portion of the machinery used by prokaryotes to make protein, which is a central activity in the cell. All microbes have a copy of this gene, but different species have slightly different copies. If two species are closely related, their 16S rRNA sequences will be nearly identical; more distantly related organisms will have a greater number of differences. Not only did Woese’s work give researchers a way to quantify evolutionary relationships between species, it also revealed an entirely new branch on the tree of life—the archaea, a group of microscopic organisms distinct from bacteria.
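The comparison Woese’s method relies on can be illustrated by counting differing positions between aligned gene copies. The short sequences below are invented toys, far shorter than the real ~1,500-nucleotide 16S gene:

```python
# Quantifying relatedness the way 16S comparisons do: the fraction of
# positions that differ between aligned copies of the same gene.

def fraction_different(a, b):
    assert len(a) == len(b), "sequences must be aligned to equal length"
    return sum(x != y for x, y in zip(a, b)) / len(a)

species_a = "ACGTACGTACGTACGTACGT"
species_b = "ACGTACGTACGTACGTACGA"  # one difference: a close relative
species_c = "ACGTTCGAACGGACTTACGA"  # five differences: a distant relative

print(fraction_different(species_a, species_b))  # 0.05
print(fraction_different(species_a, species_c))  # 0.25
```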

    Woese’s success in using 16S rRNA to rewrite the tree of life no doubt encouraged its widespread use. But as Lloyd and other scientists began to realize, some microbes carry a version that is significantly different from that seen in other bacteria or archaea. Since biologists depended on this similarity to identify an organism, they began to realize that they were leaving out potentially significant portions of life from their investigations.

    These concerns culminated approximately ten years ago during a period when sequencing technologies were rapidly accelerating. During this time, researchers figured out how to prepare DNA for sequencing without needing to know anything about the organism you were studying. At the same time, scientists invented a strategy to isolate single cells. At her lab at the Joint Genome Institute outside San Francisco, microbiologist Tanja Woyke put these two strategies together to sequence the genomes of individual microbial cells. Meanwhile, Jill Banfield, across the bay at the University of California, Berkeley, used a different approach called metagenomics that sequenced genes from multiple species at once, and used computer algorithms to reconstruct each organism’s genome. Over the past several years, their work has helped illuminate the massive amount of microbial dark matter that comprises life on Earth.

    “These two strategies really complement each other. They have opened up our ability to see the true diversity of microbial life,” says Roger Lasken, a microbial geneticist at the J. Craig Venter Institute.

    Microbial Dark Matter

    When Woese sequenced the 16S genes of the microbes that would come to be known as archaea, they were completely different from most of the other bacterial sequences he had accumulated. Like bacteria, these microbes lacked a true nucleus, but their metabolisms were completely different. They also tended to favor extreme environments, such as high temperatures (hot springs and hydrothermal vents), high salt concentrations, or high acidity. Sensing their ancient origins, Woese named these microbes the archaea, and gave them their own branch on the tree of life.

    Woese did all of his original sequencing by hand, a laborious process that took years. Later, DNA sequencing machines greatly simplified the work, although it still required amplifying the small amount of DNA present using a technique known as polymerase chain reaction, or PCR, before sequencing. The utility of 16S sequencing soon made the technique one of the mainstays of the microbiology lab, along with the Petri dish and the microscope.

    The method uses a set of what’s known as universal primers—short strands of RNA or DNA that help jump start the duplication of DNA—to make lots of copies of the 16S gene so it can be sequenced. The primers bound to a set of DNA sequences flanking the 16S gene that were thought to be common to all organisms. This acted like a set of bookends to identify the region to be copied by PCR. As DNA sequencing technology improved, researchers began amplifying and sequencing 16S genes in environmental samples as a way of identifying the microbes present without the need to grow them in the lab. Since scientists have only been able to culture about one in 100 microbial species, this method opened broad swaths of biodiversity that would otherwise have remained invisible.

    “We didn’t know that these deep branches existed. Trying to study life from just 16S rRNA sequences is like trying to understand all animals by visiting a zoo,” says Lionel Guy, a microbiologist from Uppsala University in Sweden.

    Access the mp4 video here.
    Discover how to interpret and create evolutionary trees, then explore the tree of life in NOVA’s Evolution Lab.

    It didn’t take long, however, for scientists to realize the universal primers weren’t nearly as universal as researchers had hoped. The use of the primers rested on the assumption that all organisms, even unknown ones, would have similar DNA sequences surrounding the 16S rRNA gene. But that meant that any true oddballs probably wouldn’t have 16S rRNA sequences that matched the universal primers—they would remain invisible. These uncultured, unsequenced species were nicknamed “microbial dark matter” by Stanford University bioengineer and physicist Stephen Quake in a 2007 PNAS paper.
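Why a “universal” primer misses the oddballs can be sketched as a simple substring search. Real PCR tolerates some primer-template mismatch, so exact matching here is a deliberate simplification, and the primer and both templates are toy stand-ins far shorter than a real 16S gene:

```python
# A "universal" primer only amplifies templates containing its binding site.

PRIMER = "AGAGTTTGAT"  # stand-in for a universal forward primer

def primer_binds(template, primer=PRIMER):
    return primer in template

typical_microbe = "GGCCAGAGTTTGATCCTGGCTCAG"  # primer site present
oddball_microbe = "GGCCAGAGTTCGATCCTGGCTCAG"  # one mismatch in the site

print(primer_binds(typical_microbe))  # True  -> amplified, shows up in the census
print(primer_binds(oddball_microbe))  # False -> invisible "dark matter"
```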

    The name, he says, is analogous to dark matter in physics, which is invisible but thought to make up the bulk of the universe. “It took DNA technology to realize the depth of the problem. I mean, holy crap, there’s a lot more out there than we can discover,” Quake says.

    Quake’s snappy coinage gave rise to the Microbial Dark Matter project—an ongoing quest in microbiology, led by Woyke, to understand the branches on the tree of life that remain shrouded in mystery by isolating DNA from single bacterial and archaeal cells. These microbial misfits intrigued Lloyd as well, and she believed the subsurface had many more of them than anyone thought. Her task was to find them.

    “We had no idea what was really there, but we knew it was something,” Lloyd says.

    To solve her Rumsfeldian dilemma of identifying both her known and unknown unknowns, Lloyd needed a DNA sequencing method that would allow her to sequence the genomes of the microbes in her sample without any preconceived notions of what they looked like. As it turns out, a scientist in New Haven, Connecticut, was doing just that.

    Search for Primers

    In the 1990s, Roger Lasken had recognized the problems with traditional 16S rRNA and other forms of sequencing. Not only did you need to know something about the DNA sequence ahead of time in order to make enough genetic material to be sequenced, you also needed a fairly large sample. The result was a significant limitation in the types of material that could be sequenced. Lasken wanted to be able to sequence the genome of a single cell without needing to know anything about it.

    Then employed at the biotech firm Molecular Staging, Lasken began work on what he called multiple displacement amplification (MDA). He built on a recently discovered DNA polymerase (the enzyme that adds nucleotides, one by one, to a growing piece of DNA) called φ29 DNA polymerase. Compared to the more commonly used Taq polymerase, the φ29 polymerase created much longer strands of DNA and could operate at much cooler temperatures. Scientists had also developed random primers, small pieces of randomly generated DNA. Unlike the universal primers, which were designed to match specific DNA sequences 20–30 nucleotides in length, random primers were only six nucleotides long. This meant they were small enough to match pieces of DNA on any genome. With enough random primers to act as starting points for the MDA process, scientists could confidently amplify and sequence all the genetic material in a sample. The bonus inherent in the random primers was that scientists didn’t need to know anything about the sample they were sequencing in order to begin work.
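    A back-of-envelope calculation shows why six-nucleotide random primers behave so differently from 20-to-30-nucleotide universal primers. Assuming a random genome with equal base frequencies (a simplification; the numbers are purely illustrative):

```python
import random

# Why random hexamers "prime everywhere" while long primers are specific:
# a primer of length k matches a random sequence once every 4**k bases
# on average, since each position has a 1-in-4 chance of matching.

BASES = "ACGT"

for k in (6, 20):
    print(f"{k}-mer primer: one exact match every {4**k:,} bases on average")

# 4**6 = 4,096, so in a genome millions of bases long a pool of random
# hexamers finds thousands of start points; a 20-mer (4**20 ≈ 1.1e12)
# essentially matches only its designed target.

def random_hexamer():
    """One member of a random-primer pool."""
    return "".join(random.choice(BASES) for _ in range(6))

pool = [random_hexamer() for _ in range(8)]
print(pool)
```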

    “For the first time, you didn’t need to culture an organism or amplify its DNA to sequence it,” he says.

    The method had only been tested on relatively small pieces of DNA. Lasken’s major breakthrough, published in PNAS in 2002, was making the system work for larger chromosomes, including those in humans. Lasken was halfway to his goal—his next step was figuring out how to do this in a single bacterium, which would enable researchers to sequence any microbial cell they found. In 2005, Lasken and colleagues managed to isolate a single E. coli cell and sequence its 16S rRNA gene using MDA. It was a good proof of principle that the system worked, but to understand the range and depth of microbial biodiversity, researchers like Tanja Woyke, the microbiologist at the Joint Genome Institute, needed to look at the entire genome of a single cell. In theory, the system should work neatly: grab a single cell, amplify its DNA, and then sequence it. But putting all of the steps together and working out the kinks in the system would require years of work.

    Woyke had spent her postdoc at the Joint Genome Institute sequencing DNA from samples not grown in the lab, but drawn directly from the environment, like a scoop of soil. At the time, she was using metagenomics, which amplified and sequenced DNA directly from environmental samples, yielding millions of As, Ts, Gs, and Cs from even a thimble of dirt. Woyke’s problem was determining which genes belonged to which microbe, a key step in assembling a complete genome. Nor was she able to study different strains of the same microbe that were present in a sample because their genomes were just too similar to tell apart using the available sequencing technology. What’s more, the sequences from common species often completely drowned out the data from more rare ones.

    “I kept thinking to myself, wouldn’t it be nice to get the entire genome from just a single cell,” Woyke says. Single-cell genomics would enable her to match a genome and a microbe with near 100% certainty, and it would also allow her to identify species with only a few individuals in any sample. Woyke saw a chance to make her mark with these rare but environmentally important species.

    Soon after that, she read Lasken’s paper and decided to try his technique on microbes she had isolated from the grass sharpshooter Draeculacephala minerva, an important plant pest. One of her biggest challenges was contamination. Pieces of DNA are everywhere—on our hands, on tables and lab benches, and in the water. The short, random primers upon which single-cell sequencing was built could help amplify these fragments of DNA just as easily as they could the microbial genomes Woyke was studying. “If someone in the lab had a cat, it could pick up cat DNA,” Woyke says of the technique.

    In 2010, after more than a year of work, Woyke had her first genome, that of Sulcia bacteria, which had a small genome and could only live inside the grass sharpshooter. Each cell also carried two copies of the genome, which helped make Woyke’s work easier. It was a test case that proved the method, but to shine a spotlight on the world’s hidden microbial biodiversity, Woyke would need to figure out how to sequence the genomes from multiple individual microbes.

    Work with Jonathan Eisen, a microbiologist at UC Davis, on the Genomic Encyclopedia of Bacteria and Archaea Project, known as GEBA, enabled her lab to set up a pipeline to perform single-cell sequencing on multiple organisms at once. GEBA, which seeks to sequence thousands of bacterial and archaeal genomes, provided a perfect entry to her Microbial Dark Matter sequencing project. More than half of all known bacterial phyla—the taxonomic rank just below kingdom—were represented by only a single 16S rRNA sequence.

    “We knew that there were far more microbes and a far greater diversity of life than just those organisms being studied in the lab,” says Matthew Kane, a program director at the National Science Foundation and a former microbiologist. Studying the select few organisms that scientists could grow in pure culture was “useful for picking apart how cells work, but not for understanding life on Earth.”

    GEBA was a start, but even the best encyclopedia is no match for even the smallest public library. Woyke’s Microbial Dark Matter project would lay the foundation for the first of those libraries. She didn’t want to fill it with just any sequences, however. Common bacteria like E. coli, Salmonella, and Clostridium were the Dr. Seuss books and Shakespeare plays of the microbial world—every library had copies, though they represented only a tiny slice of all published works. Woyke was after the bacterial and archaeal equivalents of rare, single-edition books. So she began searching in extreme environments including boiling hot springs of caustic acid, volcanic vents at the bottom of the ocean, and deep inside abandoned mines.

    Using the single-cell sequencing techniques that she had perfected at the Joint Genome Institute, Woyke and her colleagues ended up with exactly 201 genomes from these candidate phyla, representing 29 branches on the tree of life that scientists knew nothing about. “For many phyla, this was the first genomic data anyone had seen,” she says.

    The results, published in Nature in 2013, identified some unusual species for which even Woyke wasn’t prepared. Up until that study, all organisms used the same sequence of three DNA nucleotides to signal the stop of a protein, one of the most fundamental components of any organism’s genome. Several of the species of archaea identified by Woyke and her colleagues, however, used a completely different stop signal. The discovery was not unlike traveling to a different country and having the familiar red stop sign replaced by a purple square, she says. Their work also identified other rare and bizarre features of the organisms’ metabolisms that make them unique among Earth’s biodiversity. Other microbial dark matter sequencing projects, both under Woyke’s Microbial Dark Matter project umbrella and other independent ventures, identified microbes from unusual phyla living in our mouths.
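    The effect of such recoding is easy to sketch. The tiny codon table below is hypothetical, truncated to a handful of codons, and the reassignment of TGA from “stop” to glycine is illustrative of the kind of change involved, not the specific recoding the team reported:

```python
# Sketch: how reassigning a stop codon changes translation.
# STANDARD is a drastically truncated genetic code for illustration only.

STANDARD = {"ATG": "M", "GGA": "G", "TTA": "L", "TGA": "*", "TAA": "*"}
RECODED = dict(STANDARD, TGA="G")  # TGA now read as glycine, not stop

def translate(dna, table):
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = table[dna[i:i + 3]]
        if aa == "*":          # stop codon: translation ends here
            break
        protein.append(aa)
    return "".join(protein)

gene = "ATGTTATGAGGATAA"
print(translate(gene, STANDARD))  # halts early at TGA
print(translate(gene, RECODED))   # reads through to the true stop, TAA
```

    Under the standard table the gene yields a truncated protein; under the recoded table the same DNA reads through to a later stop—the purple square where a red stop sign used to be.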

    Some of the extremophile archaea that Woyke and her colleagues identified were so unlike other forms of life that they grouped them into their own superset of phyla, known as DPANN (Diapherotrites, Parvarchaeota, Aenigmarchaeota, Nanohaloarchaeota, and Nanoarchaeota). The only thing that scientists knew about these organisms was the genomes that Woyke had sequenced, isolated from individual organisms. These single-cell sequencing projects are key not just for filling in the foliage on the tree of life, but also for demonstrating just how much remains unknown, and Woyke and her team have been at the forefront of these discoveries, Kane says.

    Sequencing microbes cell by cell, however, isn’t the only method for uncovering Earth’s hidden biodiversity. Just a few miles from Woyke’s lab, microbiologist Jill Banfield at UC Berkeley is taking a different approach that has also produced promising results.

    Studying the Uncultured

    Typically, to study microbes, scientists have grown them in a pure culture started from a single individual. Pure cultures are useful for studying organisms in the laboratory, but most microbes live in complex communities of many individuals from different species. Starting in the early 2000s, genetic sequencing technologies had advanced to the point where researchers could study the complex array of microbial genomes without necessarily needing to culture each individual organism. Known as metagenomics, the field began with scientists focused on which genes were found in the wild, which would hint at how each species or strain of microbe could survive in different environments.

    Just as Woyke was doubling down on single-cell sequencing, Banfield began using metagenomics to obtain a more nuanced and detailed picture of microbial ecology. The problems she faced, though very different from Woyke’s, were no less vexing. Like Woyke, Banfield focused on extreme environments: acrid hydrothermal vents at the bottom of the ocean that belched a vile mixture of sulfuric acid and smoke; an aquifer flowing through toxic mine tailings in Rifle, Colorado; a salt flat in Chile’s perpetually parched Atacama Desert; and water found in the Iron Mountain Mine in Northern California that is some of the most acidic found anywhere on Earth. Also like Woyke, Banfield knew that identifying the full range of microbes living in these hellish environments would mean moving away from using the standard set of 16S rRNA primers. The main issue Banfield and colleagues faced was figuring out how to assemble the mixture of genetic material they isolated from their samples into discrete genomes.

    A web of connectivity calculated by Banfield and her collaborators shows how different proteins illustrate relationships between different microbes.
    Credit below.

    The solution wasn’t a new laboratory technique, but a different way of processing the data. Researchers obtain their metagenomic information by drawing a sample from a particular environment, isolating the DNA, and sequencing it. The process of sequencing breaks each genome down into smaller chunks of DNA that computers then reassemble. Reassembling a single genome isn’t unlike assembling a jigsaw puzzle, says Laura Hug, a microbiologist at the University of Waterloo in Ontario, Canada, and a former postdoc in Banfield’s lab.

    When faced with just one puzzle, people generally work out a strategy, like assembling all the corners and edges, grouping the remaining pieces into different colors, and slowly putting it all together. It’s a challenging task with a single genome, but it’s even more difficult in metagenomics. “In metagenomics, you can have hundreds or even thousands of puzzles, many of them might be all blue, and you have no idea what the final picture looks like. The computers have to figure out which blue pieces go together and try to extract a full, accurate puzzle from this jumble,” Hug says. Not surprisingly, the early days of metagenomics were filled with incomplete and misassembled genomes.

    Banfield’s breakthrough helped tame the task. She and her team developed a better method for binning, the formal name for the computer process that sorts through the pile of DNA jigsaw pieces and arranges them into a final product. As her lab made improvements, they were able to survey an increasing range of environments looking for rare and bizarre microbes. Progress was rapid. In the 1980s, most of the bacteria and archaea that scientists knew about fit into 12 major phyla. By 2014, scientists had increased that number to more than 50. But in a single 2015 Nature paper, Banfield and her colleagues added an additional 35 phyla of bacteria to the tree of life.
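    A toy version of binning conveys the idea. Real binners combine several signals—sequence composition, read coverage, k-mer profiles—but even grouping contigs by a single weak signal such as GC content (the sequences and the 0.1 threshold below are made up) shows how the jigsaw pieces get sorted into per-organism piles:

```python
# Toy metagenomic binning: group assembled contigs by GC content alone.
# Real pipelines use richer composition and coverage statistics.

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def bin_contigs(contigs, tolerance=0.1):
    """Greedily assign each contig to the first bin within `tolerance` GC."""
    bins = []  # each bin: (GC of its first member, list of member contigs)
    for contig in contigs:
        gc = gc_content(contig)
        for anchor_gc, members in bins:
            if abs(gc - anchor_gc) <= tolerance:
                members.append(contig)
                break
        else:
            bins.append((gc, [contig]))
    return bins

contigs = ["ATATATATTA", "GCGCGGCCGC", "ATTAATATAT", "GGCCGCGCGG"]
for anchor_gc, members in bin_contigs(contigs):
    print(round(anchor_gc, 2), members)
```

    Here the AT-rich and GC-rich contigs fall into two separate bins—a stand-in for two genomes recovered from one mixed sample.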

    The latest tree of life was produced when Banfield and her colleagues added another 35 major groups, known as phyla. Credit below.

    Because researchers knew essentially nothing about these bacteria, they dubbed them the “candidate phyla radiation”—or CPR—the bacterial equivalent of Woyke’s DPANN. Like the archaea, these bacteria were grouped together because of their similarities to each other and their stark differences to other bacteria. Banfield and colleagues estimated that the CPR organisms may encompass more than 15% of all bacterial species.

    “This wasn’t like discovering a new species of mammal,” Hug says. “It was like discovering that mammals existed at all, and that they’re all around us and we didn’t know it.”

    Nine months later, in April 2016, Hug, Banfield, and their colleagues used past studies to construct a new tree of life. Their result reaffirmed Woese’s original 1977 tree, showing humans and, indeed, most plants and animals, as mere twigs. This new tree, however, was much fuller, with far more branches and twigs and a richer array of foliage. Thanks in no small part to the efforts of Banfield and Woyke, our understanding of life is, perhaps, no longer a newborn sapling, but a rapidly maturing young tree on its way to becoming a fully rooted adult.

    Photo credits: Miller et al. 2013/PLOS, Podell et al. 2013/PLOS, Hug et al. 2016/UC Berkeley


    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.
