Tagged: DNA

  • richardmitnick 2:33 pm on September 25, 2015 Permalink | Reply
    Tags: DNA

    From MIT: “New system for human genome editing has potential to increase power and precision of DNA engineering” 

    MIT News

    September 25, 2015
    Broad Institute

    CRISPR systems are found in many different bacterial species, and have evolved to protect host cells against infection by viruses. Image courtesy of Broad Institute/Science Photo Images

    CRISPR-Cpf1 offers simpler approach to editing DNA; technology could disrupt scientific and commercial landscape.

    A team including the scientist who first harnessed the CRISPR-Cas9 system for mammalian genome editing has now identified a different CRISPR system with the potential for even simpler and more precise genome engineering.

    In a study published today in Cell, Feng Zhang and his colleagues at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT, with co-authors Eugene Koonin at the National Institutes of Health, Aviv Regev of the Broad Institute and the MIT Department of Biology, and John van der Oost at Wageningen University, describe the unexpected biological features of this new system and demonstrate that it can be engineered to edit the genomes of human cells.

    “This has dramatic potential to advance genetic engineering,” says Eric Lander, director of the Broad Institute. “The paper not only reveals the function of a previously uncharacterized CRISPR system, but also shows that Cpf1 can be harnessed for human genome editing and has remarkable and powerful features. The Cpf1 system represents a new generation of genome editing technology.”

    CRISPR sequences were first described in 1987, and their natural biological function was initially described in 2010 and 2011. The application of the CRISPR-Cas9 system for mammalian genome editing was first reported in 2013, by Zhang and separately by George Church at Harvard University.

    In the new study, Zhang and his collaborators searched through hundreds of CRISPR systems in different types of bacteria, looking for enzymes with useful properties that could be engineered for use in human cells. Two promising candidates were the Cpf1 enzymes from the bacterial species Acidaminococcus and Lachnospiraceae, which Zhang and his colleagues then showed can target genomic loci in human cells.

    “We were thrilled to discover completely different CRISPR enzymes that can be harnessed for advancing research and human health,” says Zhang, the W.M. Keck Assistant Professor in Biomedical Engineering in MIT’s Department of Brain and Cognitive Sciences.

    The newly described Cpf1 system differs in several important ways from the previously described Cas9, with significant implications for research and therapeutics, as well as for business and intellectual property:

    First: In its natural form, the DNA-cutting enzyme Cas9 forms a complex with two small RNAs, both of which are required for the cutting activity. The Cpf1 system is simpler in that it requires only a single RNA. The Cpf1 enzyme is also smaller than the standard SpCas9, making it easier to deliver into cells and tissues.

    Second, and perhaps most significantly: Cpf1 cuts DNA in a different manner than Cas9. When the Cas9 complex cuts DNA, it cuts both strands at the same place, leaving “blunt ends” that often undergo mutations as they are rejoined. With the Cpf1 complex the cuts in the two strands are offset, leaving short overhangs on the exposed ends. This is expected to help with precise insertion, allowing researchers to integrate a piece of DNA more efficiently and accurately.

    Third: Cpf1 cuts far away from the recognition site, meaning that even if the targeted gene becomes mutated at the cut site, it can likely still be recut, allowing multiple opportunities for correct editing to occur.

    Fourth: The Cpf1 system provides new flexibility in choosing target sites. Like Cas9, the Cpf1 complex must first attach to a short sequence known as a PAM, and targets must be chosen that are adjacent to naturally occurring PAM sequences. The Cpf1 complex recognizes very different PAM sequences from those of Cas9. This could be an advantage in targeting some genomes, such as in the malaria parasite as well as in humans.
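
    To make the contrasts above concrete, here is a minimal Python sketch (not code from the paper) that scans a DNA sequence for Cas9-style NGG PAMs and Cpf1-style T-rich TTTV PAMs and reports blunt versus staggered cut positions. The motifs, the 20-nt protospacer length and the cut offsets are textbook approximations chosen for illustration, not values taken from the study.

    ```python
    # Minimal sketch (not from the paper): contrast Cas9 and Cpf1 target recognition
    # and cut geometry. PAM motifs and cut offsets are textbook approximations.
    import re

    def find_cas9_sites(seq):
        """Cas9: NGG PAM on the 3' side of a 20-nt protospacer; blunt cut ~3 bp before the PAM."""
        sites = []
        for m in re.finditer(r"(?=([ACGT]GG))", seq):
            pam_start = m.start(1)
            if pam_start >= 20:                       # need room for the protospacer
                cut = pam_start - 3
                sites.append({"pam": seq[pam_start:pam_start + 3],
                              "cut_top": cut, "cut_bottom": cut})   # blunt: both strands cut together
        return sites

    def find_cpf1_sites(seq):
        """Cpf1: T-rich TTTV PAM on the 5' side; staggered cut far from the PAM leaves overhangs."""
        sites = []
        for m in re.finditer(r"(?=(TTT[ACG]))", seq):
            pam_end = m.start(1) + 4
            if pam_end + 23 <= len(seq):
                sites.append({"pam": seq[m.start(1):pam_end],
                              "cut_top": pam_end + 18,              # approximate offsets: the two
                              "cut_bottom": pam_end + 23})          # strands are cut ~5 nt apart
        return sites

    demo = "ATGCTTTACGGATCACTGAAGGTCATGCCGTAAGGTACCGATTTACCGGTTGACCTGAAGGTACGT"
    print("Cas9 sites:", find_cas9_sites(demo))
    print("Cpf1 sites:", find_cpf1_sites(demo))
    ```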

    “The unexpected properties of Cpf1 and more precise editing open the door to all sorts of applications, including in cancer research,” says Levi Garraway, an institute member of the Broad Institute, and the inaugural director of the Joint Center for Cancer Precision Medicine at the Dana-Farber Cancer Institute, Brigham and Women’s Hospital, and the Broad Institute. Garraway was not involved in the research.

    An open approach to empower research

    Zhang, along with the Broad Institute and MIT, plans to share the Cpf1 system widely. As with earlier Cas9 tools, these groups will make this technology freely available for academic research via the Zhang lab’s page on the plasmid-sharing website Addgene, through which the Zhang lab has already shared Cas9 reagents more than 23,000 times with researchers worldwide to accelerate research. The Zhang lab also offers free online tools and resources for researchers through its website.

    The Broad Institute and MIT plan to offer nonexclusive licenses to enable commercial tool and service providers to add this enzyme to their CRISPR pipeline and services, further ensuring availability of this new enzyme to empower research. These groups plan to offer licenses that best support rapid and safe development for appropriate and important therapeutic uses.

    “We are committed to making the CRISPR-Cpf1 technology widely accessible,” Zhang says. “Our goal is to develop tools that can accelerate research and eventually lead to new therapeutic applications. We see much more to come, even beyond Cpf1 and Cas9, with other enzymes that may be repurposed for further genome editing advances.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

  • richardmitnick 5:40 pm on September 10, 2015 Permalink | Reply
    Tags: DNA, The Atlantic

    From The Atlantic: “How Data-Wranglers Are Building the Great Library of Genetic Variation” 

    The Atlantic Magazine

    Sep 9, 2015
    Ed Yong


    A huge project unexpectedly led to a way of finding disease genes without needing to know about diseases.

    Let’s say you have a patient with a severe inherited muscle disorder, the kind that Daniel MacArthur from the Broad Institute of Harvard and MIT specializes in. They’re probably a child, with debilitating symptoms and perhaps no diagnosis. To discover the gene(s) that underlie the kid’s condition, you sequence their genome, or perhaps just their exome: the 1 percent of their DNA that codes for proteins. The results come back, and you see tens of thousands of variants—sites where, say, the usual A has been replaced by a T, or the typical C is instead a G.

    You’d then want to know if those variants have ever been associated with diseases, and how common they are in the general population. (The latter is especially important because most variants are so common that they can’t possibly be plausible culprits behind rare genetic diseases.) “To make sense of a single patient’s genome, you need to put it in the context of many people’s genomes,” says MacArthur. In an ideal world, you would compare all of a patient’s variants against “every individual who has ever been sequenced in the history of sequencing.”
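
    The comparison MacArthur describes boils down to a frequency filter. The toy Python below is illustrative only: the variant tuples, the lookup table and the 0.1 percent cutoff are invented for this example rather than drawn from any real pipeline.

    ```python
    # Toy version of the filtering step described above: a patient's variants are
    # compared against population allele frequencies, and common variants are
    # discarded as implausible causes of a rare disease. The variant tuples, the
    # lookup table and the 0.1% cutoff are all invented for illustration.

    def filter_rare_variants(patient_variants, population_freqs, max_freq=0.001):
        """Keep only variants that are rare (or absent) in the reference population."""
        rare = []
        for variant in patient_variants:              # e.g. (chromosome, position, ref, alt)
            freq = population_freqs.get(variant, 0.0) # never-seen variants count as frequency 0
            if freq <= max_freq:
                rare.append((variant, freq))
        return rare

    population_freqs = {
        ("chr2", 152417, "A", "T"): 0.00002,   # seen a couple of times in ~60,000 people
        ("chr7", 55249071, "C", "G"): 0.31,    # common polymorphism
    }
    patient = [("chr2", 152417, "A", "T"),
               ("chr7", 55249071, "C", "G"),
               ("chrX", 1000, "G", "A")]
    print(filter_rare_variants(patient, population_freqs))
    # keeps the chr2 and chrX variants, drops the common chr7 one
    ```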

    This is not that world, at least not yet. When MacArthur launched his lab in 2012, he started by sequencing the exomes of some 300 patients with rare muscle diseases. But he quickly realized that he had nothing decent to compare them against. It has never been easier, cheaper, or quicker to sequence a person’s genome, but interpreting those sequences is tricky, absent a comprehensive reference library of human genetic variation. No such library existed, or at least nothing big or diverse enough. So, MacArthur started making one.

    It was hard work, not because the data didn’t exist, but because it was scattered. To date, scientists have probably sequenced at least 5,000 full genomes and some 500,000 exomes, but most are completely inaccessible to other researchers. There might be intellectual-property restrictions, or issues around consent. There’s the logistical hassle of shipping huge volumes of data on hard drives. And some scientists are just plain competitive.

    Fortunately, MacArthur’s colleagues at the Broad Institute and beyond had deciphered so many exomes that he could gather thousands of sequences by personally popping into offices. Buoyed by that success, he started contacting people who were studying the genomes of people with cancer, heart disease, diabetes, schizophrenia, and more. “There’s a big swath of human genetics where people have learned that you either fail by yourself or succeed together, so they’re committed to sharing data,” MacArthur says.

    By 2014, he had amassed more than 90,000 exomes from around a dozen sources, collectively called the Exome Aggregation Consortium. Then, he had to munge them together.

    That was the worst bit. Researchers use very different technologies to sequence and annotate genomes, so combining disparate data sets is like mushing together the dishes from separate restaurants and hoping that the results will be palatable. Often, they won’t be.

    Monkol Lek, a postdoc in MacArthur’s lab who himself has a genetic muscle disease, solved this problem by essentially starting from scratch. He took the raw data from some 60,706 patients and analyzed their exomes, one position at a time. The raw sequences took up a petabyte of memory, and the final compressed file filled a three-terabyte hard disk.

    The prize from all this data-wrangling was one of the most thorough portraits of human genetic variation ever produced. MacArthur went through the main results in the opening talk of this week’s Genome Science 2015 conference, in Birmingham, U.K. His team had identified around 10 million genetic variants scattered throughout the exome, most of which had never been described before. And most turned up just once in the data, meaning that they lurk within just one in every 60,000 people. “Human variation is dominated by these extremely rare variants,” says MacArthur. That’s where the secrets of many rare genetic disorders reside.

    But unexpectedly, the most interesting variants turned out to be the ones that weren’t there.

    The graduate student Kaitlin Samocha developed a mathematical model to predict how many variants you’d expect to find in a given gene, in a population of 60,000 people. The model was remarkably accurate at estimating neutral variants, which don’t change the protein that’s encoded by the gene, and so have minimal impact. But the model often wildly overestimated the number of “loss-of-function variants,” which severely disrupt the gene in question. Repeatedly, the ExAC data revealed far fewer of these variants than Samocha’s model predicted.

    Why? Because many of these loss-of-function variants are so destructive that their carriers develop debilitating disorders, or die before they’re even born. So, the difference between prediction and reality reflects the brutal hand of natural selection. The variants are simply not around to be sequenced because they have long been expunged from the gene pool.

    For example, the team expected to find 161 loss-of-function variants in a gene called DYNC1H1. By contrast, the ExAC data revealed only four—and indeed, DYNC1H1 is associated with several severe inherited neurodevelopmental disorders.

    The model also predicted 125 loss-of-function variants in the UBR5 gene—and the data revealed just one. That’s far more interesting because UBR5 has never before been linked to a human disease.

    A full quarter of human genes are like this: They have a lower-than-expected number of loss-of-function variants. And while some of them are known “disease genes,” the rest have never been pinpointed as such. So, if you find one of these variants in a patient with a severe genetic disorder, the chances are good that you’ve found a genuine culprit.
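
    The logic here reduces to an observed-versus-expected comparison per gene. A minimal sketch follows, using the DYNC1H1 and UBR5 counts quoted above plus one invented “tolerant” gene; the 10 percent ratio cutoff is an arbitrary illustration, not the statistic the study used.

    ```python
    # Sketch of the observed-vs-expected comparison. The expected counts would come
    # from a model like Samocha's; here they are simply the numbers quoted above,
    # plus one invented "tolerant" gene. The 10% ratio cutoff is an arbitrary
    # illustration, not the statistic the study actually used.

    def constrained_genes(expected_lof, observed_lof, ratio_cutoff=0.10):
        """Flag genes with far fewer loss-of-function variants than predicted."""
        flagged = {}
        for gene, expected in expected_lof.items():
            observed = observed_lof.get(gene, 0)
            if expected > 0 and observed / expected < ratio_cutoff:
                flagged[gene] = observed / expected
        return flagged

    expected = {"DYNC1H1": 161, "UBR5": 125, "TOLERANT_GENE": 80}
    observed = {"DYNC1H1": 4,   "UBR5": 1,   "TOLERANT_GENE": 76}
    print(constrained_genes(expected, observed))
    # -> {'DYNC1H1': 0.0248..., 'UBR5': 0.008}
    ```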

    That blew my mind. Here is a way of identifying potential disease-related genes, without needing to know anything about the diseases in question. Or, as MacArthur said in his talk, “We should soon be able to say, with high precision: If you have a mutation at this site, it will kill you. And we’ll be able to say that without ever seeing a person with that mutation.”

    These results speak to one of the greatest challenges of modern genomics: weaving together existing sets of data in useful ways. They also vindicate the big, expensive studies that have searched for variants behind common diseases like type 2 diabetes, heart disease, and schizophrenia. These endeavors have indeed found several variants, but with such small effects that they explain just a tiny fraction of the risk of each condition. But “all this data can be re-purposed for analyzing rare diseases,” says MacArthur. “Without those large-scale studies, we’d have no chance of doing something like ExAC.”

    “His talk really shows that you can’t anticipate what these data sets will show you until you put them together,” says Nick Loman from the University of Birmingham. “Our ability to interrogate biology if you can put hundreds of thousands, or millions, of genomes together is massive.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 12:13 pm on September 9, 2015 Permalink | Reply
    Tags: DNA

    From Rockefeller: “New findings shed light on fundamental process of DNA repair” 

    Rockefeller University

    September 8, 2015
    Eva Kiesler | 212-327-7963

    Repair process spotted: In these cells, whose DNA has been damaged by radiation, the histone protein γH2AX (red) accumulates at sites of broken DNA strands. A new study shows that this histone recruits the protein 53BP1 (green) to help mend the damage.

    Inside the trillions of cells that make up the human body, things are rarely silent. Molecules are constantly being made, moved, and modified — and during these processes, mistakes are sometimes made. Strands of DNA, for instance, can break for any number of reasons, such as exposure to UV radiation, or mechanical stress on the chromosomes into which our genetic material is packaged.

    To make sure cells stay alive and multiply properly, the body relies on a number of mechanisms to fix such damage. Although researchers have been studying DNA repair for decades, much remains unknown about this fundamental process of life — and in a study published online in Nature Chemical Biology on September 7, researchers at The Rockefeller University uncover new aspects of it.

    “Our findings are revealing more clues about the intricacies of DNA repair,” says study author Ralph Kleiner, a postdoctoral fellow in the Laboratory of Chemistry and Cell Biology, led by Tarun Kapoor. “We now know how key proteins get where they need to be to facilitate the process.”

    “This is also a nice example of how innovative chemical approaches can help decipher fundamental biological mechanisms,” adds Kapoor, who serves as Pels Family Professor at Rockefeller.

    When DNA strands break, the cell ideally puts them back together and carries on as usual. But sometimes, repairs don’t go that smoothly. For instance, different regions of a chromosome can fuse together, causing genes to rearrange themselves—and such chromosome fusions can lead to diseases such as cancer.

    To learn more about the process, Kapoor, Kleiner, and their colleagues zeroed in on the sites in chromosomes where DNA repair happens. Specifically, they focused on a single histone, a type of protein that DNA wraps around to make up chromosomes. This histone, H2AX, is known to be involved in DNA repair.

    Immediately after DNA damage occurs, H2AX gets a mark — it becomes tagged with a chemical moiety known as a phosphate. This process, called phosphorylation, occurs at sites of broken DNA as a way to mediate interactions between key proteins. In the study, the researchers wanted to learn more about how phosphorylation of H2AX helps mediate DNA repair.

    The researchers employed a new method for scrutinizing the DNA repair process. To learn more about which proteins interact with H2AX when it becomes phosphorylated, they added their own light-sensitive chemical tags to a portion of the histone.

    This tag was designed so that it becomes activated only when the researchers shine a light upon it. Once activated, the tag reacts with interacting proteins, facilitating their capture and isolation. This technique enabled the researchers to identify not just the proteins already known to bind strongly to H2AX and facilitate DNA repair, but also those considered “weak binders,” says Kleiner.

    Indeed, they found that part of a DNA repair protein known as 53BP1 fits over the phosphorylated part of H2AX “like a glove,” says Kleiner. This interaction helps bring 53BP1 to the site of DNA damage, where it mediates the repair of double-stranded breaks in DNA by encouraging the repair machinery to glue the two ends back together.

    “We’ve identified a component of the DNA repair process that others had previously missed,” notes Kleiner. “Scientists have known about 53BP1 for a long time, but didn’t understand the function of this particular portion of the protein that interacts with the phosphorylation mark of H2AX. These findings help solve that mystery.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Rockefeller University Campus

    The Rockefeller University is a world-renowned center for research and graduate education in the biomedical sciences, chemistry, bioinformatics and physics. The university’s 75 laboratories conduct both clinical and basic research and study a diverse range of biological and biomedical problems with the mission of improving the understanding of life for the benefit of humanity.

    Founded in 1901 by John D. Rockefeller, the Rockefeller Institute for Medical Research was the country’s first institution devoted exclusively to biomedical research. The Rockefeller University Hospital was founded in 1910 as the first hospital devoted exclusively to clinical research. In the 1950s, the institute expanded its mission to include graduate education and began training new generations of scientists to become research leaders around the world. In 1965, it was renamed The Rockefeller University.

    The university is supported by a combination of government and private grants and contracts, private philanthropy and income from the endowment.

    Since its founding, The Rockefeller University has embraced an open structure to encourage collaboration between disciplines and empower faculty members to take on high-risk, high-reward projects. No formal departments exist, bureaucracy is kept to a minimum and scientists are given resources, support and unparalleled freedom to follow the science wherever it leads.

    This unique approach to science has led to some of the world’s most revolutionary contributions to biology and medicine.

  • richardmitnick 2:12 pm on September 7, 2015 Permalink | Reply
    Tags: DNA

    From phys.org: “Scientists create world’s largest protein map to reveal which proteins work together in a cell” 


    September 7, 2015
    No Writer Credit

    Scientists have uncovered tens of thousands of new protein interactions, accounting for about a quarter of all estimated protein contacts in a cell. Credit: Jovana Drinkjakovic

    A multinational team of scientists has sifted through cells of vastly different organisms, from amoebae to worms to mice to humans, to reveal how proteins fit together to build different cells and bodies.

    This tour de force of protein science, a result of a collaboration between seven research groups from three countries, led by Professor Andrew Emili from the University of Toronto’s Donnelly Centre and Professor Edward Marcotte from the University of Texas at Austin, uncovered tens of thousands of new protein interactions, accounting for about a quarter of all estimated protein contacts in a cell.

    When even a single one of these interactions is lost it can lead to disease, and the map is already helping scientists spot individual proteins that could be at the root of complex human disorders. The data will be available to researchers across the world through open access databases.

    The study comes out in Nature on September 7.

    While the sequencing of the human genome more than a decade ago was undoubtedly one of the greatest discoveries in biology, it was only the beginning of our in-depth understanding of how cells work. Genes are just blueprints and it is the genes’ products, the proteins, that do much of the work in a cell.

    Proteins work in teams by sticking to each other to carry out their jobs. Many proteins come together to form so-called molecular machines that play key roles, such as building new proteins or recycling those no longer needed by literally grinding them into reusable parts. But for the vast majority of proteins, and there are tens of thousands of them in human cells, we still don’t know what they do.

    This is where Emili and Marcotte’s map comes in. Using a state-of-the-art method developed by the groups, the researchers were able to fish thousands of protein machineries out of cells and count the individual proteins they are made of. They then built a network that, similar to social networks, offers clues into protein function based on which other proteins they hang out with. For example, a new and unstudied protein, whose role we don’t yet know, is likely to be involved in fixing damage in a cell if it sticks to the cell’s known “handyman” proteins.
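
    That “social networks” analogy corresponds to a simple guilt-by-association rule: predict an unannotated protein’s role from the annotations of its interaction partners. Here is a toy Python sketch with an invented network and labels, not the study’s data.

    ```python
    # Guilt-by-association sketch: an unstudied protein inherits the most common
    # annotation among its direct interaction partners. The network and labels
    # are invented toy data, not the study's map.
    from collections import Counter

    interactions = {
        "UNKNOWN_1": {"RAD51", "BRCA2", "XRCC3"},   # who the mystery protein "hangs out with"
        "RAD51": {"UNKNOWN_1", "BRCA2"},
        "BRCA2": {"UNKNOWN_1", "RAD51"},
        "XRCC3": {"UNKNOWN_1"},
    }
    annotations = {                                  # known functions of the partners
        "RAD51": "DNA repair",
        "BRCA2": "DNA repair",
        "XRCC3": "DNA repair",
    }

    def predict_function(protein):
        """Vote over the annotations of a protein's direct interaction partners."""
        votes = Counter(annotations[p]
                        for p in interactions.get(protein, set())
                        if p in annotations)
        return votes.most_common(1)[0] if votes else None

    print(predict_function("UNKNOWN_1"))   # ('DNA repair', 3)
    ```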

    Today’s landmark study gathered information on protein machineries from nine species that represent the tree of life: baker’s yeast, amoeba, sea anemones, flies, worms, sea urchins, frogs, mice and humans. The new map expands the number of known protein associations more than 10-fold, and gives insights into how they evolved over time.

    “For me the highlight of the study is its sheer scale. We have tripled the number of protein interactions for every species. So across all the animals, we can now predict, with high confidence, more than 1 million protein interactions – a fundamentally ‘big step’ moving the goal posts forward in terms of protein interaction networks,” says Emili, who is also Ontario Research Chair in Biomarkers in Disease Management and a professor in the Department of Molecular Genetics.

    The researchers discovered that tens of thousands of protein associations remained unchanged since the first ancestral cell appeared, one billion years ago (!), preceding all of animal life on Earth.

    “Protein assemblies in humans were often identical to those in other species. This not only reinforces what we already know about our common evolutionary ancestry, it also has practical implications, providing the ability to study the genetic basis for a wide variety of diseases and how they present in different species,” says Marcotte.

    The map is already proving useful in pinpointing possible causes of human disease. One example is a newly discovered molecular machine, dubbed Commander, which consists of about a dozen individual proteins. Genes that encode some of Commander’s components had previously been found to be mutated in people with intellectual disabilities but it was not clear how these proteins worked.

    Because Commander is present in all animal cells, graduate student Fan Tu went on to disrupt its components in tadpoles, revealing abnormalities in the way brain cells are positioned during embryo development and providing a possible origin for a complex human condition.

    “With tens of thousands of other new protein interactions, our map promises to open many more lines of research into links between proteins and disease, which we are keen to explore in depth over the coming years,” concludes Dr. Emili.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About Phys.org in 100 Words

    Phys.org™ (formerly Physorg.com) is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004, Phys.org’s readership has grown steadily to include 1.75 million scientists, researchers, and engineers every month. Phys.org publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quancast 2009 includes Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

  • richardmitnick 3:13 pm on September 3, 2015 Permalink | Reply
    Tags: DNA

    From Caltech: “Making Nanowires from Protein and DNA” 

    Caltech Logo

    Jessica Stoller-Conrad

    Co-crystal structure of protein-DNA nanowires. The protein-DNA nanowire design is experimentally verified by X-ray crystallography.
    Credit: Yun (Kurt) Mou, Jiun-Yann Yu, Timothy M. Wannier, Chin-Lin Guo and Stephen L. Mayo/Caltech

    The ability to custom design biological materials such as protein and DNA opens up technological possibilities that were unimaginable just a few decades ago. For example, synthetic structures made of DNA could one day be used to deliver cancer drugs directly to tumor cells, and customized proteins could be designed to specifically attack a certain kind of virus. Although researchers have already made such structures out of DNA or protein alone, a Caltech team recently created—for the first time—a synthetic structure made of both protein and DNA. Combining the two molecule types into one biomaterial opens the door to numerous applications.

    A paper describing the so-called hybridized, or multiple component, materials appears in the September 2 issue of the journal Nature.

    There are many advantages to multiple component materials, says Yun (Kurt) Mou (PhD ’15), first author of the Nature study. “If your material is made up of several different kinds of components, it can have more functionality. For example, protein is very versatile; it can be used for many things, such as protein–protein interactions or as an enzyme to speed up a reaction. And DNA is easily programmed into nanostructures of a variety of sizes and shapes.”

    But how do you begin to create something like a protein–DNA nanowire—a material that no one has seen before?

    Mou and his colleagues in the laboratory of Stephen Mayo, Bren Professor of Biology and Chemistry and the William K. Bowes Jr. Leadership Chair of Caltech’s Division of Biology and Biological Engineering, began with a computer program to design the type of protein and DNA that would work best as part of their hybrid material. “Materials can be formed using just a trial-and-error method of combining things to see what results, but it’s better and more efficient if you can first predict what the structure is like and then design a protein to form that kind of material,” he says.

    The researchers entered the properties of the protein–DNA nanowire they wanted into a computer program developed in the lab; the program then generated a sequence of amino acids (protein building blocks) and nitrogenous bases (DNA building blocks) that would produce the desired material.

    However, successfully making a hybrid material is not as simple as just plugging some properties into a computer program, Mou says. Although the computer model provides a sequence, the researcher must thoroughly check the model to be sure that the sequence produced makes sense; if not, the researcher must provide the computer with information that can be used to correct the model. “So in the end, you choose the sequence that you and the computer both agree on. Then, you can physically mix the prescribed amino acids and DNA bases to form the nanowire.”

    The resulting sequence was an artificial version of a protein–DNA coupling that occurs in nature. In the initial stage of gene expression, called transcription, a sequence of DNA is first converted into RNA. To pull in the enzyme that actually transcribes the DNA into RNA, proteins called transcription factors must first bind certain regions of the DNA sequence called protein-binding domains.

    Using the computer program, the researchers engineered a sequence of DNA that contained many of these protein-binding domains at regular intervals. They then selected the transcription factor that naturally binds to this particular protein-binding site—the transcription factor called Engrailed from the fruit fly Drosophila. However, in nature, Engrailed only attaches itself to the protein-binding site on the DNA. To create a long nanowire made of a continuous strand of protein attached to a continuous strand of DNA, the researchers had to modify the transcription factor to include a site that would allow Engrailed also to bind to the next protein in line.

    “Essentially, it’s like giving this protein two hands instead of just one,” Mou explains. “The hand that holds the DNA is easy because it is provided by nature, but the other hand needs to be added there to hold onto another protein.”

    Another unique attribute of this new protein–DNA nanowire is that it employs coassembly—meaning that the material will not form until both the protein components and the DNA components have been added to the solution. Although materials previously could be made out of DNA with protein added later, the use of coassembly to make the hybrid material was a first. This attribute is important for the material’s future use in medicine or industry, Mou says, as the two sets of components can be provided separately and then combined to make the nanowire whenever and wherever it is needed.

    This finding builds on earlier work in the Mayo lab, which, in 1997, created one of the first artificial proteins, thus launching the field of computational protein design. The ability to create synthetic proteins allows researchers to develop proteins with new capabilities and functions, such as therapeutic proteins that target cancer. The creation of a coassembled protein–DNA nanowire is another milestone in this field.

    “Our earlier work focused primarily on designing soluble, protein-only systems. The work reported here represents a significant expansion of our activities into the realm of nanoscale mixed biomaterials,” Mayo says.

    Although the development of this new biomaterial is in the very early stages, the method, Mou says, has many promising applications that could change research and clinical practices in the future.

    “Our next step will be to explore the many potential applications of our new biomaterial,” Mou says. “It could be incorporated into methods to deliver drugs into cells—to create targeted therapies that only bind to a certain biomarker on a certain cell type, such as cancer cells. We could also expand the idea of protein–DNA nanowires to protein–RNA nanowires that could be used for gene therapy applications. And because this material is brand-new, there are probably many more applications that we haven’t even considered yet.”

    The work was published in a paper titled “Computational design of co-assembling protein-DNA nanowires.” In addition to Mou and Mayo, other Caltech coauthors include former graduate students Jiun-Yann Yu (PhD ’14) and Timothy M. Wannier (PhD ’15), as well as Chin-Lin Guo from Academia Sinica in Taiwan. The work was funded by the Defense Advanced Research Projects Agency Protein Design Processes Program, a National Security Science and Engineering Faculty Fellowship, and the Caltech Programmable Molecular Technology Initiative funded by the Gordon and Betty Moore Foundation.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”
    Caltech buildings

  • richardmitnick 11:50 am on August 25, 2015 Permalink | Reply
    Tags: DNA

    From Discovery: “Life 2.0? Synthetic DNA Added to Genetic Code” 

    Discovery News

    Aug 25, 2015
    Glenn McDonald


    Well, there’s no way this could go wrong.

    According to recent announcements, a small biotech startup in California has successfully added new synthetic components to the genetic alphabet of DNA, potentially creating entirely new kinds of life on Earth.

    You’d need a Ph.D. or three to really get into it, but here goes: DNA, the organic molecule that carries genetic information for life, is made from a limited chemical “alphabet.” DNA can be thought of as a molecular code containing exactly four nitrogen-containing nucleobases — cytosine (C), guanine (G), adenine (A), and thymine (T). All known living organisms on the planet, from bacteria to biologists, are based on combinations of this four-letter molecular code: C-G-A-T.

    That’s how it’s been for several billion years, but last year the biotech company Synthorx announced development of a synthetic pair of nucleobases — abbreviated X-Y — to create a new and expanded genetic code.

    From the company website: “Adding two new synthetic bases, termed X and Y, to the genetic alphabet, we now have an expanded vocabulary to improve the discovery and development of new therapeutics, diagnostics and vaccines as well as create innovative products and processes, including using semi-synthetic organisms….”

    The addition of two letters to the four-letter DNA code effectively raises the number of possible amino acids an organism could use to build proteins from 20 to 172. That opens up entire new vistas of possibilities, including a completely new class of semi-synthetic life forms using a six-letter DNA code instead of a four-letter code.
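
    The jump from 20 to 172 follows from codon counting, since each codon is three letters drawn from the available alphabet; the exact figure of 172 depends on how many of the extra codons are set aside for stop signals and redundancy, which the article does not spell out. A sketch of the arithmetic:

    ```latex
    % Codon counting behind the "20 to 172" figure (illustrative only; the exact
    % number depends on how many codons are reserved for stops and redundancy).
    \[
    \underbrace{4^{3} = 64}_{\text{codons from }\{A,C,G,T\}} \;\longrightarrow\; 20 \text{ standard amino acids},
    \qquad
    \underbrace{6^{3} = 216}_{\text{codons from }\{A,C,G,T,X,Y\}} \;\longrightarrow\; \text{up to } 172 \text{ amino acids}.
    \]
    ```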

    Synthorx’s most recent announcement concerns the successful production of proteins containing the new synthetic base pair, building on the research published last year: “Since the publication, Synthorx has developed and validated a protein expression system, employing its synthetic DNA technology to incorporate novel amino acids to create new full-length and functional proteins.”

    According to third-party reports, Synthorx has even started creating new organisms with the technology, including a type of E. coli bacteria “never before seen on the face of the Earth.”

    The company insists that multiple safeguards are built into the technology, and that organisms created with the synthetic elements can only be produced in the lab. That, of course, is the premise of roughly one million science fiction horror stories, but what can you do? Well, you can read more about it here.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 7:57 am on August 25, 2015 Permalink | Reply
    Tags: DNA

    From phys.org: “Genetic overlapping in multiple autoimmune diseases may suggest common therapies” 


    August 24, 2015
    No Writer Credit

    DNA double helix. Credit: public domain

    Scientists who analyzed the genes involved in 10 autoimmune diseases that begin in childhood have discovered 22 genome-wide signals shared by two or more diseases. These shared gene sites may reveal potential new targets for treating many of these diseases, in some cases with existing drugs already available for non-autoimmune disorders.

    Autoimmune diseases, such as type 1 diabetes, Crohn’s disease, and juvenile idiopathic arthritis, collectively affect 7 to 10 percent of the population in the Western Hemisphere.

    “Our approach did more than find genetic associations among a group of diseases,” said study leader Hakon Hakonarson, M.D., Ph.D., director of the Center for Applied Genomics at The Children’s Hospital of Philadelphia (CHOP). “We identified genes with a biological relevance to these diseases, acting along gene networks and pathways that may offer very useful targets for therapy.”

    The paper appears online today in Nature Medicine.

    The international study team performed a meta-analysis, including a case-control study of 6,035 subjects with autoimmune disease and 10,700 controls, all of European ancestry. The study’s lead analyst, Yun (Rose) Li, an M.D./Ph.D. graduate student at the University of Pennsylvania and the Center for Applied Genomics, mentored by Hakonarson and his research team, applied innovative, integrative approaches to study the pathogenic roles of the genes uncovered across multiple diseases.

    The research encompassed 10 clinically distinct autoimmune diseases with onset during childhood: type 1 diabetes, celiac disease, juvenile idiopathic arthritis, common variable immunodeficiency disease, systemic lupus erythematosus, Crohn’s disease, ulcerative colitis, psoriasis, autoimmune thyroiditis and ankylosing spondylitis.

    Because many of these diseases run in families and because individual patients often have more than one autoimmune condition, clinicians have long suspected these conditions have shared genetic predispositions. Previous genome-wide association studies have identified hundreds of susceptibility genes among autoimmune diseases, largely affecting adults.

    The current research was a systematic analysis of multiple pediatric-onset diseases simultaneously. The study team found 27 genome-wide loci, including five novel loci, among the diseases examined. Of those 27 signals, 22 were shared by at least two of the autoimmune diseases, and 19 were shared by at least three.
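
    The sharing tally amounts to counting, for each locus, how many diseases it reached genome-wide significance in. A minimal Python sketch with invented example data (only the CD40LG associations, mentioned in the next paragraph, come from the article):

    ```python
    # Minimal sketch of the sharing tally: map each locus to the diseases in which
    # it reached genome-wide significance, then count loci shared by >= 2 or >= 3
    # diseases. The data are invented except the CD40LG associations mentioned
    # in the next paragraph.

    locus_to_diseases = {
        "CD40LG":  {"Crohn's disease", "ulcerative colitis", "celiac disease"},
        "LOCUS_A": {"type 1 diabetes", "juvenile idiopathic arthritis"},
        "LOCUS_B": {"psoriasis"},
    }

    shared_by_two   = sum(1 for d in locus_to_diseases.values() if len(d) >= 2)
    shared_by_three = sum(1 for d in locus_to_diseases.values() if len(d) >= 3)
    print(shared_by_two, shared_by_three)   # -> 2 1
    ```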

    Many of the gene signals the investigators discovered were on biological pathways functionally linked to cell activation, cell proliferation and signaling systems important in immune processes. One of the five novel signals, near the CD40LG gene, was especially compelling, said Hakonarson, who added, “That gene encodes the ligand for the CD40 receptor, which is associated with Crohn’s disease, ulcerative colitis and celiac disease. This ligand may represent another promising drug target in treating these diseases.”

    Many of the 27 gene signals the investigators uncovered have a biological relevance to autoimmune disease processes, Hakonarson said. “Rather than looking at overall gene expression in all cells, we focused on how these genes upregulated gene expression in specific cell types and tissues, and found patterns that were directly relevant to specific diseases. For instance, among several of the diseases, we saw genes with stronger expression in B cells. Looking at diseases such as lupus or juvenile idiopathic arthritis, which feature dysfunctions in B cells, we can start to design therapies to dial down over-expression in those cells.”

    He added that “the level of granularity the study team uncovered offers opportunities for researchers to better target gene networks and pathways in specific autoimmune diseases, and perhaps to fine tune and expedite drug development by repurposing existing drugs, based on our findings.”

    More information: Meta-analysis of shared genetic architecture across ten pediatric autoimmune diseases, Nature Medicine, published online Aug. 24, 2015. doi.org/10.1038/nm.3933

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About Phys.org in 100 Words

    Phys.org™ (formerly Physorg.com) is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004, Phys.org’s readership has grown steadily to include 1.75 million scientists, researchers, and engineers every month. Phys.org publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quancast 2009 includes Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

  • richardmitnick 4:13 pm on August 17, 2015 Permalink | Reply
    Tags: DNA

    From isgtw: “Simplifying and accelerating genome assembly” 

    international science grid this week

    August 12, 2015
    Linda Vu

    To extract meaning from a genome, scientists must reconstruct portions of it — a time-consuming process akin to rebuilding the sentences and paragraphs of a book from snippets of text. But by applying novel algorithms and high-performance computational techniques to the cutting-edge de novo genome assembly tool Meraculous, a team of scientists has simplified and accelerated genome assembly — reducing a months-long process to mere minutes.

    “The new parallelized version of Meraculous shows unprecedented performance and efficient scaling up to 15,360 processor cores for the human and wheat genomes on NERSC’s Edison supercomputer,” says Evangelos Georganas. “This performance improvement sped up the assembly workflow from days to seconds.” Courtesy NERSC.

    Researchers from the Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have made this gain by ‘parallelizing’ the assembly of DNA sequences — sometimes billions of bases long — to harness the processing power of supercomputers, such as the US Department of Energy’s National Energy Research Scientific Computing Center’s (NERSC’s) Edison system. (Parallelizing means splitting up tasks to run on the many nodes of a supercomputer at once.)

    “Using the parallelized version of Meraculous, we can now assemble the entire human genome in about eight minutes,” says Evangelos Georganas, a UC Berkeley graduate student. “With this tool, we estimate that the output from the world’s biomedical sequencing capacity could be assembled using just a portion of the Berkeley-managed NERSC’s Edison supercomputer.”

    Supercomputers: A game changer for assembly

    High-throughput next-generation DNA sequencers allow researchers to look for biological solutions — and for the most part, these machines are very accurate at recording the sequence of DNA bases. Sometimes errors do occur, however. These errors complicate analysis by making it harder to assemble genomes and identify genetic mutations. They can also lead researchers to misinterpret the function of a gene.

    Researchers identify these errors by exploiting the redundancy of a technique called shotgun sequencing. This involves taking numerous copies of a DNA strand, breaking them up into random smaller pieces and then sequencing each piece separately. For a particularly complex genome, this process can generate several terabytes of data.

    To identify data errors quickly and effectively, the Berkeley Lab and UC Berkeley team use ‘Bloom filters‘ and massively parallel supercomputers. “Applying Bloom filters has been done before, but what we have done differently is to get Bloom filters to work with distributed memory systems,” says Aydin Buluç, a research scientist in Berkeley Lab’s Computational Research Division (CRD). “This task was not trivial; it required some computing expertise to accomplish.”
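
    Here is a single-node Python sketch of the Bloom-filter idea, purely illustrative since the real implementation is distributed across many nodes: first sightings of a k-mer go into the compact filter, and exact counts are kept only for k-mers seen again, so error k-mers that occur once never consume an exact counter.

    ```python
    # Single-node, illustrative sketch of the Bloom-filter idea described above
    # (the real code is distributed across many nodes in UPC, which this ignores).
    import hashlib
    from collections import defaultdict

    class BloomFilter:
        def __init__(self, size_bits=1 << 20, num_hashes=3):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item):
            for i in range(self.num_hashes):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, item):
            return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

    def count_trusted_kmers(reads, k=21, min_count=2):
        """First sighting of a k-mer only marks the Bloom filter; repeat sightings get exact counts."""
        seen_once, counts = BloomFilter(), defaultdict(int)
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                if kmer in seen_once:
                    counts[kmer] += 1       # second and later sightings are counted exactly
                else:
                    seen_once.add(kmer)
        return {kmer: c + 1 for kmer, c in counts.items() if c + 1 >= min_count}

    reads = ["ACGTACGTACGTACGTACGTACG",
             "ACGTACGTACGTACGTACGTACG",     # duplicate read: its k-mers become "trusted"
             "GATTACAGATTACAGATTACAGA"]     # singleton k-mers (likely errors) are dropped
    print(count_trusted_kmers(reads, k=21))
    ```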

    The team also developed solutions for parallelizing data input and output (I/O). “When you have several terabytes of data, just getting the computer to read your data and output results can be a huge bottleneck,” says Steven Hofmeyr, a research scientist in CRD who developed these solutions. “By allowing the computer to download the data in multiple threads, we were able to speed up the I/O process from hours to minutes.”

    The assembly process

    Once errors are removed, researchers can begin the genome assembly. This process relies on computer programs to join k-mers — short DNA sequences consisting of a fixed number (K) of bases — at overlapping regions, so they form a continuous sequence, or contig. If the genome has previously been sequenced, scientists can align the reads against that reference and its recorded gene annotations. If not, they need to create a whole new catalog of contigs through de novo assembly.
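
    A bare-bones illustration of the contig-building step (this is not Meraculous): k-mers are chained together wherever the last K-1 bases of one k-mer match the first K-1 bases of exactly one other k-mer.

    ```python
    # Bare-bones contig extension (not Meraculous): index k-mers by their first
    # K-1 bases, then walk unique overlaps to the right from a seed k-mer.

    def build_contig(kmers, seed):
        k = len(seed)
        by_prefix = {}
        for kmer in kmers:
            by_prefix.setdefault(kmer[:k - 1], set()).add(kmer)
        contig, current = seed, seed
        while True:
            candidates = by_prefix.get(current[1:], set()) - {current}
            if len(candidates) != 1:          # stop at dead ends or ambiguous branch points
                return contig
            current = candidates.pop()
            contig += current[-1]             # each new k-mer contributes one new base

    kmers = {"ATGCG", "TGCGT", "GCGTA", "CGTAC", "GTACC"}
    print(build_contig(kmers, "ATGCG"))       # -> ATGCGTACC
    ```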

    “If assembling a single genome is like piecing together one novel, then assembling metagenomic data is like rebuilding the Library of Congress,” says Jarrod Chapman. Pictured: Human Chromosomes. Courtesy Jane Ades, National Human Genome Research Institute.

    De novo assembly is memory-intensive, and until recently was resistant to parallelization in distributed memory. Many researchers turned to specialized large-memory nodes, several terabytes in size, to do this work, but even the largest commercially available memory nodes are not big enough to assemble massive genomes. Even with supercomputers, it still took several hours, days or even months to assemble a single genome.

    To make efficient use of massively parallel systems, Georganas created a novel algorithm for de novo assembly that takes advantage of the one-sided communication and Partitioned Global Address Space (PGAS) capabilities of the UPC (Unified Parallel C) programming language. PGAS lets researchers treat the physically separate memories of each supercomputer node as one address space, reducing the time and energy spent swapping information between nodes.

    Tackling the metagenome

    Now that computation is no longer a bottleneck, scientists can try a number of different parameters and run as many analyses as necessary to produce very accurate results. This breakthrough means that Meraculous could also be used to analyze metagenomes — microbial communities recovered directly from environmental samples. This work is important because many microbes exist only in nature and cannot be grown in a laboratory. These organisms may be the key to finding new medicines or viable energy sources.

    “Analyzing metagenomes is a tremendous effort,” says Jarrod Chapman, who developed Meraculous at the US Department of Energy’s Joint Genome Institute (managed by the Berkeley Lab). “If assembling a single genome is like piecing together one novel, then assembling metagenomic data is like rebuilding the Library of Congress. Using Meraculous to effectively do this analysis would be a game changer.”

    –iSGTW is becoming the Science Node. Watch for our new branding and website this September.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

  • richardmitnick 9:31 am on July 30, 2015 Permalink | Reply
    Tags: DNA

    From livescience: “Origin-of-Life Story May Have Found Its Missing Link” 


    June 06, 2015
    Jesse Emspak

    A field of geysers called El Tatio located in northern Chile’s Andes Mountains. Credit: Gerald Prins

    How did life on Earth begin? It’s been one of modern biology’s greatest mysteries: How did the chemical soup that existed on the early Earth lead to the complex molecules needed to create living, breathing organisms? Now, researchers say they’ve found the missing link.

    Between 4.6 billion and 4.0 billion years ago, there was probably no life on Earth. The planet’s surface was at first molten and even as it cooled, it was getting pulverized by asteroids and comets. All that existed were simple chemicals. But about 3.8 billion years ago, the bombardment stopped, and life arose. Most scientists think the “last universal common ancestor” — the creature from which everything on the planet descends — appeared about 3.6 billion years ago.

    But exactly how that creature arose has long puzzled scientists. For instance, how did the chemistry of simple carbon-based molecules lead to the information storage of ribonucleic acid, or RNA?

    A hairpin loop from a pre-mRNA. Highlighted are the nucleobases (green) and the ribose-phosphate backbone (blue). Note that this is a single strand of RNA that folds back upon itself.

    The RNA molecule must store information to code for proteins. (Proteins in biology do more than build muscle — they also regulate a host of processes in the body.)

    The new research — which involves two studies, one led by Charles Carter and one led by Richard Wolfenden, both of the University of North Carolina — suggests a way for RNA to control the production of proteins by working with simple amino acids that does not require the more complex enzymes that exist today. [7 Theories on the Origin of Life on Earth]

    Missing RNA link

    This link would bridge the gap in knowledge between the primordial chemical soup and the complex molecules needed to build life. Current theories say life on Earth started in an “RNA world,” in which the RNA molecule guided the formation of life, only later taking a backseat to DNA, which could more efficiently achieve the same end result.

    The structure of the DNA double helix. The atoms in the structure are colour-coded by element and the detailed structure of two base pairs are shown in the bottom right.

    Like DNA, RNA is a helix-shaped molecule that can store or pass on information. (DNA is a double-stranded helix, whereas RNA is single-stranded.) Many scientists think the first RNA molecules existed in a primordial chemical soup — probably pools of water on the surface of Earth billions of years ago. [Photo Timeline: How the Earth Formed]

    The idea was that the very first RNA molecules formed from collections of three chemicals: a sugar (called a ribose); a phosphate group, which is a phosphorus atom connected to oxygen atoms; and a base, which is a ring-shaped molecule of carbon, nitrogen, oxygen and hydrogen atoms. Together, a sugar, a phosphate and a base make up a nucleotide, the repeating unit from which RNA is built.

    The question: How did the nucleotides come together within the soupy chemicals to make RNA? John Sutherland, a chemist at the University of Cambridge in England, published a study in May in the journal Nature Chemistry that showed that a cyanide-based chemistry could make two of the four nucleotides in RNA and many amino acids.

    That still left questions, though. There wasn’t a good mechanism for putting nucleotides together to make RNA. Nor did there seem to be a natural way for amino acids to string together and form proteins. Today, adenosine triphosphate (ATP) does the job of linking amino acids into proteins, activated by an enzyme called aminoacyl tRNA synthetase. But there’s no reason to assume there were any such chemicals around billions of years ago.

    Also, proteins have to be shaped a certain way in order to function properly. That means RNA has to be able to guide their formation — it has to “code” for them, like a computer running a program to do a task.

    Carter noted that it wasn’t until the past decade or two that scientists were able to duplicate the chemistry that makes RNA build proteins in the lab. “Basically, the only way to get RNA was to evolve humans first,” he said. “It doesn’t do it on its own.”

    Perfect sizes

    In one of the new studies, Carter looked at the way a molecule called “transfer RNA,” or tRNA, reacts with different amino acids.

    They found that one end of the tRNA could help sort amino acids according to their shape and size, while the other end could link up with amino acids of a certain polarity. In that way, this tRNA molecule could dictate how amino acids come together to make proteins, as well as determine the final protein shape. That’s similar to what the ATP enzyme does today, activating the process that strings together amino acids to form proteins.

    Carter told Live Science that the ability to discriminate according to size and shape makes a kind of “code” for proteins called peptides, which help to preserve the helix shape of RNA.

    “It’s an intermediate step in the development of genetic coding,” he said.

    In the other study, Wolfenden and colleagues tested the way proteins fold in response to temperature, since life somehow arose from a proverbial boiling pot of chemicals on early Earth. They looked at life’s building blocks, amino acids, and how they distribute in water and oil — a quality called hydrophobicity. They found that the amino acids’ relationships were consistent even at high temperatures — the shape, size and polarity of the amino acids are what mattered when they strung together to form proteins, which have particular structures.

    “What we’re asking here is, ‘Would the rules of folding have been different?'” Wolfenden said. At higher temperatures, some chemical relationships change because there is more thermal energy. But that wasn’t the case here.

    By showing that it’s possible for tRNA to discriminate between molecules, and that the links can work without “help,” Carter thinks he’s found a way for the information storage of chemical structures like tRNA to have arisen — a crucial piece of passing on genetic traits. Combined with the work on amino acids and temperature, it offers insight into how early life might have evolved.

    This work still doesn’t answer the ultimate question of how life began, but it does show a mechanism for the appearance of the genetic codes that pass on inherited traits, which got evolution rolling.

    The two studies are published in the June 1 issue of the journal Proceedings of the National Academy of Sciences.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 9:15 am on July 29, 2015 Permalink | Reply
    Tags: , , DNA,   

    From The Conversation: “CRISPR/Cas gene-editing technique holds great promise, but research moratorium makes sense pending further study”

    The Conversation

    July 29, 2015
    No Writer Credit


    CRISPR/Cas is a new technology that allows unprecedented control over the DNA code. It’s sparked a revolution in the fields of genetics and cell biology, becoming the scientific equivalent of a household name by raising hopes about new ways to cure diseases including cancer and to unlock the remaining mysteries of our cells.

    The gene editing technique also raises concerns. Could the new tools allow parents to order “designer babies”? Could premature use in patients lead to unforeseen and potentially dangerous consequences? This potential for abuse or misuse led prominent scientists to call for a halt on some types of new research until ethical issues can be discussed – a voluntary ban that was swiftly ignored in some quarters.

    The moratorium is a positive step toward preserving the public’s trust and safety while the promising new technology is studied further.

    Editing DNA to cure disease

    While most human diseases are caused, at least partially, by mutations in our DNA, current therapies treat the symptoms of these mutations but not the genetic root cause. For example, cystic fibrosis, which causes the lungs to fill with excess mucus, is caused by a single DNA mutation. However, cystic fibrosis treatments focus on the symptoms – working to reduce mucus in the lungs and fight off infections – rather than correcting the mutation itself. That’s because making precise changes to the three-billion-letter DNA code remains a challenge even in a Petri dish, and it is unprecedented in living patients. (The only current example of gene therapy, called Glybera, does not involve modifying the patient’s DNA, and has been approved for limited use in Europe to treat patients with a digestive disorder.)

    That all changed in 2012, when several research groups demonstrated that a DNA-cutting technology called CRISPR/Cas could operate on human DNA. Compared to previous, inefficient methods for editing DNA, CRISPR/Cas offers a shortcut. It acts like a pair of DNA scissors that cut where prompted by a special strand of RNA (a close chemical relative of DNA). Snipping DNA turns on the cell’s DNA repair process, which can be hijacked to either disable a gene – say, one that allows tumor cells to grow uncontrollably – or to fix a broken gene, such as the mutation that causes cystic fibrosis. The advantages of the Cas9 system over its predecessor genome-editing technologies – its high specificity and the ease of navigating to a specific DNA sequence with the “guide RNA” – have contributed to its rapid adoption in the scientific community.
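    To make the idea of a guide RNA “navigating” to a DNA sequence concrete, here is a minimal Python sketch. It is illustrative only, not from the article: it scans one strand of a DNA string for 20-letter stretches matching a guide sequence that are immediately followed by the “NGG” PAM motif that the commonly used SpCas9 enzyme requires next to its target. Real guide-design tools also search the opposite strand and the rest of the genome.

```python
# Minimal sketch: find candidate SpCas9 target sites on one strand of DNA.
# A hit is a 20-nt stretch matching the guide, immediately followed by an
# "NGG" PAM (any base, then two Gs), which SpCas9 requires in order to cut.
# Illustrative only; real design tools do far more than this.

def find_cas9_sites(dna, guide):
    dna, guide = dna.upper(), guide.upper()
    hits = []
    for i in range(len(dna) - len(guide) - 2):
        target = dna[i:i + len(guide)]
        pam = dna[i + len(guide):i + len(guide) + 3]
        if target == guide and pam[1:] == "GG":
            hits.append(i)
    return hits

# Hypothetical 20-nt guide and a made-up sequence (not a real gene):
guide = "GATTACAGATTACAGATTAC"
dna = "CCC" + guide + "TGG" + "AAAA" + guide + "CGG" + "TTTT"
print(find_cas9_sites(dna, guide))  # -> [3, 30]
```

    In the cell, Cas9 cuts both DNA strands a few bases inside such a site; the cell’s repair machinery then either disables the gene or, if a repair template is supplied, rewrites it.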

    The barrier to fixing the DNA of diseased cells appears to have evaporated.

    Playing with fire

    With the advance of this technique, the obstacles to altering genes in embryos are falling away, opening the door to so-called “designer babies” with altered appearance or intelligence. Ethicists have long feared the consequences of allowing parents to choose the traits of their babies. Further, there is a wide gap between our understanding of diseases and the genes that might cause them. Even if we were capable of performing flawless genetic surgery, we don’t yet know how specific changes to the DNA will manifest in a living human. Finally, the editing of germ line cells such as embryos could permanently introduce altered DNA into the gene pool to be inherited by descendants.

    And making cuts in one’s DNA is not without risks. Cas9 – the scissor protein – is known to cleave DNA at unintended or “off-target” sites in the genome. Were Cas9 to inappropriately chop an important gene and inactivate it, the therapy could cause cancer instead of curing it.
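    A crude way to picture the off-target problem is to ask how many letters a genomic site can differ from the guide and still look like a match. The Python sketch below is a toy illustration only; real off-target prediction also weighs where the mismatches sit, possible bulges, and the state of the cell.

```python
# Toy off-target screen: flag every 20-nt window that sits next to an NGG PAM
# and differs from the guide by at most a few letters. Real prediction tools
# are far more sophisticated; this only illustrates the idea of near-matches.

def mismatches(a, b):
    return sum(x != y for x, y in zip(a, b))

def near_matches(dna, guide, max_mismatches=3):
    dna, guide = dna.upper(), guide.upper()
    flagged = []
    for i in range(len(dna) - len(guide) - 2):
        window = dna[i:i + len(guide)]
        pam = dna[i + len(guide):i + len(guide) + 3]
        if pam[1:] == "GG" and mismatches(window, guide) <= max_mismatches:
            # 0 mismatches is the intended site; 1-3 are potential off-targets
            flagged.append((i, mismatches(window, guide)))
    return flagged
```

    Because Cas9 tolerates a small number of mismatches, a guide that looks unique on paper can still direct cuts at look-alike sites elsewhere in the genome.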

    Take it slow

    All the concerns around Cas9 triggered a very unusual event: a call from prominent scientists to halt some of this research. In March of 2015, a group of researchers and lawyers called for a voluntary pause on further use of CRISPR technology in germ line cells until ethical guidelines could be decided.

    Writing in the journal Science, the group – including two Nobel laureates and the inventors of the CRISPR technology – noted that we don’t yet understand enough about the link between our health and our DNA sequence. Even if a perfectly accurate DNA-editing system existed – and Cas9 surely doesn’t yet qualify – it would still be premature to treat patients with genetic surgery. The authors disavowed only germ line genome editing, such as in embryos, while encouraging the basic research that would put future therapeutic editing on a firmer foundation of evidence.

    The basic research isn’t ready for deployment in human embryos yet. Petri dishes image via http://www.shutterstock.com

    Pushing ahead

    Despite this call for CRISPR/Cas research to be halted, a Chinese research group reported its attempts at editing human embryos only two months later. In a paper in the journal Protein & Cell, the authors described treating nonviable embryos in an effort to fix a gene mutation that causes a blood disease called β-thalassemia.

    The study results proved the concerns of the Science group to be well-founded. The treatment killed nearly one in five embryos, and only about half of the survivors had their DNA modified at all. Of those that were modified, only a fraction had the disease mutation repaired. The study also revealed off-target DNA cutting and incomplete editing, with some cells of a single embryo edited and others not. Errors of this kind are clearly unacceptable in embryos meant to mature into fully grown human beings.
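    To see how those inefficiencies compound, take 100 treated embryos as a round illustration (the study’s own counts differ, and the repair rate below is an assumption made only for the arithmetic): roughly 80 survive, about 40 of those carry any edit at all, and if, say, a quarter of the edited ones carry the intended repair rather than some other change, only about 10 in 100 end up correctly edited, and even those can be mosaics in which some cells were edited and others were not.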

    George Daley, a Harvard biologist and member of the group that called for the moratorium, concluded that “their study should be a stern warning to any practitioner who thinks the technology is ready for testing to eradicate disease genes.”

    In the enthusiasm and hype surrounding Cas9, it is easy to forget that the technology has been in wide use for barely three years.

    Role of a moratorium

    Despite the publication of the Protein & Cell study – whose experiments likely took place at least months earlier – the Science plea for a moratorium can already be considered a success. The request from such a respected group has brought visibility to the topic and put pressure on universities, regulatory boards and the editors of scientific journals to discourage such research. (As evidence of this pressure, the Chinese authors had their paper rejected by at least two top science journals before it was accepted.) And the response to the voluntary ban has thus far not included accusations of “stifling academic freedom,” possibly due to the scientific credibility of the organizers.

    While rare, calls for a moratorium on research for ethical reasons can be traced to an earlier controversy over DNA technology. In 1975, scientists gathered at what came to be known as the Asilomar Conference and called for caution with an emerging technology, recombinant DNA, until its safety could be evaluated and ethical guidelines could be published. The similarity between the two approaches is no coincidence: several authors of the Science essay were also members of the Asilomar team.

    The Asilomar guidelines are now widely viewed as having been a proportionate and responsible measure, placing the right emphasis on safety and ethics without hampering research progress. It turns out recombinant DNA technology was much less dangerous than originally feared; existing evidence already shows that we might not be so lucky with Cas9. Another important legacy of the Asilomar conference was the promotion of an open discussion involving experts as well as the general public. If we heed those lessons of caution and public engagement, the saga of CRISPR/Cas may unfold in a similarly responsible – yet exciting – way.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
