Tagged: DNA Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 6:27 am on October 27, 2015 Permalink | Reply
    Tags: DNA

    From MIT: “Mapping the 3-D structure of DNA” 

    MIT News

    October 26, 2015
    Julia Sklar

    Abe Weintraub Photo: M. Scott Brauer

    PhD student Abe Weintraub helps identify when DNA folding is helpful, and when it might cause cancer.

    For graduate student Abe Weintraub, the magic and intrigue of DNA is all in the packaging.

    Imagine trying to fit 24 miles of string into a tennis ball, the PhD student in biology says: That is, in essence, what it’s like inside every cell nucleus in the human body, each of which contains about 2 meters’ worth of DNA strands. But, as Weintraub is finding, this packaging sometimes goes awry, which may be the basis for disease.
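The analogy can be sanity-checked with back-of-the-envelope arithmetic. In this sketch the nucleus and tennis-ball diameters are assumed typical values, not figures from the article:

```python
# Back-of-the-envelope check of the string-in-a-tennis-ball analogy.
# The diameters below are assumed typical values, not from the article.
nucleus_diameter_m = 4e-6        # a cell nucleus is a few micrometres across
tennis_ball_diameter_m = 0.067   # regulation tennis ball
dna_length_m = 2.0               # DNA per human nucleus, as stated above

# Magnify the nucleus up to tennis-ball size and scale the DNA length with it.
scale = tennis_ball_diameter_m / nucleus_diameter_m
scaled_length_miles = dna_length_m * scale / 1609.34

print(f"At tennis-ball scale, the DNA becomes roughly {scaled_length_miles:.0f} miles of string")
```

With a slightly smaller assumed nucleus the figure lands near the article's 24 miles; the point is the order of magnitude, not the exact number.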

    Although the genetic code that resides in DNA has traditionally been thought of as linear, Weintraub is contributing to a body of knowledge about its 3-D organization. Two genes that may exist far apart when a strand is stretched out straight could actually be right next to each other when the strand is folded inside a cell nucleus — and the same applies to regulatory elements, which tell genes to turn on or off.

    Looking at DNA as a 3-D phenomenon may yield insights about how certain genes get turned on or off, and thus how cells differentiate — in other words, DNA’s 3-D structure might actually be what’s behind one cell becoming a skin cell, while another becomes a lung cell.

    Weintraub has now been part of the lab of Richard Young, a professor of biology, for one and a half years; his research began with figuring out how DNA gets folded the way it does, and has more recently shifted to the consequences of improper folding.

    DNA gets packed tightly in organized loops, rather than being haphazardly crammed into cell nuclei. Weintraub helped find what causes this ordered looping.

    “When we zoomed out, we could see that what creates the particular 3-D structure of DNA are basically large loops, constraining small loops. Science can be pretty meta,” Weintraub jokes.

    This looping system looks fairly consistent across cells, and is how the same genes and regulatory elements end up adjacent to each other in every skin cell, for example. To study the 3-D structure of DNA, Weintraub works with mouse embryonic stem cells. He is now working on assembling maps that show normal patterns and gene organization across different types of healthy cells.

    Bad packaging

    While there is a general consistency to DNA’s 3-D structure, Weintraub also noticed variations as he worked to create these maps. He began to wonder whether DNA’s packaging affects its functionality, beyond just allowing it to fit inside a nucleus. So he shifted his research slightly, and now focuses on how slight changes in the way that DNA strands are folded can cause serious problems, like cancer.

    In particular, he’s interested in T-cell acute lymphoblastic leukemia.

    “That’s a disease that primarily affects children,” Weintraub says. “So it keeps me motivated in my research.”

    Outside of the lab, his research also resonates with physicians at Massachusetts General Hospital and Boston Children’s Hospital, grounding this aspect of his work in therapeutic discovery — although Weintraub remains staunchly connected to the importance of basic science, too.

    “It’s kind of the best of both research worlds,” he says of his research with DNA, as his project mapping healthy cells’ 3-D DNA structure has broader applications.

    Historical context

    For Weintraub, finding his place in the historical context of this field has been interesting. Through reading peer-reviewed journal articles from the 1970s and 1980s, he noticed a number of researchers postulating that DNA had a distinct 3-D structure that affected its functionality, though they were unable to prove this hypothesis. With today’s precise technology, he gets to be part of a team that’s finally confirming this notion.

    “I like the idea that what I’m doing is identifying principles behind something as central to our biology as DNA,” Weintraub says. “I like that there’s still room for that kind of discovery here.”

    While Weintraub grew up and attended college in California, he says he may end up settling in Boston because of the area’s vast resources for biotech research. But with several years left until he completes his PhD, it’s too early to tell whether he’ll choose to go into academia or industry.

    “I don’t want to limit myself just yet,” he says. “There’s still a lot to learn.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

  • richardmitnick 3:24 am on October 16, 2015 Permalink | Reply
    Tags: DNA

    From EPFL: “The future of farming depends on local breeds” 

    Ecole Polytechnique Federale Lausanne

    Jan Overney

    The dwindling genetic diversity of farm animals is increasingly becoming a threat to livestock production. Although new DNA technologies can help us address this problem, changing habits to preserve local lineages may prove to be challenging. No image credit.

    It is hard to overestimate the importance of global livestock production to society and the economy. It constitutes the main source of income for 1.3 billion farmers, provides vital food for 800 million subsistence farmers, and makes up 40% of global agricultural GDP. But overbreeding and dwindling genetic diversity could limit the ability of livestock populations to adapt to environmental changes, such as global warming and related new diseases. Currently on the sidelines, lesser-known livestock breeds and the DNA they carry could become key to securing the future of livestock farming.

    For four years leading up to 2014, a European research project chaired by EPFL took stock of the past, present, and future of farm animal genetic resources and outlined the questions of highest priority for research, infrastructure and policy development for the coming decade. A selection of the project’s scientific output has now been published by the open access journal Frontiers in Genetics and is available online in the form of 31 research papers.

    A shrinking genetic reservoir

    Over the past 100 years, many local breeds have gone extinct, as more productive industrial breeds have taken over. Even within these breeds, the genetic diversity between individuals is shrinking. So why does this matter? “A reduction of genetic diversity goes hand in hand with a reduction of the species’ capacity to adapt to new diseases, warmer temperatures, or new food sources,” says Stéphane Joost, the project’s chair.

    “Studying 1,200 sheep from 32 old, native breeds from around the world, we previously identified a specific gene involved in regulating their metabolism, whose presence correlated strongly with the amount of incident solar radiation – a genetic trait that made them better adapted to their environment than cosmopolitan breeds that are more productive in the short term,” says Joost. If breeds carrying such specific adaptations disappear, so too will the coping strategies they acquired throughout evolution.

    The better choice?

    Joost’s advice to farmers is unequivocal: “Farmers should keep their local, well adapted breeds,” he urges. They may be less productive than their industrially bred cousins, but in developing countries with extreme climates, sticking with them is often the wiser choice – a lesson that many farmers learn the hard way. After investing their savings to cross a breed of cow local to West Africa with an industrial breed, farmers in Burkina Faso first reaped the fruits of their investment, until they realized that all of the mixed breed’s offspring were poorly adapted to their climate and eventually died. “Only local breeds are adapted to resist such harsh environments and withstand diseases such as trypanosomiasis, spread by the tsetse fly,” says Joost.

    An archive of adaptation

    Understanding the genetic history of today’s breeds could help us find ways of adapting in the future, says Joost. “What ancestral animals conferred the species with a specific trait? And what can we do today to recover that same trait?” he asks. Knowing, for example, exactly which native species were crossbred to produce today’s breeds could help pinpoint certain well-adapted genes present in the native species that may have been lost. In the same way, well-adapted local breeds that were abandoned to the point of extinction could be recreated by cross-breeding the ancestral species they emerged from.

    To ensure that the research carried out in this project finds its way into the agricultural community, the 31 studies will be compiled into an e-book, which will also be made available in print and distributed to stakeholders in developing countries by the FAO. But changing habits will be an uphill battle, as it involves sacrificing short-term profits for long-term sustainability – a problem that Joost and the co-organizers of the research project are well aware of. “Throughout this project, we emphasized the need to work with social scientists to effectively influence the habits of the breeders associations and other stakeholders. This is one front on which we still have much to do,” he concludes.

    See the full article here.


    EPFL is Europe’s most cosmopolitan technical university. It receives students, professors and staff from over 120 nationalities. With both a Swiss and international calling, it is therefore guided by a constant wish to open up; its missions of teaching, research and partnership impact various circles: universities and engineering schools, developing and emerging countries, secondary schools and gymnasiums, industry and economy, political circles and the general public.

  • richardmitnick 2:33 pm on September 25, 2015 Permalink | Reply
    Tags: DNA

    From MIT: “New system for human genome editing has potential to increase power and precision of DNA engineering” 

    MIT News

    September 25, 2015
    Broad Institute

    CRISPR systems are found in many different bacterial species, and have evolved to protect host cells against infection by viruses. Image courtesy of Broad Institute/Science Photo Images

    CRISPR-Cpf1 offers simpler approach to editing DNA; technology could disrupt scientific and commercial landscape.

    A team including the scientist who first harnessed the CRISPR-Cas9 system for mammalian genome editing has now identified a different CRISPR system with the potential for even simpler and more precise genome engineering.

    In a study published today in Cell, Feng Zhang and his colleagues at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT, with co-authors Eugene Koonin at the National Institutes of Health, Aviv Regev of the Broad Institute and the MIT Department of Biology, and John van der Oost at Wageningen University, describe the unexpected biological features of this new system and demonstrate that it can be engineered to edit the genomes of human cells.

    “This has dramatic potential to advance genetic engineering,” says Eric Lander, director of the Broad Institute. “The paper not only reveals the function of a previously uncharacterized CRISPR system, but also shows that Cpf1 can be harnessed for human genome editing and has remarkable and powerful features. The Cpf1 system represents a new generation of genome editing technology.”

    CRISPR sequences were first described in 1987, and their natural biological function was initially described in 2010 and 2011. The application of the CRISPR-Cas9 system for mammalian genome editing was first reported in 2013, by Zhang and separately by George Church at Harvard University.

    In the new study, Zhang and his collaborators combed through hundreds of CRISPR systems in different types of bacteria, searching for enzymes with useful properties that could be engineered for use in human cells. Two promising candidates were the Cpf1 enzymes from the bacterial species Acidaminococcus and Lachnospiraceae, which Zhang and his colleagues then showed can target genomic loci in human cells.

    “We were thrilled to discover completely different CRISPR enzymes that can be harnessed for advancing research and human health,” says Zhang, the W.M. Keck Assistant Professor in Biomedical Engineering in MIT’s Department of Brain and Cognitive Sciences.

    The newly described Cpf1 system differs in several important ways from the previously described Cas9, with significant implications for research and therapeutics, as well as for business and intellectual property:

    First: In its natural form, the DNA-cutting enzyme Cas9 forms a complex with two small RNAs, both of which are required for the cutting activity. The Cpf1 system is simpler in that it requires only a single RNA. The Cpf1 enzyme is also smaller than the standard SpCas9, making it easier to deliver into cells and tissues.

    Second, and perhaps most significantly: Cpf1 cuts DNA in a different manner than Cas9. When the Cas9 complex cuts DNA, it cuts both strands at the same place, leaving “blunt ends” that often undergo mutations as they are rejoined. With the Cpf1 complex the cuts in the two strands are offset, leaving short overhangs on the exposed ends. This is expected to help with precise insertion, allowing researchers to integrate a piece of DNA more efficiently and accurately.

    Third: Cpf1 cuts far away from the recognition site, meaning that even if the targeted gene becomes mutated at the cut site, it can likely still be recut, allowing multiple opportunities for correct editing to occur.

    Fourth: The Cpf1 system provides new flexibility in choosing target sites. Like Cas9, the Cpf1 complex must first attach to a short sequence known as a PAM, and targets must be chosen that are adjacent to naturally occurring PAM sequences. The Cpf1 complex recognizes very different PAM sequences from those of Cas9. This could be an advantage in targeting some genomes, such as in the malaria parasite as well as in humans.
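The PAM requirement can be pictured as a simple string search. The sketch below is illustrative only: it scans one strand of an invented sequence for NGG (the SpCas9 PAM) and a T-rich TTTN motif (as reported for Cpf1), ignoring the reverse strand and the distinct 5′/3′ positioning of the two PAMs:

```python
import re

def pam_sites(seq, pam_regex):
    """Return 0-based start positions of (possibly overlapping) PAM matches."""
    # Zero-width lookahead so overlapping motifs are all found.
    return [m.start() for m in re.finditer(pam_regex, seq)]

seq = "TTTAGACGATCGGAATTCCGGTTTCACG"  # invented example sequence

cas9_pams = pam_sites(seq, r"(?=[ACGT]GG)")   # NGG, SpCas9-style PAM
cpf1_pams = pam_sites(seq, r"(?=TTT[ACGT])")  # TTTN, Cpf1-style T-rich PAM
```

Because the two enzymes demand different motifs, they open up different sets of candidate target sites in the same sequence, which is the flexibility described above.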

    “The unexpected properties of Cpf1 and more precise editing open the door to all sorts of applications, including in cancer research,” says Levi Garraway, an institute member of the Broad Institute, and the inaugural director of the Joint Center for Cancer Precision Medicine at the Dana-Farber Cancer Institute, Brigham and Women’s Hospital, and the Broad Institute. Garraway was not involved in the research.

    An open approach to empower research

    Zhang, along with the Broad Institute and MIT, plans to share the Cpf1 system widely. As with earlier Cas9 tools, these groups will make this technology freely available for academic research via the Zhang lab’s page on the plasmid-sharing website Addgene, through which the Zhang lab has already shared Cas9 reagents more than 23,000 times with researchers worldwide to accelerate research. The Zhang lab also offers free online tools and resources for researchers through its website.

    The Broad Institute and MIT plan to offer nonexclusive licenses to enable commercial tool and service providers to add this enzyme to their CRISPR pipeline and services, further ensuring availability of this new enzyme to empower research. These groups plan to offer licenses that best support rapid and safe development for appropriate and important therapeutic uses.

    “We are committed to making the CRISPR-Cpf1 technology widely accessible,” Zhang says. “Our goal is to develop tools that can accelerate research and eventually lead to new therapeutic applications. We see much more to come, even beyond Cpf1 and Cas9, with other enzymes that may be repurposed for further genome editing advances.”

    See the full article here.


  • richardmitnick 5:40 pm on September 10, 2015 Permalink | Reply
    Tags: DNA, The Atlantic

    From The Atlantic: “How Data-Wranglers Are Building the Great Library of Genetic Variation” 

    The Atlantic Magazine

    Sep 9, 2015
    Ed Yong


    A huge project unexpectedly led to a way of finding disease genes without needing to know about diseases.

    Let’s say you have a patient with a severe inherited muscle disorder, the kind that Daniel MacArthur from the Broad Institute of Harvard and MIT specializes in. They’re probably a child, with debilitating symptoms and perhaps no diagnosis. To discover the gene(s) that underlie the kid’s condition, you sequence their genome, or perhaps just their exome: the 1 percent of their DNA that codes for proteins. The results come back, and you see tens of thousands of variants—sites where, say, the usual A has been replaced by a T, or the typical C is instead a G.

    You’d then want to know if those variants have ever been associated with diseases, and how common they are in the general population. (The latter is especially important because most variants are so common that they can’t possibly be plausible culprits behind rare genetic diseases.) “To make sense of a single patient’s genome, you need to put it in the context of many people’s genomes,” says MacArthur. In an ideal world, you would compare all of a patient’s variants against “every individual who has ever been sequenced in the history of sequencing.”
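The filtering step MacArthur describes amounts to looking up each patient variant's frequency in a reference panel and discarding anything too common to cause a rare disease. A minimal sketch, with invented variant names, frequencies, and cutoff:

```python
# Hypothetical patient variants with allele frequencies from a reference panel.
# All sites, frequencies, and the cutoff below are invented for illustration.
patient_variants = [
    {"site": "chr2:1234 A>T", "pop_freq": 0.21},     # common: implausible culprit
    {"site": "chr7:5678 C>G", "pop_freq": 0.00002},  # rare: worth a closer look
    {"site": "chrX:9012 G>A", "pop_freq": None},     # never seen in the reference
]

def rare_candidates(variants, max_freq=1e-4):
    """Keep variants rare enough in (or absent from) the reference population."""
    return [v for v in variants
            if v["pop_freq"] is None or v["pop_freq"] <= max_freq]

candidates = rare_candidates(patient_variants)
```

The bigger and more diverse the reference panel, the more reliable those frequency lookups become, which is exactly why a comprehensive library matters.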

    This is not that world, at least not yet. When MacArthur launched his lab in 2012, he started by sequencing the exomes of some 300 patients with rare muscle diseases. But he quickly realized that he had nothing decent to compare them against. It has never been easier, cheaper, or quicker to sequence a person’s genome, but interpreting those sequences is tricky, absent a comprehensive reference library of human genetic variation. No such library existed, or at least nothing big or diverse enough. So, MacArthur started making one.

    It was hard work, not because the data didn’t exist, but because it was scattered. To date, scientists have probably sequenced at least 5,000 full genomes and some 500,000 exomes, but most are completely inaccessible to other researchers. There might be intellectual-property restrictions, or issues around consent. There’s the logistical hassle of shipping huge volumes of data on hard drives. And some scientists are just plain competitive.

    Fortunately, MacArthur’s colleagues at the Broad Institute and beyond had deciphered so many exomes that he could gather thousands of sequences by personally popping into offices. Buoyed by that success, he started contacting people who were studying the genomes of people with cancer, heart disease, diabetes, schizophrenia, and more. “There’s a big swath of human genetics where people have learned that you either fail by yourself or succeed together, so they’re committed to sharing data,” MacArthur says.

    By 2014, he had amassed more than 90,000 exomes from around a dozen sources, collectively called the Exome Aggregation Consortium. Then, he had to munge them together.

    That was the worst bit. Researchers use very different technologies to sequence and annotate genomes, so combining disparate data sets is like mushing together the dishes from separate restaurants and hoping that the results will be palatable. Often, they won’t be.

    Monkol Lek, a postdoc in MacArthur’s lab who himself has a genetic muscle disease, solved this problem by essentially starting from scratch. He took the raw data from 60,706 patients and analyzed their exomes, one position at a time. The raw sequences took up a petabyte of storage, and the final compressed file filled a three-terabyte hard disk.

    The prize from all this data-wrangling was one of the most thorough portraits of human genetic variation ever produced. MacArthur went through the main results in the opening talk of this week’s Genome Science 2015 conference, in Birmingham, U.K. His team had identified around 10 million genetic variants scattered throughout the exome, most of which had never been described before. And most turned up just once in the data, meaning that they lurk within just one in every 60,000 people. “Human variation is dominated by these extremely rare variants,” says MacArthur. That’s where the secrets of many rare genetic disorders reside.

    But unexpectedly, the most interesting variants turned out to be the ones that weren’t there.

    The graduate student Kaitlin Samocha developed a mathematical model to predict how many variants you’d expect to find in a given gene, in a population of 60,000 people. The model was remarkably accurate at estimating neutral variants, which don’t change the protein that’s encoded by the gene, and so have minimal impact. But the model often wildly overestimated the number of “loss-of-function variants,” which severely disrupt the gene in question. Repeatedly, the ExAC data revealed far fewer of these variants than Samocha’s model predicted.

    Why? Because many of these loss-of-function variants are so destructive that their carriers develop debilitating disorders, or die before they’re even born. So, the difference between prediction and reality reflects the brutal hand of natural selection. The variants are simply not around to be sequenced because they have long been expunged from the gene pool.

    For example, the team expected to find 161 loss-of-function variants in a gene called DYNC1H1. By contrast, the ExAC data revealed only four—and indeed, DYNC1H1 is associated with several severe inherited neurodevelopmental disorders.

    The model also predicted 125 loss-of-function variants in the UBR5 gene—and the data revealed just one. That’s far more interesting because UBR5 has never before been linked to a human disease.

    A full quarter of human genes are like this: They have a lower-than-expected number of loss-of-function variants. And while some of them are known “disease genes,” the rest have never been pinpointed as such. So, if you find one of these variants in a patient with a severe genetic disorder, the chances are good that you’ve found a genuine culprit.
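The comparison driving these conclusions can be sketched directly from the article's numbers: for each gene, how large a fraction of the expected loss-of-function variants is missing from the data. (The real model is statistical; the plain ratio below is only a stand-in.)

```python
# Expected vs. observed loss-of-function (LoF) counts quoted in the article.
genes = {
    "DYNC1H1": {"expected_lof": 161, "observed_lof": 4},
    "UBR5":    {"expected_lof": 125, "observed_lof": 1},
}

def lof_depletion(expected, observed):
    """Fraction of expected LoF variants that are absent from the data."""
    return 1 - observed / expected

for name, counts in genes.items():
    d = lof_depletion(counts["expected_lof"], counts["observed_lof"])
    print(f"{name}: {d:.0%} of expected LoF variants are missing")
```

Both genes come out over 97% depleted, which is the signature of strong natural selection described in the next paragraphs.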

    That blew my mind. Here is a way of identifying potential disease-related genes, without needing to know anything about the diseases in question. Or, as MacArthur said in his talk, “We should soon be able to say, with high precision: If you have a mutation at this site, it will kill you. And we’ll be able to say that without ever seeing a person with that mutation.”

    These results speak to one of the greatest challenges of modern genomics: weaving together existing sets of data in useful ways. They also vindicate the big, expensive studies that have searched for variants behind common diseases like type 2 diabetes, heart disease, and schizophrenia. These endeavors have indeed found several variants, but with such small effects that they explain just a tiny fraction of the risk of each condition. But “all this data can be re-purposed for analyzing rare diseases,” says MacArthur. “Without those large-scale studies, we’d have no chance of doing something like ExAC.”

    “His talk really shows that you can’t anticipate what these data sets will show you until you put them together,” says Nick Loman from the University of Birmingham. “Our ability to interrogate biology if you can put hundreds of thousands, or millions, of genomes together is massive.”

    See the full article here.


  • richardmitnick 12:13 pm on September 9, 2015 Permalink | Reply
    Tags: DNA

    From Rockefeller: “New findings shed light on fundamental process of DNA repair” 

    Rockefeller University

    September 8, 2015
    Eva Kiesler | 212-327-7963

    Repair process spotted: In these cells, whose DNA has been damaged by radiation, the histone protein γH2AX (red) accumulates at sites of broken DNA strands. A new study shows that this histone recruits the protein 53BP1 (green) to help mend the damage.

    Inside the trillions of cells that make up the human body, things are rarely silent. Molecules are constantly being made, moved, and modified — and during these processes, mistakes are sometimes made. Strands of DNA, for instance, can break for any number of reasons, such as exposure to UV radiation, or mechanical stress on the chromosomes into which our genetic material is packaged.

    To make sure cells stay alive and multiply properly, the body relies on a number of mechanisms to fix such damage. Although researchers have been studying DNA repair for decades, much remains unknown about this fundamental process of life — and in a study published online in Nature Chemical Biology on September 7, researchers at The Rockefeller University uncover new aspects of it.

    “Our findings are revealing more clues about the intricacies of DNA repair,” says study author Ralph Kleiner, a postdoctoral fellow in the Laboratory of Chemistry and Cell Biology, led by Tarun Kapoor. “We now know how key proteins get where they need to be to facilitate the process.”

    “This is also a nice example of how innovative chemical approaches can help decipher fundamental biological mechanisms,” adds Kapoor, who serves as Pels Family Professor at Rockefeller.

    When DNA strands break, the cell ideally puts them back together and carries on as usual. But sometimes, repairs don’t go that smoothly. For instance, different regions of a chromosome can fuse together, causing genes to rearrange themselves—and such chromosome fusions can lead to diseases such as cancer.

    To learn more about the process, Kapoor, Kleiner, and their colleagues zeroed in on the sites in chromosomes where DNA repair happens. Specifically, they focused on a single histone, a type of protein that DNA wraps around to make up chromosomes. This histone, H2AX, is known to be involved in DNA repair.

    Immediately after DNA damage occurs, H2AX gets a mark — it becomes tagged with a chemical moiety known as a phosphate. This process, called phosphorylation, occurs at sites of broken DNA as a way to mediate interactions between key proteins. In the study, the researchers wanted to learn more about how phosphorylation of H2AX helps mediate DNA repair.

    The researchers employed a new method for scrutinizing the DNA repair process. To learn more about which proteins interact with H2AX when it becomes phosphorylated, they added their own light-sensitive chemical tags to a portion of the histone.

    This tag was designed such that it becomes activated only when the researchers shine a light upon it. Once activated, the tag reacts with interacting proteins, facilitating their capture and isolation. This technique enabled the researchers to identify not just the proteins that were known to strongly bind to H2AX and facilitate DNA repair, but also those that were considered “weak binders,” says Kleiner.

    Indeed, they found that part of a DNA repair protein known as 53BP1 fits over the phosphorylated part of H2AX “like a glove,” says Kleiner. This interaction helps bring 53BP1 to the site of DNA damage, where it mediates the repair of double-stranded breaks in DNA by encouraging the repair machinery to glue the two ends back together.

    “We’ve identified a component of the DNA repair process that others had previously missed,” notes Kleiner. “Scientists have known about 53BP1 for a long time, but didn’t understand the function of this particular portion of the protein that interacts with the phosphorylation mark of H2AX. These findings help solve that mystery.”

    See the full article here.


    Rockefeller University Campus

    The Rockefeller University is a world-renowned center for research and graduate education in the biomedical sciences, chemistry, bioinformatics and physics. The university’s 75 laboratories conduct both clinical and basic research and study a diverse range of biological and biomedical problems with the mission of improving the understanding of life for the benefit of humanity.

    Founded in 1901 by John D. Rockefeller, the Rockefeller Institute for Medical Research was the country’s first institution devoted exclusively to biomedical research. The Rockefeller University Hospital was founded in 1910 as the first hospital devoted exclusively to clinical research. In the 1950s, the institute expanded its mission to include graduate education and began training new generations of scientists to become research leaders around the world. In 1965, it was renamed The Rockefeller University.

    The university is supported by a combination of government and private grants and contracts, private philanthropy and income from the endowment.

    Since its founding, The Rockefeller University has embraced an open structure to encourage collaboration between disciplines and empower faculty members to take on high-risk, high-reward projects. No formal departments exist, bureaucracy is kept to a minimum and scientists are given resources, support and unparalleled freedom to follow the science wherever it leads.

    This unique approach to science has led to some of the world’s most revolutionary contributions to biology and medicine.

  • richardmitnick 2:12 pm on September 7, 2015 Permalink | Reply
    Tags: DNA

    From phys.org: “Scientists create world’s largest protein map to reveal which proteins work together in a cell” 


    September 7, 2015
    No Writer Credit

    Scientists have uncovered tens of thousands of new protein interactions, accounting for about a quarter of all estimated protein contacts in a cell. Credit: Jovana Drinkjakovic

    A multinational team of scientists has sifted through cells of vastly different organisms, from amoebae to worms to mice to humans, to reveal how proteins fit together to build different cells and bodies.

    This tour de force of protein science, a result of a collaboration between seven research groups from three countries, led by Professor Andrew Emili from the University of Toronto’s Donnelly Centre and Professor Edward Marcotte from the University of Texas at Austin, uncovered tens of thousands of new protein interactions, accounting for about a quarter of all estimated protein contacts in a cell.

    When even a single one of these interactions is lost, it can lead to disease, and the map is already helping scientists spot individual proteins that could be at the root of complex human disorders. The data will be available to researchers across the world through open access databases.

    The study comes out in Nature on September 7.

    While the sequencing of the human genome more than a decade ago was undoubtedly one of the greatest discoveries in biology, it was only the beginning of our in-depth understanding of how cells work. Genes are just blueprints and it is the genes’ products, the proteins, that do much of the work in a cell.

    Proteins work in teams by sticking to each other to carry out their jobs. Many proteins come together to form so-called molecular machines that play key roles, such as building new proteins or recycling those no longer needed by literally grinding them into reusable parts. But for the vast majority of proteins, and there are tens of thousands of them in human cells, we still don’t know what they do.

    This is where Emili and Marcotte’s map comes in. Using a state-of-the-art method developed by the groups, the researchers were able to fish thousands of protein machineries out of cells and count the individual proteins they are made of. They then built a network that, similar to social networks, offers clues into protein function based on which other proteins they hang out with. For example, a new and unstudied protein, whose role we don’t yet know, is likely to be involved in fixing damage in a cell if it sticks to the cell’s known “handymen” proteins.
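The “guilt by association” idea behind the network can be sketched in a few lines. The proteins, partners, and function labels below are hypothetical placeholders, not data from the study; the point is only that an unannotated protein inherits the most common function among its interaction partners.

```python
from collections import Counter

# Hypothetical interaction map and annotations (illustrative only).
interactions = {
    "unstudied_protein": ["repair_A", "repair_B", "kinase_C"],
}
known_functions = {
    "repair_A": "DNA repair",
    "repair_B": "DNA repair",
    "kinase_C": "signaling",
}

def predict_function(protein):
    """Vote over the annotated functions of a protein's partners."""
    votes = Counter(known_functions[p]
                    for p in interactions[protein]
                    if p in known_functions)
    return votes.most_common(1)[0][0] if votes else None

print(predict_function("unstudied_protein"))  # "DNA repair"
```

Real function-prediction methods weight edges by interaction confidence and propagate labels across the whole network, but the majority-vote intuition is the same.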

    Today’s landmark study gathered information on protein machineries from nine species that represent the tree of life: baker’s yeast, amoeba, sea anemones, flies, worms, sea urchins, frogs, mice and humans. The new map expands the number of known protein associations more than 10-fold, and gives insights into how they evolved over time.

    “For me the highlight of the study is its sheer scale. We have tripled the number of protein interactions for every species. So across all the animals, we can now predict, with high confidence, more than 1 million protein interactions – a fundamentally ‘big step’ moving the goal posts forward in terms of protein interactions networks,” says Emili, who is also Ontario Research Chair in Biomarkers in Disease Management and a professor in the Department of Molecular Genetics.

    The researchers discovered that tens of thousands of protein associations remained unchanged since the first ancestral cell appeared, one billion years ago (!), preceding all of animal life on Earth.

    “Protein assemblies in humans were often identical to those in other species. This not only reinforces what we already know about our common evolutionary ancestry, it also has practical implications, providing the ability to study the genetic basis for a wide variety of diseases and how they present in different species,” says Marcotte.

    The map is already proving useful in pinpointing possible causes of human disease. One example is a newly discovered molecular machine, dubbed Commander, which consists of about a dozen individual proteins. Genes that encode some of Commander’s components had previously been found to be mutated in people with intellectual disabilities but it was not clear how these proteins worked.

    Because Commander is present in all animal cells, graduate student Fan Tu went on to disrupt its components in tadpoles, revealing abnormalities in the way brain cells are positioned during embryo development and providing a possible origin for a complex human condition.

    “With tens of thousands of other new protein interactions, our map promises to open many more lines of research into links between proteins and disease, which we are keen to explore in depth over the coming years,” concludes Dr. Emili.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About Phys.org in 100 Words

    Phys.org™ (formerly Physorg.com) is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004, Phys.org’s readership has grown steadily to include 1.75 million scientists, researchers, and engineers every month. Phys.org publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quancast 2009 includes Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

  • richardmitnick 3:13 pm on September 3, 2015 Permalink | Reply
    Tags: DNA

    From Caltech: “Making Nanowires from Protein and DNA” 


    Jessica Stoller-Conrad

    Co-crystal structure of protein-DNA nanowires. The protein-DNA nanowire design is experimentally verified by X-ray crystallography.
    Credit: Yun (Kurt) Mou, Jiun-Yann Yu, Timothy M. Wannier, Chin-Lin Guo and Stephen L. Mayo/Caltech

    The ability to custom design biological materials such as protein and DNA opens up technological possibilities that were unimaginable just a few decades ago. For example, synthetic structures made of DNA could one day be used to deliver cancer drugs directly to tumor cells, and customized proteins could be designed to specifically attack a certain kind of virus. Although researchers have already made such structures out of DNA or protein alone, a Caltech team recently created—for the first time—a synthetic structure made of both protein and DNA. Combining the two molecule types into one biomaterial opens the door to numerous applications.

    A paper describing the so-called hybridized, or multiple component, materials appears in the September 2 issue of the journal Nature.

    There are many advantages to multiple component materials, says Yun (Kurt) Mou (PhD ’15), first author of the Nature study. “If your material is made up of several different kinds of components, it can have more functionality. For example, protein is very versatile; it can be used for many things, such as protein–protein interactions or as an enzyme to speed up a reaction. And DNA is easily programmed into nanostructures of a variety of sizes and shapes.”

    But how do you begin to create something like a protein–DNA nanowire—a material that no one has seen before?

    Mou and his colleagues in the laboratory of Stephen Mayo, Bren Professor of Biology and Chemistry and the William K. Bowes Jr. Leadership Chair of Caltech’s Division of Biology and Biological Engineering, began with a computer program to design the type of protein and DNA that would work best as part of their hybrid material. “Materials can be formed using just a trial-and-error method of combining things to see what results, but it’s better and more efficient if you can first predict what the structure is like and then design a protein to form that kind of material,” he says.

    The researchers entered the properties of the protein–DNA nanowire they wanted into a computer program developed in the lab; the program then generated a sequence of amino acids (protein building blocks) and nitrogenous bases (DNA building blocks) that would produce the desired material.

    However, successfully making a hybrid material is not as simple as just plugging some properties into a computer program, Mou says. Although the computer model provides a sequence, the researcher must thoroughly check the model to be sure that the sequence produced makes sense; if not, the researcher must provide the computer with information that can be used to correct the model. “So in the end, you choose the sequence that you and the computer both agree on. Then, you can physically mix the prescribed amino acids and DNA bases to form the nanowire.”

    The resulting sequence was an artificial version of a protein–DNA coupling that occurs in nature. In the initial stage of gene expression, called transcription, a sequence of DNA is first converted into RNA. To pull in the enzyme that actually transcribes the DNA into RNA, proteins called transcription factors must first bind certain regions of the DNA sequence called protein-binding domains.

    Using the computer program, the researchers engineered a sequence of DNA that contained many of these protein-binding domains at regular intervals. They then selected the transcription factor that naturally binds to this particular protein-binding site—the transcription factor called Engrailed from the fruit fly Drosophila. However, in nature, Engrailed only attaches itself to the protein-binding site on the DNA. To create a long nanowire made of a continuous strand of protein attached to a continuous strand of DNA, the researchers had to modify the transcription factor to include a site that would allow Engrailed also to bind to the next protein in line.

    “Essentially, it’s like giving this protein two hands instead of just one,” Mou explains. “The hand that holds the DNA is easy because it is provided by nature, but the other hand needs to be added there to hold onto another protein.”

    Another unique attribute of this new protein–DNA nanowire is that it employs coassembly—meaning that the material will not form until both the protein components and the DNA components have been added to the solution. Although materials previously could be made out of DNA with protein added later, the use of coassembly to make the hybrid material was a first. This attribute is important for the material’s future use in medicine or industry, Mou says, as the two sets of components can be provided separately and then combined to make the nanowire whenever and wherever it is needed.

    This finding builds on earlier work in the Mayo lab, which, in 1997, created one of the first artificial proteins, thus launching the field of computational protein design. The ability to create synthetic proteins allows researchers to develop proteins with new capabilities and functions, such as therapeutic proteins that target cancer. The creation of a coassembled protein–DNA nanowire is another milestone in this field.

    “Our earlier work focused primarily on designing soluble, protein-only systems. The work reported here represents a significant expansion of our activities into the realm of nanoscale mixed biomaterials,” Mayo says.

    Although the development of this new biomaterial is in the very early stages, the method, Mou says, has many promising applications that could change research and clinical practices in the future.

    “Our next step will be to explore the many potential applications of our new biomaterial,” Mou says. “It could be incorporated into methods to deliver drugs into cells—to create targeted therapies that only bind to a certain biomarker on a certain cell type, such as cancer cells. We could also expand the idea of protein–DNA nanowires to protein–RNA nanowires that could be used for gene therapy applications. And because this material is brand-new, there are probably many more applications that we haven’t even considered yet.”

    The work was published in a paper titled “Computational design of co-assembling protein-DNA nanowires.” In addition to Mou and Mayo, other Caltech coauthors include former graduate students Jiun-Yann Yu (PhD ’14) and Timothy M. Wannier (PhD ’15), as well as Chin-Lin Guo from Academia Sinica in Taiwan. The work was funded by the Defense Advanced Research Projects Agency Protein Design Processes Program, a National Security Science and Engineering Faculty Fellowship, and the Caltech Programmable Molecular Technology Initiative funded by the Gordon and Betty Moore Foundation.

    See the full article here.


    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

  • richardmitnick 11:50 am on August 25, 2015 Permalink | Reply
    Tags: DNA

    From Discovery: “Life 2.0? Synthetic DNA Added to Genetic Code” 

    Discovery News

    Aug 25, 2015
    Glenn McDonald


    Well, there’s no way this could go wrong.

    According to recent announcements, a small biotech startup in California has successfully added new synthetic components to the genetic alphabet of DNA, potentially creating entirely new kinds of life on Earth.

    You’d need a Ph.D. or three to really get into it, but here goes: DNA, the organic molecule that carries genetic information for life, is made from a limited chemical “alphabet.” DNA can be thought of as a molecular code containing exactly four nitrogenous bases — cytosine (C), guanine (G), adenine (A), and thymine (T). All known living organisms on the planet, from bacteria to biologists, are based on combinations of this four-letter molecular code: C-G-A-T.

    That’s how it’s been for several billion years, but last year the biotech company Synthorx announced development of a synthetic pair of nucleobases — abbreviated X-Y — to create a new and expanded genetic code.

    From the company website: “Adding two new synthetic bases, termed X and Y, to the genetic alphabet, we now have an expanded vocabulary to improve the discovery and development of new therapeutics, diagnostics and vaccines as well as create innovative products and processes, including using semi-synthetic organisms….”

    The additions to the four-letter DNA code effectively raise the number of possible amino acids an organism could use to build proteins from 20 to 172. That opens up entire new vistas of possibilities, including a completely new class of semi-synthetic life forms using a six-letter DNA code instead of a four-letter code.
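The jump from 20 to 172 is plausibly just codon arithmetic (a reading the article doesn’t spell out): a three-base codon drawn from six letters admits 6^3 = 216 combinations, versus 4^3 = 64 for the natural alphabet, and each of the 152 extra codons could in principle be assigned a novel amino acid on top of the standard 20.

```python
# Codon counting for the standard and expanded genetic alphabets.
standard_codons = 4 ** 3                          # 64 codons from C, G, A, T
expanded_codons = 6 ** 3                          # 216 codons once X and Y are added
novel_codons = expanded_codons - standard_codons  # 152 codons containing X or Y

# 20 standard amino acids plus one novel amino acid per new codon.
print(20 + novel_codons)  # 172
```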

    Synthorx’s most recent announcement concerns the successful production of proteins containing the new synthetic base pair, building on the research published last year: “Since the publication, Synthorx has developed and validated a protein expression system, employing its synthetic DNA technology to incorporate novel amino acids to create new full-length and functional proteins.”

    According to third-party reports, Synthorx has even started creating new organisms with the technology, including a type of E. coli bacteria “never before seen on the face of the Earth.”

    The company insists that multiple safeguards are built into the technology, and that organisms created with the synthetic elements can only be produced in the lab. That, of course, is the premise of roughly one million science fiction horror stories, but what can you do? Well, you can read more about it here.

    See the full article here.


  • richardmitnick 7:57 am on August 25, 2015 Permalink | Reply
    Tags: DNA

    From phys.org: “Genetic overlapping in multiple autoimmune diseases may suggest common therapies” 


    August 24, 2015
    No Writer Credit

    DNA double helix. Credit: public domain

    Scientists who analyzed the genes involved in 10 autoimmune diseases that begin in childhood have discovered 22 genome-wide signals shared by two or more diseases. These shared gene sites may reveal potential new targets for treating many of these diseases, in some cases with existing drugs already available for non-autoimmune disorders.

    Autoimmune diseases, such as type 1 diabetes, Crohn’s disease, and juvenile idiopathic arthritis, collectively affect 7 to 10 percent of the population in the Western Hemisphere.

    “Our approach did more than finding genetic associations among a group of diseases,” said study leader, Hakon Hakonarson, M.D., Ph.D., director of the Center for Applied Genomics at The Children’s Hospital of Philadelphia (CHOP). “We identified genes with a biological relevance to these diseases, acting along gene networks and pathways that may offer very useful targets for therapy.”

    The paper appears online today in Nature Medicine.

    The international study team performed a meta-analysis, including a case-control study of 6,035 subjects with autoimmune disease and 10,700 controls, all of European ancestry. The study’s lead analyst, Yun (Rose) Li, an M.D./Ph.D. graduate student at the University of Pennsylvania and the Center for Applied Genomics, mentored by Hakonarson and his research team, applied innovative, integrative approaches to study the pathogenic roles of the genes uncovered across multiple diseases.

    The research encompassed 10 clinically distinct autoimmune diseases with onset during childhood: type 1 diabetes, celiac disease, juvenile idiopathic arthritis, common variable immunodeficiency disease, systemic lupus erythematosus, Crohn’s disease, ulcerative colitis, psoriasis, autoimmune thyroiditis and ankylosing spondylitis.

    Because many of these diseases run in families and because individual patients often have more than one autoimmune condition, clinicians have long suspected these conditions have shared genetic predispositions. Previous genome-wide association studies have identified hundreds of susceptibility genes among autoimmune diseases, largely affecting adults.

    The current research was a systematic analysis of multiple pediatric-onset diseases simultaneously. The study team found 27 genome-wide loci, including five novel loci, among the diseases examined. Of those 27 signals, 22 were shared by at least two of the autoimmune diseases, and 19 by at least three.

    Many of the gene signals the investigators discovered were on biological pathways functionally linked to cell activation, cell proliferation and signaling systems important in immune processes. One of the five novel signals, near the CD40LG gene, was especially compelling, said Hakonarson, who added, “That gene encodes the ligand for the CD40 receptor, which is associated with Crohn’s disease, ulcerative colitis and celiac disease. This ligand may represent another promising drug target in treating these diseases.”

    Many of the 27 gene signals the investigators uncovered have a biological relevance to autoimmune disease processes, Hakonarson said. “Rather than looking at overall gene expression in all cells, we focused on how these genes upregulated gene expression in specific cell types and tissues, and found patterns that were directly relevant to specific diseases. For instance, among several of the diseases, we saw genes with stronger expression in B cells. Looking at diseases such as lupus or juvenile idiopathic arthritis, which feature dysfunctions in B cells, we can start to design therapies to dial down over-expression in those cells.”

    He added that “the level of granularity the study team uncovered offers opportunities for researchers to better target gene networks and pathways in specific autoimmune diseases, and perhaps to fine tune and expedite drug development by repurposing existing drugs, based on our findings.”

    More information: Meta-analysis of shared genetic architecture across ten pediatric autoimmune diseases, Nature Medicine, published online Aug. 24, 2015. doi.org/10.1038/nm.3933

    See the full article here.


  • richardmitnick 4:13 pm on August 17, 2015 Permalink | Reply
    Tags: DNA

    From isgtw: “Simplifying and accelerating genome assembly” 

    international science grid this week

    August 12, 2015
    Linda Vu

    To extract meaning from a genome, scientists must reconstruct portions — a time-consuming process akin to rebuilding the sentences and paragraphs of a book from snippets of text. But by applying novel algorithms and high-performance computational techniques to the cutting-edge de novo genome assembly tool Meraculous, a team of scientists has simplified and accelerated genome assembly — reducing a months-long process to mere minutes.

    “The new parallelized version of Meraculous shows unprecedented performance and efficient scaling up to 15,360 processor cores for the human and wheat genomes on NERSC’s Edison supercomputer,” says Evangelos Georganas. “This performance improvement sped up the assembly workflow from days to seconds.” Courtesy NERSC.

    Researchers from the Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have made this gain by ‘parallelizing’ the DNA code — sometimes billions of bases long — to harness the processing power of supercomputers, such as the US Department of Energy’s National Energy Research Scientific Computing Center’s (NERSC’s) Edison system. (Parallelizing means splitting up tasks to run on the many nodes of a supercomputer at once.)
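The chunking idea behind parallelization is easy to illustrate with k-mer counting, a core step in assembly. This sketch is not Meraculous code: the serial map over chunks stands in for work that would be distributed across supercomputer nodes, and the k-1 base overlap between chunks guarantees no k-mer spanning a boundary is lost or double-counted.

```python
def chunks_with_overlap(seq, n, k):
    """Split seq into ~n pieces that overlap by k-1 bases, so each chunk
    owns exactly the k-mers that start inside its non-overlapping span."""
    step = max(1, len(seq) // n)
    return [seq[i:i + step + k - 1] for i in range(0, len(seq), step)]

def count_kmers(chunk, k):
    """Count every length-k substring in one chunk (one worker's job)."""
    counts = {}
    for i in range(len(chunk) - k + 1):
        km = chunk[i:i + k]
        counts[km] = counts.get(km, 0) + 1
    return counts

def merge(partials):
    """Combine per-chunk counts, as a reduction step would across nodes."""
    total = {}
    for d in partials:
        for km, c in d.items():
            total[km] = total.get(km, 0) + c
    return total

seq, k = "ACGTACGTTGACGTAC", 4
chunked = merge(count_kmers(c, k) for c in chunks_with_overlap(seq, 4, k))
serial = count_kmers(seq, k)
assert chunked == serial  # chunked counting matches the single-pass count
```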

    “Using the parallelized version of Meraculous, we can now assemble the entire human genome in about eight minutes,” says Evangelos Georganas, a UC Berkeley graduate student. “With this tool, we estimate that the output from the world’s biomedical sequencing capacity could be assembled using just a portion of the Berkeley-managed NERSC’s Edison supercomputer.”

    Supercomputers: A game changer for assembly

    High-throughput next-generation DNA sequencers allow researchers to look for biological solutions — and for the most part, these machines are very accurate at recording the sequence of DNA bases. Sometimes errors do occur, however. These errors complicate analysis by making it harder to assemble genomes and identify genetic mutations. They can also lead researchers to misinterpret the function of a gene.

    Researchers use a technique called shotgun sequencing to identify these errors. This involves taking numerous copies of a DNA strand, breaking them up into random smaller pieces, and then sequencing each piece separately. For a particularly complex genome, this process can generate several terabytes of data.
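A toy simulation of shotgun read generation (illustrative only, not the pipeline used in the study) shows the basic shape of the data: many short reads sampled at random positions from copies of the same strand, which later overlap enough to be stitched back together.

```python
import random

def shotgun_reads(genome, n_reads, read_len, seed=0):
    """Sample random fixed-length reads from many copies of a strand."""
    rng = random.Random(seed)
    return [genome[start:start + read_len]
            for start in (rng.randrange(len(genome) - read_len + 1)
                          for _ in range(n_reads))]

genome = "ACGTACGGTACCTTAGCGA"
reads = shotgun_reads(genome, n_reads=30, read_len=6)
assert all(r in genome for r in reads)  # every read is a true substring
```

Real sequencers add noise to a fraction of bases; oversampling the strand is what lets downstream tools outvote those errors.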

    To identify data errors quickly and effectively, the Berkeley Lab and UC Berkeley team use ‘Bloom filters‘ and massively parallel supercomputers. “Applying Bloom filters has been done before, but what we have done differently is to get Bloom filters to work with distributed memory systems,” says Aydin Buluç, a research scientist in Berkeley Lab’s Computational Research Division (CRD). “This task was not trivial; it required some computing expertise to accomplish.”
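A minimal Bloom filter can be sketched as a bit array probed by several hash functions: membership tests may yield false positives but never false negatives, which is what makes the structure safe for pre-filtering k-mers. This is an illustrative single-machine sketch, not the team’s distributed-memory implementation.

```python
import hashlib

class BloomFilter:
    """Compact set membership: no false negatives, rare false positives."""

    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)

    def _positions(self, item):
        # Derive several bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # All probed bits set => possibly present; any bit clear => absent.
        return all(self.bits[pos] for pos in self._positions(item))

seen = BloomFilter()
seen.add("ACGTA")
assert "ACGTA" in seen  # an added k-mer is always reported present
```

In an assembly context, a filter like this can cheaply flag k-mers seen only once, which are far more likely to be sequencing errors than true genomic sequence.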

    The team also developed solutions for parallelizing data input and output (I/O). “When you have several terabytes of data, just getting the computer to read your data and output results can be a huge bottleneck,” says Steven Hofmeyr, a research scientist in CRD who developed these solutions. “By allowing the computer to download the data in multiple threads, we were able to speed up the I/O process from hours to minutes.”

    The assembly process

    Once errors are removed, researchers can begin the genome assembly. This process relies on computer programs to join k-mers — short DNA sequences consisting of a fixed number (K) of bases — at overlapping regions, so they form a continuous sequence, or contig. If the genome has previously been sequenced, scientists can align the reads against the recorded reference and its gene annotations. If not, they need to create a whole new catalog of contigs through de novo assembly.
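The k-mer joining step can be illustrated with a greedy walk: starting from a seed k-mer, repeatedly append the single k-mer that overlaps the contig’s last K-1 bases, and stop at dead ends or ambiguous branch points. This is a toy sketch of the idea, not the Meraculous algorithm.

```python
def kmers(seq, k):
    """All overlapping length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def extend_contig(seed, kmer_set, k):
    """Greedily chain k-mers that overlap by k-1 bases into one contig,
    halting where the extension is missing or ambiguous."""
    contig, used = seed, {seed}
    while True:
        suffix = contig[-(k - 1):]
        nexts = [km for km in kmer_set
                 if km.startswith(suffix) and km not in used]
        if len(nexts) != 1:  # dead end or branch point: stop here
            return contig
        used.add(nexts[0])
        contig += nexts[0][-1]

genome = "ACGTTGCA"
ks = kmers(genome, 4)                # {'ACGT', 'CGTT', 'GTTG', 'TTGC', 'TGCA'}
print(extend_contig("ACGT", ks, 4))  # reconstructs "ACGTTGCA"
```

Production assemblers represent the same overlaps as a de Bruijn graph and resolve branches with read-pair and coverage information rather than simply stopping.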

    “If assembling a single genome is like piecing together one novel, then assembling metagenomic data is like rebuilding the Library of Congress,” says Jarrod Chapman. Pictured: Human Chromosomes. Courtesy Jane Ades, National Human Genome Research Institute.

    De novo assembly is memory-intensive, and until recently was resistant to parallelization in distributed memory. Many researchers turned to specialized large-memory nodes, several terabytes in size, to do this work, but even the largest commercially available memory nodes are not big enough to assemble massive genomes. Even with supercomputers, it still took several hours, days or even months to assemble a single genome.

    To make efficient use of massively parallel systems, Georganas created a novel algorithm for de novo assembly that takes advantage of the one-sided communication and Partitioned Global Address Space (PGAS) capabilities of the UPC (Unified Parallel C) programming language. PGAS lets researchers treat the physically separate memories of each supercomputer node as one address space, reducing the time and energy spent swapping information between nodes.

    Tackling the metagenome

    Now that computation is no longer a bottleneck, scientists can try a number of different parameters and run as many analyses as necessary to produce very accurate results. This breakthrough means that Meraculous could also be used to analyze metagenomes — microbial communities recovered directly from environmental samples. This work is important because many microbes exist only in nature and cannot be grown in a laboratory. These organisms may be the key to finding new medicines or viable energy sources.

    “Analyzing metagenomes is a tremendous effort,” says Jarrod Chapman, who developed Meraculous at the US Department of Energy’s Joint Genome Institute (managed by the Berkeley Lab). “If assembling a single genome is like piecing together one novel, then assembling metagenomic data is like rebuilding the Library of Congress. Using Meraculous to effectively do this analysis would be a game changer.”

    –iSGTW is becoming the Science Node. Watch for our new branding and website this September.

    See the full article here.


    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”
