Tagged: DNA

  • richardmitnick 9:31 am on July 30, 2015 Permalink | Reply
    Tags: , , , DNA, ,   

    From livescience: “Origin-of-Life Story May Have Found Its Missing Link” 

    Livescience

    June 06, 2015
    Jesse Emspak

A field of geysers called El Tatio located in northern Chile’s Andes Mountains. Credit: Gerald Prins

    How did life on Earth begin? It’s been one of modern biology’s greatest mysteries: How did the chemical soup that existed on the early Earth lead to the complex molecules needed to create living, breathing organisms? Now, researchers say they’ve found the missing link.

    Between 4.6 billion and 4.0 billion years ago, there was probably no life on Earth. The planet’s surface was at first molten and even as it cooled, it was getting pulverized by asteroids and comets. All that existed were simple chemicals. But about 3.8 billion years ago, the bombardment stopped, and life arose. Most scientists think the “last universal common ancestor” — the creature from which everything on the planet descends — appeared about 3.6 billion years ago.

    But exactly how that creature arose has long puzzled scientists. For instance, how did the chemistry of simple carbon-based molecules lead to the information storage of ribonucleic acid, or RNA?

A hairpin loop from a pre-mRNA. Highlighted are the nucleobases (green) and the ribose-phosphate backbone (blue). Note that this is a single strand of RNA that folds back upon itself.

    The RNA molecule must store information to code for proteins. (Proteins in biology do more than build muscle — they also regulate a host of processes in the body.)

    The new research — which involves two studies, one led by Charles Carter and one led by Richard Wolfenden, both of the University of North Carolina — suggests a way for RNA to control the production of proteins by working with simple amino acids that does not require the more complex enzymes that exist today. [7 Theories on the Origin of Life on Earth]

    Missing RNA link

Such a link would bridge the gap between the primordial chemical soup and the complex molecules needed to build life. Current theories say life on Earth started in an “RNA world,” in which the RNA molecule guided the formation of life, only later taking a backseat to DNA, which could more efficiently achieve the same end result.

The structure of the DNA double helix. The atoms in the structure are colour-coded by element, and the detailed structure of two base pairs is shown in the bottom right.

    Like DNA, RNA is a helix-shaped molecule that can store or pass on information. (DNA is a double-stranded helix, whereas RNA is single-stranded.) Many scientists think the first RNA molecules existed in a primordial chemical soup — probably pools of water on the surface of Earth billions of years ago. [Photo Timeline: How the Earth Formed]

The idea was that the very first RNA molecules formed from collections of three chemicals: a sugar (called a ribose); a phosphate group, which is a phosphorus atom connected to oxygen atoms; and a base, which is a ring-shaped molecule of carbon, nitrogen, oxygen and hydrogen atoms. Together, these three components make up a nucleotide, the repeating unit from which RNA is built.

    The question: How did the nucleotides come together within the soupy chemicals to make RNA? John Sutherland, a chemist at the University of Cambridge in England, published a study in May in the journal Nature Chemistry that showed that a cyanide-based chemistry could make two of the four nucleotides in RNA and many amino acids.

That still left questions, though. There wasn’t a good mechanism for putting nucleotides together to make RNA. Nor did there seem to be a natural way for amino acids to string together and form proteins. Today, amino acids are linked into proteins using energy from adenosine triphosphate (ATP), in a reaction driven by enzymes called aminoacyl tRNA synthetases. But there’s no reason to assume there were any such chemicals around billions of years ago.

    Also, proteins have to be shaped a certain way in order to function properly. That means RNA has to be able to guide their formation — it has to “code” for them, like a computer running a program to do a task.

    Carter noted that it wasn’t until the past decade or two that scientists were able to duplicate the chemistry that makes RNA build proteins in the lab. “Basically, the only way to get RNA was to evolve humans first,” he said. “It doesn’t do it on its own.”

    Perfect sizes

    In one of the new studies, Carter looked at the way a molecule called “transfer RNA,” or tRNA, reacts with different amino acids.

He and his colleagues found that one end of the tRNA could help sort amino acids according to their shape and size, while the other end could link up with amino acids of a certain polarity. In that way, this tRNA molecule could dictate how amino acids come together to make proteins, as well as determine the final protein shape. That’s similar to what the ATP-driven synthetase enzymes do today, activating the process that strings together amino acids to form proteins.

    Carter told Live Science that the ability to discriminate according to size and shape makes a kind of “code” for proteins called peptides, which help to preserve the helix shape of RNA.

    “It’s an intermediate step in the development of genetic coding,” he said.

    In the other study, Wolfenden and colleagues tested the way proteins fold in response to temperature, since life somehow arose from a proverbial boiling pot of chemicals on early Earth. They looked at life’s building blocks, amino acids, and how they distribute in water and oil — a quality called hydrophobicity. They found that the amino acids’ relationships were consistent even at high temperatures — the shape, size and polarity of the amino acids are what mattered when they strung together to form proteins, which have particular structures.

    “What we’re asking here is, ‘Would the rules of folding have been different?'” Wolfenden said. At higher temperatures, some chemical relationships change because there is more thermal energy. But that wasn’t the case here.

    By showing that it’s possible for tRNA to discriminate between molecules, and that the links can work without “help,” Carter thinks he’s found a way for the information storage of chemical structures like tRNA to have arisen — a crucial piece of passing on genetic traits. Combined with the work on amino acids and temperature, it offers insight into how early life might have evolved.

    This work still doesn’t answer the ultimate question of how life began, but it does show a mechanism for the appearance of the genetic codes that pass on inherited traits, which got evolution rolling.

    The two studies are published in the June 1 issue of the journal Proceedings of the National Academy of Sciences.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 9:15 am on July 29, 2015 Permalink | Reply
    Tags: , CRISPR/Cas, DNA,   

From The Conversation: “CRISPR/Cas gene-editing technique holds great promise, but research moratorium makes sense pending further study”

The Conversation

    July 29, 2015
    No Writer Credit


    CRISPR/Cas is a new technology that allows unprecedented control over the DNA code. It’s sparked a revolution in the fields of genetics and cell biology, becoming the scientific equivalent of a household name by raising hopes about new ways to cure diseases including cancer and to unlock the remaining mysteries of our cells.

    The gene editing technique also raises concerns. Could the new tools allow parents to order “designer babies”? Could premature use in patients lead to unforeseen and potentially dangerous consequences? This potential for abuse or misuse led prominent scientists to call for a halt on some types of new research until ethical issues can be discussed – a voluntary ban that was swiftly ignored in some quarters.

    The moratorium is a positive step toward preserving the public’s trust and safety, while the promising new technology can be further studied.

    Editing DNA to cure disease

While most human diseases are caused, at least partially, by mutations in our DNA, current therapies treat the symptoms of these mutations but not the genetic root cause. For example, cystic fibrosis, which causes the lungs to fill with excess mucus, is caused by a single DNA mutation. However, cystic fibrosis treatments focus on the symptoms – working to reduce mucus in the lungs and fight off infections – rather than correcting the mutation itself. That’s because making precise changes to the three-billion-letter DNA code remains a challenge even in a Petri dish, and it is unprecedented in living patients. (The only current example of gene therapy, called Glybera, does not involve modifying the patient’s DNA, and has been approved for limited use in Europe to treat patients with lipoprotein lipase deficiency, a rare metabolic disorder.)

    That all changed in 2012, when several research groups demonstrated that a DNA-cutting technology called CRISPR/Cas could operate on human DNA. Compared to previous, inefficient methods for editing DNA, CRISPR/Cas offers a shortcut. It acts like a pair of DNA scissors that cut where prompted by a special strand of RNA (a close chemical relative of DNA). Snipping DNA turns on the cell’s DNA repair process, which can be hijacked to either disable a gene – say, one that allows tumor cells to grow uncontrollably – or to fix a broken gene, such as the mutation that causes cystic fibrosis. The advantages of the Cas9 system over its predecessor genome-editing technologies – its high specificity and the ease of navigating to a specific DNA sequence with the “guide RNA” – have contributed to its rapid adoption in the scientific community.
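To make the guide-RNA targeting rule concrete, here is a minimal Python sketch of the search step, with an invented toy genome and guide sequence (Cas9 also requires a short “NGG” motif, called the PAM, immediately downstream of the 20-nucleotide target):

```python
import re

def find_cas9_sites(genome: str, guide: str) -> list:
    """Toy model of Cas9 targeting: report positions where the 20-nt
    protospacer matching the guide RNA sits immediately 5' of an NGG
    PAM. (Real Cas9 cuts about 3 bp upstream of the PAM and tolerates
    some mismatches, which is the source of off-target cuts.)"""
    pattern = guide + "[ACGT]GG"          # protospacer followed by NGG
    return [m.start() for m in re.finditer(pattern, genome)]

genome = "TTGACCTGAAGGCATTCGATCGTGGTAC"   # invented toy "genome"
guide = "GACCTGAAGGCATTCGATCG"            # invented 20-nt guide sequence
print(find_cas9_sites(genome, guide))     # [2] -> one on-target site
```

The mismatch tolerance noted in the comment is exactly why off-target cutting, discussed below, remains a safety concern.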

    The barrier to fixing the DNA of diseased cells appears to have evaporated.

    Playing with fire

    With the advance of this technique, the obstacles to altering genes in embryos are falling away, opening the door to so-called “designer babies” with altered appearance or intelligence. Ethicists have long feared the consequences of allowing parents to choose the traits of their babies. Further, there is a wide gap between our understanding of disease and the genes that might cause them. Even if we were capable of performing flawless genetic surgery, we don’t yet know how specific changes to the DNA will manifest in a living human. Finally, the editing of germ line cells such as embryos could permanently introduce altered DNA into the gene pool to be inherited by descendants.

    And making cuts in one’s DNA is not without risks. Cas9 – the scissor protein – is known to cleave DNA at unintended or “off-target” sites in the genome. Were Cas9 to inappropriately chop an important gene and inactivate it, the therapy could cause cancer instead of curing it.

    Take it slow

All the concerns around Cas9 triggered a very unusual event: a call from prominent scientists to halt some of this research. In March of 2015, a group of researchers and lawyers called for a voluntary pause on further use of CRISPR technology in germ line cells until ethical guidelines could be decided.

    Writing in the journal Science, the group – including two Nobel laureates and the inventors of the CRISPR technology – noted that we don’t yet understand enough about the link between our health and our DNA sequence. Even if a perfectly accurate DNA-editing system existed – and Cas9 surely doesn’t yet qualify – it would still be premature to treat patients with genetic surgery. The authors disavowed genome editing only in specific cell types such as embryos, while encouraging the basic research that would put future therapeutic editing on a firmer foundation of evidence.

The basic research isn’t ready for deployment in human embryos yet. Petri dishes image via http://www.shutterstock.com

    Pushing ahead

    Despite this call for CRISPR/Cas research to be halted, a Chinese research group reported on their attempts at editing human embryos only two months later. Described in the journal Protein & Cell, the authors treated nonviable embryos to fix a gene mutation that causes a blood disease called β-thalassemia.

The study results proved the concerns of the Science group to be well-founded. The treatment killed nearly one in five embryos, and only half of the surviving cells had their DNA modified. Of the cells that were modified, only a fraction had the disease mutation repaired. The study also revealed off-target DNA cutting and incomplete editing among all the cells of a single embryo. Obviously these kinds of errors are problematic in embryos meant to mature into fully grown human beings.

    George Daley, a Harvard biologist and member of the group that called for the moratorium, concluded that “their study should be a stern warning to any practitioner who thinks the technology is ready for testing to eradicate disease genes.”

    In the enthusiasm and hype surrounding Cas9, it is easy to forget that the technology has been in wide use for barely three years.

    Role of a moratorium

    Despite the publication of the Protein & Cell study – whose experiments likely took place at least months earlier – the Science plea for a moratorium can already be considered a success. The request from such a respected group has brought visibility to the topic and put pressure on universities, regulatory boards and the editors of scientific journals to discourage such research. (As evidence of this pressure, the Chinese authors were rejected from at least two top science journals before getting their paper accepted.) And the response to the voluntary ban has thus far not included accusations of “stifling academic freedom,” possibly due to the scientific credibility of the organizers.

While rare, calls for a moratorium on research for ethical reasons can be traced to an earlier controversy over DNA technology. In 1975, a group of scientists convened what came to be known as the Asilomar Conference, which called for caution with an emerging technology called recombinant DNA until its safety could be evaluated and ethical guidelines could be published. The similarity between the two approaches is no coincidence: several authors of the Science essay were also members of the Asilomar team.

    The Asilomar guidelines are now widely viewed as having been a proportionate and responsible measure, placing the right emphasis on safety and ethics without hampering research progress. It turns out recombinant DNA technology was much less dangerous than originally feared; existing evidence already shows that we might not be so lucky with Cas9. Another important legacy of the Asilomar conference was the promotion of an open discussion involving experts as well as the general public. By heeding the lessons of caution and public engagement, hopefully the saga of CRISPR/Cas will unfold in a similarly responsible – yet exciting – way.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 2:51 pm on July 20, 2015 Permalink | Reply
    Tags: , , DNA,   

    From Caltech: “Freezing a Bullet (+)” 

Caltech

    July 20, 2015
    Kimm Fesenmaier

Crystal structure of the assembly chaperone of ribosomal protein L4 (Acl4) that picks up a newly synthesized ribosomal protein when it emerges from the ribosome in the cytoplasm, protects it from the degradation machinery, and delivers it to the assembly site of new ribosomes in the nucleus. Credit: Ferdinand Huber/Caltech

X-Ray Vision, an article in our Spring 2015 issue, examined the central role Caltech has played in developing a powerful technique for revealing the molecular machinery of life. In May, chemist André Hoelz, who was featured in the article, published a new paper describing how he used the technique to reveal the way protein-synthesizing cellular machines are built.

Ribosomes are vital to the function of all living cells. Using the genetic information from RNA, these large molecular complexes build proteins by linking amino acids together in a specific order. Scientists have known for more than half a century that these cellular machines are themselves made up of about 80 different proteins, called ribosomal proteins, along with several RNA molecules, and that these components are added in a particular sequence to construct new ribosomes. But no one has known the mechanism that controls that process.

    Now researchers from Caltech and Heidelberg University have combined their expertise to track a ribosomal protein in yeast all the way from its synthesis in the cytoplasm, the cellular compartment surrounding the nucleus of a cell, to its incorporation into a developing ribosome within the nucleus. In so doing, they have identified a new chaperone protein, known as Acl4, that ushers a specific ribosomal protein through the construction process and a new regulatory mechanism that likely occurs in all eukaryotic cells.

    The results, described in a paper that appears online in the journal Molecular Cell, also suggest an approach for making new antifungal agents.

    The work was completed in the labs of André Hoelz, assistant professor of chemistry at Caltech, and Ed Hurt, director of the Heidelberg University Biochemistry Center (BZH).

    “We now understand how this chaperone, Acl4, works with its ribosomal protein with great precision,” says Hoelz. “Seeing that is kind of like being able to freeze a bullet whizzing through the air and turn it around and analyze it in all dimensions to see exactly what it looks like.”

That is because the entire ribosome assembly process—including the synthesis of new ribosomal proteins by ribosomes in the cytoplasm, the transfer of those proteins into the nucleus, their incorporation into a developing ribosome, and the completed ribosome’s export back out of the nucleus into the cytoplasm—happens on a timescale of tens of minutes. Production is so rapid that mammalian cells turn out more than a million ribosomes per day to allow for turnover and cell division. Being able to follow a single ribosomal protein through that process is therefore no simple task.

Hurt and his team in Germany have developed a new technique to capture the state of a ribosomal protein shortly after it is synthesized. When they “stopped” this particular flying bullet, an important ribosomal protein known as L4, they found that it was bound to Acl4.

Hoelz’s group at Caltech then used X-ray crystallography to obtain an atomic snapshot of Acl4 and further biochemical interaction studies to establish how Acl4 recognizes and protects L4. They found that Acl4 attaches to L4 (having a high affinity for only that ribosomal protein) as it emerges from the ribosome that produced it, akin to a hand gripping a baseball. The chaperone thereby ensures that the ribosomal protein is protected from machinery in the cell that would otherwise destroy it, and ushers the L4 molecule through the sole gateway between the nucleus and cytoplasm, called the nuclear pore complex, to the site in the nucleus where new ribosomes are constructed.

    “The ribosomal protein together with its chaperone basically travel through the nucleus and screen their surroundings until they find an assembling ribosome that is at exactly the right stage for the ribosomal protein to be incorporated,” explains Ferdinand Huber, a graduate student in Hoelz’s group and one of the first authors on the paper. “Once found, the chaperone lets the ribosomal protein go and gets recycled to go pick up another protein.”

    The researchers say that Acl4 is just one example from a whole family of chaperone proteins that likely work in this same fashion.

    Hoelz adds that if this process does not work properly, ribosomes and proteins cannot be made. Some diseases (including aggressive leukemia subtypes) are associated with malfunctions in this process.

    “It is likely that human cells also contain a dedicated assembly chaperone for L4. However, we are certain that it has a distinct atomic structure, which might allow us to develop new antifungal agents,” Hoelz says. “By preventing the chaperone from interacting with its partner, you could keep the cell from making new ribosomes. You could potentially weaken the organism to the point where the immune system could then clear the infection. This is a completely new approach.”

    Co-first authors on the paper, Coordinated Ribosomal L4 Protein Assembly into the Pre-Ribosome Is Regulated by Its Eukaryote-Specific Extension, are Huber and Philipp Stelter of Heidelberg University. Additional authors include Ruth Kunze and Dirk Flemming also from Heidelberg University. The work was supported by the Boehringer Ingelheim Fonds, the V Foundation for Cancer Research, the Edward Mallinckrodt, Jr. Foundation, the Sidney Kimmel Foundation for Cancer Research, and the German Research Foundation (DFG).

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

     
  • richardmitnick 1:51 pm on July 20, 2015 Permalink | Reply
    Tags: , DNA, , Reed-Solomon codes   

    From NOVA: “The Codes of Modern Life” 

PBS NOVA

    15 Jul 2015
    Alex Riley

    On August 25th 2012, the spacecraft Voyager 1 exited our Solar System and entered interstellar space, set for eternal solitude among the stars. Its twin, Voyager 2, isn’t far behind. Since their launch from Cape Canaveral in Florida, in 1977, their detailed reconnaissance of the Jovian planets—Jupiter, Saturn, Uranus, Neptune—and over 60 moons extended the human senses beyond Galileo’s wildest dreams.

    After passing Neptune, the late astrophysicist Carl Sagan proposed that Voyager 1 should turn around and capture the first portrait of our planetary family. As he wrote in his 1994 book, Pale Blue Dot, “It had been well understood by the scientists and philosophers of classical antiquity that the Earth was a mere point in a vast encompassing Cosmos, but no one had ever seen it as such. Here was our first chance (and perhaps our last for decades to come).”

Earth, as seen from Voyager 1 more than 4 billion miles away.

Indeed, our planet can be seen as a fraction of a pixel against a backdrop of darkness that’s broken only by a few scattered beams of sunlight reflected off the probe’s camera. The precious series of images was radioed back to Earth at the speed of light, taking five and a half hours to reach the huge conical receivers in California, Spain, and Australia more than 4 billion miles away. Over such astronomical distances, one pixel out of 640,000 can easily be replaced by another or lost entirely in transmission. It wasn’t, in part due to a single mathematical breakthrough published decades before.

In 1960, Irving Reed and Gustave Solomon published a paper in the Journal of the Society for Industrial and Applied Mathematics entitled Polynomial Codes Over Certain Finite Fields, a string of words that neatly conveys the arcane nature of their work. “Almost all of Reed and Solomon’s original paper doesn’t mean anything to most people,” says Robert McEliece, a mathematician and information theorist at the California Institute of Technology. But within those five pages was the basic recipe for the most efficacious error-correction codes yet created. By adding just the right levels of redundancy to data files, this family of algorithms can correct for errors that often occur during transmission or storage without taking up too much precious space.

    Today, Reed-Solomon codes go largely unnoticed, but they are everywhere, reducing errors in everything from mobile phone calls to QR codes, computer hard drives, and data beamed from the New Horizons spacecraft as it zoomed by Pluto. As demand for digital bandwidth and storage has soared, Reed-Solomon codes have followed. Yet curiously, they’ve been absent in one of the most compact, longest-lasting, and most promising of storage mediums—DNA.

From Voyager to DNA

The structure of the DNA double helix. The atoms in the structure are colour-coded by element, and the detailed structure of two base pairs is shown in the bottom right.

Several labs have investigated nature’s storage device as a way to archive our ever-increasing mountain of digital information, encoding small amounts of data in DNA and, more importantly, reading it back. But those trials lacked sophisticated error correction, which DNA data systems will need if they are to become our storage medium of choice. Fortunately, a team of scientists, led by Robert Grass, a lecturer at ETH Zurich, rectified that omission earlier this year when they stored a duo of files in DNA using Reed-Solomon codes. It’s a mash-up that could help us reliably store our fragile digital data for generations to come.

    Life’s Storage

DNA is best known as the information storage device for life on Earth. Only four molecules—adenine, cytosine, thymine, and guanine, commonly referred to by their first letters—make up the rungs on the famous double helix of DNA. These sequences are the basis of every animal, plant, fungus, archaeon, and bacterium that has ever lived in the roughly 4 billion years that life has existed on Earth.

“It’s not a form of information that’s likely to be outdated very quickly,” says Sriram Kosuri, a geneticist at the University of California, Los Angeles. “There’s always going to be a reason for studying DNA as long as we’re still around.”

    It is also incredibly compact. Since it folds in three dimensions, we could store all of the world’s current data—everyone’s photos, every Facebook status update, all of Wikipedia, everything—using less than an ounce of DNA. And, with its propensity to replicate given the right conditions, millions of copies of DNA can be made in the lab in just a few hours. Such favorable traits make DNA an ideal candidate for storing lots of information, for a long time, in a small space.

A Soviet scientist named Mikhail Neiman recognized DNA’s potential back in 1964, when he first proposed the idea of storing data in natural biopolymers. In 1988, his theory was finally put into practice when the first messages were stored in DNA. Those strings were relatively simple. Only in recent years have laboratories around the world started to convert large amounts of the binary code that’s spoken by computers into genetic code.

In 2012, by converting the ones of binary code into As or Cs, and the zeros into Ts or Gs, Kosuri, along with George Church and Yuan Gao, stored an entire book called Regenesis, totaling 643 kilobytes, in genetic code. A year later, Ewan Birney, Nick Goldman, and their colleagues from the European Bioinformatics Institute added a slightly more sophisticated way of translating binary to nucleic acid that reduced the number of repeated bases.
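A minimal Python sketch of that bit-to-base idea, with the repeat-avoidance trick that motivated the later scheme (this illustrates the principle only, not either team’s exact encoder):

```python
# Sketch of the bit-to-base mapping described above: 1 -> A or C,
# 0 -> T or G. Having two choices per bit lets the encoder avoid
# writing the same base twice in a row, the kind of repeat that
# trips up DNA synthesis and sequencing.
ONES, ZEROS = "AC", "TG"

def encode(bits: str) -> str:
    out = []
    for b in bits:
        options = ONES if b == "1" else ZEROS
        # pick whichever allowed base differs from the previous base
        base = options[0] if not out or out[-1] != options[0] else options[1]
        out.append(base)
    return "".join(out)

def decode(dna: str) -> str:
    return "".join("1" if base in ONES else "0" for base in dna)

bits = "110100111000"
dna = encode(bits)
print(dna, decode(dna) == bits)   # ACTATGACATGT True
```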

Such repeats are a common problem when writing and reading DNA—synthesizing and sequencing, as they’re called. Although Birney, Goldman, and team stored a similar amount of information as Kosuri, Church, and Gao—739 kilobytes—it was spread over a range of media types: 154 Shakespearean sonnets, Watson and Crick’s famous 1953 paper that described DNA’s molecular structure, an audio file of Martin Luther King Jr.’s “I Have a Dream” speech, and a photograph of the building they were working in near Cambridge, UK.

    The European team also integrated a deliberate error-correction system: distributing their data over more than 153,000 short, overlapping sequences of DNA. Like shouting a drink order multiple times in a noisy bar, the regions of overlap increased the likelihood that the message would be understood at the other end. Indeed, after a Californian company called Agilent Technologies manufactured the team’s DNA sequences, packaged them, and sent them to the U.K. via Germany, the team was able to remove any errors that had occurred “by hand” using their overlapping regions. In the end, they recovered their files with complete fidelity. The text had no spelling mistakes, the photo was high-res, and the speech was clear and eloquent.

    “But that’s not what we do,” says Grass, the lecturer at the Swiss Federal Institute of Technology. After seeing Church and colleagues’ publication in the news in 2012, he wanted to compare how competent different storage media were over long periods of time.

“The original idea was to do a set of tests with various storage formats,” he says, “and torture them with various conditions.” Hot and cold, wet and dry, at high pressure, and in an oxygen-rich environment, for example. He contacted Reinhard Heckel, a friend he had met at Belvoir Rowing Club in Zurich, for advice. Heckel, who was a PhD student in communication theory at the time, voiced concern that such an experiment would be unfair, since DNA didn’t have the same error-correction systems as other storage devices such as CDs and computer hard drives.

    To make it a fair fight, they implemented Reed-Solomon codes into their DNA storage method. “We quickly found out that we could ‘beat’ traditional storage formats in terms of long term reliability by far,” Grass says. When stored on most conventional storage devices—USB pens, DVDs, or magnetic tapes—data starts to degrade after 50 years or so. But, early on in their work, Grass and his colleagues estimated that DNA could hold data error-free for millennia, thanks to the inherent stability of its double helix and that breakthrough in mathematical theory from the mid-20th century.

    Out from Obscurity

When storing and sending information from one place to another, you almost always run the risk of introducing errors. As in the “telephone” game, key parts may be modified or lost entirely. There is a rich history of reducing such errors, and few things have propelled the field more than the development of information theory. In 1948, Claude Shannon, an ardent blackjack player and mathematician, showed that by breaking files or transmissions into many small components—yes-or-no questions—and pairing them with error-correcting codes, the risk of error can be made very low. Using the 1s and 0s of binary, he hushed the noise of telephone switching circuits.
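The simplest possible illustration of Shannon’s insight is a repetition code: send each bit several times and take a majority vote at the other end. A quick sketch (repetition is far cruder than the codes discussed below, but it shows how redundancy buys reliability):

```python
import random

# Send each bit three times over a channel that flips bits with
# probability p, then decode by majority vote.
def send(bit: int, p_flip: float) -> int:
    """Transmit one bit over a noisy channel."""
    return bit ^ (random.random() < p_flip)

def send_repetition(bit: int, p_flip: float, n: int = 3) -> int:
    """Transmit the bit n times and decode by majority vote."""
    votes = sum(send(bit, p_flip) for _ in range(n))
    return int(votes > n // 2)

trials, p = 100_000, 0.05
raw = sum(send(0, p) for _ in range(trials)) / trials
coded = sum(send_repetition(0, p) for _ in range(trials)) / trials
print(f"raw error rate ~{raw:.3f}; with 3x repetition ~{coded:.4f}")
```

With a 5% flip rate, majority voting cuts the error rate to under 1% at the cost of tripling the transmission; Reed-Solomon codes achieve far better trade-offs.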

Using this binary foundation, Reed and Solomon attempted to shush these whispers even further. But their error-correction codes weren’t put into use straight away. They couldn’t be, in fact—the efficient algorithms needed to decode them weren’t invented until 1968. Plus, there wasn’t anything to use them on; the technology that could utilize them hadn’t been invented. “They are very clever theoretical objects, but no one ever imagined they were going to be practical until the digital electronics became so sophisticated,” says McEliece, the Caltech information theorist.

Once technology did catch up, one of the codes’ first uses was in transmitting data back from Voyager 1 and 2. Since the redundancy provided by these codes (together with another type, known as convolutional codes) cleaned up mistakes—the loss or alteration of pixels, for example—the space probes didn’t have to send the same image again and again. That meant more high-resolution images could be radioed back to Earth as Voyager passed the outer planets of our solar system.

Reed-Solomon codes correct for common transmission errors, including missing pixels (white), false signals (black), and paused transmissions (the white stripe).

    Reed-Solomon codes weren’t widely used until October 1982, when compact discs were commercialized by the music industry. To manufacture huge quantities en masse, factories used a master version of the CD to stamp out new copies, but subtle imperfections in the process along with inevitable scratches when the discs were handled all but guaranteed errors would creep into the data. But, by adding redundancy to accommodate for errors and minor scratches, Reed-Solomon codes made sure that every disc, when played, was as flawless as the next. “This and the hard disk was the absolute distribution of Reed-Solomon codes all over the world,” says Martin Bossert, director of the Institute of Telecommunications and Applied Information Theory at the University of Ulm, Germany.

At a basic level, here’s how Reed-Solomon codes work. Suppose you wanted to send a simple piece of information like the equation for a parabola (a symmetrical curved line). Such an equation has three defining coefficients: 4 + 5x + 7x², say. By adding redundancy in the form of two extra numbers computed from the same polynomial, a total of five numbers is sent in the transmission. As a result, any transposition or loss of information can be corrected for by feeding the surviving numbers through the Reed-Solomon algorithm. “You still have an overrepresentation of your system,” Grass says. “It doesn’t matter which one you lose, you can still get back to the original information.”
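Here is a toy version of that parabola example in Python. Real Reed-Solomon codes do this arithmetic over finite fields rather than the rationals, but the erasure-healing idea is the same: any three of the five transmitted values pin the parabola back down.

```python
from fractions import Fraction

# Toy erasure recovery in the spirit of the parabola example above.
def p(x):
    """The 'message': the parabola 4 + 5x + 7x^2."""
    return 4 + 5 * x + 7 * x**2

sent = {x: p(x) for x in range(5)}   # five values: 3 needed + 2 redundant

# Suppose the values at x = 1 and x = 3 are lost in transmission:
received = {x: y for x, y in sent.items() if x not in (1, 3)}

def lagrange_eval(points, x):
    """Evaluate the unique parabola through three known points at x."""
    total = Fraction(0)
    for xi, yi in points.items():
        term = Fraction(yi)
        for xj in points:
            if xj != xi:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

recovered = {x: lagrange_eval(received, x) for x in (1, 3)}
print(recovered == {1: p(1), 3: p(3)})   # True: the erasures are healed
```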

Using similar formulae, Grass and his colleagues converted two files—the Swiss Federal Charter from 1291 and an English translation of The Method of Mechanical Theorems by Archimedes—into DNA. The redundant information, in the form of extra bases placed over 4,991 short sequences according to the Reed-Solomon algorithm, provided the basis for error-correction when the DNA was read and the data retrieved later on.

    That is, instead of wastefully overlapping large chunks of sequences as the EBI researchers did, “you just add a small amount of redundancy and still you can correct errors at any position, which seemed very strange at the beginning because it’s somehow illogical,” Grass says. As well as using fewer base pairs per kilobyte of data, this tack has the added bonus of automated, algorithmic error-correction.

    Indeed, with a low error-rate—less than three base changes per 117-base sequence—the overrepresentation in their sequences meant that the Reed-Solomon codes could still get back to the original information.

The same basic principle is used in written language. In fact, you are doing something very similar right now. Even when text contains spelling errors, or when whole words are missing, you can still read the message and reconstruct the sentence accordingly. The reason? Language is inherently redundant. Not all combinations of letters—including spaces as a 27th option—give a meaningful word, sentence, or paragraph.

    On top of this “inner” redundancy, Grass and colleagues installed another genetic safety net. On the ends of the original sequences, they added large chunks of redundancy. “So if we lose whole sequences or if one is completely screwed and it can’t be corrected with the inner [redundancy], we still have the outer codes,” Grass says. It’s similar to how CDs safeguard against scratches.

    It may sound like overkill, but so much redundancy is warranted, at least for now. There simply isn’t enough information on the rate and types of errors that occur during DNA synthesis and sequencing. “We have an inkling of the error-rate, but all of this is very crude at this point,” Kosuri says. “We just don’t have a good feeling for that, so everyone just overdoes the corrections.” Further, given that the field of genomics is moving so fast, with new ways to write and read DNA, errors might differ depending on what technologies are being used. The same was true for other storage devices while still in their infancy. After further testing, the error-correction codes could be more attuned to the expected error rates and the redundancy reduced, paving the way for higher bandwidth and greater storage capacity.

    Into the Future

    Compared with the previous studies, storing two files totaling 83 kilobytes in DNA isn’t groundbreaking. The image below is roughly five times larger. But Grass and his colleagues really wanted to know just how much better DNA was at long-term storage. With their Reed-Solomon coding in place, Grass and colleagues mimicked nature to find out.

“The idea was always to make an artificial fossil, chemically,” Grass says. They tried impregnating filter paper with their DNA sequences, they used a biopolymer to simulate the dry conditions within spores and seeds of plants, and they encapsulated the sequences in microscopic beads of glass. Compared with DNA that hasn’t been modified chemically, all three trials led to markedly lower rates of DNA decomposition.

Grass and colleagues’ glass DNA storage beads.

    The glass beads were the best option, however. Water, when unimpeded, destroys DNA. If there are too many breaks and errors in the sequences, no error-correction system can help. The beads, however, protected the DNA from the damaging effects of humidity.

    With their layers of error-correction and protective coats in place, Grass and his colleagues then exposed the glass beads to three heat treatments—140˚, 149˚, and 158˚ F—for up to a month “to simulate what would happen if you store it for a long time,” he says. Indeed, after unwrapping their DNA from the beads using a fluoride solution and then re-reading the sequences, they found that slight errors had been introduced similar to those which appear over long timescales in nature. But, at such low levels, the Reed-Solomon codes healed the wounds.

Using the rate at which errors arose, the researchers were able to extrapolate how long the data could remain intact at lower temperatures. If kept in the clement European air outside their laboratory in Zurich, for example, they estimate a ballpark figure of around 2,000 years. But place these glass beads in the dark at –0.4˚ F, the conditions of the Svalbard Global Seed Vault on the Norwegian island of Spitsbergen, and you could save your photos, music, and eBooks for two million years. That’s roughly ten times as long as our species has been on Earth.
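Those extrapolations rest on standard accelerated-aging kinetics. Here is a hedged back-of-the-envelope sketch assuming a simple Arrhenius model; the activation energy below is an illustrative value chosen to reproduce the ballpark figures above, not a parameter taken from the paper:

```python
import math

R = 8.314    # gas constant, J/(mol*K)
Ea = 140e3   # assumed activation energy for DNA decay, J/mol (illustrative)

def acceleration(t_hot_c: float, t_cold_c: float) -> float:
    """How many times faster decay runs at t_hot than at t_cold (Celsius),
    under the Arrhenius relation k = A * exp(-Ea / (R * T))."""
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return math.exp(Ea / R * (1 / t_cold - 1 / t_hot))

# One month of decay at 158 F (70 C) stands in for this many years
# at Svalbard-like -0.4 F (-18 C):
print(acceleration(70, -18) / 12)   # on the order of a million years
```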

Using heat treatments to mimic the effects of age isn’t foolproof, Grass admits; a month at 158˚ F certainly isn’t the same as millennia in the freezer. But his conclusions aren’t unsupported. In recent years, palaeogenetic research into long-dead animals has revealed that DNA can persist long after death. And when conditions are just right—cold, dark, and dry—these molecular strands can endure long after the extinction of an entire species. In 2012, for instance, the genome of an extinct human relative that died around 80,000 years ago was reconstructed from a finger bone. A year later, that record was shattered when scientists sequenced the genome of an extinct horse that died in Canadian permafrost around 700,000 years ago. “We already have long-term data,” Grass says. “Real long-term data.”

But despite its inherent advantages, there are still some major hurdles to surmount before DNA becomes a viable storage option. For one, synthesis and sequencing are still too costly. “We’re still on the order of a million-fold too expensive on both fronts,” Kosuri says. Plus, it’s still slow to read and write, and it’s neither rewritable nor random-access. Today’s DNA data storage techniques are similar to magnetic tape—the whole memory has to be read to retrieve a piece of information.

    Such caveats limit DNA to archival data storage, at least for the time being. “The question is if it’s going to drop fast enough and low enough to really compete in terms of dollars per gigabyte,” Grass says. It’s likely that DNA will continue to be of interest to medical and biological laboratories, which will help to speed up synthesis and sequencing and drive down prices.

    Whatever new technologies are on the horizon, history has taught us that Reed-Solomon-based coding will probably still be there, behind the scenes, safeguarding our data against errors. Like the genes within an organism, the codes have been passed down to subsequent generations, slightly adjusted and optimized for their new environment. They have a proven track record that starts on Earth and extends ever further into the Milky Way. “There cannot be a code that can correct more errors than Reed-Solomon codes…It’s mathematical proof,” Bossert says. “It’s beautiful.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 4:24 pm on July 19, 2015 Permalink | Reply
    Tags: , , DNA, ,   

    From WIRED: “Chemists Invent New Letters for Nature’s Genetic Alphabet” 

Wired

    07.19.15
    Emily Singer

Olena Shmahalo/Quanta Magazine

    DNA stores our genetic code in an elegant double helix.

The structure of the DNA double helix. The atoms in the structure are colour-coded by element, and the detailed structure of two base pairs is shown in the bottom right.

    But some argue that this elegance is overrated. “DNA as a molecule has many things wrong with it,” said Steven Benner, an organic chemist at the Foundation for Applied Molecular Evolution in Florida.

Nearly 30 years ago, Benner sketched out better versions of both DNA and its chemical cousin RNA, adding new letters and other tweaks that would expand their repertoire of chemical feats.

A hairpin loop from a pre-mRNA. Highlighted are the nucleobases (green) and the ribose-phosphate backbone (blue). Note that this is a single strand of RNA that folds back upon itself.

    He wondered why these improvements haven’t occurred in living creatures. Nature has written the entire language of life using just four chemical letters: G, C, A and T. Did our genetic code settle on these four nucleotides for a reason? Or was this system one of many possibilities, selected by simple chance? Perhaps expanding the code could make it better.

    Benner’s early attempts at synthesizing new chemical letters failed. But with each false start, his team learned more about what makes a good nucleotide and gained a better understanding of the precise molecular details that make DNA and RNA work. The researchers’ efforts progressed slowly, as they had to design new tools to manipulate the extended alphabet they were building. “We have had to re-create, for our artificially designed DNA, all of the molecular biology that evolution took 4 billion years to create for natural DNA,” Benner said.

    Now, after decades of work, Benner’s team has synthesized artificially enhanced DNA that functions much like ordinary DNA, if not better. In two papers published in the Journal of the American Chemical Society last month, the researchers have shown that two synthetic nucleotides called P and Z fit seamlessly into DNA’s helical structure, maintaining the natural shape of DNA. Moreover, DNA sequences incorporating these letters can evolve just like traditional DNA, a first for an expanded genetic alphabet.

    The new nucleotides even outperform their natural counterparts. When challenged to evolve a segment that selectively binds to cancer cells, DNA sequences using P and Z did better than those without.

    “When you compare the four-nucleotide and six-nucleotide alphabet, the six-nucleotide version seems to have won out,” said Andrew Ellington, a biochemist at the University of Texas, Austin, who was not involved in the study.

    Benner has lofty goals for his synthetic molecules. He wants to create an alternative genetic system in which proteins—intricately folded molecules that perform essential biological functions—are unnecessary. Perhaps, Benner proposes, instead of our standard three-component system of DNA, RNA and proteins, life on other planets evolved with just two.

    Better Blueprints for Life

    The primary job of DNA is to store information. Its sequence of letters contains the blueprints for building proteins. Our current four-letter alphabet encodes 20 amino acids, which are strung together to create millions of different proteins. But a six-letter alphabet could encode as many as 216 possible amino acids and many, many more possible proteins.

Expanding the genetic alphabet dramatically expands the number of possible amino acids and proteins that cells can build, at least in theory. The existing four-letter alphabet produces 20 amino acids (small circle) while a six-letter alphabet could produce 216 possible amino acids. Olena Shmahalo/Quanta Magazine
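The arithmetic behind those numbers is plain combinatorics: amino acids are specified by three-letter codons, so an alphabet of n letters yields n³ possible codons. A quick check:

```python
# Codons are three-letter words drawn from the alphabet, so an
# n-letter alphabet yields n**3 of them.
for n in (2, 4, 6):
    print(f"{n}-letter alphabet -> {n**3} possible codons")
# The natural 4-letter alphabet gives 64 codons, mapped redundantly
# onto 20 amino acids plus stop signals; 6 letters give the 216
# ceiling quoted above.
```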

    Why nature stuck with four letters is one of biology’s fundamental questions. Computers, after all, use a binary system with just two “letters”—0s and 1s. Yet two letters probably aren’t enough to create the array of biological molecules that make up life. “If you have a two-letter code, you limit the number of combinations you get,” said Ramanarayanan Krishnamurthy, a chemist at the Scripps Research Institute in La Jolla, Calif.

    On the other hand, additional letters could make the system more error prone. DNA bases come in pairs—G pairs with C and A pairs with T. It’s this pairing that endows DNA with the ability to pass along genetic information. With a larger alphabet, each letter has a greater chance of pairing with the wrong partner, and new copies of DNA might harbor more mistakes. “If you go past four, it becomes too unwieldy,” Krishnamurthy said.

    But perhaps the advantages of a larger alphabet can outweigh the potential drawbacks. Six-letter DNA could densely pack in genetic information. And perhaps six-letter RNA could take over some of the jobs now handled by proteins, which perform most of the work in the cell.

    Proteins have a much more flexible structure than DNA and RNA and are capable of folding into an array of complex shapes. A properly folded protein can act as a molecular lock, opening a chamber only for the right key. Or it can act as a catalyst, capturing and bringing together different molecules for chemical reactions.

    Adding new letters to RNA could give it some of these abilities. “Six letters can potentially fold into more, different structures than four letters,” Ellington said.

    Back when Benner was sketching out ideas for alternative DNA and RNA, it was this potential that he had in mind. According to the most widely held theory of life’s origins, RNA once performed both the information-storage job of DNA and the catalytic job of proteins. Benner realized that there are many ways to make RNA a better catalyst.

    “With just these little insights, I was able to write down the structures that are in my notebook as alternatives that would make DNA and RNA better,” Benner said. “So the question is: Why did life not make these alternatives? One way to find out was to make them ourselves, in the laboratory, and see how they work.”

Steven Benner’s lab notebook from 1985 outlining plans to synthesize “better” DNA and RNA by adding new chemical letters. Courtesy of Steven Benner

    It’s one thing to design new codes on paper, and quite another to make them work in real biological systems. Other researchers have created their own additions to the genetic code, in one case even incorporating new letters into living bacteria. But these other bases fit together a bit differently from natural ones, stacking on top of each other rather than linking side by side. This can distort the shape of DNA, particularly when a number of these bases cluster together. Benner’s P-Z pair, however, is designed to mimic natural bases.

    One of the new papers by Benner’s team shows that Z and P are yoked together by the same chemical bond that ties A to T and C to G. (This bond is known as Watson-Crick pairing, after the scientists who discovered DNA’s structure.) Millie Georgiadis, a chemist at Indiana University-Purdue University Indianapolis, along with Benner and other collaborators, showed that DNA strands that incorporate Z and P retain their proper helical shape if the new letters are strung together or interspersed with natural letters.

    “This is very impressive work,” said Jack Szostak, a chemist at Harvard University who studies the origin of life, and who was not involved in the study. “Finding a novel base pair that does not grossly disrupt the double-helical structure of DNA has been quite difficult.”

    The team’s second paper demonstrates how well the expanded alphabet works. Researchers started with a random library of DNA strands constructed from the expanded alphabet and then selected the strands that were able to bind to liver cancer cells but not to other cells. Of the 12 successful binders, the best had Zs and Ps in their sequences, while the weakest did not.

    “More functionality in the nucleobases has led to greater functionality in nucleic acids themselves,” Ellington said. In other words, the new additions appear to improve the alphabet, at least under these conditions.

    But additional experiments are needed to determine how broadly that’s true. “I think it will take more work, and more direct comparisons, to be sure that a six-letter version generally results in ‘better’ aptamers [short DNA strands] than four-letter DNA,” Szostak said. For example, it’s unclear whether the six-letter alphabet triumphed because it provided more sequence options or because one of the new letters is simply better at binding, Szostak said.

    Benner wants to expand his genetic alphabet even further, which could enhance its functional repertoire. He’s working on creating a 10- or 12-letter system and plans to move the new alphabet into living cells. Benner’s and others’ synthetic molecules have already proved useful in medical and biotech applications, such as diagnostic tests for HIV and other diseases. Indeed, Benner’s work helped to found the burgeoning field of synthetic biology, which seeks to build new life, in addition to forming useful tools from molecular parts.

    Why Life’s Code Is Limited

    Benner’s work and that of other researchers suggests that a larger alphabet has the capacity to enhance DNA’s function. So why didn’t nature expand its alphabet in the 4 billion years it has had to work on it? It could be because a larger repertoire has potential disadvantages. Some of the structures made possible by a larger alphabet might be of poor quality, with a greater risk of misfolding, Ellington said.

    Nature was also effectively locked into the system at hand when life began. “Once [nature] has made a decision about which molecular structures to place at the core of its molecular biology, it has relatively little opportunity to change those decisions,” Benner said. “By constructing unnatural systems, we are learning not only about the constraints at the time that life first emerged, but also about constraints that prevent life from searching broadly within the imagination of chemistry.”

The genetic code—made up of the four letters A, T, G and C—stores the blueprint for proteins. DNA is first transcribed into RNA and then translated into proteins, which fold into specific shapes. Olena Shmahalo/Quanta Magazine

    Benner aims to make a thorough search of that chemical space, using his discoveries to make new and improved versions of both DNA and RNA. He wants to make DNA better at storing information and RNA better at catalyzing reactions. He hasn’t shown directly that the P-Z base pairs do that. But both bases have the potential to help RNA fold into more complex structures, which in turn could make proteins better catalysts. P has a place to add a “functional group,” a molecular structure that helps folding and is typically found in proteins. And Z has a nitro group, which could aid in molecular binding.

    In modern cells, RNA acts as an intermediary between DNA and proteins. But Benner ultimately hopes to show that the three-biopolymer system—DNA, RNA and proteins—that exists throughout life on Earth isn’t essential. With better-engineered DNA and RNA, he says, perhaps proteins are unnecessary.

    Indeed, the three-biopolymer system may have drawbacks, since information flows only one way, from DNA to RNA to proteins. If a DNA mutation produces a more efficient protein, that mutation will spread slowly, as organisms without it eventually die off.

    What if the more efficient protein could spread some other way, by directly creating new DNA? DNA and RNA can transmit information in both directions. So a helpful RNA mutation could theoretically be transformed into beneficial DNA. Adaptations could thus lead directly to changes in the genetic code.

    Benner predicts that a two-biopolymer system would evolve faster than our own three-biopolymer system. If so, this could have implications for life on distant planets. “If we find life elsewhere,” he said, “it would likely have the two-biopolymer system.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 11:56 am on June 4, 2015 Permalink | Reply
    Tags: , , DNA,   

    From MIT: “DNA breakage underlies both learning, age-related damage” 


    MIT News

    June 4, 2015
    Helen Knight | MIT News correspondent

    Process that allows brains to learn and remember also leads to degeneration with age.

Early-response genes, which are important for synaptic plasticity, are “switched off” under basal conditions by topological constraints. Neuronal activity triggers DNA breaks in a subset of early-response genes, which overrides these topological constraints and “switches on” gene expression. Shown here is the topological constraint to early-response genes represented as an open switch (left) that is tethered by intact DNA. Formation of the break severs the constraint, and promotes the circuit to be closed (right). The “brain bulb” represents the manifestation of neuronal activity. Courtesy of the researchers

    The process that allows our brains to learn and generate new memories also leads to degeneration as we age, according to a new study by researchers at MIT.

    The finding, reported in a paper published today in the journal Cell, could ultimately help researchers develop new approaches to preventing cognitive decline in disorders such as Alzheimer’s disease.

    Each time we learn something new, our brain cells break their DNA, creating damage that the neurons must immediately repair, according to Li-Huei Tsai, the Picower Professor of Neuroscience and director of the Picower Institute for Learning and Memory at MIT.

    This process is essential to learning and memory. “Cells physiologically break their DNA to allow certain important genes to be expressed,” Tsai says. “In the case of neurons, they need to break their DNA to enable the expression of early response genes, which ultimately pave the way for the transcriptional program that supports learning and memory, and many other behaviors.”

    Slower DNA repair

    However, as we age, our cells’ ability to repair this DNA damage weakens, leading to degeneration, Tsai says. “When we are young, our brains create DNA breaks as we learn new things, but our cells are absolutely on top of this and can quickly repair the damage to maintain the functionality of the system,” Tsai says. “But during aging, and particularly with some genetic conditions, the efficiency of the DNA repair system is compromised, leading to the accumulation of damage, and in our view this could be very detrimental.”

    In previous research into Alzheimer’s disease in mice, the researchers found that even in the presymptomatic phase of the disorder, neurons in the hippocampal region of the brain contain a large number of DNA lesions, known as double strand breaks.

    To determine how and why these double strand breaks are generated, and what genes are affected by them, the researchers began to investigate what would happen if they created such damage in neurons. They applied a toxic agent known to induce double strand breaks to the neurons, and then harvested the RNA from the cells for sequencing.

    They discovered that of the 700 genes that showed changes as a result of this damage, the vast majority had reduced expression levels, as expected. Surprisingly, though, 12 genes (ones known to respond rapidly to neuronal stimulation, such as a new sensory experience) showed increased expression levels following the double strand breaks.
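    In outline, the comparison behind these numbers is a differential-expression analysis: count each gene’s transcripts before and after the damage and flag large shifts. The Python sketch below is a minimal illustration of that idea, not the study’s actual pipeline; the gene names, counts and two-fold threshold are all hypothetical.

        import math

        # Hypothetical mean transcript counts: (control, after DNA damage).
        counts = {
            "EarlyGeneA": (50, 210),   # early-response gene, expression rises
            "EarlyGeneB": (40, 180),
            "GeneX":      (300, 90),   # typical gene, expression falls
        }

        for gene, (ctrl, damaged) in counts.items():
            # Pseudocount of 1 avoids log(0) for unexpressed genes.
            log2_fc = math.log2((damaged + 1) / (ctrl + 1))
            call = "up" if log2_fc > 1 else "down" if log2_fc < -1 else "unchanged"
            print(f"{gene}: log2 fold change {log2_fc:+.2f} ({call})")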

    To determine whether these breaks occur naturally during neuronal stimulation, the researchers then treated the neurons with a substance that causes synapses to strengthen in a similar way to exposure to a new experience.

    “Sure enough, we found that the treatment very rapidly increased the expression of those early response genes, but it also caused DNA double strand breaks,” Tsai says.

    The good with the bad

    In further studies the researchers were able to confirm that an enzyme known as topoisomerase IIβ is responsible for the DNA breaks in response to stimulation, according to the paper’s lead author Ram Madabhushi, a postdoc in Tsai’s laboratory.

    “When we knocked down this enzyme, we found that both double strand break formation and the expression of early response genes were reduced,” Madabhushi says.

    Finally, the researchers attempted to determine why the genes need such a drastic mechanism to allow them to be expressed. Using computational analysis, they studied the DNA sequences near these genes and discovered that these regions were enriched with a motif, or sequence pattern, for binding to a protein called CTCF. This “architectural” protein is known to create loops or bends in DNA.
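    The enrichment test itself is conceptually simple: scan the sequence near each gene for the motif and compare the hit density with a background gene set. The Python sketch below illustrates that logic; the toy sequences and the “CCCTC” consensus (CTCF is short for CCCTC-binding factor) stand in for a real position-weight-matrix scan.

        def motif_count(seq, motif="CCCTC"):
            # Count (possibly overlapping) exact occurrences of the motif.
            return sum(1 for i in range(len(seq) - len(motif) + 1)
                       if seq[i:i + len(motif)] == motif)

        # Hypothetical sequences near early-response genes vs. background genes.
        early_response = ["AACCCTCGGCCCTCTT", "GGCCCTCAAT", "TTCCCTCGCCCTC"]
        background     = ["GACCCTCGAT", "GGGTTTAAACCC", "ACGTACGTACGT"]

        def density(seqs):
            return sum(motif_count(s) for s in seqs) / sum(len(s) for s in seqs)

        ratio = density(early_response) / density(background)
        print(f"motif density, gene set vs background: {ratio:.1f}x enriched")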

    In the early-response genes, the bends created by this protein act as a barrier that prevents different elements of DNA from interacting with each other — a crucial step in the genes’ expression.

    The double strand breaks created by the cells collapse this barrier and enable the early response genes to be expressed, Tsai says.

    “Surprisingly then, even though conventional wisdom dictates that DNA lesions are very bad — as this ‘damage’ can be mutagenic and sometimes lead to cancer — it turns out that these breaks are part of the physiological function of the cell,” Tsai says.

    Previous research has shown that the expression of genes involved in learning and memory is reduced as people age. So the researchers now plan to carry out further studies to determine how the DNA repair system is altered with age, and how this compromises the ability of cells to cope with the continued production and repair of double strand breaks.

    They also plan to investigate whether certain chemicals could enhance this DNA repair capacity.

    The paper represents an important conceptual advance in our understanding of gene regulation, according to Bruce Yankner, a professor of genetics and neurology at Harvard Medical School who was not involved in the research.

    “The work elegantly links DNA strand break formation by the enzyme topoisomerase IIβ to the temporal control of transcription, providing the most compelling evidence to date that this is a core transcriptional control mechanism,” he says. “I anticipate that this advance will have broad implications ranging from the basic biology of transcription to pathological mechanisms involved in diseases such as Alzheimer’s disease.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 6:19 pm on May 5, 2015 Permalink | Reply
    Tags: , DNA,   

    From LBL: “Researchers from Berkeley Lab & University of Hawaii at Manoa map the first chemical bonds that eventually give rise to DNA.” 

    Berkeley Logo

    Berkeley Lab

    U Hawaii

    1
    Composite image of an energetic star explosion taken by the Hubble Space Telescope in March of 1997. Credit: NASA

    May 5, 2015
    Kate Greene 510-486-4404

    DNA is synonymous with life, but where did it originate? One way to answer this question is to try to recreate the conditions that formed DNA’s molecular precursors. These precursors are carbon ring structures with embedded nitrogen atoms, key components of nucleobases, which themselves are building blocks of the double helix.

    Now, researchers from the U.S. Department of Energy’s Lawrence Berkeley National Lab (Berkeley Lab) and the University of Hawaii at Manoa have shown for the first time that cosmic hot spots, such as those near stars, could be excellent environments for the creation of these nitrogen-containing molecular rings.

    In a new paper in the Astrophysical Journal, the team describes an experiment in which they recreated the conditions around carbon-rich, dying stars to find formation pathways of these important molecules.

    “This is the first time anyone’s looked at a hot reaction like this,” says Musahid Ahmed, scientist in the Chemical Sciences Division at Berkeley Lab. It’s not easy for carbon atoms to form rings that contain nitrogen, he says. But this new work demonstrates the possibility of a hot gas phase reaction, what Ahmed calls the “cosmic barbeque.”

    For decades, astronomers have pointed their telescopes into space to look for signatures of these nitrogen-containing double carbon rings called quinoline, Ahmed explains. They’ve focused mostly on the space between stars called the interstellar medium. While the stellar environment has been deemed a likely candidate for the formation of carbon ring structures, no one had spent much time looking there for nitrogen-containing carbon rings.

    To recreate the conditions near a star, Ahmed and his long-time collaborator Ralf Kaiser, professor of chemistry at the University of Hawaii, Manoa, and their colleagues, who include Dorian Parker at Hawaii and Oleg Kostko and Tyler Troy of Berkeley Lab, turned to the Advanced Light Source (ALS), a Department of Energy user facility located at Berkeley Lab.

    LBL Advanced Light Source
    ALS

    At the ALS, the researchers used a device called a hot nozzle, previously used to successfully confirm soot formation during combustion. In the present study, the hot nozzle was used to simulate the pressures and temperatures found around carbon-rich stars. Into the hot nozzle, the researchers injected a gas made of a nitrogen-containing single-ringed carbon molecule and two molecules of acetylene, a short carbon-hydrogen compound.

    Then, using synchrotron radiation from the ALS, the team probed the hot gas to see which molecules formed. They found that the 700-Kelvin nozzle transformed the initial gas into one made of the nitrogen-containing ring molecules called quinoline and isoquinoline, considered the next step up in terms of complexity.

    “There’s an energy barrier for this reaction to take place, and you can exceed that barrier near a star or in our experimental setup,” Ahmed says. “This suggests that we can start looking for these molecules around stars now.”
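    Ahmed’s remark follows the standard Arrhenius picture: the fraction of collisions energetic enough to cross a barrier scales as exp(-Ea/RT), so a hot nozzle (or a stellar envelope) can make an otherwise negligible reaction feasible. The short Python sketch below makes that point with an assumed, purely illustrative barrier height; the actual barrier for this reaction is not given in the article.

        import math

        R = 8.314    # gas constant, J/(mol K)
        Ea = 130e3   # assumed barrier height, J/mol (illustrative only)

        # Cold molecular cloud, room temperature, and the 700-K hot nozzle.
        for T in (100.0, 300.0, 700.0):
            print(f"T = {T:5.0f} K -> exp(-Ea/RT) = {math.exp(-Ea / (R * T)):.2e}")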

    These experiments provide compelling evidence that the key molecules quinoline and isoquinoline can be synthesized in these hot environments and then be ejected with the stellar wind to the interstellar medium – the space between stars, says Kaiser.

    “Once ejected in space, in cold molecular clouds, these molecules can then condense on cold interstellar nanoparticles, where they can be processed and functionalized,” Kaiser adds. “These processes might lead to more complex, biorelevant molecules such as nucleobases of crucial importance to DNA and RNA formation.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 2:46 pm on April 28, 2015 Permalink | Reply
    Tags: , DNA,   

    From Rice: “Chromosome-folding theory shows promise” 

    Rice U bloc

    Rice University

    April 28, 2015
    Mike Williams

    Human chromosomes are much bigger and more complex than proteins, but like proteins, they appear to fold and unfold in an orderly process as they carry out their functions in cells.

    Rice University biophysicist Peter Wolynes and postdoctoral fellow Bin Zhang have embarked upon a long project to define that order. They hope to develop a theory that predicts the folding mechanisms and resulting structures of chromosomes in the same general way Wolynes helped revolutionize the view of protein folding through the concept of energy landscapes.

    The first fruit of their quest is a new paper in the Proceedings of the National Academy of Sciences that details a coarse-grained method to “skirt some of the difficulties” that a nucleotide-level analysis of chromosomes would entail.

    Essentially, the researchers are drawing upon frequently observed crosslinking contacts among domains – distinct sequences that form along folding strands of DNA – to apply statistical tools. With these tools, they can build computational models and infer the presence of energy landscapes that predict the dynamics of chromosomes.
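    A simple way to see how contact data can imply an energy landscape is Boltzmann inversion: pairs of domains that touch more often are assigned lower effective energies, E_ij = -kT ln f_ij. The Python sketch below applies that textbook relation to a hypothetical contact map; the paper’s actual statistical model is more sophisticated, so treat this as a conceptual stand-in only.

        import math

        # Hypothetical contact frequencies among four chromosome domains.
        freq = [
            [1.00, 0.40, 0.05, 0.02],
            [0.40, 1.00, 0.30, 0.04],
            [0.05, 0.30, 1.00, 0.50],
            [0.02, 0.04, 0.50, 1.00],
        ]

        kT = 1.0  # work in units of kT
        for i in range(len(freq)):
            for j in range(i + 1, len(freq)):
                e_ij = -kT * math.log(freq[i][j])  # frequent contact -> low energy
                print(f"E({i},{j}) = {e_ij:+.2f} kT")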

    How macromolecules of DNA fold into chromosomes is thought to have a crucial role in biological processes like gene regulation, DNA replication and cell differentiation. The researchers argue that unraveling the dynamics of how they fold and their structural details would add greatly to the understanding of cell biology.

    “It’s inevitable that there’s a state of the chromosome that involves having structure,” Wolynes said. “Since the main theme of our work is gene regulation, it’s something we would naturally be interested in pursuing.”

    But it’s no small task. First, though a chromosome is built from a single molecule of DNA, that molecule is huge, with millions of subunits. That’s much longer than the average protein and probably a lot slower to organize, the researchers said.

    Second, a large “team of molecular players” is involved in helping chromosomes get organized, and only a few of these relevant proteins are known.

    Third, chromosome organization appears to vary from one cell to the next and may depend on the cell’s type and the phase in its lifecycle.

    All those factors led Wolynes and Zhang to conclude that treating chromosomes exactly as they do proteins — that is, figuring out how and when the individual units along the DNA strand attract and repel each other — would be impractical.

    “But the three-dimensional organization of chromosomes is of critical importance and is worthy of study by Rice’s Center for Theoretical Biological Physics,” Wolynes said. He holds out hope that the theory developed in this study will lead to a more detailed view of chromosome conformations and will result in a better understanding of the relationships of the structure, dynamics and function of the genome.

    He said there is already evidence for the idea that actual gene regulatory processes are influenced by the chromosomes’ structures. He noted that work by Rice colleague Erez Lieberman Aiden to develop high-resolution, three-dimensional maps of folded genomes will be an important step toward specifying those structures.

    One result of the new study was the observation that, at least during interphase, the state the Rice team primarily studied, chromosome domains take on the characteristics of liquid crystals. In such a state, the domains remain fluid but become ordered, allowing for locally funneled landscapes that lead to the “ideal” chromosome structures that resemble the speculative versions seen in textbooks.

    Wolynes and Rice colleague José Onuchic, a biophysicist, began developing their protein-folding theory nearly three decades ago. In short, it reveals that proteins, which start as linear chains of amino acids, are programmed by genes to quickly fold into their three-dimensional native states. In doing so, they obey the principle of minimal frustration, in which interactions between individual amino acids guide the protein to its final, stable form.

    Wolynes used the principle to conceptualize folding as a funnel. The top of the funnel represents all of the possible ways a protein can fold. As individual stages of the protein come together, the number of possibilities decreases. The funnel narrows and eventually guides the protein to its functional native state.

    He hopes the route to understanding chromosome folding will take much less time than the decades it took for his team’s protein-folding work to pay off.

    “We’re not the first in this area,” he said. “A lot of people have said the structure of the chromosome is an important problem. I see it as being as big a field as protein folding was – and when you look at it from that point of view, you realize the state of our ignorance is profound. We’re like where protein folding was, on the experimental side, in 1955.

    “The question for this work is whether we can leapfrog over the dark ages of protein folding that led to our energy-landscape theory. I think we can.”

    The Center for Theoretical Biological Physics, funded by the National Science Foundation, and the D.R. Bullard-Welch Chair at Rice supported the research. The researchers utilized the National Science Foundation-supported DAVinCI supercomputer and the BlueBioU supercomputer, both administered by Rice’s Ken Kennedy Institute for Information Technology.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Rice U campus

    In his 1912 inaugural address, Rice University president Edgar Odell Lovett set forth an ambitious vision for a great research university in Houston, Texas; one dedicated to excellence across the range of human endeavor. With this bold beginning in mind, and with Rice’s centennial approaching, it is time to ask again what we aspire to in a dynamic and shrinking world in which education and the production of knowledge will play an even greater role. What shall our vision be for Rice as we prepare for its second century, and how ought we to advance over the next decade?

    This was the fundamental question posed in the Call to Conversation, a document released to the Rice community in summer 2005. The Call to Conversation asked us to reexamine many aspects of our enterprise, from our fundamental mission and aspirations to the manner in which we define and achieve excellence. It identified the pressures of a constantly changing and increasingly competitive landscape; it asked us to assess honestly Rice’s comparative strengths and weaknesses; and it called on us to define strategic priorities for the future, an effort that will be a focus of the next phase of this process.

     
  • richardmitnick 4:04 pm on April 16, 2015 Permalink | Reply
    Tags: , DNA, , ,   

    From Quanta: “How Structure Arose in the Primordial Soup” 

    Quanta Magazine

    Life’s first epoch saw incredible advances — cells, metabolism and DNA, to name a few. Researchers are resurrecting ancient proteins to illuminate the biological dark ages.

    April 16, 2015
    Emily Singer

    1
    Olena Shmahalo/Quanta Magazine

    About 4 billion years ago, molecules began to make copies of themselves, an event that marked the beginning of life on Earth. A few hundred million years later, primitive organisms began to split into the different branches that make up the tree of life. In between those two seminal events, some of the greatest innovations in existence emerged: the cell, the genetic code and an energy system to fuel it all. All three of these are essential to life as we know it, yet scientists know disappointingly little about how any of these remarkable biological innovations came about.

    “It’s very hard to infer even the relative ordering of evolutionary events before the last common ancestor,” said Greg Fournier, a geobiologist at the Massachusetts Institute of Technology. Cells may have appeared before energy metabolism, or perhaps it was the other way around. Without fossils or DNA preserved from organisms living during this period, scientists have had little data to work from.

    Fournier is leading an attempt to reconstruct the history of life in those evolutionary dark ages — the hundreds of millions of years between the time when life first emerged and when it split into what would become the endless tangle of existence.

    He is using genomic data from living organisms to infer the DNA sequence of ancient genes as part of a growing field known as paleogenomics. In research published online in March in the Journal of Molecular Evolution, Fournier showed that the last chemical letter added to the code was a molecule called tryptophan — an amino acid most famous for its presence in turkey dinners. The work supports the idea that the genetic code evolved gradually.

    Using similar methods, he hopes to decipher the temporal order of more of the code — determining when each letter was added to the genetic alphabet — and to date key events in the origins of life, such as the emergence of cells.

    Dark Origins

    Life emerged so long ago that even the rock formations covering the planet at that time have been destroyed — and with them, most chemical and geological clues to early evolution. “There’s a huge chasm between the origins of life and the last common ancestor,” said Eric Gaucher, a biologist at the Georgia Institute of Technology in Atlanta.

    2
    The stretch of time between the origins of life and the last universal common ancestor saw a series of remarkable innovations — the origins of cells, metabolism and the genetic code. But scientists know little about when they happened or the order in which they occurred. Olena Shmahalo/Quanta Magazine

    Scientists do know that at some point in that time span, living creatures began using a genetic code, a blueprint for making complex proteins. It is those proteins that carry out the vital functions of the cell. (The structure of DNA and RNA also enables genetic information to be replicated and passed on from generation to generation, but that’s a separate process from the creation of proteins.) The components of the code and the molecular machinery that assembles them “are some of the oldest and most universal aspects of cells, and biologists are very interested in understanding the mechanisms by which they evolved,” said Paul Higgs, a biophysicist at McMaster University in Hamilton, Ontario.

    How the code came into being presents a chicken-and-egg problem. The key players in the code — DNA, RNA, amino acids, and proteins — are chemically complicated structures that work together to make proteins. But in modern cells, proteins are used to make the components of the code. So how did a highly structured code emerge?

    Most researchers believe that the code began simply with basic proteins made from a limited alphabet of amino acids. It then grew in complexity over time, as these proteins learned to make more sophisticated molecules. Eventually, it developed into a code capable of creating all the diversity we see today. “It’s long been hypothesized that life’s ‘standard alphabet’ of 20 amino acids evolved from a simpler, earlier alphabet, much as the English alphabet has accumulated extra letters over its history,” said Stephen Freeland, a biologist at the University of Maryland, Baltimore County.

    The earliest amino acid letters in the code were likely the simplest in structure, those that can be made from purely chemical means, without the assistance of a protein helper. (For example, the amino acids glycine, alanine and glutamic acid have been found on meteorites, suggesting they can form spontaneously in a variety of environments.) These are like the letters A, E and S — primordial units that served as the foundation for what came later.

    Tryptophan, in comparison, has a complex structure and is comparatively rare in the protein code, like a Y or Z, leading scientists to theorize that it was one of the latest additions to the code.

    That chemical evidence is compelling, but circumstantial. Enter Fournier. He suspected that by extending his work on paleogenomics, he would be able to prove tryptophan’s status as the last letter added to the code.

    The Last Letter

    Scientists have been reconstructing ancient proteins for more than a decade, primarily to figure out how ancient proteins differed from modern ones — what they looked like and how they functioned. But these efforts have focused on the period of evolution after the last universal common ancestor (or LUCA, as researchers call it). Fournier’s work delves further back than any previous effort. To do so, he had to move beyond the standard application of comparative genomics, which analyzes the differences between branches on the tree of life. “By definition, anything pre-LUCA lies beyond the deepest split in the tree,” he said.

    Fournier started with two related proteins, TrpRS (tryptophanyl tRNA synthetase) and TyrRS (tyrosyl tRNA synthetase), which help decode RNA letters into the amino acids tryptophan and tyrosine. TrpRS and TyrRS are more closely related to each other than to any other protein, indicating that they evolved from the same ancestor protein. Sometime before LUCA, that parent protein mutated slightly to produce these two new proteins with distinct functions. Fournier used computational techniques to decipher what that ancestral protein must have looked like.
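    The core of such an inference is ancestral sequence reconstruction: given the residues of modern proteins at each alignment position and a tree relating them, work backward to the likeliest ancestral residue. The Python sketch below shows the simplest version of the idea, Fitch parsimony on a single position of a toy four-sequence tree; real reconstructions, including this one, use probabilistic models over whole alignments.

        def fitch(node):
            # A node is either a leaf (one-letter amino acid code) or a pair
            # of child nodes. Returns the set of most-parsimonious states.
            if isinstance(node, str):
                return {node}
            left, right = (fitch(child) for child in node)
            common = left & right
            return common if common else left | right

        # Residues observed at one position in four modern sequences,
        # related by the toy tree ((seq1, seq2), (seq3, seq4)).
        tree = (("G", "G"), ("A", "G"))
        print(fitch(tree))  # {'G'}: glycine is the inferred ancestral residue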

    4
    Greg Fournier, a geobiologist at MIT, is searching for the origins of the genetic code. Helen Hill

    He found that the ancestral protein contained all of the amino acids except tryptophan, suggesting that its addition was the finishing touch to the genetic code. “It shows convincingly that tryptophan was the last amino acid added, as has been speculated before but not really nailed as has been done here,” said Nigel Goldenfeld, a physicist at the University of Illinois, Urbana-Champaign, who was not involved in the study.

    Fournier now plans to use tryptophan as a marker to date other major pre-LUCA events such as the evolution of metabolism, cells and cell division, and the mechanisms of inheritance. These three processes form a sort of biological triumvirate that laid the foundation for life as we know it today. But we know little about how they came into existence. “If we understand the order of those basic steps, it creates an arrow pointing to possible scenarios for the origins of life,” Fournier said.

    For example, if the ancestral proteins involved in metabolism lack tryptophan, some form of metabolism probably evolved early. If proteins that direct cell division are studded with tryptophan, it suggests those proteins evolved comparatively late.

    Different models for the origins of life make different predictions for which of these three processes came first. Fournier hopes his approach will provide a way to rule out some of these models. However, he cautions that it won’t definitively sort out the timing of these events.

    Fournier plans to use the same techniques to figure out the order in which other amino acids were added to the code. “It really reinforces the idea that evolution of the code itself was a progressive process,” said Paul Schimmel, a professor of molecular and cell biology at the Scripps Research Institute, who was not involved in the study. “It speaks to the refinement and subtlety that nature was using to perfect these proteins and the diversity it needed to form this vast tree of life.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 12:18 pm on April 10, 2015 Permalink | Reply
    Tags: , , DNA, ,   

    From LBL: “New target for anticancer drugs: RNA” 

    UC Berkeley

    April 6, 2015
    Robert Sanders

    1
    DNA is transcribed into mRNA, which is then translated by ribosomes into proteins. UC Berkeley researchers demonstrated that dysregulation of the gene expression program governed by a translation initiation factor called eIF3 leads to increased cell growth and carcinogenesis. That makes this protein an ideal anticancer drug target. (Amy Lee graphic)

    Most of today’s anticancer drugs target the DNA or proteins in tumor cells, but a new discovery by University of California, Berkeley, scientists unveils a whole new set of potential targets: the RNA intermediaries between DNA and proteins.

    This RNA, called messenger RNA, is a blueprint for making proteins. Messenger RNA is created in the nucleus and shuttled to the cell cytoplasm to hook up with protein-making machinery, the ribosome. Most scientists have assumed that these mRNA molecules are, aside from their unique sequences, generic, with few distinguishing characteristics that could serve as an Achilles heel for targeted drugs.

    Jamie Cate, UC Berkeley professor of molecular and cell biology, and postdoctoral fellows Amy Lee and Philip Kranzusch have found, however, that a small subset of these mRNAs – most of them coding for proteins linked in some way to cancer – carry unique tags. These short RNA tags bind to a protein, eIF3 (eukaryotic initiation factor 3), that regulates translation at the ribosome, making the binding site a promising target.

    “We’ve discovered a new way that human cells control cancer gene expression, at the step where the genes are translated into proteins. This research puts on the radar that you could potentially target mRNA where these tags bind with eIF3,” Cate said. “These are brand new targets for trying to come up with small molecules that might disrupt or stabilize these interactions in such a way that we could control how cells grow.”

    These tagged mRNAs – fewer than 500 out of more than 10,000 mRNAs in a cell – seem to be special in that they carry information about specific proteins whose levels in the cell must be delicately balanced so as not to tip processes like cell growth into overdrive, potentially leading to cancer.

    Surprisingly, while some of the tags turn on the translation of mRNA into protein, others turn it off.

    “Our new results indicate that a number of key cancer-causing genes – genes that under normal circumstances keep cells under control – are held in check before the proteins are made,” Cate said. “This new control step, which no one knew about before, could be a great target for new anticancer drugs.

    “On the other hand,” he said, “the tags that turn on translation activate genes that cause cancer when too much of the protein is made. These could also be targeted by new anticancer drugs that block the activation step.”

    The new results will be reported April 6 in an advance online publication of the journal Nature. Cate directs the Center for RNA Systems Biology, a National Institutes of Health-funded group developing new tools to study RNA, a group of molecules increasingly recognized as key regulators of the cell.

    mRNA a messenger between DNA and ribosome

    While our genes reside inside the cell’s nucleus, the machinery for making proteins is in the cytoplasm, and mRNA is the messenger between the two. All the DNA of a gene is transcribed into RNA, after which nonfunctional pieces are snipped out to produce mRNA. The mRNA is then shuttled out of the nucleus to the cytoplasm, where a so-called initiation complex gloms onto mRNA and escorts it to the ribosome. The ribosome reads the sequence of nucleic acids in the mRNA and spits out a sequence of amino acids: a protein.
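    The flow described above can be caricatured in a few lines of code: transcription swaps T for U, and the ribosome then reads the mRNA three letters (one codon) at a time. The Python sketch below uses a five-entry subset of the 64-codon standard genetic code and a made-up toy gene; it ignores splicing and everything else a real cell does.

        CODON_TABLE = {  # small subset of the standard genetic code
            "AUG": "Met", "UGG": "Trp", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
        }

        dna = "ATGTGGTTTGGCTAA"            # toy coding sequence
        mrna = dna.replace("T", "U")       # transcription of the coding strand

        protein = []
        for i in range(0, len(mrna) - 2, 3):   # read codon by codon
            aa = CODON_TABLE[mrna[i:i + 3]]
            if aa == "STOP":
                break
            protein.append(aa)

        print("-".join(protein))           # Met-Trp-Phe-Gly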

    “If something goes out of whack with a cell’s ability to know when and where to start protein synthesis, you are at risk of getting cancer, because you can get uncontrolled synthesis of proteins,” Cate said. “The proteins are active when they shouldn’t be, which over-stimulates cells.”

    The protein eIF3 is one component of the initiation complex, and is itself made up of 13 protein subunits. It was already known to regulate translation of the mRNA into protein in addition to its role in stabilizing the structure of the complex. Overexpression of eIF3 also is linked to cancers of the breast, prostate and esophagus.

    “I think eIF3 is able to drive multiple functions because it consists of a large complex of proteins,” Lee said. “This really highlights that it is a major regulator in translation rather than simply a scaffolding factor.”

    Lee zeroed in on mRNAs that bind to eIF3, found a way to pluck them out of the 10,000-plus mRNAs in a typical human cell, sequenced the entire set and looked for eIF3 binding sites. She discovered 479 mRNAs – about 3 percent of the mRNAs in the cell – that bind to eIF3, and many of them seem to share similar roles in the cell.

    “When we look at the biological functions of these mRNAs, we see that there is an emphasis on processes that become dysregulated in cancer,” Lee said. These involve the cell cycle, the cytoskeleton, and programmed cell death (apoptosis), along with cell growth and differentiation.
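    The standard way to back such a claim is a functional-enrichment test: is a given category over-represented among the 479 eIF3-bound mRNAs relative to the roughly 10,000 mRNAs overall? The Python sketch below runs a hypergeometric test of that kind; the two category counts are invented for illustration, and only the 479 and ~10,000 figures come from the article.

        from scipy.stats import hypergeom

        M = 10_000   # total mRNAs in the cell (approximate, from the article)
        N = 479      # eIF3-bound mRNAs (from the article)
        n = 600      # mRNAs annotated, say, "cell cycle" (hypothetical)
        k = 75       # eIF3-bound mRNAs with that annotation (hypothetical)

        # Probability of seeing an overlap of k or more by chance alone.
        p_value = hypergeom.sf(k - 1, M, n, N)
        print(f"enrichment p-value: {p_value:.2e}")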

    “Therapeutically, one could screen for increased expression of eIF3 in a cancer tissue and then target the pathways that we have identified as being eIF3-regulated,” she said.

    Lee actually demonstrated that she could tweak the mRNA of two cancer-related genes, both of which control cell growth, to stop cells from becoming invasive.

    “We showed that we could put a damper on invasive growth by manipulating these interactions, so clearly this opens the door to another layer of possible anticancer therapeutics that could target these RNA-binding regions,” Cate said.

    The work was funded by a grant from NIH’s National Institute of General Medical Sciences to the Center for RNA Systems Biology.

    “A goal of systems biology is to map entire biological networks, such as genes and their regulatory mechanisms, to better understand how those complex networks function and can contribute to disease,” said Peter Preusch, chief of the biophysics branch of NIGMS. “This center is using cutting-edge technology to interrogate the structure and function of many RNAs at a time, which is helping piece together RNA’s regulatory components.”

    Lee is supported through the American Cancer Society Postdoctoral Fellowship Program.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Founded in the wake of the gold rush by leaders of the newly established 31st state, the University of California’s flagship campus at Berkeley has become one of the preeminent universities in the world. Its early guiding lights, charged with providing education (both “practical” and “classical”) for the state’s people, gradually established a distinguished faculty (with 22 Nobel laureates to date), a stellar research library, and more than 350 academic programs.

    UC Berkeley Seal

     