Tagged: NOVA

  • richardmitnick 2:09 pm on December 11, 2014 Permalink | Reply
    Tags: , , NOVA   

    From NOVA: “Volcanoes May Be Masking the Severity of Global Warming” 

    PBS NOVA

    NOVA

    Thu, 11 Dec 2014
    Christina Couch

    Global warming continues to heat up the earth, but volcanoes are keeping us just a little cooler.

    A new paper published in Geophysical Research Letters shows that volcanic eruptions may be part of the reason why the earth isn’t heating up quite as fast as climate models predict. Sneaky sulfur dioxide emissions from smaller volcanoes, which weren’t previously factored into climate models, are temporarily cooling surface temperatures, according to research led by MIT atmospheric scientist David Ridley.

    Alaska’s Augustine Volcano Jan 12, 2006

    “If an eruption is powerful enough, the sulfur dioxide can reach the upper atmosphere, the stratosphere, where it forms literally liquid sulfuric acid droplets,” said Benjamin Santer, a research scientist at Lawrence Livermore National Laboratory and co-author on the study. “Those droplets reflect some fraction of incoming sunlight back to space, preventing that sunlight from penetrating deeper into the atmosphere. That’s the primary cooling mechanism.”

    According to Santer and Ridley’s research, that light-refracting cooling effect is strong enough to bring global temperatures down anywhere from 0.09˚ to 0.22˚ F since 2000. Unfortunately the cooling won’t do much to counteract global warming in the long term—Ridley said that the amount of sulfur dioxide released in a small eruption generally dissipates after about one year. But these emissions may be part of the reason why over the last ten to 15 years, average global temperatures haven’t increased as rapidly as they have in decades past. The Intergovernmental Panel on Climate Change estimates that average worldwide temperatures are currently increasing at about one-third the rate that they were between 1951 and 2012.

    “I think there’s quite a good case now that volcanoes are at least able to explain about a third of that,” Ridley said.

    On top of providing volcano emissions data, Ridley’s study also offers scientists a new way to explore the lower stratosphere. Both current research and climate models rely on data derived from satellite observations to measure what’s happening in the stratosphere. That works well down to around nine to ten miles above the earth’s surface, where clouds begin to contaminate the data and make it difficult to discern exactly what’s happening. The problem becomes even more complex around the poles, where the stratosphere dips lower than it does in the tropics and creates “this kind of wedge of stratosphere that we’re missing when just using the satellites,” Ridley said.

    Instead of making estimates based on satellite observations alone, Ridley’s team also used data from a balloon-borne particle counter and from measurement devices on the ground. These included four lidar systems, which measure atmospheric particles using laser light pulses, and data from a series of robotic solar photometers called AERONET that use sunlight to measure how effective aerosol particles are at blocking light. The ground and air-based measurements gave researchers a clearer picture of the chemical makeup of the lower stratosphere.

    “Even though it’s a small part of the atmosphere that we were able to include that hadn’t been included before, it probably has a majority of the aerosols that are important” in the short term, said Ryan R. Neely III, a co-author on the study and lecturer of observational atmospheric science at the University of Leeds.

    Alan Robock, a climate scientist who was not involved in the study but was quoted in the journal’s press release, commended Ridley’s team for using ground and air-based instruments to examine the lower stratosphere in a way that satellite data simply can’t. He said that the new observational methods can potentially help scientists make better climate predictions and create more accurate models in the future.

    Creating accurate climate models hasn’t been easy in the middle of a so-called global warming pause or “hiatus,” especially one that’s controversial among scientists. While some attribute the slowdown to the ocean storing heat, others chalk it up to solar cycles or temperature fluctuations from El Niño and La Niña weather patterns.

    “The hiatus, the pause, it’s a little misleading,” said Todd Sanford, a climate scientist with Climate Central. “We’re still setting global [temperature] records. Really what this is talking about is how quickly temperatures are increasing, not that they have stopped increasing.”

    Even with the pause, global warming is still a major environmental problem, one so large that some researchers are investigating whether a strategy like spraying sulfur dioxide into the stratosphere to mimic the cooling effects from volcanoes is a viable temporary solution.

    “We know that if this were to be done, we could get fairly rapid reductions in temperatures but there are issues with it,” Sanford said. “You’re masking the effect of CO2 in some ways. That’s good as long as you’re doing it, but if for any reason you stopped injecting these particles up into the atmosphere, you’re now very quickly unmasking all of that CO2 warming. You’d get all of that warming back. It’s one of these things where if you start it and you’re not doing anything else on CO2, you’ve got to keep it going.”

    Besides, Sanford added, simply cooling the atmosphere without reducing CO2 won’t address other problems caused by carbon, like increasing ocean acidification.

    Sulfur dioxide injections could also deteriorate the ozone layer, produce uneven temperature and precipitation patterns, completely obscure our view of the sky, and create global political issues as the world decides “what temperature to set the thermostat,” Robock said. He added that the technology to execute this type of geoengineering doesn’t yet exist, though others like Harvard climate scientist David Keith argue that it does. Even if we were able to get a critical mass of sulfuric acid into the stratosphere, there’s no way to control it once it’s there.

    “If you have an existing cloud up there and you start spraying more sulfur, theory tells us the particles will grow and you’ll get larger particles rather than more particles and they’ll be much less effective at scattering sunlight,” Robock said. “They’ll also fall out faster so you have to put a lot more up there.”

    Kicking our carbon habit is the real solution to global warming, Robock added, but until that happens, creating more accurate climate models can help us better understand how the atmosphere is changing.

    Ridley warns against the dangers of placing too much emphasis on volcanic cooling. While volcanoes are playing a small but significant role in keeping rising temperatures a little in check, sulfur dioxide cooling isn’t a safeguard against the effects of global warming. “This is really just a bit of an offset on the warming rather than a change in the expected trend on warming,” he said. Besides, he added, no good global hiatus lasts forever. “We’ve got no reason to believe that that will continue.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 2:27 pm on December 10, 2014 Permalink | Reply
    Tags: , , , Microbiome, NOVA   

    From NOVA: “New Antibiotic Found in Bacteria from the Vaginal Microbiome” 

    PBS NOVA

    NOVA

    12 Sep 2014
    Tim De Chant

    Researchers announced yesterday that they had discovered a new molecule that could be a promising antibiotic capable of killing Staphylococcus aureus, a bacterium that can cause dangerous skin infections. That’s good news, especially since drug resistance among harmful bacteria is evolving at a rapid pace. But what makes this molecule unique is its source—our bodies.

    Scanning electron micrograph of S. aureus; false color added.

    Microbiologists have long suspected that new classes of drugs—antibiotics in particular—could be lurking in our microbiomes, where various bacteria duke it out for dominance of a particular niche.

    Lactobacillus bacteria, which produce the antibiotic lactocillin

    This new molecule, called lactocillin, was discovered in a sweep of a database containing genes culled from our microbiome. Michael Fischbach, a microbiologist at the University of California, San Francisco, and his team then traced the genes responsible for lactocillin back to bacteria living in the vagina.

    Erika Check Hayden, reporting for Nature News:

    “We used to think that drugs were discovered by drug companies and prescribed by a physician and then they get to you,” Fischbach says. “What we’ve found here is that bacteria that live on and inside of humans are doing an end-run around that process; they make drugs right on your body.”

    Fischbach’s team then purified one of these: a thiopeptide made by a bacterium that normally lives in the human vagina. The researchers found that the drug could kill the same types of bacteria as other thiopeptides — for instance, Staphylococcus aureus, which can cause skin infections. The scientists did not actually show that the human vaginal bacteria make the drug on the body, but they did show that when they grew the bacteria, it made the antibiotic.

    Fischbach told Check Hayden that, at the current time, he’s not interested in turning lactocillin into a bona fide drug. Instead, he’s going to continue plumbing the depths of these huge databases of microbiome genes, hoping to find even more intriguing and promising candidates.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 3:56 pm on December 9, 2014 Permalink | Reply
    Tags: , NOVA,   

    From NOVA: “Is There Anything Beyond Quantum Computing?” 

    PBS NOVA

    NOVA

    Thu, 10 Apr 2014
    Scott Aaronson

    A quantum computer is a device that could exploit the weirdness of the quantum world to solve certain specific problems much faster than we know how to solve them using a conventional computer. Alas, although scientists have been working toward the goal for 20 years, we don’t yet have useful quantum computers. While the theory is now well-developed, and there’s also been spectacular progress on the experimental side, we don’t have any computers that uncontroversially use quantum mechanics to solve a problem faster than we know how to solve the same problem using a conventional computer.

    Credit: Marcin Wichary/Flickr, under a Creative Commons license.

    Yet some physicists are already beginning to theorize about what might lie beyond quantum computers. You might think that this is a little premature, but I disagree. Think of it this way: From the 1950s through the 1970s, the intellectual ingredients for quantum computing were already in place, yet no one broached the idea. It was as if people were afraid to take the known laws of quantum physics and see what they implied about computation. So, now that we know about quantum computing, it’s natural not to want to repeat that mistake! And in any case, I’ll let you in on a secret: Many of us care about quantum computing less for its (real but modest) applications than because it defies our preconceptions about the ultimate limits of computation. And from that standpoint, it’s hard to avoid asking whether quantum computers are “the end of the line.”

    Now, I’m emphatically not asking a philosophical question about whether a computer could be conscious, or “truly know why” it gave the answer it gave, or anything like that. I’m restricting my attention to math problems with definite right answers: e.g., what are the prime factors of a given number? And the question I care about is this: Is there any such problem that couldn’t be solved efficiently by a quantum computer, but could be solved efficiently by some other computer allowed by the laws of physics?

    Here I’d better explain that, when computer scientists say “efficiently,” they mean something very specific: that is, that the amount of time and memory required for the computation grows like the size of the task raised to some fixed power, rather than exponentially. For example, if you want to use a classical computer to find out whether an n-digit number is prime or composite—though not what its prime factors are!—the difficulty of the task grows only like n cubed; this is a problem classical computers can handle efficiently. If that’s too technical, feel free to substitute the everyday meaning of the word “efficiently”! Basically, we want to know which problems computers can solve not only in principle, but in practice, in an amount of time that won’t quickly blow up in our faces and become longer than the age of the universe. We don’t care about the exact speed, e.g., whether a computer can do a trillion steps or “merely” a billion steps per second. What we care about is the scaling behavior: How does the number of steps grow as the number to be factored, the molecule to be simulated, or whatever gets bigger and bigger? Scaling behavior is where we see profound differences between today’s computers and quantum computers; it’s the whole reason why anyone wants to build quantum computers in the first place. So, could there be a physical device whose scaling behavior is better than quantum computers’?
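    To make the scaling point concrete, here is a small illustrative sketch (mine, not from the article): it compares how a polynomial step count like n^3 and an exponential one like 2^n translate into running time on an assumed machine performing a billion steps per second.

```python
# Illustrative sketch (not from the article): polynomial vs. exponential growth
# in the number of steps, assuming a machine that performs 1e9 steps per second.
STEPS_PER_SECOND = 1e9  # assumed, ordinary processor speed

def seconds_for(steps):
    return steps / STEPS_PER_SECOND

for n in (10, 50, 100, 300):
    poly = n ** 3   # "efficient": grows like a fixed power of n
    expo = 2 ** n   # "inefficient": grows exponentially in n
    print(f"n={n:4d}  n^3 -> {seconds_for(poly):.2e} s   2^n -> {seconds_for(expo):.2e} s")
```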

    The Simulation Machine

    A quantum computer, as normally envisioned, would be a very specific kind of quantum system: one built up out of “qubits,” or quantum bits, which exist in “superpositions” of the “0” and “1” states. It’s not immediately obvious that a machine based on qubits could simulate other kinds of quantum-mechanical systems, for example, systems involving particles (like electrons and photons) that can move around in real space. And if there are systems that are hard to simulate on standard, qubit-based quantum computers, then those systems themselves could be thought of as more powerful kinds of quantum computers, which solve at least one problem—the problem of simulating themselves—faster than is otherwise possible.
    “It looks likely that a single device, a quantum computer, would in the future be able to simulate all of quantum chemistry and atomic physics efficiently.”

    So maybe Nature could allow more powerful kinds of quantum computers than the “usual” qubit-based kind? Strong evidence that the answer is “no” comes from work by Richard Feynman in the 1980s, and by Seth Lloyd and many others starting in the 1990s. They showed how to take a wide range of realistic quantum systems and simulate them using nothing but qubits. Thus, just as today’s scientists no longer need wind tunnels, astrolabes, and other analog computers to simulate classical physics, but instead represent airflow, planetary motions, or whatever else they want as zeroes and ones in their digital computers, so too it looks likely that a single device, a quantum computer, would in the future be able to simulate all of quantum chemistry and atomic physics efficiently.

    So far, we’ve been talking about computers that can simulate “standard,” non-relativistic quantum mechanics. If we want to bring special relativity into the picture, we need quantum field theory—the framework for modern particle physics, as studied at colliders like the LHC—which presents a slew of new difficulties. First, many quantum field theories aren’t even rigorously defined: It’s not clear what we should program our quantum computer to simulate. Also, in most quantum field theories, even a vacuum is a complicated object, like an ocean surface filled with currents and waves. In some sense, this complexity is a remnant of processes that took place in the moments after the Big Bang, and it’s not obvious that a quantum computer could efficiently simulate the dynamics of the early universe in order to reproduce that complexity. So, is it possible that a “quantum field theory computer” could solve certain problems more efficiently than a garden-variety quantum computer? If nothing else, then at least the problem of simulating quantum field theory?

    While we don’t yet have full answers to these questions, over the past 15 years we’ve accumulated strong evidence that qubit quantum computers are up to the task of simulating quantum field theory. First, Michael Freedman, Alexei Kitaev, and Zhenghan Wang showed how to simulate a “toy” class of quantum field theories, called topological quantum field theories (TQFTs), efficiently using a standard quantum computer. These theories, which involve only two spatial dimensions instead of the usual three, are called “topological” because in some sense, the only thing that matters in them is the global topology of space. (Interestingly, along with Michael Larsen, these authors also proved the converse: TQFTs can efficiently simulate everything that a standard quantum computer can do.)

    Then, a few years ago, Stephen Jordan, Keith Lee, and John Preskill gave the first detailed, efficient simulation of a “realistic” quantum field theory using a standard quantum computer. (Here, “realistic” means they can simulate a universe containing a specific kind of particle called scalar particles. Hey, it’s a start.) Notably, Jordan and his colleagues solve the problem of creating the complicated vacuum state using an algorithm called “adiabatic state preparation” that, in some sense, mimics the cooling the universe itself underwent shortly after the Big Bang. They haven’t yet extended their work to the full Standard Model of particle physics, but the difficulties in doing so are probably surmountable.

    So, if we’re looking for areas of physics that a quantum computer would have trouble simulating, we’re left with just one: quantum gravity. As you might have heard, quantum gravity has been the white whale of theoretical physicists for almost a century. While there are deep ideas about it (most famously, string theory), no one really knows yet how to combine quantum mechanics with [Albert] Einstein’s general theory of relativity, leaving us free to project our hopes onto quantum gravity—including, if we like, the hope of computational powers beyond those of quantum computers!

    Boot Up Your Time Machine

    But is there anything that could support such a hope? Well, quantum gravity might force us to reckon with breakdowns of causality itself, if closed timelike curves (i.e., time machines to the past) are possible. A time machine is definitely the sort of thing that might let us tackle problems too hard even for a quantum computer, as David Deutsch, John Watrous and I have pointed out. To see why, consider the “Shakespeare paradox,” in which you go back in time and dictate Shakespeare’s plays to him, to save Shakespeare the trouble of writing them. Unlike with the better-known “grandfather paradox,” in which you go back in time and kill your grandfather, here there’s no logical contradiction. The only “paradox,” if you like, is one of “computational effort”: somehow Shakespeare’s plays pop into existence without anyone going to the trouble to write them!
    “A time machine is definitely the sort of thing that might let us tackle problems too hard even for a quantum computer.”

    Using similar arguments, it’s possible to show that, if closed timelike curves exist, then under fairly mild assumptions, one could “force” Nature to solve hard combinatorial problems, just to keep the universe’s history consistent (i.e., to prevent things like the grandfather paradox from arising). Notably, the problems you could solve that way include the NP-complete problems: a class that includes hundreds of problems of practical importance (airline scheduling, chip design, etc.), and that’s believed to scale exponentially in time even for quantum computers.

    Of course, it’s also possible that quantum gravity will simply tell us that closed timelike curves can’t exist—and maybe the computational superpowers they would give us if they did exist is evidence that they must be forbidden!

    Simulating Quantum Gravity

    Going even further out on a limb, the famous mathematical physicist Roger Penrose has speculated that quantum gravity is literally impossible to simulate using either an ordinary computer or a quantum computer, even with unlimited time and memory at your disposal. That would put simulating quantum gravity into a class of problems studied by the logicians Alan Turing and Kurt Gödel in the 1930s, which includes problems way harder than even the NP-complete problems—like determining whether a given computer program will ever stop running (the “halting problem”). Penrose further speculates that the human brain is sensitive to quantum gravity effects, and that this gives humans the ability to solve problems that are fundamentally unsolvable by computers. However, virtually no other expert in the relevant fields agrees with the arguments that lead Penrose to this provocative position.

    What’s more, there are recent developments in quantum gravity that seem to support the opposite conclusion: that is, they hint that a standard quantum computer could efficiently simulate even quantum-gravitational processes, like the formation and evaporation of black holes. Most notably, the AdS/CFT correspondence, which emerged from string theory, posits a “duality” between two extremely different-looking kinds of theories. On one side of the duality is AdS (Anti de Sitter): a theory of quantum gravity for a hypothetical universe that has a negative cosmological constant, effectively causing the whole universe to be surrounded by a reflecting boundary. On the other side is a CFT (Conformal Field Theory): an “ordinary” quantum field theory, without gravity, that lives only on the boundary of the AdS space. The AdS/CFT correspondence, for which there’s now overwhelming evidence (though not yet a proof), says that any question about what happens in the AdS space can be translated into an “equivalent” question about the CFT, and vice versa.
    “Even if a quantum gravity theory seems ‘wild’—even if it involves nonlocality, wormholes, and other exotica—there might be a dual description of the theory that’s more ‘tame,’ and that’s more amenable to simulation by a quantum computer.”

    This suggests that, if we wanted to simulate quantum gravity phenomena in AdS space, we might be able to do so by first translating to the CFT side, then simulating the CFT on our quantum computer, and finally translating the results back to AdS. The key point here is that, since the CFT doesn’t involve gravity, the difficulties of simulating it on a quantum computer are “merely” the relatively prosaic difficulties of simulating quantum field theory on a quantum computer. More broadly, the lesson of AdS/CFT is that, even if a quantum gravity theory seems “wild”—even if it involves nonlocality, wormholes, and other exotica—there might be a dual description of the theory that’s more “tame,” and that’s more amenable to simulation by a quantum computer. (For this to work, the translation between the AdS and CFT descriptions also needs to be computationally efficient—and it’s possible that there are situations where it isn’t.)

    The Black Hole Problem

    So, is there any other hope for doing something in Nature that a quantum computer couldn’t efficiently simulate? Let’s circle back from the abstruse reaches of string theory to some much older ideas about how to speed up computation. For example, wouldn’t it be great if you could program your computer to do the first step of a computation in one second, the second step in half a second, the third step in a quarter second, the fourth step in an eighth second, and so on—halving the amount of time with each additional step? If so, then much like in Zeno’s paradox, your computer would have completed infinitely many steps in a mere two seconds!
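    The “two seconds” comes from a geometric series—1 + 1/2 + 1/4 + … sums to 2—which a quick sketch (mine) confirms:

```python
# Tiny check of the Zeno-machine arithmetic: the step times 1 + 1/2 + 1/4 + ...
# form a geometric series that approaches 2 seconds, even though the number of
# steps grows without bound.
total = 0.0
for step in range(60):   # 60 terms already lands within floating-point precision of 2
    total += 1.0 / 2 ** step
print(f"Time for the first 60 steps: {total} seconds (limit: 2)")
```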

    Or, what if you could leave your computer on Earth, working on some incredibly hard calculation, then board a spaceship, accelerate to close to the speed of light, then decelerate and return to Earth? If you did this, then Einstein’s special theory of relativity firmly predicts that, depending on just how close you got to the speed of light, millions or even trillions of years would have elapsed in Earth’s frame of reference. Presumably, civilization would have collapsed and all your friends would be long dead. But if, hypothetically, you could find your computer in the ruins and it was still running, then you could learn the answer to your hard problem!

    We’re now faced with a puzzle: What goes wrong if you try to accelerate computation using these sorts of tricks? The key factor is energy. Even in real life, there are hobbyists who “overclock” their computers, or run them faster than the recommended speed; for example, they might run a 1000 MHz chip at 2000 MHz. But the well-known danger in doing this is that your microchip might overheat and melt! Indeed, it’s precisely because of the danger of overheating that your computer has a fan. Now, the faster you run your computer, the more cooling you need—that’s why many supercomputers are cooled using liquid nitrogen. But cooling takes energy. So, is there some fundamental limit here? It turns out that there is. Suppose you wanted to cool your computer so completely that it could perform about 10^43 operations per second—that is, about one operation per Planck time (where a Planck time, ~10^-43 seconds, is the smallest measurable unit of time in quantum gravity). To run your computer that fast, you’d need so much energy concentrated in so small a space that, according to general relativity, your computer would collapse into a black hole!
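    The “one operation per Planck time” figure can be checked with a one-line calculation; the sketch below (my own, using standard physical constants rather than anything from the article) derives the Planck time from ħ, G, and c and inverts it.

```python
import math

# Back-of-the-envelope check (not from the article) of the Planck-time figure.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)   # Planck time, ~5.4e-44 s
ops_per_second = 1.0 / t_planck         # ~1.9e43 operations per second

print(f"Planck time: {t_planck:.2e} s")
print(f"One operation per Planck time: {ops_per_second:.2e} ops/s")
```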

    And the story is similar for the “relativity computer.” There, the more you want to speed up your computer, the closer you have to accelerate your spaceship to the speed of light. But the more you accelerate the spaceship, the more energy you need, with the energy diverging to infinity as your speed approaches that of light. At some point, your spaceship will become so energetic that it, too, will collapse into a black hole.

    Now, how do we know that collapse into a black hole is inevitable—that there’s no clever way to avoid it? The calculation combines Newton’s gravitational constant G with Planck’s constant h, the central constant of quantum mechanics. That means one is doing a quantum gravity calculation! I’ll end by letting you savor the irony: Even as some people hope that a quantum theory of gravity might let us surpass the known limits of quantum computers, quantum gravity might play just the opposite role, enforcing those limits.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 11:43 am on December 5, 2014 Permalink | Reply
    Tags: , Information, , NOVA   

    From NOVA: “Living Bits: Information and the Origin of Life” 

    PBS NOVA

    NOVA

    04 Dec 2014
    Chris Adami

    What is life?

    When Erwin Schrödinger posed this question in 1944, in a book of the same name, he was 57 years old. He had won the Nobel in Physics eleven years earlier, and was arguably past his glory days. Indeed, at that time he was working mostly on his ill-fated “Unitary Field Theory.” By all accounts, the publication of What is Life?—venturing far outside of a theoretical physicist’s field of expertise—raised many eyebrows. How presumptuous for a physicist to take on one of the deepest questions in biology! But Schrödinger argued that science should not be compartmentalized:

    “Some of us should venture to embark on a synthesis of facts and theories, albeit with second-hand and incomplete knowledge of some of them—and at the risk of making fools of ourselves.”

    Schrödinger’s “What is Life?” has been extraordinarily influential, in part because he was one of the first who dared to ask the question seriously, and in part because it was read by a good number of physicists—famously both Francis Crick and James Watson, independently, but also many a member of the “Phage group,” a group of scientists that started the field of bacterial genetics—and steered them to new careers in biology. The book is perhaps less famous for the answers Schrödinger suggested, as almost all of them have turned out to be wrong.

    In the 70 years since the book appeared, what have we learned about this question? Perhaps the greatest leap forward was provided by Watson and Crick, who by discovering the structure of DNA ushered in the age of information in biology. Indeed, a glib contemporary answer to Schrödinger’s question is simply: “Life is information that can copy itself.” But this statement offers little insight without a more profound analysis of the concept of information in the context of life. So instead of forging ahead, let’s take a step back instead and first ask: What is information?

    The meaning of information

    Information is a buzzword that is used in the press and in everyday conversation all the time, but it also has a very precise meaning in science. The theory of information was crafted by another great scientist, the mathematician and engineer Claude Shannon. Without going into the mathematical details, we can say that information is that which allows the holder of that information to make predictions, with accuracy better than chance.

    There are three important concepts in this definition. First: prediction. The colloquial use of “information” suggests “knowing.” But more precisely, information implies the ability to use that knowledge to predict something. The second important aspect of the definition is the focus on “something other,” which reminds us that information must be about something. The third and last part concerns the accuracy of prediction. I can easily make predictions about another system (say, the stock market), but if these predictions are only as good as random guessing, then I did not make these predictions using information.
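    The article doesn’t spell out Shannon’s formula, but the “better than chance” idea is easy to make concrete. In the sketch below (my own toy example, not Adami’s), guessing a fair coin carries one bit of uncertainty; a predictor that is right 90% of the time leaves only about 0.47 bits of residual uncertainty, and the difference—about 0.53 bits—is the information the predictor carries.

```python
import math

def binary_entropy(p):
    """Entropy in bits of a two-outcome distribution (p, 1 - p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

prior = binary_entropy(0.5)      # 1 bit: a fair coin, no information at all
residual = binary_entropy(0.9)   # uncertainty left when the predictor is right 90% of the time
information = prior - residual   # how much the predictor tells us, in bits

print(f"Information carried by a 90%-accurate predictor: {information:.3f} bits")
```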

    “It is possible to think of the entirety of the information stored in our genes in terms of the predictions it makes about the world in which we find ourselves.”

    One thing that the stock market example immediately suggests is that information is valuable. It is also valuable for survival: For example, knowledge enabling you to predict the trajectory of a predator so that you can escape it is extremely valuable information. Indeed, it is possible to think of the entirety of the information stored in our genes in terms of the predictions it makes about the world in which we find ourselves: how to make a body that uses the information so that it can be replicated, how to acquire the energy to keep the body going, and how to survive in the world up until replication is accomplished. And while it is gratifying to know that our existence can succinctly be described as “information that can replicate itself,” the immediate follow-up question is, “Where did this information come from?”

    The hardest question in science

    Through decades of work by legions of scientists, we now know that the process of Darwinian evolution tends to lead to an increase in the information coded in genes. That this must happen on average is not difficult to see. Imagine I start out with a genome encoding n bits of information. In an evolutionary process, mutations occur on the many representatives of this information in a population. The mutations can change the amount of information, or they can leave the information unchanged. If the information changes, it can increase or decrease. But very different fates befall those two different changes. The mutation that caused a decrease in information will generally lead to lower fitness, as the information stored in our genes is used to build the organism and survive. If you know less than your competitors about how to do this, you are unlikely to thrive as well as they do. If, on the other hand, you mutate towards more information—meaning better prediction—you are likely to use that information to have an edge in survival. So, in the long run, more information is preferred to less information, and the amount of information in our genes will tend to increase over time.

    However, this insight does not tell us where the first self-replicating piece of information came from. Did it arise spontaneously? Now we find ourselves faced with the question that some have called “The hardest question in science.”

    “Information does not change whether it is encoded in bits, in nucleotides, or is scratched on a rock: Information is substrate-independent.”

    At first glance it might appear that this question cannot possibly be answered, unless the class of molecules that gave rise to the first information replicator has left some traces in today’s biochemistry. Different scientists have different opinions about what these molecules might have been. But there are some things we can say about the probability of spontaneous emergence without knowing anything about the chemistry involved, using the tools of information theory. Indeed, information does not change whether it is encoded in bits, in nucleotides, or is scratched on a rock: Information is substrate-independent.

    But information is also, mathematically speaking, extremely rare. The probability of finding a sequence encoding a sizable chunk of information by chance is so small that for practical purposes it is zero. For example, the probability that the information (not the exact sequence) of the HIV virus’s protease (a molecule that cuts proteins to size and is crucial for the virus’s self-replication) would arise by chance is less than 1 in 10^96. There just aren’t enough particles in the universe (about 10^80), and not enough time since the Big Bang, to try out all these different sequences. Of course, the information in the protease did not have to emerge by chance; it evolved. But before evolution, we have to rely on chance or assume that the information “fell from the sky” (an alternative hypothesis that assumes that life first occurred somewhere else and hitchhiked a ride on a meteorite to Earth).
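    The arithmetic behind numbers like these is simple to sketch, even if the article’s exact calculation is more subtle. Blindly assembling one specific sequence of n symbols from an alphabet of size A succeeds with probability A^-n; for a four-letter nucleotide alphabet, specifying only about 160 positions already drives the odds below 1 in 10^96 (an illustrative choice of length on my part, not a claim about the protease itself).

```python
import math

# Illustrative arithmetic (mine, not the article's exact calculation): the odds
# against blindly hitting one specific n-symbol sequence over an alphabet of size A.
def log10_odds(alphabet_size, length):
    """log10 of the number of equally likely sequences, i.e. 1/probability."""
    return length * math.log10(alphabet_size)

print(f"4-letter alphabet, 160 positions: 1 in 10^{log10_odds(4, 160):.1f}")
print("Particles in the observable universe: about 10^80")
```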

    It turns out that scientists have been able to construct self-replicating molecules (based on RNA enzymes) that encode just 84 bits of information, but even such a seemingly small piece of information is still extremely unlikely to emerge by chance (about one chance in 10^24). Fortunately, information theory can tell us that there are some circumstances (particular environments) that can very substantially increase these probabilities, so a spontaneous emergence of life on this planet is by no means ruled out by these arguments.

    Unfortunately, while given any particular environment we can estimate what the probability of spontaneous emergence might be, we have very little knowledge about the specifics of these environments on the ancient Earth. So while we can be more confident that spontaneous emergence is a possibility, the likelihood that the early Earth harbored just such environments is impossible to ascertain.

    The chances that life emerged beyond Earth are at least as good as the chances it emerged here. Indeed, many meteorites that made it to Earth’s surface carry organic molecules with them, and information-theoretic considerations suggest that the environments they arose in are precisely those that are conducive to life.

    Even though so many uncertainties about life and information remain, the information-theoretical analysis convincingly highlights the extraordinary power of life: While information is both enormously valuable and exceptionally rare, the simple act of copying (possibly with small modifications) can create information seemingly for free. So, from an information perspective, only the first step in life is difficult. The rest is just a matter of time.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 3:50 pm on November 20, 2014 Permalink | Reply
    Tags: , , NOVA,   

    From NOVA: “Does Antimatter Fall Up or Down?” 

    PBS NOVA

    NOVA

    Wed, 19 Nov 2014
    Matthew Francis

    There are two kinds of matter in the universe: ordinary matter, which makes up all the stuff of everyday life, and antimatter, a sort of mirror image of matter. When the two meet, they annihilate in a flash of energy. It’s our good fortune that, in the early Universe, there was just a tiny bit more matter than antimatter, leaving us with a cosmos almost empty of stuff that could destroy us. Otherwise, we wouldn’t be here to ask what, exactly, antimatter is.

    Here’s what we know: Anti-electrons, known as positrons, are nearly identical to electrons, but instead of being negatively charged they are positively charged. The same goes for other antimatter counterparts: antiprotons are negatively charged and made of the antiquarks corresponding to the quarks in normal protons.

    But physicists think that the other properties of the particles should be the same. Each antimatter particle should have the same mass and spin as its ordinary counterpart, an equal but opposite electric charge, and the same values of other important properties. But that “should” hides something interesting: In some cases, we simply don’t know the fundamental properties of an antiparticle, because it’s much harder to experiment on antimatter than on matter. For example, it’s possible antimatter doesn’t feel gravity in the same way matter does.

    In other words, antimatter might fall up.

    Up, up and away. Credit: Flickr user Shaun Fisher, adapted under a Creative Commons license.

    Now, that’s a very unlikely possibility. As far as we can tell, the differences between matter and antimatter are confined to interactions involving the weak nuclear force, one of the four fundamental interactions in nature. “Everybody including us would be shocked if we were actually to discover any significant differences” between matter and antimatter, says Joel Fajans, physics professor at the University of California at Berkeley who is studying how gravity affects antimatter. It may be a long shot, but if any experiment showed measurably different behavior, “it would really revolutionize our thinking about how the universe behaves.”

    The effort isn’t easy, though. First, there’s a lot more matter than antimatter in the universe, so any differences in behavior would be very difficult to observe and measure. Second, experiments must be done quickly, before antimatter runs into ordinary matter and everything goes kablooie.

    As a result, we only have rough estimates of some basic properties of antimatter—and some we haven’t measured experimentally at all. Take, for instance, a fundamental quantity called the positron inertial mass, a measure of how difficult it is to accelerate a positron. (The inertial mass is the “m” in E = mc^2.) When an electron meets a positron and they annihilate, they give off gamma rays. Researchers can measure the spectrum of gamma rays and figure out how much m was needed to make the E they see. From that, physicists have concluded that the inertial mass of the electron and the positron are very close to equal, if not identical.
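    As a sanity check on that E = mc^2 bookkeeping, a quick sketch (mine, using textbook constants): the rest energy of an electron—and of a positron of equal mass—is about 511 keV, which is the energy of each gamma-ray photon produced when the pair annihilates at rest.

```python
# Convert the electron's rest mass to energy via E = m*c^2 (not from the article).
m_e = 9.1093837015e-31   # electron mass, kg
c   = 2.99792458e8       # speed of light, m/s
eV  = 1.602176634e-19    # joules per electronvolt

rest_energy_keV = m_e * c**2 / eV / 1e3
print(f"Electron/positron rest energy: {rest_energy_keV:.1f} keV")   # ~511 keV
```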

    We’d like to do better than “very close,” though. To understand antimatter fully, we need measurements as precise and accurate as our measurements of matter, and that’s a hard goal. Similarly, we don’t yet have precision measurements for the electric charge of the positron and the antiproton, though Fajans and his collaborators have shown that their charges are equal and opposite. This experiment, like many modern antimatter tests, involves atoms of antihydrogen, which are made of a single antiproton and positron. To see if antimatter falls up, Fajans and his colleagues at the ALPHA experiment use strong magnetic fields to trap antihydrogen atoms in a sort of virtual bottle.

    ALPHA at CERN

    “If we very slowly turn off the ‘walls,’ the magnetic confining field, [the antihydrogen atoms] eventually get out,” Fajans says. “If we do it slowly enough, even though the effects for gravity are subtle, there’ll be a tendency for them to fall downwards presumably, or upwards if things really are weird.” So far, the results aren’t precise enough to distinguish between falling up and falling down, but that’s merely a sign of how inherently difficult the experiment is.

    However, there’s strong indirect evidence that antimatter behaves gravitationally like matter. According to the weak equivalence principle—a key part of the general theory of relativity [Albert Einstein]—the gravitational mass is precisely the same as the inertial mass. (The strong equivalence principle relates to the mathematical structure of gravitational theory.) Researchers have tested the weak equivalence principle to high precision for ordinary matter, using delicate balances capable of detecting tiny variations in gravitational attraction.

    While we can’t yet make the same lab equipment out of anti-atoms to test the weak equivalence principle for antimatter, we know that protons and neutrons contain “virtual” pairs of quarks and antiquarks, which don’t have independent existence but contribute to the particles’ overall structure. As Fajans points out, “Different isotopes have different ratios of virtual antimatter particles, and it’s very well known that there are no anomalies there. If virtual antimatter particles gravitate differently, that would have been noticed in all of these experiments.”

    There are also theoretical reasons to suspect gravity doesn’t work in reverse for antimatter. Raquel Ribeiro, a physicist at Case Western Reserve University, works on possible modifications to general relativity that could solve the riddle of cosmic acceleration. But Ribeiro doesn’t include antigravity antimatter, “because it leads to a number of physical violations of energy principles,” she says. While naively all it would take is turning mass from a positive into a negative number, the reality for stars and other astronomical bodies would be “some serious instabilities in the system.”

    Theory is a good guide, but we still need experiments to see if our theories are right or if they need modification. In fact, theory is so far unable to solve one of the deepest mysteries in physics. “There simply isn’t enough antimatter in the universe,” says Fajans, “and there isn’t a universally accepted reason as to why matter in the universe predominates by such a large ratio over antimatter. The Big Bang should have created exactly equal amounts of matter and antimatter.”

    That’s one reason why researchers will keep studying antimatter, and why some hold out hope for finding even small differences in the behavior of matter and antimatter. Maybe we won’t see antihydrogen falling up, but even a subtle deviation from expectations could open up a new world of possibilities. After all, that’s what the initial discovery of antimatter did.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 3:28 pm on November 14, 2014 Permalink | Reply
    Tags: , NOVA,   

    From NOVA: “Does Science Education Need A Civic Engagement Makeover?” 

    PBS NOVA

    NOVA

    12 Nov 2014
    Brooke Havlik

    Even though the political campaign signs have been brought inside following Election Day, social studies and government classrooms will continue to discuss civics throughout the school year. According to the National Council for Social Studies, the goal of social studies is to promote civic competence, or the knowledge and intellectual skills to be active participants in public life. Yet, engaging with the most complex public issues of our time—biodiversity, climate change, water scarcity, obesity, energy, and HIV/AIDS—also requires a deep understanding of the scientific process.

    Students connect the dots between their daily lives and climate change.

    While the economic argument for doing a better job preparing American students with 21st century skills in STEM has been made time and time again, teaching students what philosopher John Dewey called “the scientific habit of the mind” also has broader benefits for society. The more students are able to connect the dots between scientific processes and science’s impact on society, the more informed their political decisions will be as adult citizens.

    Dr. Ricky Stern of “e” inc., a Boston-based environmental science program that educates hundreds of students each year, believes civic engagement and science education are a natural fit given the many challenges our planet faces. “Civic engagement tells urban youth there are real steps and genuine actions available to them and that rather than watch the events of the day from a distance, instead come in and join in. Come be part of a larger program—of something bigger than just yourself.”

    Bringing social issues into science classrooms may also open up more STEM career possibilities for youth. In 2012, Net Impact and Rutgers University partnered to find out what college Millennials (21-32 years old) look for in their careers. Over 70 percent of college Millennials surveyed responded that it was very important to secure a job that makes a difference, and 31 percent found it “essential.” This is higher than 49 percent of Generation X (33-48 years old) and 52 percent of Baby Boomers (49-65 years old) who found “making a difference” to be important to their career choices. Many of today’s youth grow up connecting the concept of “making a difference” to careers in social work, public health and education. While these are all worthwhile fields, careers in disciplines such as computer science, biotechnology or engineering rarely make the list, despite their strong potential to improve society through science.

    Dr. Stern believes, “Engagement is important as it signals to children or teens that they are needed or central to making things better.”

    So, what would more civic engagement look like in a science classroom? Every community, school, and educator may have a different approach. Here are three methods and examples of ways to help students participate in civic and socio-scientific conversations.

    Explore answers together

    Alex Miller, a teacher at Village Leadership Academy recently began conversations in her science classroom about the Tuskegee syphilis experiment, an infamous U.S. government study between 1932 and 1972 of rural African American men. Ms. Miller’s goal was to help her students see the connections between science and social justice. She brought in articles about the research and facilitated a conversation about the study’s implications. Ms. Miller knew she didn’t have all the answers for them. Instead of lecturing, she actively explored the history alongside her students and allowed space for them to explore science’s role in the problem. Consequently, she helped her students think deeply about bioethics.

    Make it normal to follow current science events

    Devote time to talking about science news during every class. If you are crunched for time to read it in class, provide students the story’s summary and facilitate a short discussion about the social or political implications. For example, a recent story from NOVA Next described scientists using a tool called CRISPR-Cas9 to control mosquito populations through genetic modification. The technique has the potential to control or eradicate malaria. Ask students how this new technology benefits society. What could be some negatives? Do the benefits outweigh the negatives? Give students more responsibility by asking them to pick articles and lead the conversation.

    Offer opportunities for project-based learning with a civic goal

    Dr. Stern mentioned that, “We teach science lessons every week to many children and teens. As they get more involved with the science ideas, simultaneously, we also begin to teach them that there are some related challenges on the planet going on right now.” After talking with students about what project and action they want to take on, students “pick a team project based on what they have learned and they maintain it for the year.”

    Pick a project devoted to a socio-scientific issue students care about and encourage them to take action on it throughout the year.

    Examples of project-based civic engagement might include recording and reducing school energy consumption, starting a compost program or educating other students and staff on a public health issue. Students can present orally to the class at the end of the year, and use scientific concepts to back up why their project matters.

    There is no doubt that science can have wider appeal by building opportunities for active civic engagement inside and outside the classroom. Are you already bringing civic engagement into your science classroom? NOVA would like to hear about it. Send your story to NOVAeducation@wgbh.org.

    Image Credits: T.E.J.A.S. Healthy Manchester Festival / Flickr CC-BY-NC-ND, 350.org / Flickr CC BY-NC-SA, Penn State / Flickr CC BY-NC-ND [this is a poor way to provide image credits. I am surprised at this NOVA blog for doing this.]

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition
    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 3:09 pm on November 14, 2014 Permalink | Reply
    Tags: , NOVA,   

    From NOVA: “For Every 1°C of Global Warming, Lightning Strikes Will Increase By 12%” 

    PBS NOVA

    NOVA

    13 Nov 2014
    Allison Eck

    In the not-too-distant future, as the Earth warms, the heat energy that churns our atmosphere could spark even more lightning than the roughly 8 million strikes that already occur each day.

    A new study published today in the journal Science suggests that we’ll see 12% more strikes for every 1˚C of warming. Earlier models used cloud depth to determine how likely clouds were to generate enough energy to produce a lightning bolt. But climate scientist David Romps and his colleagues instead looked at precipitation, humidity, and temperature measurements taken from weather balloons. Put together, this data indicates how energetic an impending storm could be, and in turn, how probable it is that lightning bolts will streak through the sky.

    Scientists project that lightning strikes could significantly increase in frequency by the turn of the next century.

    Here’s Andy Coghlan, writing for New Scientist:

    By knowing how much water is in the clouds and how much energy is available, Romps says his model can accurately predict how many lightning bolts will get generated. Typically, he says, about 1 per cent of the potential energy picked up by water gets converted to lightning, so by knowing how much water and energy is present, the team can work out how much lightning will form.

    They tested the model using real weather data from 2011, and compared the results with the data on every lightning strike in the US, collected by the National Lightning Detection Network. In simple terms, they found that it retrospectively correctly accounted for 77 per cent of that year’s ground strikes. “When I saw that result, I thought it was too good to be true,” says Romps.

    Romps and his team then applied their lightning model to 11 different climate models. In Romps’ model, lightning varies consistently with temperature and energy. Using that same math, he calculated the percent increase for every 1° C rise in global temperatures. At the extremes, some model runs even suggested that strikes could double by the year 2100.
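    A 12% increase per degree compounds, which is roughly how the high-end model runs reach a near doubling; a tiny arithmetic sketch (mine, assuming the 12%-per-degree figure applies multiplicatively) makes the point:

```python
# Rough compounding sketch (my assumption: 12% per degree applies multiplicatively).
RATE_PER_DEGREE = 0.12

for warming_c in (1, 2, 3, 4, 6):
    factor = (1 + RATE_PER_DEGREE) ** warming_c
    print(f"{warming_c} degree(s) C of warming -> about {factor:.2f}x today's lightning rate")
```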

    The team doesn’t know yet whether these strikes will cluster in particular areas, but one thing is for sure: more bolts reaching the Earth’s surface mean a greater chance of wildfires and a shift in the chemical composition of the atmosphere.

    Here’s Victoria Gill, writing for BBC News:

    As well as triggering half of the wildfires in the U.S., each lightning strike—a powerful electrical discharge—sparks a chemical reaction that produces a “puff” of greenhouse gases called nitrogen oxides.

    “Lightning is the dominant source of nitrogen oxides in the middle and upper troposphere,” said Prof. Romps.

    And by controlling this gas, it indirectly regulates other greenhouse gases including ozone and methane.

    The result could be a vicious cycle: rising temperatures cause an increase in lightning strikes, thereby releasing into the atmosphere gases that perpetuate Earth’s warming even further.

    Of course, Romps’ model isn’t perfect—it doesn’t yet account for the fact that parts of the globe experience very little rainfall, nor does it factor in lightning strikes that don’t make it to the ground. The precipitation measurements could be made clearer, too. Right now, the model measures clouds’ water content and not their additional ice content. Nevertheless, it seems likely that someday soon, lightning will be even more prevalent than it is today.

    Experts aren’t sure what triggers lightning, but suspect it could be cosmic rays from outer space.

    See the full article, with video, here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition
    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 7:49 pm on November 13, 2014 Permalink | Reply
    Tags: , , , , , NOVA   

    From NOVA: “There’s More Than One Way To Hunt For Gravitational Waves” 

    PBS NOVA

    NOVA
    Part 1
    Physicists are poised to make the first-ever direct detection of gravitational waves. Will the detection come from a big-budget experiment already a decade deep into the search? Or will one of a handful of dark-horse experiments win an upset?

    When Albert Einstein published his general theory of relativity in 1916, he revolutionized physics and reenvisioned the nature of spacetime and gravity: he showed that spacetime was dynamic, not static, and reimagined gravity as the bending and warping of spacetime by massive objects. He also made the startling prediction that gravity travels in waves. Just as objects moving through water cause waves to ripple outward, objects moving through space should produce ripples in spacetime. The more massive the object, the more it will churn the surrounding spacetime, and the stronger the gravitational waves it should produce.

    Gravitational waves from two merging black holes, as simulated by a supercomputer model at NASA’s Ames Research Center. Credit: Henze, NASA

    Most of these ripples would be small, and would dissipate too quickly to be detected, Einstein predicted. But certain bodies, such as merging black holes, supernovae, or orbiting pairs of neutron stars, are massive enough that they might produce detectable gravitational waves. Physicists have devised a number of ingenious methods to detect gravitational waves directly, from extremely precise laser interferometry to clever schemes for using stars, pulsars, even the Earth and moon as gravitational wave detectors.

    The big-ticket project is the half-a-billion-dollar Laser Interferometer Gravitational Wave Observatory (LIGO), which boasts two highly sensitive laser interferometers in Louisiana and Washington state. Here’s how it works: Split a laser beam in two and send each beam down one of two long, perpendicular tunnels, each with a mirror at the end. When the laser beams strike the mirrors they will be reflected back to the same spot, where they will recombine and cancel each other out. But if a gravitational wave happens to be passing through, it will warp the space between those mirrors ever so slightly. One beam will travel a longer path than the other, and when they meet up again, they won’t cancel each other out, producing light that will be picked up by a detector.
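
    To get a sense of how tiny that warping is, here is a back-of-the-envelope Python sketch. It treats each arm as a simple single-pass light path of roughly LIGO’s 4-kilometre length and uses an illustrative strain amplitude; the real instrument folds the light back and forth many times inside optical cavities, so this is a simplification, not a description of LIGO’s actual signal chain.

        # Back-of-the-envelope: how much does a passing gravitational wave change
        # the difference between the two arm lengths? (Single-pass simplification.)

        arm_length_m = 4.0e3     # LIGO's arms are about 4 km long
        strain = 1.0e-21         # illustrative strain amplitude h

        # A wave with strain h stretches one arm while squeezing the other,
        # so the differential change in arm length is roughly h * L.
        delta_L = strain * arm_length_m

        proton_diameter_m = 1.7e-15  # for scale
        print(f"Differential arm-length change: {delta_L:.1e} m")
        print(f"That is roughly {delta_L / proton_diameter_m:.0e} proton diameters.")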

    LIGO ran from 2002 to 2010—almost a decade—yet failed to detect any gravitational waves. Furthermore, LIGO is sensitive only to a small fraction of the gravitational wave spectrum. Just as optical, radio, x-ray, infrared, and gamma-ray telescopes each reveal different, and complementary, electromagnetic views of the cosmos, says Montana State University physicist Neil Cornish, it will take more than one kind of gravitational wave telescope to “see” the full gravitational wave spectrum. “You can only see [the waves] in their particular [frequency] bands because the frequency they emit is set by the mass of the system,” Cornish explained in an interview. “We need to open up the entire gravitational wave spectrum just like we’ve opened up the entire electromagnetic spectrum [in astronomy].”

    LIGO’s range centers on stellar remnant black holes and other celestial objects of similar mass. Tackling the lowest end of the gravitational wave frequency spectrum are the headline-grabbing BICEP2 and Planck experiments, which are looking for imprints left by gravitational waves from the earliest moments of our universe in the polarization of the cosmic microwave background radiation.

    Cosmic microwave background, per ESA/Planck

    To detect higher-frequency gravitational waves, like those produced by supermassive black holes, astronomers are using pulsars—rapidly rotating neutron stars that beam out regular radio pulses—like beacons on the sloshing sea of spacetime. One such effort is the North American Nanohertz Observatory for Gravitational Waves (NANOGrav), part of an international consortium that also includes the European Pulsar Timing Array and the Parkes Pulsar Timing Array in Australia.

    The first pulsar was discovered in 1967, when Jocelyn Bell Burnell and Antony Hewish noticed strange, highly regular radio pulses coming from a fixed point in the night sky. They cheekily dubbed the mysterious object LGM-1 (for “little green men”). The signals weren’t coming from alien transmissions, however, but from a rapidly rotating neutron star. Pulsars form when stars more massive than our Sun explode and collapse into neutron stars. As they shrink, they spin faster and faster, because angular momentum is conserved. (Think of what happens when you swing an object around your head on a string: the more you shorten the tether, the faster it goes.) Pulsars also blast out beams of radiation that can be picked up on Earth whenever a beam sweeps across our line of sight, like the rotating beam of a lighthouse.
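
    The tether analogy can be made quantitative with conservation of angular momentum: for a roughly uniform sphere, the moment of inertia scales as the radius squared, so shrinking the radius by a factor of a thousand speeds the spin up by a factor of a million. The starting radius and spin period in the Python sketch below are made-up illustrative values, not measurements.

        # Conservation of angular momentum for a collapsing stellar core, modelled
        # crudely as a uniform sphere: I * omega stays constant and I ~ (2/5) M R^2,
        # so the spin frequency grows as (R_initial / R_final)**2.
        # The starting numbers are illustrative guesses, not data.

        R_initial_km = 1.0e4    # hypothetical pre-collapse core radius
        R_final_km = 10.0       # typical neutron-star radius
        P_initial_s = 86400.0   # hypothetical initial spin period: once per day

        spin_up = (R_initial_km / R_final_km) ** 2
        P_final_s = P_initial_s / spin_up

        print(f"Spin-up factor: {spin_up:.0e}")
        print(f"Final period: {P_final_s:.3f} s ({1.0 / P_final_s:.1f} rotations per second)")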

    The fastest pulsars, spinning hundreds of times per second, make excellent clocks—on par with the best atomic clocks. “That regular rotation of a pulsar is like the swing of a pendulum,” said Cornish, and it enables astrophysicists to precisely time all kinds of astronomical systems. Pulsars have helped astronomers identify distant exoplanets, and provided the first indirect evidence for gravitational waves back in 1982, when astronomers observed energy leaking out of a binary pulsar system—probably in the form of gravitational radiation.

    The NANOGrav network uses data from telescopes at the Arecibo Observatory in Puerto Rico and the Green Bank Telescope in West Virginia to monitor 19 pulsars in the Milky Way that serve as a galactic-scale gravitational wave detector. The method is described on NANOGrav’s website as a “cosmic Global Positioning System… looking for tiny changes in the position of the Earth that are due to the shrinking and stretching effect of passing gravitational waves,” although Cornish said the analogy is imperfect. GPS employs multiple satellites to triangulate the three dimensions of space, thereby pinpointing the location of the source of a signal. NANOGrav is looking for a common effect in the form of a telltale signature: a “shimmering” effect produced because pulses affected by gravitational waves should arrive slightly earlier or later in response to those ripples in spacetime. While no detection has yet been made, the collaborators are currently combining data from all three arrays to further improve accuracy and precision, according to Cornish. Those results should be released in the next several months.
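
    A rough way to see the size of that “shimmering” effect: a gravitational wave of strain amplitude h and frequency f shifts pulse arrival times by something on the order of h divided by 2πf. The numbers in this Python sketch are order-of-magnitude placeholders, not NANOGrav’s actual sensitivities.

        import math

        # Order-of-magnitude estimate of the timing shift a pulsar timing array
        # hunts for: arrival times move by roughly h / (2 * pi * f).
        # Both inputs are illustrative placeholders.

        strain = 1.0e-15    # hypothetical strain from supermassive black-hole binaries
        freq_hz = 1.0e-8    # nanohertz regime: one wave cycle every few years

        residual_s = strain / (2 * math.pi * freq_hz)
        seconds_per_year = 3.15e7

        print(f"Timing shift ~ {residual_s * 1e9:.0f} nanoseconds")
        print(f"Wave period  ~ {1.0 / freq_hz / seconds_per_year:.1f} years")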

    Arecibo Observatory

    NRAO/Green Bank Telescope

    “Compared to the cost of LIGO, this is the bargain basement way of detecting gravitational waves,” says Cornish. “The NSF has made this major investment using laser interferometers. For a tiny fraction of that, they have a chance to enable detection using pulsar timing. As far as bang for buck, it’s the cheapest way to go about it.”

    Part 2
    All of these techniques are exquisitely sensitive, seeking out minute changes. But gravitational waves might have a much stronger impact on matter than previously assumed, thanks to resonant frequencies. It’s something Alexander Graham Bell noticed as a young man: strike a chord on one piano and it will be echoed by a piano in another room. The effect is known as “sympathetic resonance.” Objects like a piano’s strings vibrate at very specific frequencies. If there is another object nearby that is sensitive to the same frequency, it will absorb the vibrations (sound waves) emanating from the other object and start to vibrate in response.
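
    The piano-string effect is the textbook driven, damped harmonic oscillator: the steady-state response is largest when the driving frequency matches the object’s natural frequency. The short Python sketch below simply evaluates that standard amplitude formula, with arbitrary illustrative parameters.

        import math

        # Sympathetic resonance in miniature: the steady-state amplitude of a
        # damped harmonic oscillator driven at angular frequency w peaks near
        # the natural frequency w0. All parameter values are illustrative.

        w0 = 2 * math.pi * 440.0    # natural frequency of an "A440 string", rad/s
        gamma = 5.0                 # damping rate, 1/s
        F_over_m = 1.0              # driving force per unit mass, arbitrary units

        def amplitude(w):
            """Standard steady-state amplitude of a driven, damped oscillator."""
            return F_over_m / math.sqrt((w0**2 - w**2)**2 + (gamma * w)**2)

        for freq_hz in (220, 430, 440, 450, 880):
            w = 2 * math.pi * freq_hz
            print(f"drive at {freq_hz:4d} Hz -> relative response {amplitude(w):.2e}")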

    Strike a chord on one piano and it will be echoed by a piano in another room. Physicists have proposed using a similar resonance phenomenon to detect gravitational waves. Credit: Flickr user Half Full Photography, adapted under a Creative Commons license.

    That’s the essence of a new paper in the Monthly Notices of the Royal Astronomical Society: Letters, proposing that certain stars could absorb energy from gravitational waves that ripple by. Should that happen, the stars would show a temporary, measurable increase in brightness from that excess energy.

    Co-author Saavik Ford of CUNY’s Graduate Center compared stars to the bars on a xylophone, each of which has a natural resonant frequency, just like piano strings. Striking those bars in sequence, moving from lower to higher frequencies, is akin to how two merging black holes produce gravitational waves of gradually increasing frequency. “If you have two black holes merging with each other and emitting gravitational waves at a certain frequency, you’re only going to hit one of the bars on the xylophone at a time,” Ford explained. “But because the black holes decay as they come closer together, the frequency of the gravitational waves changes and you’ll hit a sequence of notes. So you’ll likely see the big stars lighting up first followed by smaller and smaller ones.”

    Perhaps the Earth itself could be used as a gravitational wave detector: it, too, could vibrate like a bell in response to gravitational waves rippling through. Set up a global array of highly sensitive seismometers, and one could conceivably find evidence of such waves in the data. That was the gist of a 1969 paper by physicist Freeman Dyson.

    Dyson’s work was the inspiration for Harvard graduate student Michael Coughlin and a colleague, Jan Harms of the National Institute of Nuclear Physics in Florence, Italy, who were working with seismic data related to LIGO, with an eye toward reducing the noisy background so that a signal could be detected more easily. Coughlin recalled Dyson’s paper and thought such an approach could be useful for setting some vital constraints on background noise, and he and Harms did an initial analysis of terrestrial seismic data.

    Then another professor recalled his earlier geophysical work with instruments placed on the moon during the Apollo missions to track so-called “moonquakes.” Those instruments collected lunar seismic data from July 1975 to March 1977. Intrigued, Coughlin and Harms analyzed that older dataset as well, correlating it with their earlier terrestrial analysis. They published their findings in Physical Review Letters earlier this year.

    Coughlin and Harms didn’t find any evidence of gravitational waves in their analysis, nor did they expect to. One reason is that there is a lot of seismic noise from other sources cluttering up the data. The moon might not have Earth’s plate tectonics, atmospheric fluctuations, or volcanic activity, but asteroids routinely hit the moon, causing it to “ring” for weeks from the impact. There is also background noise generated by thermal heating from the Sun and tidal forces.

    Cornish pronounced their work a good analysis but said it is unlikely to lead to direct detection of gravitational waves, even if NASA placed upgraded seismometers with far greater sensitivity on the moon to get a better dataset. He suggested that the best way to search for gravitational waves in that frequency range is the space-based LISA, now known as the Next Gravitational-wave Observatory (NGO), another very large and pricey collaboration similar to LIGO (in that it uses a similar laser interferometer array) that is still several years away from completion. Meanwhile, LIGO is currently undergoing upgrades, including an additional mirror to increase its sensitivity to other frequencies of gravitational waves, like those produced by binary pulsars.

    Still, there remains much uncertainty in the various proposed models for gravitational waves. Coughlin’s and Harms’ null result has helped further constrain the range in which we should expect to see gravitational waves in Earth’s vicinity. “If we thought we knew what the source distribution of gravitational waves looked like in the universe, then it wouldn’t be quite such a useful exercise,” Coughlin said. “Since we don’t, and the cost is relatively low, I don’t see why we shouldn’t try it.”

    See the full article here for Part 1 and here for Part 2.

  • richardmitnick 5:33 am on November 12, 2014 Permalink | Reply
    Tags: , , , Food research, NOVA   

    From NOVA: “The Next Green Revolution May Rely on Microbes” 

    PBS NOVA

    NOVA

    12 Jun 2014
    Cynthia Graber

    Ian Sanders wants to feed the world. A soft-spoken Brit, Sanders studies fungus genetics in a lab at the University of Lausanne in Switzerland. But fear not, he’s not on a mad-scientist quest to get the world to eat protein pastes made from ground-up fungi. Still, he believes—he’s sure—that these microbes will be critical to meeting the world’s future food needs.

    Sanders’s eyes widen with delight and almost childlike glee when he talks about a microscopic life form called mycorrhizal fungus, his chosen lifetime research subject. Mycorrhizal fungi live in a tightly wound, mutually beneficial embrace with most plants on the planet. Years of dedication have made Sanders into one of the world’s foremost experts on the genetics of the microbe, and he recently was part of a team that sequenced the first mycorrhizal fungal genome.

    Mycorrhizal fungi colonize the tip of a root, seen here under magnification.

    Despite his drive, Sanders comes across as light-hearted as he teases and jokes with fellow researchers. But he loses his affable smile as he fires off facts about the upcoming food shortage: The world’s population is expected to increase to between 9 billion and 16 billion people. Five million people per year die of direct causes of malnutrition. Three and a half million of those are children under five. Today, we have the means to grow enough food to feed all those people, but we will most certainly need to produce more in the very near future.

    Sanders may have come up with a way to do just that. He has successfully bred custom varieties of microbes that can help plants produce more food. It’s one of the ultimate goals of farming research—more food with, he hopes, little or no environmental downside.

    We’ve been looking at the wrong set of genes.

    The question of crop productivity is increasingly fraught. People in developed countries eat an enormous amount of food, and people in developing countries are beginning to close the gap. Meanwhile, the world’s population is swelling. By 2030, the UN’s Food and Agriculture Organization predicts food demand will soar by 35%. And then there’s the accelerating impact of climate change: The IPCC’s latest report on the subject, published in March, shows that scientists are predicting a 2% decrease in crop yields per decade over the next century. Higher temperatures and longer, more dramatic swings between drought and rain mean the plants that we rely on will have a hard time weathering the strain.

    According to the FAO, most of the growth in production that we’ll need has to come from increasing yields from crop plants. Selective breeding doesn’t seem to be offering the types of dramatic yield increases seen in the past. Meanwhile, genetic engineering has yet to lead to any significant increase in yields.

    Now, many scientists are saying that we’ve been looking at the wrong set of genes. Instead of in plants, the crucial genes may reside in the galaxy of bacteria and fungi that live in the soil and throughout a plant—the kind that Sanders studies.

    Sanders’ plan is to give existing fungi-plant relationships a boost by breeding better fungi. He’s testing varieties of lab-grown microbes out in the field in tropical Colombia. There, he’s hoping to help cassava plants grow heftier roots, as these potato-like crops are a staple for nearly a billion people around the world. So far, the results show that this approach just might work.

    Belowground Microbiome

    Microbes in the soil function much like the human microbiome, which helps us break down food, access nutrients, and defend against harmful invaders. A plant’s microbiome protects it against malevolent microbes. Microbes can also communicate with one another, flashing chemical alerts that let one plant know when another nearby is under attack. Bacteria and fungi even structure the soil so that it clumps together and doesn’t blow or wash away. And, just as our human cells are outnumbered by our microbial support, the microbial genes in and near the root system alone of a healthy plant greatly eclipse those of the plant itself.

    Plants have depended on microbial assistance since they first edged out of the water onto dry land, about 450 million years ago. They lassoed photosynthetic cyanobacteria and turned them into cellular machines known as chloroplasts, which harvest the sun’s energy. Today, plants are still supported by hundreds of thousands—perhaps millions—of different species of bacteria, fungi, even viruses. In fact, the rhizosphere, the area around a plant’s roots, is considered one of the most ecologically diverse regions on the planet.

    The microbiome in the rhizosphere acts as an extension of plants’ root systems, breaking down nutrients into forms that plants can use. Mycorrhizal fungi have whisper-thin fronds, called hyphae, that reach out past the root tips to access water and nutrients the plant needs to survive. They then trade those for carbohydrates the plant provides. Scientists believe that as much as 30% of the carbon that a plant produces through photosynthesis is pushed into the soil to support an entire city of microbes.

    Though mycorrhizal fungi are just one of a multitude of microbe species in the soil in and around plant roots, they live in symbiosis with about 80–90% of agricultural crops in a relationship hundreds of millions of years old. Mycorrhizal fungi cannot survive without plants, and most plants cannot thrive without mycorrhizal fungi.
    As much as 30% of the carbon that a plant produces supports an entire city of microbes.

    On the most basic level, scientists have known for more than a century that microbes associate with plants, but, even today, many of the details of the interactions are still unknown. Part of the challenge in teasing them out is that they’ve been nearly impossible to study. Scientists estimate that perhaps 1% of all soil microbes can be grown on a petri dish, the conventional model for such research. By only being able to study the thinnest slice of life, we’ve been missing out on a vast, complicated, messy world. It’s like trying to guess what everyone on a city block does during the day by trailing just one person.

    Recently, though, scientists have begun to get a better glimpse. Genetic analyses can help classify and understand newly discovered microbes. Big Data-style techniques, with names like metagenomics, proteomics, and transcriptomics, describe methods by which scientists can take an overall picture of the genetic diversity of life in a given region, and even what genes are active. These types of studies might not be able to describe every individual, but they can give a sense of what genes are in play. Such tools are able to do more, do it more quickly, and do it for less money nearly every year.

    In only the last few years, scientists using these tools have begun to regularly uncover new information about the crucial links between microbes and plants. They’re unraveling clues as to which bacteria, fungus, or virus performs which function. They’re discovering microbes that can help plants withstand heat and drought. And they’re dialing into the genetics to understand how the microbes do what they do, how the plants react, and even what genetic material is exchanged. There’s still a world of research to be done, however. With many millions of individuals packed into every gram of soil, it’s a daunting task.

    Tending a cassava field in the Amazon

    Farmers have manipulated the plant-microbe relationship, unknowingly, for thousands of years. Compost, for example, does not simply contain beneficial nutrients—it also teems with living organisms, as does animal manure. Crop rotation, too, can enhance microbial diversity. Stalks and crop remains left on the field or plowed into soil provide microbes with food. And growing particular plants together—such as the traditional grouping of bean-squash-corn in the early Americas—does the same, as each plant likely contributes a complementary set of microbes.

    But, for the most part, the tightly braided relationship hasn’t yet factored into the workings of modern agriculture. Today, if a plant needs more of anything, we just add it—water, nitrogen, phosphorus, manganese, and so on. In the 20th century, this approach produced an abundance of crops and staved off starvation for millions. But it has also soaked groundwater with nitrogen, led to algal blooms in lakes and rivers, and spawned a massive dead zone in the Gulf of Mexico. Studies show that nitrogen fertilizers can also reduce the diversity of microbial life. Pesticides can be more harmful. Even tilling cleaves fungal networks. Until recently, we knew little about how we’ve been inadvertently crippling our crops’ complicated support network.

    “Over the last hundred years in agriculture, we’ve tried to take microorganisms out of the picture. And by doing that—by disrupting the soil with tillage, by using chemical pesticides—we have greatly altered the agricultural biome,” says Rusty Rodriguez, a former microbiologist with the U.S. Geological Survey who’s now head of Adaptive Symbiotic Technologies, a company developing microbial-based seed coatings. “The efficacy of many chemicals is beginning to wane.” Bacteria and fungi, Rodriguez says, “are the next paradigm for agriculture.”

    From Switzerland to Colombia

    Sanders’ Swiss workplace is immaculately clean, and the room where the fungi are taken out for study is scrupulously sterile. Every night, all night, UV lights shine a microbe-killing glare. They destroy anything that could infect his cultures of mycorrhizal fungi.

    Over the course of Sanders’ 26-year career, he’s made a number of key discoveries about fungi genetics and reproduction. He conducted early research that demonstrated that the greater the diversity of mycorrhizal fungi in a given ecosystem, the greater the diversity of plants. And in 2008, as he delved into genetics, he proved that they don’t just reproduce by cloning—they actually exchange genetic material, both in the lab and in the field.

    This gave him an idea. If the microbes created offspring that were different from one another, Sanders thought, “you have a good chance that some will be more effective on plant growth than others.” So he came up with a plan: Take different fungi, breed them, see if any help plants out more than others. In other words, take the approach to farming that breeders have used for thousands of years and use it on fungi.
    Without human intervention, the whole system of microbial support might not be optimally tweaked to match crossbred crops.

    This is where Sanders runs into occasional criticism from some of his microbe-studying colleagues, who say that nature has already bred all the best variety of microbes. “If you use the argument from these researchers,” he counters, “then no one would have produced any plants through plant breeding, because they would have said, ‘Well, nature’s already made the best plants, and we can’t make any more that are any better than what nature has made.’ Now, of course, we know from a few thousand years of agriculture that we can make plants better by crossing them, and we can get varieties that produce bigger yields than that which we see in natural-occurring varieties of those plants in nature.” Without similar human intervention, the whole system of microbial support might not be optimally tweaked to match.

    To test out his idea, Sanders partnered with a colleague in Switzerland who was studying the genetics of the fungi-rice relationship, and who already had conducted research in a university greenhouse set up for rice cultivation. Sanders grew the fungi and allowed them to exchange genetic material and reproduce, creating genetically distinct offspring. Then, he colonized rice with these distinct lines. Sanders used rice as a matter of convenience due to his colleague’s experience, but he also knew that rice, as farmed today, tends to actually grow more poorly when inoculated with mycorrhizal fungi, making it a good test bed. He was stunned when one of the lines produced a five-fold increase in growth over the other fungal lines. “To see such a huge growth increase was very, very surprising,” he says. The greenhouse was an artificial environment, and the microbe-enhanced soil was compared to sterile soil. It in no way mimicked nature. But it proved a point.

    Around that time, Sanders got back in touch with Alia Rodriguez, an agronomist in Colombia who also had expertise in mycorrhizal fungi. They had originally met when he was one of her PhD examiners in England. He was desperate to visit Colombia and see its amazing animal and plant biodiversity for himself, so they decided to try to find a research project together.

    It happened that Colombia offered the perfect field test for his new approach. Mycorrhizal fungi are skilled at helping plants access phosphorus, a key nutrient, which plants in tropical countries have a particular problem securing. The acidity of soil there results in a chemical reaction that ties up most of the phosphate that farmers add to soil. Farmers end up paying precious money to add phosphate that plants mostly can’t use. “I always tell my students, how can we rely on a practice that is so inefficient?” Rodriguez says. “It has to change, because it cannot be sustainable.”

    Ian Sanders and Alia Rodriguez’s experimental plots in Colombia

    Colombia is also the home of cassava, a fleshy white root. Cassava is a major staple for nearly a billion people in more than 100 countries, from Brazil to Nigeria to Thailand, who rely on it in much the same way we rely on bread or potatoes. In its various homes and in various languages, it is called cassava, yuca, manioc, balinghoy, kamoteng kahoy, tapioca root. If you can produce more cassava, then poor communities can eat more food.

    Sanders liked the idea of breeding microbes to increase cassava production. But they still had one major stumbling block ahead. There was no practical way to transport enough pure fungus from his Swiss lab to colonize the cassava trial fields in Colombia.

    This had also been a problem for the early pioneers in the field. In earlier decades, a variety of start-ups had marketed mycorrhizal fungi transported in soil, an imperfect medium that also contained plant roots and a host of other microbes. There was no way to tell whether it contained any live, viable material, let alone a specific species. Plus, transporting enough soil for every plant root on a farm would be heavy and prohibitively expensive.

    Fortunately for Sanders and Rodriguez, a company in Spain named Mycovitro coincidentally announced the culmination of decades of research of their own: a gel that could act as a vehicle for highly concentrated, purified mycorrhizal fungi. With the gel, Sanders would know that he was only transporting the microbes he wanted. A single small bottle could provide enough fungi for an entire field. Even more importantly, the gel base was capable of growing any variation that Sanders bred in his lab. The team partnered with Mycovitro to grow Sanders’ varieties. (The company has no financial connection to Sanders’ and Rodriguez’s research, and neither of the scientists have a stake in the company. The company, however, is providing its services for free, and it will have first right of refusal to commercialize any promising new line that Sanders and Rodriguez develop.)

    With the final piece in place, Sanders and Rodriguez set their research project in motion. They headed down to Colombia to test their varieties by growing hectares of cassava along the edge of the llanos, the country’s lush, damp tropical savannah.

    Catching On

    As the pieces of Sanders and Rodriguez’s research fell into place, the field of commercially applicable bioproducts was undergoing a renaissance. A few decades ago, interest in microbes and their use in agriculture flared, but most of the commercial products quickly flickered out. Most of the laboratory successes hadn’t translated to the field. One of the few agricultural microbes that did catch hold was the bacterium Rhizobium, which helps legumes access nitrogen. It’s used extensively on crops such as soy. Other microbes, such as the bacterium Bacillus, are used to protect plants from pathogens. Rhizobium and Bacillus are not the only examples on the market, but their combined market share is still a small fraction of the multibillion-dollar agro-chemical industry.

    But new, more effective products have begun to emerge. Marrone Bio Innovations’ most recent pesticide, called Grandevo, was developed from a soil bacterium and is marketed to protect vegetable crops from sucking insects and mites. The company, with more than 150 patents pending, has additional products in the pipeline, including a strain of Bacillus that both controls pathogens and encourages plant growth.

    Dozens of field trials in 14 states around the U.S. are testing microbial products for corn, soybeans, wheat, barley, and rice.

    Rusty Rodriguez (no relation to Alia Rodriguez in Colombia), the head of Adaptive Symbiotic Technologies, got his start in the 1990s when he and his colleagues discovered the symbiosis between plants and fungi in Yellowstone that allowed plants to survive in temperatures as high as 150˚ F. Once he identified and isolated the fungus responsible for the plant’s heat-survival ability, he realized he could use it to help other plants survive extreme heat.

    Rodriguez dove headfirst into extremophiles, sending company employees to collect plants from extreme environments around the U.S. He’s focusing on a number of products—some are single fungi, others are communities working together—that confer a variety of benefits to agricultural plants: drought tolerance, salt tolerance, and the ability to withstand swings in temperature. His company has developed tests that rule out any potential negative impacts of the strains, such as plant damage or toxicity to animals that might snack on them. They have dozens of field trials in place in 14 states around the U.S., working with farmers who are testing their products in corn, soybeans, wheat, barley, and rice.

    Farmers have been willing partners, Rodriguez says, happy to test products that might help what can be a razor thin profit margin. But, overall, the science of applying microbial products in agriculture has been hampered by one major challenge: moving from the lab to the field. “Field work is a lot more difficult to do,” says Rodriguez. “It fails way more often.”

    Sanders and Alia Rodriguez learned the same lesson in Colombia, when the floods came.

    To the llanos

    In Colombia, Sanders and Alia Rodriguez teamed up with an agricultural college named, appropriately, they hoped, Utopia. The professors and students served as field monitors for the crops and the research. Early one morning last July, the sun barely lifting off the flat green fields, I accompanied them and a group of students as they tromped out to visit their plants. Rodriguez poked fun at Sanders’ obsession with snapping photos: “We need to be moving on!” she nudged. “Yes, yes,” he muttered, bending down to focus his lens on a spider whose web spread across the spiny leaves of a pineapple plant.

    A graduate student tends cassava in an experimental plot.

    Finally we reached the experiment. The cassava looked nearly identical, all about three feet tall, creating a waist-high carpet of broad emerald leaves glittering with droplets misted from the low, grey sky. Despite the plants’ near uniform appearance, Sanders and Rodriguez knew that underground, where the fungi were going to work, the story would be different. There, they had expected to find roots of all sizes.

    The two scientists wandered out, half obscured by foliage: Rodriguez, with tight, dark ringlets woven into a long, single braid and tucked through the back of a salmon-colored baseball cap, and Sanders, whose pale skin clearly marked him as the outsider in the group. Isabel Ceballos, the Colombian PhD student managing the project, pulled a bright pink poncho over her head to ward off the rain.

    Each of the young cassava plants had started out as six-inch sticks. The team had laid them in the earth and covered them with a shallow layer of soil. Three weeks later, when the sticks started to form root buds, the students returned and carefully squeezed a layer of fungus-filled gel beneath a portion of each plant. As the roots stretched into the soil, they pushed down through the gel, inoculating them with mycorrhizae.

    That July day in Colombia, after checking the plants in the field, Sanders, Rodriguez, and I dragged plastic chairs together. They’d cleaned up from the morning’s mud. Rodriguez had changed into a striped cotton top, and her hair cascaded in waves over to the side, revealing beaded lime green and black earrings in the shape of lizards. Sanders’ short-sleeve plaid shirt looked clean and fresh. The sun set over Utopia’s low, red-roofed buildings, and the shrill blur of insects tussled with the frogs’ boggy croaks. The air was thick and warm. Fireflies flashed languidly, slow pulses of glowing and dimming light.

    “It was a good surprise to see the experiments up and running in the field now,” says Rodriguez, relaxing into the chair. “It’s been a process to get things going here. Finally to see it happening—it’s difficult, but it’s achievable. A good feeling.”

    Early on, the team had learned that Mycovitro’s own variety of mycorrhizal fungi increased cassava yields by as much as 20%. Now their own custom, lab-grown microbes were being tested. They had two studies in the field: one in which the cassava were planted in black plastic bags, and a second, later one in which the cassava were planted directly in the field, with uninoculated cassava as a barrier. Each study would take 11 months—the full time for a cassava to reach maturity.

    The first plants in the plastic bags looked a bit sickly; they’d be harvested in October. The plants in the second experiment, set directly in the ground, were flourishing. Those would be harvested the following spring.

    Rodriguez is generally the positive one of the pair, sure that they can find a way to work through all challenges. Sanders tends to be more cautious, more pessimistic. “In Switzerland,” he joked, “we think of every single problem that could happen, and people here in Colombia are extremely optimistic—‘No worry! It will work!’” Rodriguez laughed in response. But things were looking good. Both scientists were pleased—even excited—about what they’d seen. Rodriguez’s optimism appeared justified.

    Her sunny outlook was tested only a few weeks later. The skies of the llanos, often thick and lazy with morning drizzle, turned dark. The clouds unleashed a month’s worth of merciless rain in only 48 hours. Water swept down over the cassava. When the rains finally faded, plant matter was clogging most of the field drains. Liquid mirrors pooled across the research field. Some of the plants, their roots surrounded by water and gasping for oxygen, listed to the side.

    Ceballos, the PhD student in charge of the project, heard the news first. She panicked and ran to Rodriguez to tell her what had happened. Rodriguez panicked as well, thinking, “What are we going to do?” But she quickly regrouped. “We need data,” she told Ceballos, and then called Sanders.

    Unearthing cassava roots

    After a few days, students from Utopia who were dispatched to check on the fields sent back photos. The first experiment, with the older plants in plastic bags, was fine. In the second, with the healthier plants, the team had an incredible stroke of luck. True, many of the plants were destroyed, but almost none of them had been inoculated with the fungi. Instead, almost all of the dead cassava were just border plants.

    Sanders was relieved. “It would have been a disaster for us,” he says, if the plants had died. It would have set the project back at least a year—and the team’s funding was due to end in the summer of 2014.

    Three months later, in October, it was time to harvest the plants in the plastic bags. Ceballos headed back to Utopia. Each day for a week, she and another graduate student worked with students, crouching down and cutting open the thick black plastic. They shoved aside the soil that clung, damp, to the roots. The cassava poked out, some thicker than others, all with pale, purplish skin, smooth and wet, peeking through the dirt. Their flesh was bright white and oozed milky droplets.

    we
    Utopia students weigh cassava roots in the field.

    The team uncovered more than a thousand roots. All were quickly weighed at Utopia. Then Ceballos hauled the best, least damaged representatives of each cassava plant back to Bogotá, nearly 800 pounds of food. She stored them in a cavernous new freezer the lab bought specifically for this purpose. Over the next few months, she tested each plant’s dry weight and evaluated its fibrousness, starch content, acid content, and other variables that attest to the overall quality.

    Sanders didn’t have high hopes for the first harvest. After all, the crops didn’t look nearly as healthy as the cassava planted straight in the field. But the results thus far have surprised—and delighted—him. The data hasn’t been published in a scientific journal yet, but, he says, “We have actually seen huge differences in the weight of the cassava roots—much larger differences than seen in the rice experiment. We thought it would work but not to such an extent.”

    Into the Mainstream

    Rusty Rodriguez’s approach is proving successful, too. In 2014, his company is releasing two products, one for rice and one for corn, and he plans to have additional products for a wider variety of crops available by 2015. Based on his company’s field research, test plants are able to tolerate more stress from swings in temperature or water availability, and they can defend themselves more effectively against pests. He says his team is now looking at helping farmers decrease the amount of fertilizer they use by employing the fungi. They’re also publishing scientific studies on their research.

    The major agricultural seed and chemical companies are taking notice. In the fall of 2013, Monsanto paid the Danish company Novozymes $300 million to form a partnership called the BioAg Alliance. Novozymes creates what they call “microbial yield and fertilizer enhancers,” among other products in a variety of sectors. The partnership strengthens Monsanto’s role in what they term “sustainable microbial technology.”

    The rest of the field seems to be following suit. The trade journal Agrow: World Crop Protection News wrote in April 2012 that the biopesticide sector was finally no longer “fringe,” and by 2013 proclaimed that it is now an “intrinsic part of the crop protection industry.” In 2012, Bayer bought the small biopesticide company AgraQuest. Syngenta bought Pasteuria Bioscience, and also has an exclusive international deal to sell a Bacillus-based biofungicide. The FDA is testing the spraying of bacteria onto tomatoes to destroy human-harming salmonella and prevent other forms of contamination.

    There are plenty of concerns in the field of applied microbes for agriculture. One is whether any product that is successful on one farm will be equally successful on another. Then there’s the concern about releasing microbes into new environments, which means that regulatory agencies are demanding extensive environmental tests before certifying new products.

    The Colombia team is sensitive to this, and they’re studying the existing microbial ecosystems in the presence of the new fungi. They’ve also sent a grad student into the Amazon to collect fungi from wild versions of cassava, fungi that have co-evolved with the cassava for thousands of years, in hopes that they can isolate, grow, and breed these cassava-loving fungi as well.

    Thin filaments of mycorrhizal fungi form a dense network between roots.

    Sanders has an ambitious, seemingly quixotic goal that he figures could be completed in 15 years, maybe 20. He wants to breed enough genetically distinct lines of fungi and try them out with enough crops in enough different environments so that researchers can create what’s called an “association map.” He would start by characterizing the genetics of the fungus and then map them against the crops and the environment. By peering deeply enough into the genetic code, he hopes we can catch a glimpse of which genes make quinoa grow better in Peru, for example. That way scientists could breed a new species of fungus and know in advance which crop it would improve without having to undertake years of trials.

    It seems nearly impossible to do enough studies, with enough crops, in enough farmland around the world to generate such a map. Genetic solutions also frequently seem to dance out of reach. Sanders insists, though, that big, crazy scientific goals in agriculture are crucial. “As one of the senior people in the Food and Agriculture Organization of the United Nations said to me, ‘If scientists don’t do that, then we are in trouble in the future.’ I believe he is right.”

    Sanders and Rodriguez are now setting up studies in Africa, where farmers, like many in Colombia, can find it difficult to pay for fertilizers and suffer from low yields. Cassava is also one of the top crops there. The team has formed partnerships with local research centers to test varieties of fungi on cassava crops in African soil. They’re hoping the research will begin soon, but they’re still searching for funding.

    The scientists believe they’re on their way to achieving their goal of helping farmers grow more food, sustainably. Says Sanders, “We really have to be working extremely hard now to produce the technology that’s going to be used in 10, 15, 20 years’ time. Even if we have something that’s good now, we don’t stop. We have to go for something that’s much better.”

    See the full article here.

  • richardmitnick 1:37 pm on October 31, 2014 Permalink | Reply
    Tags: , , , NOVA   

    From NOVA: “Bioinspired Underwater Glue Could Soon Replace Stitches” 

    PBS NOVA

    NOVA

    Fri, 31 Oct 2014
    Sarah Schwartz

    It’s a sticky situation—in the best possible way. By combining proteins that mussels and bacteria use to stick to surfaces, scientists at the Massachusetts Institute of Technology have created a strong new underwater glue. This adhesive could tackle an important challenge in various fields, including surgery, where repairing wet surfaces is essential.

    For years, scientists have looked to marine organisms for insight into producing underwater glues. Water forms a “weak boundary” on surfaces it contacts, which prevents adhesives from attaching, says Dr. Herbert Waite, Professor of Molecular, Cellular, and Developmental Biology at the University of California, Santa Barbara. This becomes a challenge in fields where wet surfaces need to be repaired—marine salvage, dentistry, surgery, and more. But organisms like mussels and barnacles regularly overcome this obstacle, binding easily to wet rocks.

    The MIT team turned to these organisms for inspiration—and ingredients. “One of the promises in synthetic biology is to be able to mix and match and optimize biologically based materials,” says Dr. Timothy Lu, an associate professor in MIT’s Synthetic Biology group and an author of the study. Lu and his colleagues combined proteins from two different sources—the feet of mussels, and E. coli bacteria.

    By combining proteins that mussels and bacteria use to stick to surfaces, scientists at MIT have created a strong new underwater glue.

    A good adhesive has two properties, Waite says: It has to be able to stick to other surfaces, and it also has to bind to itself. DOPA, the adhesive amino acid in the proteins mussels use to stick to surfaces, can do both, but its behavior depends upon the conditions of its environment. Mussels use various “tricks” to control their DOPA that aren’t fully understood, Waite says. If you’re not a mussel, it can be hard to manage DOPA’s behavior.

    That’s where the second protein helps. Amyloids are also adhesive, water-resistant, and link strongly to one another. Barnacles, algae, and bacteria use them to stick to surfaces. Lu and his team saw an opportunity: “[W]e thought by combining the bacteria with the mussels, we might be able to get some synergistic behavior,” says Lu.

    The result was a glue stronger than any other bio-derived or bio-inspired adhesive made to date. Waite, who was not involved with the study, says the results “really impressed” him. The researchers only asked DOPA to work in the form where it adheres to surfaces, he explains, while the amyloid proteins held the glue together. This joint behavior gives the glue its strength.

    Lu believes that this is only the beginning. “We only looked at two of the proteins that are involved in mussel adhesion…If we could combine multiple proteins on top of that, maybe we can even get stronger performance,” he says. While the group has been focused on adhesion alone, in the future, the group plans to explore potential underwater and biomedical uses, says Lu.

    These biomedical applications could be profound, especially in surgery. Waterproof glues could help seal internal wounds, even when drenched in blood and other fluids. Sutures or staples are currently used to close such holes, but these are hard to affix and can damage tissues, says Dr. Jeffrey Karp, an associate professor at Brigham and Women’s Hospital and Harvard Medical School who was not involved in the MIT study.

    “There’s a huge unmet need for better adhesives,” says Karp, who is also a co-founder of Gecko Biomedical, which is developing medical adhesives. “There’s really nothing available in the clinic that works well and doesn’t have its drawbacks,” he adds, calling Lu’s team’s work “excellent and very promising.” The next step, Karp says, is to test the glue at larger scales.

    To work inside the human body, an adhesive must be biocompatible, or “cell-friendly.” But strong glues are often toxic. “We really don’t have anything that is strong and biocompatible,” says Dr. Pedro del Nido, a specialist in cardiac surgery at Boston’s Children’s Hospital who was not involved with the MIT study.

    Lu says his group is interested in testing for biocompatibility and believes that natural sources will yield better biocompatible materials. Looking to nature for advice has served him well so far. “[N]ature has solved a lot of the same problems that we deal with in pretty creative ways…Often times, borrowing upon nature and then applying the tools that we have in our arsenal to improve those properties, I think, is a really powerful way to go.”

    See the full article here.
