Tagged: NOVA

  • richardmitnick 5:39 pm on January 13, 2016 Permalink | Reply
    Tags: Cosmic inflation and contraction theory, NOVA

    From NOVA: “Do We Live in an Anamorphic Universe?” 



    12 Jan 2016
    Anna Ijjas
    Paul Steinhardt

    Anamorphic is a term often used in art or film for images that can be interpreted two ways, depending on your vantage point. Önarckép Albert Einsteinnel/Self portrait with Albert Einstein, Copyright Istvan Orosz

    A century ago, we knew virtually nothing about the large-scale structure of the universe, not even the fact that there exist galaxies beyond our Milky Way. Today, cosmologists have the tools to image the universe as it is today and as it was in the past, stretching all the way back to its infancy when the first atoms were forming. These images reveal that the complex universe we see today, full of galaxies, black holes, planets and dust, emerged from a remarkably featureless universe: a uniform hot soup of elemental constituents immersed in a space that exhibits no curvature. (1)

    How did the universe evolve from this featureless soup to the finely-detailed hierarchy of stars, galaxies, and galaxy clusters we see today? A closer look reveals the primordial soup was not precisely uniform. Exquisitely sensitive detectors, such as those aboard the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck satellites, produced a map that shows the soup had a distribution of hot and cold spots arranged in a pattern with particular statistical properties.

    NASA WMAP satellite

    ESA Planck

    For example, if one only considers spots of a certain size and measures the distribution of temperatures for those spots only, it turns out the distribution has two notable properties: it is nearly a bell curve (“Gaussian”) and it is nearly the same for any size (“scale-invariant”). Thanks to high-resolution computer simulations, we can reproduce the story of how the hot and cold spots evolved into the structure we see today. But we are still struggling to understand how the universe came to be flat and uniform and where the tiny but critical hot and cold spots came from in the first place.
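    The two statistical properties above can be illustrated with a toy sketch (this is not a real CMB analysis; the amplitude and spot counts are assumed for illustration only): draw temperature fluctuations for spots of two different sizes from the same Gaussian law, then check that the distribution looks like a bell curve and that its spread does not depend on spot size.

```python
import numpy as np

# Toy illustration of "Gaussian" and "scale-invariant" fluctuations.
# The amplitude below is an assumed placeholder, not a measured value.
rng = np.random.default_rng(0)

sigma_uK = 70.0  # assumed fluctuation amplitude, microkelvin
small_spots = rng.normal(0.0, sigma_uK, 100_000)  # spots of one size
large_spots = rng.normal(0.0, sigma_uK, 100_000)  # spots of another size

# "Gaussian": about 68% of spots lie within one standard deviation
frac_1sigma = np.mean(np.abs(small_spots) < sigma_uK)

# "Scale-invariant": the spread is (nearly) the same at both sizes
ratio = np.std(small_spots) / np.std(large_spots)

print(round(frac_1sigma, 2))  # close to 0.68
print(round(ratio, 2))        # close to 1.0
```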

    Cosmic microwave background, as mapped by Planck

    Looking Beyond Inflation

    One leading idea is that, right after the big bang, a period of rapid expansion known as inflation set in, smoothing and flattening the observable universe.

    Credit: NASA/WMAP Science Team

    However, inflation has serious flaws. It requires adding special forms of energy to the simple big bang picture, arranged in a very particular way in order for inflation to start, so the big bang is very unlikely to trigger a period of inflation. And even if inflation were to start, it would amplify quantum fluctuations into large volumes of space, producing a wildly varying multiverse of regions that are generally neither smooth nor flat. Although inflation was originally thought to give firm predictions about the structure of our universe, the discovery of the multiverse effect renders the theory unpredictive: literally any outcome, any kind of universe, is possible.

    Another leading approach, known as the ekpyrotic picture, proposes that the smoothing and flattening of the universe occurs during a period of slow contraction. This may seem counterintuitive at first. To understand how this could work, imagine a film showing the original big bang picture. The universe would be slowly expanding and become increasingly non-uniform and curved over time. Now imagine running this film backwards. It would show a slowly contracting universe becoming more uniform and less curved over time. Of course, if the smoothing and flattening occur during a period of slow contraction, there must be a bounce followed by slow expansion leading up to the present epoch. In one version of this picture, the evolution of the universe is cyclic, with periods of expansion, contraction, and bounce repeating at regular intervals. In contrast to inflation, smoothing by ekpyrotic contraction does not require special arrangements of energy and is easy to trigger. Furthermore, contraction prevents quantum fluctuations from evolving into large patches that would generate a multiverse. However, making the scale-invariant spectrum of variations in density requires more ingredients than in inflation.

    The best of both worlds?

    While experimentalists have been feverishly working to determine which scenario is responsible for the large-scale properties of the universe—rapid expansion or slow contraction—a novel third possibility has been proposed: Why not expand and contract at the same time? This, in essence, is the idea behind anamorphic cosmology. Anamorphic is a term often used in art or film for images that can be interpreted two ways, depending on your vantage point. In anamorphic cosmology, whether you view the universe as contracting or expanding during the smoothing and flattening phase depends on what measuring stick you use.

    If you are measuring the distance between two points, you can use the Compton wavelength of a particle, such as an electron or proton, as your fundamental unit of length. Another possibility is to use the Planck length, the distance formed by combining three fundamental physical “constants”: Planck’s constant, the gravitational constant and the speed of light [in a vacuum]. In [Albert] Einstein’s theory of general relativity, both lengths are fixed for all times, so measuring contraction or expansion with respect to either the particle Compton wavelength or the Planck length gives the same result. However, in many theories of quantum gravity—that is, extensions of Einstein’s theory aimed at combining quantum mechanics and general relativity—one length varies in time with respect to the other. In the anamorphic smoothing phase, the Compton wavelength is fixed in time and, as measured by rulers made of matter, space is contracting. Simultaneously, the Planck length is shrinking so rapidly that space is expanding relative to it. And so, surprisingly, it is really possible to have contraction (with respect to the Compton wavelength) and expansion (with respect to the Planck length) at the same time!
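    The Planck length the article describes can be computed directly from those three constants. A small sketch (by convention the formula uses the reduced Planck constant ħ; the constant values below are standard CODATA-style approximations):

```python
import math

# The Planck length, built from the three fundamental constants
# named in the text: (reduced) Planck's constant, the gravitational
# constant, and the speed of light in a vacuum.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light in vacuum, m/s

planck_length = math.sqrt(hbar * G / c**3)
print(f"{planck_length:.3e} m")  # about 1.616e-35 m
```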

    The anamorphic smoothing phase is temporary. It ends with a bounce from contraction to expansion (with respect to the Compton wavelength). As the universe expands and cools afterwards, both the particle Compton wavelengths and the Planck mass become fixed, as observed in the present phase of the universe.

    By combining contraction and expansion, anamorphic cosmology potentially incorporates the advantages of the inflationary and ekpyrotic scenarios and avoids their disadvantages. Because the universe is contracting with respect to ordinary rulers, like in ekpyrotic models, there is no multiverse problem. And because the universe is expanding with respect to the Planck length, as in inflationary models, generating a scale-invariant spectrum of density variations is relatively straightforward. Furthermore, the conditions needed to produce the bounce are simple to obtain, and, notably, the anamorphic scenario can generate a detectable spectrum of primordial gravitational waves, which cannot occur in models with slow ekpyrotic contraction. International efforts currently underway to detect primordial gravitational waves from land-based, balloon-borne and space-based observatories may prove decisive in distinguishing these possibilities.

    (1) According to Einstein’s theory of general relativity, space can be bent so that parallel light rays converge or diverge, yet observations indicate that their separation remains fixed, as occurs in ordinary Euclidean geometry. Cosmologists refer to this special kind of unbent space as “flat.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 4:57 pm on January 12, 2016 Permalink | Reply
    Tags: Nearly Two-Thirds of Earth’s Minerals Were Created by Life, NOVA, Oxidation

    From NOVA: “Nearly Two-Thirds of Earth’s Minerals Were Created by Life” 



    12 Jan 2016
    Tim De Chant

    Goethite—chemical formula FeO(OH)—is formed by the oxidation of iron

    Planet Earth’s stunning diversity of 4,500 minerals may be thanks to its stunning diversity of life, according to a recent theory proposed by mineralogists.

    Rocks helped give life its start—serving as storehouses of chemicals and workbenches atop which the key processes sparked the complex reactions that now power living things—so it only seems fair that life may have returned the favor. “Rocks create, life creates rocks. They’re intertwined in ways that are just now coming into focus,” Robert Hazen, a research scientist at the Carnegie Institution of Washington’s Geophysical Laboratory, told NOVA.

    According to Hazen and his colleagues, who have published a slew of papers on the theory over the past several years, up to two-thirds of minerals on Earth may be the result of oxidation, a chemical reaction that occurs when one element loses electrons to another. The reaction was first discovered with oxygen as the oxidizing agent, hence the name, though other elements such as chlorine (Cl2) can also act as oxidizers.

    But it was oxygen that played an outsize role in Earth’s history. About 2.5 billion years ago, O2 was released as a waste product by newly photosynthesizing algae. Within the span of about 300 million years, those microbes had boosted oxygen from nothing to 1% of the atmosphere, Hazen said. It was a rapid shift that would have wide-reaching consequences.

    As O2 came into contact with iron dissolved in the ocean, it precipitated a rusty rain that sank to the bottom. Today, those vast swaths of Precambrian rust are still found in the trillions of tons of iron ore that are locked in banded formations around the world.

    Other elements were similarly affected. Two-thirds of Earth’s minerals are the result of oxidation, Hazen said, and most oxygen on Earth was created by life.

    “As a mineralogist, when I look at Earth history, I see big new transitions: I see the moon-forming impact, I see the formation of oceans and so forth,” Hazen said. “But nothing, nothing matches what life and oxygen did to create new minerals.”

    See the full article here.


  • richardmitnick 3:52 pm on January 8, 2016 Permalink | Reply
    Tags: Metallic hydrogen, NOVA

    From NOVA: “Researchers Appear to Be One Step Away from Metallic Hydrogen” 



    08 Jan 2016
    Tim De Chant

    Researchers used Raman spectroscopy to examine the properties of their new phase of hydrogen.

    Energy-level diagram showing the states involved in the Raman signal.

    Sometimes, chemistry torpedoes your view of the world. Take hydrogen, for example. Scientists are extraordinarily close to turning the universe’s most abundant element—which is almost always found as either a gas or a plasma—into a metal.

    Researchers have been probing hydrogen’s different phases for decades, and metallic hydrogen was first proposed over 80 years ago by two Princeton physicists. Now, three physicists based in Scotland have created a fifth phase of hydrogen, one that’s definitely not a gas and quite probably almost a metal.

    To get there, the team squished hydrogen molecules (H2) between diamond anvils at some of the highest pressures ever produced in a lab—380 gigapascals, or about 3.75 million times more pressure than is found at sea level. At that point, the hydrogen ceased to be transparent and turned very dark, a sign that the usual covalent bond was breaking down. It all took place at a relatively balmy 300 K, or about 80˚ F.
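    The two conversions in that paragraph are easy to verify directly (using standard sea-level atmospheric pressure, 101,325 Pa):

```python
# Check the figures quoted above: 380 GPa vs. sea-level pressure,
# and 300 K expressed in degrees Fahrenheit.
pressure_pa = 380e9      # 380 gigapascals
atm_pa = 101_325.0       # standard sea-level pressure, Pa
times_sea_level = pressure_pa / atm_pa

temp_k = 300.0
temp_f = (temp_k - 273.15) * 9 / 5 + 32

print(round(times_sea_level / 1e6, 2))  # ~3.75 (million times)
print(round(temp_f))                    # ~80 (degrees F)
```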

    Here’s John Timmer, writing for Ars Technica:

    “Since some signs of a bond are still present, the authors posit that this represents a new phase of hydrogen accompanying the gas and more readily achieved liquid forms. Because of some other exotic phases discovered in the search for hydrogen metal, this one is named phase V. The authors don’t think that it represents a fully metallic form, since there are some signs that the bonds between hydrogen atoms are still present. But they’re clearly not as robust as they were, so the authors speculate that it represents ‘the onset of the predicted non-molecular and metallic state of hydrogen.’”

    The researchers suspect that just a little more pressure will finally eliminate the bonds and create truly atomic, metallic hydrogen.

    See the full article here.


  • richardmitnick 8:54 pm on January 6, 2016 Permalink | Reply
    Tags: NOVA, Shape dynamics

    From NOVA: “A Radical Reinterpretation of Einstein’s Theory” 



    06 Jan 2016
    Dan Falk

    “It is not easy to walk alone in the country without musing upon something,” Charles Dickens once observed. For Julian Barbour, those musings most often involve the nature of space and time. Barbour, 78, is an independent physicist who contemplates the cosmos from College Farm, a rustic thatched-roof country house some twenty miles north of Oxford. He is perhaps best known for his 1999 book The End of Time: The Next Revolution in Physics, in which he argues that time is an illusion.

    While country walks may be best enjoyed on one’s own, musings about theoretical physics can benefit from good, smart company—and Barbour has made a point of inviting a handful of bright young physicists to join him for periodic brainstorming sessions at College Farm—think Plato’s Academy in the English countryside.

    Their latest offering is something called shape dynamics. (If you’ve never heard of shape dynamics, that’s OK—neither have most physicists.) It could, of course, be a dead end, as most bold new ideas in physics are. Or it could be the next great revolution in our conception of the cosmos. Its supporters describe it as a new way of looking at gravity, although it could end up being quite a bit more than that. It appears to give a radical new picture of space and time—and of black holes in particular. It could even alter our view of what’s “real” in the universe.

    The shape of an object is a real, objective quality according to the theory of shape dynamics.

    Last summer, Barbour and his colleagues gathered for a workshop at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, to hash out the ideas behind shape dynamics. During a break in the workshop, I sat down with a young physicist named Sean Gryb, one of Barbour’s protégés.

    “We’re trying to re-evaluate the basic assumptions of Einstein’s theory of relativity—in particular, what it has to say about gravity,” Gryb says. “It’s a shift in what we view as the fundamental elements of reality.”

    Gryb, 33, is a tall and athletic figure; he’s affable and good-humored. He’s now a postdoc at Radboud University in the Netherlands, but he grew up in London, Ontario, and did his PhD down the road from Perimeter, at the University of Waterloo. The fact that he travels so much—the Netherlands, England, Canada—may explain why Gryb’s accent is so hard to pin down. “If I’m in the UK, it turns more British,” he says.

    His PhD supervisor was Lee Smolin, one of Perimeter’s superstar scientists. (Perimeter isn’t a degree-granting institution, so students who work with the institute’s scientists earn their degrees from Waterloo.) Smolin, like Barbour, is known for his outside-the-box ideas; he’s the author of The Trouble With Physics and several other provocative books and has been a vocal critic of string theory, the leading contender for a theory of quantum gravity, a framework that unites Einstein’s theory of gravity, known as general relativity, with quantum mechanics. Gryb, too, seems most comfortable outside the box. Sure, he could work on problems where the questions are well defined and the strategies clearly mapped, slowly adding to what we know about the universe. There’s no shame in that; it’s what most physicists do. Instead, like Barbour and Smolin, he focuses on the very foundations of physics—space, time, gravity.

    Shape, Scale, and Gravity

    Let’s stick with gravity for a moment. It’s surely the most basic of nature’s forces. You drop a hammer, it falls down. Of course, there’s a bit more to it than that: Three and a half centuries ago, Isaac Newton showed that the force that pulls the hammer to the ground is the same force that keeps the moon in its orbit around the earth—a pretty impressive leap of logic, but one that Newton was able to prove with hard data and mathematical rigor.

    Then we come to [Albert] Einstein, who tackled gravity in his masterpiece, general relativity—a theory that’s just celebrated its 100th anniversary. Back in 1915, Einstein showed how gravity and geometry were linked, that what we imagine as the “force” of gravity can be thought of as a curvature in space and time. Ten years earlier, Einstein had shaken things up by showing that space and time are relative: What we measure with our clocks and yardsticks depends on the relative motion of us and the object being measured.

    But even though space and time are relative in Einstein’s theory, scale remains absolute. A mouse and elephant can roam the cosmos, but if the elephant is bigger somewhere, it’s bigger everywhere. The elephant is “really” bigger than the mouse. In shape dynamics, though, size is relative, but the shape of objects becomes a real, objective quality. From the shape dynamics perspective, we’d say that we can only be sure that the elephant is bigger than the mouse if they’re right next to each other, and we’re there too, with our yardstick. Should either beast stray from our location, we can no longer be certain of their true sizes. Whenever they reunite, we can once again measure their relative sizes; that ratio won’t change—but again, we can only perform the measurement if we’re all next to one another. Shape, unlike size, doesn’t suffer from such uncertainty.

    “Absolute size is something that seems to be built into Einstein’s theory of relativity,” says Gryb. “But it’s something that actually we don’t see. If I want to measure the length of something, I’m always comparing it against a meter stick. It’s the comparison that’s important.”

    Perhaps the best way to understand what Gryb is saying is to imagine that we double the size of everything in the universe. But wait: If we double the size of everything, then we’re also doubling the size of the yardsticks—which means the actual measurements we make don’t change.

    This suggests that “size” isn’t real in any absolute sense; it’s not an objective quantity. With shape dynamics, says Gryb, “we’re taking this very simple idea and trying to push it as far as we can. And what we realized—which was a surprise to me, actually—is that you can have relativity of scale and reproduce a theory of gravity which is equivalent to Einstein’s theory—but you have to abandon the notion of relative time.”

    Does this mean that Einstein was wrong about time being relative? Surely we’re not heading back to Isaac Newton’s notion of absolute space and time? Gryb assures me that we’re not. “We’re not going all the way back to Newton,” Gryb says.


    Even though Newton’s conception of space and time turned out to be flawed, his ideas have continued to serve as an inspiration—or at least a jumping-off point—for countless scientists following in his footsteps. In fact, Julian Barbour tells me that his own thinking on shape dynamics began with an analysis of exactly how and why the Newtonian picture fails. Some 50 years ago, Barbour picked up a book called The Science of Mechanics by Ernst Mach, the 19th-century Austrian physicist and philosopher. In the book, Barbour found Mach’s nuanced critique of Newton’s conception of space and time. (I interviewed Barbour at length for a 2008 radio documentary called Living on Oxford Time, which aired on the CBC.)

    Newton had imagined that space was laced with invisible grid-lines—something like the lines of latitude and longitude on a globe—that specify exactly where every object is located in the universe. Similarly, he imagined a “universal clock” that ticks away the hours, minutes, and seconds for all observers at a single, uniform rate. But Mach saw that this was wishful thinking. In real life, there are no grid lines and no universal clock.

    “What happens in the real universe is that everything is moving relative to everything else,” Barbour says. It is the set of relative positions that matters. Only that, Mach concluded, can serve as a foundation for physics. Einstein, as a youngster, was deeply influenced by Mach’s thinking. Now Barbour, too, was hooked—and he’s devoted his life to expanding on Mach’s ideas.

    Barbour isn’t alone. “Julian’s interpretation of Mach’s ideas are at the bedrock of what we’re doing,” Gryb says.

    About 16 years ago, Barbour started collaborating with an Irish physicist, Niall Ó Murchadha. Together they struggled to work out a theory in which only angles and ratios count. Size would have no absolute meaning. (To see why angles are important, think of a triangle: As it moves through space, we can misjudge its size, but can’t misjudge the angles of its three vertices; those angles, which determine the triangle’s shape, will not change.) Ideas like these—together with a good deal of advanced mathematics—would eventually evolve into shape dynamics.
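    The triangle example lends itself to a quick numerical sketch (the coordinates below are arbitrary, chosen only for illustration): under a uniform rescaling, the vertex angles and the ratios of side lengths are unchanged, even though every absolute length changes.

```python
import math

def angles(tri):
    """Interior angles (radians) of a triangle given as three 2D points."""
    out = []
    for i in range(3):
        a, b, c = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        out.append(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
    return out

def side(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

tri = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]            # arbitrary triangle
scaled = [(10 * x, 10 * y) for x, y in tri]           # rescale everything

# Angles (i.e., shape) survive the rescaling unchanged...
assert all(math.isclose(a, b) for a, b in zip(angles(tri), angles(scaled)))

# ...and so do ratios of side lengths, since any "ruler" scales too.
r1 = side(tri[0], tri[1]) / side(tri[0], tri[2])
r2 = side(scaled[0], scaled[1]) / side(scaled[0], scaled[2])
assert math.isclose(r1, r2)
```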

    Intriguingly, shape dynamics reproduces all of the peculiar effects found in general relativity: Massive objects still warp the space around them, clocks still run more slowly in a strong gravitational field, just like in Einstein’s theory. Physicists call this a “duality”—a different mathematical description, but the same end results.

    “In many ways, it’s just Einstein’s theory in a radically different description,” says Barbour. “It’s a radical reinterpretation.”

    Identical, Almost

    In most situations, shape dynamics predicts what Einstein’s theory predicts. “For the vast majority of physical situations, the theories are equivalent,” Gryb says. In other words, the two frameworks are almost identical—but not quite.

    Imagine dividing space-time up into billions upon billions of little patches. Within each patch, shape dynamics and general relativity tell the same story, Gryb says. But glue them all together, and a new kind of structure can emerge. For a concrete example of how this can happen, think of pulling together the two ends of a long, narrow strip of paper: Do it the usual way, and you get a loop; do it with a twist and you get a Möbius strip. “If you glue all the regions together to form a kind of global picture of space and time, then that global picture might actually be different.” So while shape dynamics may recreate Einstein’s theory on a small scale, the big-picture view of space and time may be novel.

    There is one kind of object where the shape dynamics picture differs starkly from the traditional view—the black hole. In the standard picture, a black hole forms when a massive star exhausts its nuclear fuel supply and collapses. If the star is large enough, nothing can stop that collapse, and the star shrinks until it’s smaller than its own event horizon—the point of no return for matter falling toward it. A black hole’s gravitational field is so intense that nothing—not even light—can escape from within the event horizon. At the black hole’s core, a singularity forms—a point where the gravitational field is infinitely strong, where space and time are infinitely curved. The unlucky astronaut who reaches this point will be spaghettified, as Stephen Hawking has put it, or burned to a crisp. Singularities don’t sit well with physicists. They’re usually seen as a sign that something is not quite right with the underlying theory.

    But according to shape dynamics’ proponents, the theory does away with singularities—a definite selling point. But the picture of black holes in shape dynamics is more radical than that. “It looks like black holes—in shape dynamics—are qualitatively different from what happens in general relativity,” Gryb says.

    At first, the astronaut approaching the black hole sees nothing that’s different from the Einsteinian description; outside of the event horizon, general relativity and shape dynamics give the same picture. But beyond the horizon, the story changes dramatically.

    Not only is there no singularity in a shape dynamics universe, there’s no head-long rush toward the place where you’d expect it to be. In fact, an astronaut who sails past the event horizon finds herself not in a shrinking world but an expanding one. The astronaut “comes into this new region of space—which was formed effectively by the collapse of a star—and is now free to wander around in that space.” You can think of the black hole as a wormhole into that new space, Gryb says.

    True, the astronaut can never exit back to the region outside the event horizon—but in this new space “he or she is free to wander around wherever they would like. And that’s a very different picture,” Gryb says. “But it’s still very early, and we’re trying to understand better what that means.”

    Is it a parallel world? “I wouldn’t necessarily call it that—it’s just a pocket of space that was created by the collapse of the star,” he says. It’s “basically the region between the horizon and the surface of the collapsed star. And that region gets larger and larger as the star starts to collapse more and more.”

    In other words, space—dare we say it—has been turned inside out. The region inside the event horizon, which had seemed tiny, now appears huge. What had been the surface of the collapsing star is now the “sky,” and rather than shrinking, it’s getting larger. The space inside the event horizon “is the mirror image” of the space that our traveller left behind, outside the horizon, Gryb says.

    In shape dynamics, falling into a black hole seems an awful lot like falling into a rabbit hole and discovering a strange new world on the other side, just like Alice did in Wonderland. The only problem is that we can’t see down the rabbit hole. Whatever may happen within the event horizon, we have no hope of observing it from the outside. Of course, you could jump into a black hole, and see what’s there—but you could never communicate your findings to those outside.

    Putting It To the Test

    But Gryb is hopeful. We’ve known since the 1970s that black holes don’t stick around forever—Stephen Hawking showed that, given enough time, they evaporate by a mechanism known as Hawking radiation. “It’s possible that the story about what happens on the other side of the horizon might change the story of what happens when the black hole evaporates,” he says. “If we can make definite predictions for this, then it might provide a way to test our scenario against general relativity.”

    Such tests are “just wild fantasies” at the moment, Gryb admits—but then, he notes, so are some of the predictions of other novel approaches, such as the recently-popular firewall hypothesis.

    The physicists that I spoke with—the few who have been following what the shape dynamics crew have been up to—are understandably cautious. This new picture of black holes is interesting, of course, but the critical question is whether it can be tested.

    “What do black holes look like in their picture?” says Astrid Eichhorn, a physicist at Imperial College London and also a visiting fellow at Perimeter. “Is it just mathematical differences? Or is there something we can really observe—for instance with the Event Horizon Telescope—where we can see a physical difference and make an observation or experiment to see which of the two [shape dynamics or general relativity] is correct?”

    Eichhorn has other concerns, too. “I’m skeptical of how this will work out, both on the conceptual side and also on the technical side,” she says. “It seems that, by giving up the space-time picture, they have a lot of technical complications in formulating the theory.” Figuring out how to handle quantum effects, for example, “seems to become much more challenging in their framework than it already is in the standard approach to quantum gravity.”

    Indeed, the word “quantum” rarely came up at the Perimeter workshop—although the hope is that the new framework will provide some insight into reconciling gravity and quantum theory.

    Gryb, for his part, admits that the problem of unifying these two pillars of modern physics is a daunting one—perhaps as daunting in shape dynamics as it has been in earlier approaches. “We’ve made progress on trying to understand what shape dynamics might have to say about quantum gravity—but we’ve also run into a bunch of dead ends.”

    Looking for Clarity

    Also attending the workshop was physicist Paul Steinhardt of Princeton University, known for his work on the inflation model of the Big Bang and on alternative cosmological models. Several times during the workshop, Steinhardt would call on a speaker to be more clear, more explicit. Like Eichhorn, Steinhardt is concerned about the seeming lack of anything quantum-mechanical in the shape dynamics picture. And of course there’s the issue of falsifiability—that is, putting the theory to the test.

    “My question was, what is scientifically meaningful that you expect to come out of this?” he says. “What’s different about this approach to gravity—as opposed to others—that you could test and experiment with and verify that would change our view about anything?”

    The answers he got during the workshop didn’t satisfy him. “Some people said, ‘The discipline is too young, so we don’t know yet. It might bring us something new.’ And my brain is thinking, ‘OK, good—come back when you’ve got that something.’ ”

    Others, meanwhile, spoke of the new ontology that shape dynamics offers. Ontology is a word that crops up frequently in the philosophy of science. It refers to the labeling of what’s “real” in a scientific theory, but it doesn’t necessarily change what you actually see when you observe nature. To Steinhardt, a change in ontology isn’t very exciting on its own. It’s just a way of describing something in a different way—a change of narrative, as it were, rather than a change in what we’d expect to see or measure. “Sometimes that’s useful,” Steinhardt says, “but it doesn’t obviously give you anything really new.”

    And yet, in the history of physics—and of cosmology in particular—changes in narrative sometimes seem rather profound. Think of the change from the Earth-centered cosmos of the ancient Greeks to the sun-centered cosmos of Copernicus. They were the same observations, but a radically different “story.”

    Still, Steinhardt sticks to his guns. Switching from an Earth-centered to a sun-centered description of the cosmos didn’t immediately bring any “new science.” Yes, it gave us a new story, but the new model wasn’t much better than the old one in terms of explaining the observed motion of the planets. That didn’t come until a half century later, when Johannes Kepler worked out the true shape of planetary orbits (they’re ellipses, it turns out, not circles). “I would have been skeptical of Copernicus—but I would have been really blown away by Kepler,” Steinhardt says.

    A Risky Pursuit

    The resistance to shape dynamics—like the skepticism that surrounds any new idea in physics—is par for the course. Science is, by its nature, a skeptical pursuit. The onus is on those who believe they’ve found something new to convince the community that they’ve really done so. In theoretical particle physics and cosmology, in particular, new ideas are always bubbling up like a tea kettle on the boil. There’s no way to read everything that gets published, so one reads only what seems genuinely promising.

    For those with the necessary physics background, Mercati has published a 67-page shape dynamics tutorial online; Gryb, meanwhile, has a short introductory essay on his Web page. There’s also a brief description of the theory in Smolin’s recent book, Time Reborn.

    Even for those who find shape dynamics compelling, pursuing it carries professional risk. Most of those working on the theory are young, and shape dynamics, at least for now, lies somewhat toward the fringes of mainstream physics.

    Flavio Mercati is currently a post-doc at Perimeter; he did his PhD at the Sapienza University of Rome. But when he first expressed an interest in working with Barbour on fundamental physics, his professors tried to talk him out of it. “They said, ‘Look, I suggest you don’t,’” he recalls. “Try something more down to earth.” Because of the vagaries of the job market for academic physicists, there’s pressure to steer clear of deep, foundational issues, Mercati says. Pursue matters that are too esoteric and “you pay a price, career-wise.” Most of these researchers have yet to secure tenured academic positions—and it’s not clear if working on shape dynamics helps or hinders that quest. (At least Mercati will soon have a book to show for his efforts—the first textbook on shape dynamics, to be published by Oxford University Press.)

    All of this leaves these young shape dynamics researchers poised uncomfortably on the knife-edge between excitement (a new paradigm!) and humility (we’re probably wrong).

    In the end, Barbour, Gryb, Mercati, and their colleagues are taking the only route possible—they’re going where their equations lead them.

    “We’re saying something totally different from what everyone else is saying,” Gryb says toward the end of our interview. “Can it possibly be right?”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 5:14 pm on December 28, 2015 Permalink | Reply
    Tags: , , NOVA,   

    From NOVA: “The Man Who Rewrote the Tree of Life” 2014 but Interesting and Important 



    30 Apr 2014
    Carrie Arnold

    Carl Woese may be the greatest scientist you’ve never heard of. “Woese is to biology what [Albert] Einstein is to physics,” says Norman Pace, a microbiologist at the University of Colorado, Boulder. A physicist-turned-microbiologist, Woese specialized in the fundamental molecules of life—nucleic acids—but his ambitions were hardly microscopic. He wanted to create a family tree of all life on Earth.

    Woese certainly wasn’t the first person with this ambition. The desire to classify every living thing is ageless. The Ancient Greeks and Romans worked to develop a system of classifying life. The Jewish people, in writing the Book of Genesis, set Adam to the task of naming all the animals in the Garden of Eden. And in the mid-1700s, Swedish botanist Carl von Linné published Systema Naturae, introducing the world to a system of Latin binomials—Genus species—that scientists use to this day.

    Carl Woese in his later years. Photo credits: Jason Lindsey/University of Illinois, Tim Bocek/Flickr (CC BY-NC-SA)

    What Woese was proposing wasn’t to replace Linnaean classification, but to refine it. During the late 1960s, when Woese first started thinking about this problem as a young professor at the University of Illinois, biologists were relying a lot on guesswork to determine how organisms were related to each other, especially microbes. At the time, researchers used the shapes of microbes—their morphologies—and how they turned food into energy—their metabolisms—to sort them into bins. Woese was underwhelmed. To him, the morphology-metabolism approach was like trying to create a genealogical history using only photographs and drawings. Are people with dimples on their right cheeks and long ring fingers all members of the same family? Maybe, but probably not.

    “If you wanted to build a tree of life prior to what Woese did, there was no way to put something together that was based upon actual data,” says Jonathan Eisen, an evolutionary microbiologist at the University of California Davis.

    Just as outward appearances aren’t the best way to determine family relations, Woese believed that morphology and metabolism were inadequate classifiers for life on Earth. Instead, he figured that DNA could sketch a much more accurate picture. Today, that approach may seem like common sense. But in the late 60s and early 70s, this was no easy task. Gene sequencing was a slow, tedious process. Entire PhDs were granted for sequencing just one gene. To create his tree of life, Woese would need to sequence the same gene in hundreds, if not thousands, of different species.

    So Woese toiled in his lab, sometimes with his postdoc George Fox but often alone, hunched over a light box with a magnifying glass, sequencing genes nucleotide by nucleotide. It took more than a decade. “When Woese first announced his results, I thought he was exaggerating at first,” Fox recalls. “Carl liked to think big, and I thought this was just another of his crazy ideas. But then I looked at the data and the enormity of what we had discovered hit me.”

    Woese and Fox published their results in 1977 in a well-respected journal, the Proceedings of the National Academy of Sciences. They had essentially rewritten the tree of life. But Woese still had a problem: few scientists believed him. He would spend the rest of his life working to convince the biological community that his work was correct.

    Animal, Vegetable, Mineral

    Following the publication of Linnaeus’s treatise in the 18th century, taxonomy progressed incrementally. The Swedish botanist had originally sorted things into three “kingdoms” of the natural world: animal, vegetable, and mineral. He placed organisms in their appropriate cubbyholes by looking at similarities in appearance. Plants with the same number of pollen-producing stamens were all lumped together, animals with the same number of teeth per jaw were grouped, and so on. With no knowledge of evolution and natural selection, he didn’t have a better way to comprehend the genealogy of life on Earth.

    The publication of [Charles] Darwin’s On the Origin of Species in 1859, combined with advances in microscopy, forced scientists to revise Linnaeus’s original three kingdoms to include the tiniest critters, including newly visible ones like amoebae and E. coli. Scientists wrestled with how to integrate microbial wildlife into the tree of life for the next 100 years. By the mid-20th century, however, biologists and taxonomists had mostly settled on a tree with five major branches: protists, fungi, plants, animals, and bacteria. It’s the classification system that many people learned in high school biology class.

    Woese and other biologists weren’t convinced, though. Woese, who had majored in physics at Amherst College in Massachusetts and received a PhD in biophysics from Yale in 1953, believed that there had to be a more objective, data-driven way to classify life. He was particularly interested in how microbes fit into the classification of life, which had escaped a rigorous genealogy up until that point.

    He arrived at the University of Illinois Urbana-Champaign as a microbiologist in the mid-1960s, shortly after James Watson and Francis Crick won the Nobel prize for their characterization of DNA’s double-helix form. It was the heyday of DNA. Woese was enthralled. He believed that DNA could unlock the hidden relationships between different organisms. In 1969, Woese wrote a letter to Crick, stating that:

    ” …this can be done by using the cell’s ‘internal fossil record’—i.e., the primary structures of various genes. Therefore, what I want to do is to determine primary structures for a number of genes in a very diverse group of organisms, on the hope that by deducing rather ancient ancestor sequences for these genes, one will eventually be in the position of being able to see features of the cell’s evolution….”

    This type of thinking was “radically new,” says Norman Pace, a microbiologist at the University of Colorado, Boulder. “No one else was thinking in this direction at the time, to look for sequence-based evidence of life’s diversity.”

    Evolution’s Timekeeper

    Although the field of genetics was still quite young, biologists had already figured out some of the basics of how evolution worked at the molecular level. When a cell copies its DNA before dividing in two, the copies aren’t perfectly identical. Mistakes inevitably creep in. Over time, this can lead to significant changes in the sequence of nucleotides and the proteins they code for. By finding genes with sites that mutate at a known rate—say 4 mutations per site per million years—scientists could use them as an evolutionary clock that would give biologists an idea of how much time had passed since two species last shared a common ancestor.
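The clock arithmetic described above can be sketched in a few lines. This is a toy illustration only: the helper name `divergence_time_myr`, the rate, and the counts below are all invented for the example, not taken from Woese’s data.

```python
# Toy molecular-clock estimate. All numbers are illustrative, not real data.

def divergence_time_myr(num_differences, num_sites, rate_per_site_per_myr):
    """Estimate time since two lineages shared a common ancestor.

    Mutations accumulate along BOTH lineages after the split, so the
    observed per-site divergence equals 2 * rate * time; solve for time.
    """
    divergence = num_differences / num_sites  # fraction of sites that differ
    return divergence / (2 * rate_per_site_per_myr)

# Hypothetical example: 120 differing sites out of 1,500 compared, with an
# assumed rate of 0.0004 substitutions per site per million years.
t = divergence_time_myr(120, 1500, 0.0004)
print(f"Estimated divergence: {t:.0f} million years ago")
```

The factor of two is the one subtlety: since both descendant lineages drift away from the ancestor independently, the differences between them grow at twice the per-lineage mutation rate.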

    To create his evolutionary tree of life, then, Woese would need to choose a gene that was present in every known organism, one that was copied from generation to generation with a high degree of precision and mutated very slowly, so he would be able to track it over billions of years of evolution.

    “This would let him make a direct measure of evolutionary history,” Pace says. “By tracking these gene sequences over time, he could calculate the evolutionary distance between two organisms and make a map of how life on Earth may have evolved.”

    Some of the most ancient genes are those coding for molecules known as ribosomal RNAs. In ribosomes, parts of the cell that float around the soupy cytoplasm, proteins and ribosomal RNA, or rRNA, work together to crank out proteins. Each ribosome is composed of large and small subunits, which are similar in both simple, single-celled prokaryotes and more complex eukaryotes. Woese had several different rRNA molecules to choose from in the various subunits, which are classified based on their length. At around 120 nucleotides long, 5S rRNA wasn’t big enough to use to compare lots of different organisms. On the other end of the spectrum, 23S rRNA was more than 2300 nucleotides long, making it far too difficult for Woese to sequence using the technologies of the time. The Goldilocks molecule—long enough to allow for meaningful comparisons but not too long and difficult to sequence—was 16S rRNA in prokaryotes and its slightly longer eukaryotic equivalent, 18S rRNA. Woese decided to use these to create his quantitative tree of life.

    His choice was especially fortuitous, Eisen says, because of several factors inherent in 16S rRNA that Woese couldn’t have been aware of at the time, including its ability to measure evolutionary time on several different time scales. Certain parts of the 16S rRNA molecule mutate at different speeds. Changes to 16S rRNA are, on the whole, still extremely slow (humans share about 50% of their 16S rRNA sequence with the bacterium E. coli), but one portion mutates much more slowly than the other. It’s as if the 16S rRNA clock has both an hour hand and a minute hand. The very slowly evolving “hour hand” lets biologists study the long-term changes to the molecule, whereas the more quickly evolving “minute hand” provides a more recent history. “This gives this gene an advantage because it lets us ask questions about deep evolutionary history and more recent history at the same time,” Eisen says.
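The “hour hand / minute hand” idea amounts to scoring sequence identity separately over a conserved region and a variable region. The sketch below uses invented sequences purely to illustrate the comparison; real analyses align full 16S rRNA genes.

```python
# Toy illustration of a two-speed molecular clock: measure identity
# separately over a slow-evolving (conserved) and a fast-evolving
# (variable) region. Sequences are invented for illustration.

def percent_identity(a, b):
    """Fraction of aligned positions that match (assumes equal length)."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

# Hypothetical aligned regions from two organisms.
conserved_1, conserved_2 = "GGAUCACCUCCU", "GGAUCACCUCCU"  # identical: deep past
variable_1, variable_2 = "AUUCGAAGCAUC", "AUGCGUAGCUUC"    # diverged: recent past

print(f"conserved region identity: {percent_identity(conserved_1, conserved_2):.2f}")
print(f"variable region identity:  {percent_identity(variable_1, variable_2):.2f}")
```

The conserved region still matches perfectly while the variable region has drifted, which is exactly what lets one gene answer questions about two very different depths of evolutionary time.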

    Letter by letter

    Selecting the gene was just Woese’s first challenge. Now he had to sequence it in a variety of different organisms. In the late 60s and early 70s, when Woese began his work, DNA sequencing was far from automated. Everything, down to the last nucleotide, had to be done by hand. Woese used a method for cataloging short pieces of RNA, developed in 1965 by British scientist Frederick Sanger, which used enzymes to chop RNA into small pieces. These small pieces were sequenced, and then scientists had to reassemble the overlapping pieces to determine the overall sequence of the entire molecule—a process that was tedious, expensive, and time-consuming, but one that a workhorse like Woese saw as a minor annoyance, Fox says. “All he cared about was getting the answer.”

    Woese started with prokaryotes, the single-celled organisms that were his primary area of interest. He and his lab started by growing bacteria in a solution of radioactive phosphate, which the cells incorporated into backbones of their RNA molecules. This made the 16S rRNA radioactive. Then, Woese and Fox extracted the RNA from the cells and chopped it into smaller pieces using enzymes that acted like scissors. The enzymatic scissors would only cut at certain sequences. If a sequence was present in one organism but missing in a second, the scissors would pass over the second one’s sequence. Its fragment would be longer.

    Since RNA’s sugar-phosphate backbone is negatively charged, the researchers could use a process known as electrophoresis to separate the different length pieces. As electricity coursed through gels containing samples, it pulled the smaller, lighter bits farther through the gels than the longer, heavier chunks. The result was distinct bands of different lengths of RNA. Woese and Fox then exposed each gel to photographic paper over several days. The radioactive bands in the gel transferred marks to the paper. This created a Piet Mondrian-esque masterpiece of black bands on a white background. Each different organism left its own mark. “To Carl, each spot was a puzzle that he would solve,” Fox says.

    After developing each image, Woese and Fox returned to the gel and neatly cut out each individual blotch that contained fragments of a certain length. They then chopped up these fragments with another set of enzymes until they were about five to 15 nucleotides long, a length that made sequencing easier. For some of the longer fragments, it took several iterations of the process before they were successfully sequenced. The sequences were then recorded on a set of 80-column IBM punch cards. The cards were then run through a large computer to compare band patterns and RNA sequences among different organisms to determine evolutionary relationships. At the beginning, it took Woese and Fox months to obtain a single 16S rRNA fingerprint.

    “This process was a huge breakthrough,” says Peter Moore, an RNA chemist at Yale University who worked with Woese on other research relating to RNA’s structure. “It gave biologists a tool for sorting through microorganisms and giving them a conceptual way to understand the relationship between them. At the time, the field was just a total disaster area. Nobody knew what the hell was going on.”

    RNA is so fundamental to life that some scientists think it’s the spark that started it all. To learn more about RNA, visit NOVA’s RNA Lab.

    By the spring of 1976, Woese and Fox had created fingerprints of a variety of bacterial species when they turned to an oddball group of prokaryotes: methanogens. These microbes produce methane when they break down food for energy. Because even tiny amounts of oxygen are toxic to these prokaryotes, Woese and Fox had to grow them under special conditions.

    After months of trial and error, the two scientists were finally able to obtain an RNA fingerprint of one type of methanogen. When they finally analyzed its fingerprint, however, it looked nothing like any of the other bacteria Woese and Fox had previously analyzed. All of the previous bacterial gels contained two large splotches at the bottom. They were entirely absent from these new gels. Woese knew instantly what this meant.

    To fellow microbiologist Ralph Wolfe, who worked in the lab next door, Woese announced, “I don’t even think these are bacteria, Wolfe.”

    He dropped the full bombshell on Fox. “The methanogens didn’t have any of the spots he was expecting to see. When he realized this wasn’t a mistake, he just went nuts. He ran into my lab and told me we had discovered a new form of life,” Fox recalls.

    The New Kingdom

    The methanogens Woese and Fox had analyzed looked superficially like other bacteria, yet their RNA told a different story, sharing more in common with nucleus-containing eukaryotes than with other bacteria. After more analysis of his RNA data, Woese concluded that what he was tentatively calling Archaea (from the Greek for “ancient”) wasn’t a minor twig on the tree of life, but a new main branch. It wasn’t just Bacteria and Eukarya any more.

    To prove to their critics that these prokaryotes really were a separate domain on the tree of life, Woese and Fox knew the branch needed more than just methanogens. Fox knew enough about methanogen biology to know that their unique RNA fingerprint wasn’t the only thing that made them strange. For one thing, their cell walls lacked a mesh-like outer layer made of peptidoglycan. Nearly every other bacterium Fox could think of contained peptidoglycan in its cell wall. Then he recalled a strange fact he had learned as a graduate student: another group of prokaryotes, the salt-loving halophiles, also lacked peptidoglycan.

    Grand Prismatic Spring in Yellowstone National Park is home to many species of thermophilic archaea.

    Fox turned to the research literature to search for other references to prokaryotes that lack peptidoglycan. He found two additional examples: Thermoplasma and Sulfolobus. Other than the missing peptidoglycan, these organisms and the methanogens seemed nothing alike. Methanogens were found everywhere from wetlands to animal digestive tracts, halophiles flourished in salt, Thermoplasma liked things really hot, and Sulfolobus turned up in volcanoes and hot, acidic springs.

    Despite their apparent differences, they all metabolized food in the same, unusual way—unlike anything seen in other bacteria—and the fats in the cell membrane were alike, too. When Woese and Fox sequenced the 16S rRNA of these organisms, they found that these prokaryotes were most similar to the methanogens.

    “Once we had the fingerprints, it all fell together,” Fox says.

    Woese believed his findings were going to revolutionize biology, so he organized a press conference when the paper was published in PNAS in 1977. It landed Woese on the front page of the New York Times, and created animosity among many biologists. “The write-ups were ludicrous and the reporters got it all wrong,” Wolfe says. “No biologists wanted anything to do with him.”

    It wasn’t just distaste for what looked like a publicity stunt that was working against Woese. He had spent most of the last decade holed up in his third floor lab, poring over RNA fingerprints. His reclusive nature had given him the reputation of a crank. It also didn’t help that he had single-handedly demoted many biologists’ favorite species. Thanks to Woese, Wolfe says, “Microbes occupy nearly all of the tree. Then you have one branch at the very end where all the animals and plants were. And the biologists just couldn’t believe that all the plants and all the animals were really just one tiny twig on one branch.”

    Although some specialists were quick to adopt Woese’s new scheme, the rest of biology remained openly hostile to the idea. It wasn’t until the mid-1980s that other microbiologists began to warm to the idea, and it took well over another decade for other areas of biology to follow suit. Woese had grown increasingly bitter that so many other scientists were so quick to reject his claims. He knew his research and ideas were solid. But he was left to respond to what seemed like an endless stream of criticism. Shying from these attacks, Woese retreated to his office for the next two decades.

    “He was a brash, iconoclastic outsider, and his message did not go down well,” says Moore, the Yale RNA chemist.

    Woese’s cause wasn’t helped by his inability to engage critics in dialogue and discussion. Both reticent and abrupt, he preferred his lab over conferences and presentations. In place of public appearances to address his detractors, he sent salvos of op-eds and letters to the editor. Still, nothing seemed to help. The task of publicly supporting this new tree of life fell to Woese’s close colleagues, especially Norman Pace.

    But as technology improved, scientists began to obtain the sequences of an increasing number of 16S rRNAs from different organisms. More and more of their analyses supported Woese’s hypothesis. As sequencing data poured in from around the world, it became clear to nearly everyone in biology that Woese’s initial tree had, in fact, been correct.

    Now, when scientists try to discover unknown microbial species, the first gene they sequence is 16S rRNA. “It’s become one of the fundamentals of biology,” Wolfe says. “After more than 20 years, Woese was finally vindicated.”

    Woese died on December 30, 2012, at the age of 84 of complications from pancreatic cancer. At the time of his death, he had won some of biology’s most prestigious awards and had become one of the field’s most respected scientists. Thanks to Woese’s legacy, we now know that most of the world’s biodiversity is hidden from view, among the tiny microbes that live unseen in and around us, and in them, the story of how life first evolved on this planet.

    See the full article here.


  • richardmitnick 9:47 pm on December 20, 2015 Permalink | Reply
    Tags: , , NOVA   

    From NOVA: “The Power Plants That Can Reverse Climate Change” 



    09 Dec 2015 [Just appeared]
    Tim De Chant

    It’s another steamy day on the outskirts of Houston, Texas. The temperature is hovering just above 90˚F, and my car’s air conditioning is struggling to keep up. The engine, probably laboring under the strain of the AC compressor, is groaning loudly as I hurtle down a backroad past cattle ranches and cotton fields. I’m on my way to see a promising first step in what might be our best hope for reversing climate change—not just reducing our carbon emissions, but removing CO2 from the atmosphere.

    Suburban Houston is perhaps the least likely place to kick off the carbon-negative revolution. Sprawling over hundreds of square miles of south Texas’s coastal plains, the metropolitan region is bound together by cheap gas and massive ten-lane expressways flanked by three-lane access roads that feed strip mall after strip mall, each indistinguishable from the last, their parking lots brimming with full-size trucks and SUVs.

    But soon, over the long horizon, under a hazy, cotton-candy sky, the near future resolves itself. Rising beneath the four towering smokestacks of W.A. Parish—the nation’s largest fossil fuel plant—is a more modest tangle of beams and pipes known as Petra Nova. When finished, NRG’s newest five-acre chemistry kit will draw a portion of the exhaust from Unit 8, a 610-megawatt coal-fired electric generator, remove 90% of its carbon dioxide, compress the greenhouse gas, and send it to be stored in an oilfield some 80 miles to the southwest.

    Unit 8 at NRG’s W.A. Parish plant will soon be hooked up to a carbon capture system.

    Petra Nova will capture 1.6 million tons of CO2 annually, and by itself, it’s not going to do much to alleviate climate change. But the technology it uses could someday—soon perhaps—transform the dirtiest coal power plants into terraforming machines that could rein in today’s runaway CO2 levels.

    In other words, by the end of this century, this coal plant, or one very much like it, could be saving the planet. But can we build enough of them in time?

    Capturing Carbon

    The road to the Petra Nova field office is lined with imposing steel cubes and half-finished metal frames. Cherry pickers hoist workers to dizzying heights as portable generators and compressors thrum below. I park my car and step out into the sweltering sun where I’m greeted by John Ragan, president of both NRG’s Gulf Coast region and the company’s Carbon360 business group. Ragan is a veteran of the Gulf Coast oil and gas industry, and even in his crisp white shirt and pressed slacks he seems perfectly comfortable in the heat, humidity, and organized chaos that define Southern construction sites. After a brief chat inside the mercifully air conditioned field office, we head out for a tour of the plant with Ragan, Jim Tharp, senior director at NRG overseeing construction here, and Dave Knox, senior director of communications for the company.

    John Ragan, president of NRG’s Carbon360 business group, explains how Petra Nova’s CO2 scrubber will work once it’s assembled.

    Coal-fired power plants may seem imposingly complex from the roadside, but they’re surprisingly simple. Pulverized coal is fed into the boiler and burned, turning water into steam, which powers a turbine that turns a generator. Even the pollution control equipment is straightforward. In one chamber, giant bags—similar to those in a vacuum cleaner—trap particulates from the exhaust gas. In another, limestone slurry is mixed with the exhaust to react with sulfur dioxide, which produces gypsum.

    Carbon capture systems are just as simple. At Petra Nova, exhaust gas flows into a 320-foot-tall tower packed with a dense thicket of metal that’s drenched in an amine solution. The CO2 reacts with water and the NH2 of the amine to produce bicarbonate (HCO3–). The solution is then pumped to a 180-foot-tall regenerator—delivered from Korea last week in one piece—which heats up the amine to release the CO2. The gas is then compressed and injected underground into an oil field to push out more crude. (When the goal isn’t oil production, it’s stored in deep saline aquifers.)
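The capture-and-release cycle described above can be summarized as one reversible reaction. This is a simplified sketch for a generic amine (written here as R-NH2); the actual solvent formulation is proprietary, and some amines bind CO2 as a carbamate rather than as bicarbonate.

```latex
\mathrm{CO_2} + \mathrm{H_2O} + \mathrm{R{-}NH_2} \;\rightleftharpoons\; \mathrm{R{-}NH_3^{+}} + \mathrm{HCO_3^{-}}
```

In the absorber tower, the cool solution drives the reaction to the right, pulling CO2 out of the exhaust; in the regenerator, heat drives it back to the left, releasing a concentrated CO2 stream and recycling the amine.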

    Exhaust gas from W.A. Parish’s Unit 8 flows through a duct to the CO2 scrubber where it reacts with an amine solution. CO2 is then released from the amine solution in the regenerator, which is powered by a natural gas power plant.

    Work on Petra Nova started in 2009 after the company was awarded a $167 million grant from the Department of Energy, a little more than 10% of the demonstration plant’s estimated $1 billion price tag. “That really gave us the momentum to move forward,” Ragan says.

    That momentum would soon be tested. In the early days of the Obama administration, when Democrats still controlled the House of Representatives and had a simple majority in the Senate, it was a foregone conclusion that CO2 emissions would be regulated in some fashion, most likely through a cap-and-trade program where utilities and other polluters could swap or buy emissions permits to stay under a legally mandated cap. A bill was introduced in the House, but it never made it to the Senate floor.

    “When we started planning this, everyone assumed there would eventually be a price on carbon. Then there wasn’t,” Ragan says. “Our CEO David Crane told us to figure out how to make this work without a price on carbon.”

    John Ward, managing director of Vivid Economics, a London-based consultancy, says that the lack of a price on carbon has scuttled a lot of similar projects. “A large part of what’s holding carbon capture and storage back is around the carbon price,” he says. For the technology to succeed, he adds, the price needs to be “sufficiently strong and reliable to make really quite significant capital investments.”

    For NRG, the trick was finding someone willing to pay for the excess CO2. This being Texas, there was a nearby oilfield that could use the gas to squeeze more crude from the rock. The partnership will make Petra Nova profitable, but burning the extra oil it helps extract will counter the climate benefit of the CO2 it stores.

    The Petra Nova demonstration plant, then, represents something of a hedge. For now, without regulatory or economic incentives to capture the carbon simply for storage, the project doesn’t make financial sense for NRG. But Ragan believes that’s likely to change. “We’re going to live in a carbon constrained world,” he says. “We have to do something with our existing coal power plants.”

    From Neutral to Negative

    There is something else that power companies can do with their existing coal power plants, and that’s burn biomass. While burning coal releases CO2 that plants captured millions of years ago, burning fresh biomass returns to the atmosphere CO2 that plants absorbed only recently. The idea isn’t new—power companies have been burning biomass for more than 20 years. Currently, there’s about 16.1 GW of biomass generating capacity in the U.S., or about 1.4% of the total. Some of that is burned in pure biomass plants, the rest in so-called co-fired plants that mix biomass with fossil fuels.

    From a climate perspective, biomass energy is appealing because it burns plants, which suck CO2 out of the atmosphere as a part of everyday life. We don’t need to build specialized structures to capture CO2—we can let plants do it for us.

    Wood pellets are frequently used as sources of biomass for power plants.

    When done right, burning biomass is almost carbon neutral: the CO2 it emits is balanced by the CO2 growing plants absorb. The caveat is that the biomass has to be appropriately harvested or grown, with a focus on organic waste and quick-growing plants. Slow-growing hardwoods and old-growth forests are definitely out of the question. “If you cut down a 100 year old rainforest, then it could take up to 400 years to pay back that debt, to make up for all that biomass that was standing perfectly happy in the Amazon,” says Daniel Kammen, director of the Renewable and Appropriate Energy Laboratory at the University of California, Berkeley. Biomass harvested for energy also has to be replaced with new plantings—if not, then burning biomass is worse than coal.
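Kammen’s payback argument is a simple ratio: the carbon released by clearing, divided by the rate at which regrowth reabsorbs it. The quantities below are invented placeholders chosen only to land in the ballpark he quotes, not real forestry data.

```python
# Toy carbon-debt payback estimate for harvesting standing biomass.
# Both numbers are illustrative placeholders, not measurements.

def payback_years(carbon_debt_tons_per_ha, regrowth_tons_per_ha_per_year):
    """Years of regrowth needed to reabsorb the carbon released by clearing."""
    return carbon_debt_tons_per_ha / regrowth_tons_per_ha_per_year

# Suppose clearing a hypothetical old stand releases 200 t of carbon per
# hectare, and regrowth sequesters 0.5 t per hectare per year: the debt
# takes four centuries to repay, the order of magnitude Kammen cites.
print(payback_years(200, 0.5))  # -> 400.0
```

The same arithmetic explains why waste and fast-growing crops fare so much better: a small debt repaid quickly keeps the fuel close to carbon neutral.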

    It’s tempting to think of biomass as an easy fix—that we could switch the grid from fossil fuels to biomass—but it would place enormous demands on both human ingenuity and life on Earth. “We would need something like a quarter of all the net primary production, the total plant growth on the Earth’s surface,” says Chris Field, director of the Carnegie Institution’s Department of Global Ecology. “That’s completely unrealistic.”

    “But what’s a meaningful level?” he continues. “Would a meaningful level be at one, two, five percent of the global energy system? I think the answer is that we’re looking at a 21st century energy system that’s likely to have lots and lots of components so that contributions of a few percent will be meaningful. There’s every reason to think that biomass should be considered at that kind of scale.”

    Even utilizing 5% of all plant growth—about 12.3 gigatons, an amount approaching the productivity of the world’s farms—won’t do much to tamp down carbon emissions. In fact, biomass is not quite carbon neutral because it still has to be harvested and hauled before it’s combusted, and right now, both require fossil fuels. On balance, burning biomass still releases CO2, just less than burning coal.
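To put the article’s own numbers together: if 5% of global plant growth is about 12.3 gigatons, the implied total is easy to back out. A quick sketch (only the 5% share and the 12.3-gigaton figure come from the text; the script just inverts them):

```python
# Back out total global plant growth from the figures quoted above:
# 5% of all net primary production is roughly 12.3 gigatons of biomass.
share = 0.05
biomass_gt = 12.3

total_npp_gt = biomass_gt / share
print(f"Implied global plant growth: ~{total_npp_gt:.0f} gigatons per year")  # ~246
```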

    But the good news is that biomass power plants, just like their coal cousins, release their CO2 in conveniently concentrated streams of hot gas. And as projects like Petra Nova and others are demonstrating, we know how to capture and store CO2 from those emissions.

    So to start removing CO2 from the atmosphere—and possibly begin reversing climate change—all we have to do is combine them. “The innovation is putting the two together,” Kammen says.

    Best of Both Worlds

    Scrubbing CO2 from power plant emissions is based on old technology. The amine-based process used at Petra Nova and other carbon capture and sequestration (CCS) plants has been around for a long time. “It was patented in the 1930s,” says Howard Herzog, a senior research engineer at MIT’s Energy Initiative. “The process has improved since then, but the fundamentals are basically the same. You’ve got something that’s been around 80 years and developed. A lot of the issues have been worked out.”

    The idea of combining bioenergy with CCS emerged in the early 1990s, when the original goal was to make coal power stations carbon neutral. Later in the decade, other scientists started exploring how to remove CO2 directly from the atmosphere. It wasn’t until 2001 that Kenneth Möllersten, an engineer with the Swedish Energy Agency, and Jinyue Yan, a professor at the Swedish Royal Institute of Technology, put two and two together. Rather than push the limits of chemistry to capture CO2 from the open air, they realized, we could let trees, grasses, and other plants do the hard work. All we’d need to do is collect and burn them, capture the CO2, and find somewhere to store it for a long, long time.

    A crane lowers a piece of the CO2 scrubber into place at Petra Nova.

    Burying the CO2 from power plants deep underground has some inherent benefits. Unlike forests, which are also excellent long-term carbon sinks, stored CO2 can’t easily be rereleased. Once buried, it isn’t likely to surface for thousands, perhaps millions, of years. Today, we have no way of guaranteeing that a forest will be left standing for that long. Plus, all plants eventually die and decay, releasing their carbon. Bioenergy with CCS is a best-of-both-worlds approach. With it, we can take advantage of plants’ natural ability to capture CO2 and then use a proven technology to lock those emissions away.

    “Neither piece of what we’re talking about, individually, is technically hard,” Kammen says. “But then when you start looking at it as a system, then it gets interesting.”

    Searching for Supplies

    Recently, Kammen and a handful of his students decided to see if, by 2050, they could cut carbon emissions to 145% below 1990 levels for a chunk of North America known as the Western Interconnection—the regional power grid that supplies the Western U.S., the Canadian provinces of Alberta and British Columbia, and part of Baja California in Mexico. Essentially, they would be transforming the region from one that produces CO2 pollution into one that removes it from the atmosphere.

    They started their simulation by replacing nearly all fossil fuel power sources with renewables, including wind, solar, hydro, and geothermal. Then they ramped up biomass energy with CCS, also known as BECCS, to provide an always-on source of power that also removed CO2 from the atmosphere. By 2050, they were using nearly all available biomass supplies, which included everything from trash to orchard waste and wood from fast growing trees.

    Biomass energy’s insatiable demand for combustible material is usually where it hits a roadblock. There’s only so much biomass to go around, and collecting and trucking it to various power plants will require entirely new supply chains that don’t currently exist. “It becomes increasingly expensive to supply large quantities of biomass as opposed to smaller quantities,” says Ed Rubin, a professor of engineering and public policy at Carnegie Mellon University. “Most biomass facilities are relatively small—an order of magnitude or sometimes two orders of magnitude smaller than a typical coal-fired plant. It’s a supply issue.”

    The current cost of supplying biomass is what’s kept NRG from co-firing any of their 19 coal plants with biomass. “We have explored biomass options at a number of plants across our fleet,” says Knox, the senior director of communications. “The problem we have encountered is getting a guaranteed and consistent supply that is close enough to the plant that you do not add to your carbon footprint through carbon-intensive trucking of the biomass.”

    There’s also the danger that if BECCS is a runaway success it will start eating into food supplies. “We’re going to have to feed 9–10 billion people by 2050,” says Pete Smith, a professor of plant and soil science at the University of Aberdeen. “People are asking, is this the best use of land when we’ve got all these additional mouths to feed?”

    Still, there are sources of biomass that can be used responsibly. “The clearest pool of biomass that’s available is waste products in agriculture and forestry,” says Field, the Carnegie Institution director. “That’s hundreds of millions of tons. It’s not a trivial quantity, but it’s not enough to dominate the energy system. Whether there’s more biomass available really depends on one thing, critically: how much we’re able to increase agricultural yields in the years ahead.”

    Kammen’s study lists a variety of biomass options that wouldn’t eat into the food supply, from municipal waste to sawdust and dead corn stalks. At its most aggressive, the simulation also relies on wood and switchgrass grown specifically for BECCS, but those represent only a little more than 10% of the total biomass energy.

    The post-harvest remains of corn stalks, known as corn stover, is a potential source of biomass.

    Still, to roll out BECCS on a wide scale, the demands for land could be massive, especially if only dedicated crops were used. “This would be on an order of magnitude of several hundred megahectares of land,” says Sabine Fuss, head of sustainable resource management and global change at Mercator Research Institute on Global Commons and Climate Change in Berlin. A paper published this week by Smith, Fuss, and others suggests that relying on dedicated crops—no municipal, agricultural, or forestry waste—would consume between 320–970 megahectares of land. Currently, there are about 1,600 megahectares, or about 4 billion acres, of land under cultivation.
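The hectare-to-acre figures above are easy to verify. A small sketch (the 2.471 acres-per-hectare factor is the standard conversion; the megahectare figures come from the text):

```python
# Sanity-check the land-area figures: megahectares to billions of acres,
# and the dedicated-crop BECCS range as a share of land under cultivation.
ACRES_PER_HECTARE = 2.471  # standard conversion factor

def mha_to_billion_acres(mha):
    """Convert megahectares (millions of hectares) to billions of acres."""
    return mha * 1e6 * ACRES_PER_HECTARE / 1e9

print(f"{mha_to_billion_acres(1600):.1f} billion acres under cultivation")  # ~4.0

low, high = 320 / 1600, 970 / 1600
print(f"Dedicated BECCS crops could need {low:.0%}-{high:.0%} of that area")  # 20%-61%
```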

    Then there’s the issue of transporting the resulting CO2 to a storage location. “We need an infrastructure in place to do that,” Smith says. “That’s probably an infrastructure on the size that we currently use to move gas and oil.”

    Those are big hurdles, but none of the experts I spoke with saw them as insurmountable. “This is something that we could, in limited amounts, do yesterday,” Kammen says. “We’re already doing all of it, just not in a coordinated way.”

    Key Piece of the Puzzle

    NRG expects their Petra Nova carbon capture project to be operational sometime next year, which means it will have taken about six years to move from planning to completion. In terms of large capital projects based on new technology, that’s relatively quick, and as more are built, we’ll probably get faster at it.

    But can we build them quickly enough? Nearly everyone I spoke with said the optimal time to deploy BECCS was yesterday. Realistically, though, Petra Nova’s timeline seems about right. “It’s probably instructive to look backwards a bit before you look forward to see how quickly other kinds of technologies have been deployed absent a true wartime footing,” Rubin says. New natural gas power plants, he says, take about three to four years to build, while new coal plants take about eight to ten.

    Start to finish, Petra Nova will take about six years to plan, engineer, and build.

    The International Energy Agency estimates that about $4 trillion will have to be spent building CCS facilities between now and 2050. That may seem like a vast sum, but consider that countries around the world currently subsidize fossil fuels at more than quadruple that amount, or about $490 billion every year. If that figure holds constant for the next 35 years, we’ll have spent $17 trillion of public money supporting oil and gas. (They receive the vast majority of subsidies—coal only gets about $3 billion per year.)
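The arithmetic behind those figures is worth spelling out. A quick check (the $490-billion-per-year and $4-trillion figures are from the text; the 35-year horizon assumes the subsidy rate holds constant through 2050):

```python
# Cumulative fossil fuel subsidies vs. the IEA's estimated CCS build-out cost.
ANNUAL_SUBSIDY = 490e9   # dollars per year, worldwide
YEARS = 35               # roughly now through 2050
CCS_COST = 4e12          # IEA estimate for CCS facilities through 2050

total_subsidy = ANNUAL_SUBSIDY * YEARS
print(f"Cumulative subsidies: ${total_subsidy / 1e12:.2f} trillion")   # $17.15 trillion
print(f"CCS cost as a share of that: {CCS_COST / total_subsidy:.0%}")  # 23%
```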

    We may not have to build entirely new power plants, either, but just add CCS to existing ones. Here again, the Petra Nova retrofit can be instructive. “One way you could ease into a BECCS environment is looking at coal fired power plants and beginning to increase the fraction of biomass you burn in those,” Field says. Many coal plants can already burn small amounts of biomass with few if any modifications. “If you’re cofiring those with biomass, that provides a possibility of a carbon negative component of a system that you can scale in in a very gradual way so you’re beginning to make a difference right away. You could think about having thousands of power plants that are running 5–10% biomass in a way that really begins to change the equation and doesn’t require building any new power plants.”

    But, Kammen is quick to caution, “we’re not going to solve the climate story with BECCS.” Pete Smith agrees, seeing “BECCS, rather than a magic bullet, being another piece—and maybe another significant piece—of a jigsaw of future possibilities.”

    We’ll still have to move most of our power supply to renewables like wind and solar, but BECCS seems too promising to overlook. “The big upside of BECCS is that you have something which solar, wind, and geothermal can’t get you, and that is an ability to make up for our past emissions and draw carbon numbers down,” Kammen says. “We’re so far above a reasonable trajectory that we’re going to need carbon negative.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 4:37 pm on December 18, 2015 Permalink | Reply
    Tags: , , NOVA, Water on Enceladus?   

    From NOVA: “Saturn’s Moon, Enceladus, Shows Signs of Hydrothermal Activity” 



    Saturn’s ice moon may not be so cold beneath the surface.

    In fact, a primordial world could be rumbling in the depths of its massive ocean.

    Last year, scientists discovered Enceladus’s gravitational asymmetry—the pull of gravity is weaker near the south pole—but they were puzzled that the gravity there, while weaker, was not as weak as the large depression in the landscape would suggest for purely icy terrain. They discerned that water, which is 8% denser than ice, must be making up the difference. Based on that information, they calculated that a sea up to six miles deep probably exists 20 to 25 miles beneath the surface.

    An artist’s impression of the interior of Saturn’s moon Enceladus, showing clashes between hard rock and water

    What’s more, they had evidence that this water was coming into direct contact with Enceladus’ rocky core: NASA’s Cassini spacecraft found silicon-rich nanoparticles escaping from Saturn’s ring system.

    NASA Cassini Spacecraft

    Scientists studied their spectra and found that they were made of silicon dioxide, or silica, a hallmark of the water-rock interactions found on Earth. They postulated that hydrothermal vents may be generating some of these particles, making for chemical conditions conducive to life.

    The particles originated in Saturn’s E-ring, which is also home to ice particles known to come from Enceladus. Here’s Amina Khan, writing for the Los Angeles Times:

    “The scientists then ran experiments in the lab to determine how such silica particles came to be. With the particles’ particular makeup and size distribution, they could only have formed under very specific circumstances, the study authors found, determining that the silica particles must have formed in water that had less than 4% salinity and that was slightly alkaline (with a pH of about 8.5 to 10.5) and at temperatures of at least 90 degrees Celsius (roughly 190 degrees Fahrenheit).”

    The heat was likely being generated in part by tidal forces as Saturn’s gravity kneads its icy moon. (The tidal forces are also probably what open the cracks in its surface that vent the water vapor into space.)

    Somewhere inside the icy body, there was hydrothermal activity—salty warm water interacting with rocks. It’s the kind of environment that, on Earth, is very friendly to life.

    The researchers say that since these silica particles haven’t yet clumped together, they must be recent refugees of Enceladus. In other words, this hydrothermal activity is likely still happening now—knowledge that greatly expands astrobiologists’ outlook in the search for life beyond Earth. Still, they’ll have to find out how long this activity has been going on, and how stable it is, in order to consider Enceladus the top contender.


  • richardmitnick 6:02 pm on December 11, 2015 Permalink | Reply
    Tags: , , NOVA   

    From NOVA: “Newly Discovered ‘Stop Neurons’ Could Save Your Life” 



    11 Dec 2015
    Margaux Phares

    Neuroscientists have known since the 1960s which nerves tell a person’s legs to step off the curb to cross the street. But until now, they had no idea which ones hold the person back to avoid getting hit by a car.

    By stimulating nerve cells with light, a group of neuroscientists at the Karolinska Institutet in Stockholm both defined the aptly named “stop neurons” and saw how they work in walking mice. The team used a “bottom-up approach” to explore how the spinal cord, lower in the chain of neural command, communicates with the brain stem, which is higher in the chain.

    “Stop neurons” tell our bodies to stop moving. Photo credits: O. Bendorf/Flickr (CC BY-NC-ND), Julien Bouvier.

    Nerve cells that give rise to other functions we do not consciously think about, like breathing and keeping balance, are located in the same area—effectively, as coauthor Ole Kiehn puts it, “one big mess of integrated networks.”

    To find the stop neurons, Kiehn and Julien Bouvier first modified a mouse’s brain stem to be sensitive to light stimulation, then sliced it into smaller and smaller segments, removing parts until the remaining tissue no longer responded to light. From this, the researchers pinpointed a cluster of “stop neurons” that extend down into the spinal cord and, when stimulated, tell it to halt locomotion.

    What particularly surprised Bouvier was that “those stop cells are excitatory.” In order to stop motion, the cells need to be stimulated. It’s not enough to simply interrupt the locomotion signal.

    Bouvier compares it to driving a car. As long as you press the gas pedal, your car will move forward. Going into the study, scientists thought that releasing the pedal would eventually stop the car, or gradually mute the instructions to keep walking. “But what we found was a brake pedal used only to stop,” Bouvier said.

    Watching the pathway unfold in mice supported their earlier findings. When the researchers pulsed light on stop neurons, the mice came to a stop. Light did not have an effect on mice that had blocked stop neurons—instead of stopping, they kept walking.

    A mouse rigged for an optogenetics experiment is given a blue activation signal.

    Interestingly, the mice that could stop did so smoothly. They finished the step they were about to do. This behavior is very different from freezing, an all-over muscle contraction in response to fear. Bouvier said the smooth stopping allows animals to “keep posture,” making them less likely to fall or lose balance.

    The study, published in the November issue of Cell, is a step toward understanding how the body controls marching orders at the neural level and beyond the muscular level. Thomas Knopfel, a professor of neuroscience at Imperial College London, thinks Bouvier’s study “might be a step forward with medical problems associated with the brain and spinal cord.”

    Leg paralysis from a damaged nerve can disrupt communication between the brain and spinal cord. Knopfel speculated that an implantable device could be connected to this injured nerve, which could help patch this faulty circuit and help a patient learn to move his or her leg again. The same technology the researchers at Karolinska used—called optogenetics—could be used to make this device.

    Kiehn speculated that stop neuron activity might contribute to motor symptoms of Parkinson’s disease. One common symptom of late stage Parkinson’s is an involuntary “freezing gait.” Kiehn thinks this could be a sign that the locomotion “start signal” does not work properly, or that stop neurons may be less active than normal. Future tests will involve trying to identify these neurons in diseased mice.

    Bouvier has further questions about stop neurons and about how the brain stem controls the spinal cord. Among them: Are these neurons a “general brake for all behaviors?”

    We may not consciously think about it every time we start and stop walking, but locomotion is the output of many brain activities. Stop neurons are a critical link in this chain of command; they are the neural brake pedal that saves us from drivers having to slam theirs in the crosswalk. “Even though movement may sound like a boring, noncognitive behavior, it is really one of the most important behaviors,” Kiehn said.


  • richardmitnick 1:26 pm on December 8, 2015 Permalink | Reply
    Tags: , , , NOVA   

    From NOVA: “Doctors Finally Decide When a Mole Is Benign and When It’s Cancerous” 



    08 Dec 2015
    Conor Gearin

    A doctor examines a mole on a patient’s back.

    For over 30 years, cancer researchers have argued about moles.

    Specifically, they’ve debated at what point a skin lesion becomes a melanoma, more commonly known as skin cancer.

    A melanoma of approximately 2.5 cm by 1.5 cm

    Some claimed that lesions which looked halfway between a benign mole and a melanoma make up a class of their own, while others held that lesions can only be either harmless or cancerous, with no in-between.

    Over 70,000 new melanomas will be diagnosed in the United States in 2015, according to the American Cancer Society. Recently, a team of researchers led by Hunter Shain of the University of California-San Francisco announced at the Society for Melanoma Research convention that they have new data that settles the debate—a better way of telling whether a lesion is on track for becoming melanoma.

    Intermediate lesions, Shain said, “are very difficult to study simply because they are difficult to identify.” Their appearance lies in a gray zone between obviously benign and obviously malignant. When Shain and the team had eight expert dermatologists try to classify them, there was little agreement.

    To get around this problem, the team devised a clever solution, using a characteristic of skin cancer to their advantage. Mature melanomas often lie next to skin tissue that represents earlier stages of the lesion. Taking 37 samples of melanomas from patients, the scientists micro-dissected them into their component sections—healthy skin, precursor lesion, possibly intermediate lesion, and mature melanoma.

    The researchers sequenced nearly 300 cancer-related genes in each section of the samples to discover which genes changed at which stage of development. They found that intermediate lesions had some harmless mutations but also some genetic changes that could lead to cancerous cell growth. That told them those lesions did not just look in between—their DNA actually made their behavior intermediate between a mole and a melanoma.

    The team presented their work at the melanoma convention last week and published the full version of the project in the New England Journal of Medicine.

    Sorting out the genetic gray zone also gave them a clearer view of how a melanoma develops. “In our study, we observe the canonical order of mutations that allow a melanocyte [a pigment-producing skin cell, the type from which melanoma arises] to overcome these barriers as it progresses to melanoma,” Shain said.

    In the 1980s, some researchers described an intermediate lesion as a dysplastic nevus, while others argued that the term would lead to confusions in melanoma diagnosis and shouldn’t be used. Shain’s team avoided using the term dysplastic nevus.

    David Elder of the University of Pennsylvania, who coined the term “dysplastic nevus syndrome” in 1980, attended the team’s presentation in San Francisco. “I think this is an important step forward,” Elder said. “And it does add materially to our understanding of a question that has certainly been hotly debated in our community for many years.”

    While the study’s authors didn’t use his terminology, Elder said they are basically describing the same thing. He said that at the presentation, “I asked the question, ‘Were these intermediate lesions dysplastic nevi or were they something else?’ And the response was, ‘Well, yes, they are.’” Elder agreed that using the term could distract from the study’s main message.

    Iwei Yeh of UC-San Francisco, a co-author of the study and one of the eight expert dermatologists, said that the new genetic data will allow the next stage of research to narrow the list of suspects for which mutations are most likely to lead to melanoma. Researchers could observe people with intermediate lesions and see which ones develop cancer. Then, they could use those genes as diagnostic signals for a mole on track to become malignant.

    “I think that kind of ties into this whole idea of personalized medicine,” Yeh said. “What are the individual alterations that are really contributing to the person’s cancer, and what does that mean for them in terms of their outcome?”

    Elder said this kind of study could reduce unnecessary procedures to remove harmless moles, saving time for both doctors and patients.

    “More precise diagnosis could be very valuable in more appropriate precision treatment of these lesions,” he said. “If you don’t need to do [a procedure], you don’t want to do it.”


  • richardmitnick 6:18 pm on December 7, 2015 Permalink | Reply
    Tags: , , NOVA, Therapeutic hypothermia   

    From NOVA: “Injecting Ice Cold Saline Can Protect the Brains of Cardiac Arrest Patients” 



    07 Dec 2015
    Margaux Phares

    For Dr. Sarah Perman, it is not enough for a cardiac arrest patient’s heart to work properly again. “What we care about is not just survival to when they’re discharged from the hospital,” she said, “but neurologic recovery at discharge.”

    While a medical resident, Perman once cared for an elderly cardiac arrest patient. The attending team felt that the patient would have a grave prognosis. Their quick conclusion surprised Perman, who “started to learn that there was some generalized hesitancy” in treating cardiac arrest patients who have chronic heart conditions or do not initially respond to a defibrillator. The patient’s relatives still wanted the doctors to try everything they could, so they started therapeutic hypothermia, which was a success. After cooling the patient’s body temperature and a period of rewarming, the patient was revived and could go home.

    A nurse attends to a patient during a simulated emergency.

    Dr. Perman, now an assistant professor of emergency medicine at the University of Colorado, and her colleagues found that lowering the body’s internal temperature can help preserve neurological function for cardiac arrest patients with heart rhythms that do not respond to defibrillators. This controlled cooling is called therapeutic hypothermia, or TH, which is versatile enough to be carried out within ambulances and emergency rooms. Medical responders can cool the chest and limbs with ice packs or thread a catheter carrying cold saline through their patient.

    Coauthor Dr. Benjamin Abella said most research and treatment effort has been placed on patients with shockable rhythms. With so little data on patients with nonshockable rhythms, hospitals are hesitant to use this therapy.

    The American Heart Association estimates 530,000 individuals suffer cardiac arrest per year in the United States. For every minute a patient does not receive treatment, survival decreases by 7%.
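That 7%-per-minute figure compounds quickly. As a rough illustration (the per-minute rate is from the article; treating it as a multiplicative decline from an assumed 100% baseline is a simplification, since the true survival curve is more complicated):

```python
# Illustrative decay of survival odds without treatment, assuming the
# chance of survival falls ~7% multiplicatively each untreated minute.
def survival_after(minutes, per_minute_drop=0.07):
    return (1 - per_minute_drop) ** minutes

for m in (0, 5, 10, 15):
    print(f"after {m:2d} minutes: {survival_after(m):5.0%}")
```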

    “The human body has no real good mechanism to come back from a cardiac arrest,” said EMT Elizabeth Watkins. During cardiac arrest, the heart stops pumping blood and oxygen throughout the body. This is especially bad news for the brain: cerebral fluid can build up and put pressure on it. Death can result in minutes.

    Defibrillators can restore shockable rhythms; other rhythms are nonshockable and do not respond to electric shocks. Without treatment, survival chances for patients with nonshockable rhythms are slim—as low as 10%.

    Dr. Benjamin Abella explains how therapeutic hypothermia can aid recovery in patients with nonshockable heart rhythms.

    TH can help preserve brain function in comatose patients who still have a pulse. This sort of corporeal refrigeration slows the rate of cell death and acid build-up in the brain. Two landmark trials in 2002 endorsed TH for patients with shockable rhythms. This treatment, as Dr. Perman and her team found, could also benefit people who may not initially benefit from a defibrillator.

    Drs. Perman and Abella’s team gathered data from 262 adult patients with non-shockable rhythms and paired them on characteristics such as age. “You have to make sure you’re comparing apples to apples,” Dr. Abella explained. Half of the patients had received TH, the other half did not.

    The researchers were especially interested in how well neurologic function was preserved after TH, so they compared each patient’s recovery using the Cerebral Performance Category (CPC) scale. This widely used clinical measure describes level of consciousness on a scale from 1 (stable and alert enough to go home) to 5 (death). The researchers found that patients who received TH were 3.5 times more likely to have good neurologic recovery (a CPC of 1 or 2) than those who did not.
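For readers unfamiliar with the scale, the CPC categories can be sketched as a simple lookup (the labels below are paraphrased summaries, not clinical definitions; the CPC 1-or-2 threshold for good recovery is the one the study used):

```python
# Rough sketch of the Cerebral Performance Category (CPC) scale.
CPC_LABELS = {
    1: "good cerebral performance (alert, able to return home)",
    2: "moderate cerebral disability",
    3: "severe cerebral disability",
    4: "coma or vegetative state",
    5: "death",
}

def good_recovery(cpc_score):
    """The study counted a CPC of 1 or 2 as good neurologic recovery."""
    return cpc_score in (1, 2)

print(good_recovery(2), good_recovery(4))  # True False
```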

    Drs. Perman and Abella’s study was published in Circulation. For Watkins, the EMT, a future study of TH effectiveness should examine patients who receive TH while they are still in cardiac arrest. Starting TH sooner, rather than waiting until patients regain a pulse, “decreases metabolic demand and causes less stress on the heart overall.” A drawback is that drugs typically given during cardiac arrest, like amiodarone, may be less effective in a chilled body.

    In order to understand how to better apply TH, Dr. Perman is looking to a bigger sample size. Thankfully, “there’s been a push for a national cardiac arrest database.” A September 2015 Institute of Medicine report outlined this as a major strategy to improve cardiac arrest survival.

    “Cardiac arrest is a fairly grave occurrence and outcomes are not great, but we are doing everything we can to improve outcomes,” Dr. Perman said.

