Tagged: NOVA

  • richardmitnick 3:42 am on March 6, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “Powerful, Promising New Molecule May Snuff Antibiotic Resistant Bacteria” 

    PBS NOVA

    NOVA

    09 Jan 2015
    R.A. Becker

    Methicillin-resistant staph surround human immune cell.

    Antibiotic resistant bacteria pose one of the greatest threats to public health. Without new weapons in our arsenal, these bugs could cause 10 million deaths a year and cost nearly $100 trillion worldwide by 2050, according to a recent study commissioned by the British government.

    But just this week, scientists announced that they have discovered a potent new weapon hiding in the ground beneath our feet—a molecule that kills drug resistant bacteria and might itself be resistant to resistance. The team published their results Wednesday in the journal Nature.

    Scientists have been coopting the arsenal of soil-dwelling microorganisms for some time, said Kim Lewis, professor at Northeastern University and senior investigator of the study. Earth-bound bacteria live tightly packed in an intensely competitive environment, which has led to a bacterial arms race. “The ones that can kill their neighbors are going to have an advantage,” Lewis said. “So they go to war with each other with antibiotics, and then we borrow their weapons to fight our own pathogens.”

    However, by the 1960s, the returns from these efforts were dwindling. Not all bacteria that grow in the soil are easy to culture in the lab, and so antibiotic discovery slowed. Lewis attributes this to the interdependence of many soil-dwelling microbes, which makes it difficult to grow only one strain in the lab when it has been separated from its neighbors. “They kill some, and then they depend on some others. It’s very complex, just like in the human community,” he said.

    But a new device called the iChip, developed by Lewis’s team in collaboration with NovoBiotic Pharmaceuticals and colleagues at the University of Bonn, enables researchers to isolate bacteria reluctant to grow in the lab and cultivate them instead where they’re comfortable—in the soil.

    Carl Nathan, chairman of microbiology and immunology at Weill Cornell Medical School and co-author of a recent New England Journal of Medicine commentary about the growing threat of antibiotic resistance, called the team’s discovery “welcome,” adding that it illustrates a point that Lewis has been making for several years, that soil’s well of antibiotic-producing microorganisms “is not tapped out.”

    The researchers began by growing colonies of formerly un-culturable bacteria on their home turf and then evaluating their antimicrobial defenses. They discovered that one bacterium in particular, which they named Eleftheria terrae, makes a molecule known as teixobactin, which kills several different kinds of bacteria, including the ones that cause tuberculosis, anthrax, and even drug resistant staph infections.

    Teixobactin isn’t the first promising new antibiotic candidate, but it does have one quality that sets it apart from others. In many cases, even if a new antibiotic is able to kill bacteria resistant to our current roster of drugs, it may eventually succumb to the same resistance that felled its predecessors. (Resistance occurs when the few bacteria strong enough to evade a drug’s killing effects multiply and pass on their genes.)

    Unlike current antibiotic options, though, teixobactin attacks two lipid building blocks of the cell wall, which many bacterial strains can’t live without. Because the drug strikes at such a key part of the cell, it is harder for a bacterium to mutate its way around being killed.

    “This is very hopeful,” Nathan said. “It makes sense that the frequency of resistance would be very low because there’s more than one essential target.” He added, however, that given the many ways in which bacteria can avoid being killed by pharmaceuticals, “Is this drug one against which no resistance will arise? I don’t think that’s actually proved.”

    Teixobactin has not yet been tested in humans. Lewis said the next steps will be to conduct detailed preclinical studies as well as work on improving teixobactin’s molecular structure to solve several practical problems. One they hope to address, for example, is its poor solubility; another is that it isn’t readily absorbed when given orally—as is, it will have to be administered via injection.

    While Lewis predicts that the drug will not be available for at least five years, the method behind it offers a promising new avenue for drug discovery. Nathan agrees, though he cautions it’s too soon to claim victory. The message of this recent finding, he said, “is not that the problem of antibiotic resistance has been solved and we can stop worrying about it. Instead it’s to say that there’s hope.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 5:33 am on February 28, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “A Brief History of the Speed of Light” 

    PBS NOVA

    NOVA

    27 Feb 2015
    Jennifer Ouellette

    One night over drinks at a conference in San Jose, Miles Padgett, a physicist at Glasgow University in Scotland, was chatting with a colleague about whether or not they could make light go slower than its “lawful” speed in a vacuum. “It’s just one of those big, fundamental questions you may want to ask yourself at some point in the pub one night,” he told BBC News. Though light slows down when it passes through a medium, like water or air, the speed of light in a vacuum is usually regarded as an absolute.

    Image: Flickr user Steve Oldham, adapted under a Creative Commons license.

    This time, the pub talk proved to be a particularly fruitful exchange. Last month, Padgett and his collaborators made headlines when they revealed their surprising success: They raced two photons down a one-meter “track” and managed to slow one down just enough that it finished a few millionths of a meter behind its partner. The experiment showed that it is possible for light to travel at a slower speed even in free space—and Padgett and his colleagues did it at the scale of individual photons.

    The notion that light has a particular speed, and that that speed is measurable, is relatively new. Prior to the 17th century, most natural philosophers assumed light traveled instantaneously. Galileo was one of the first to test this notion, which he did with the help of an assistant and two shuttered lanterns. First, Galileo would lift the shutter on his lantern. When his assistant, standing some distance away, saw that light, he would lift the shutter on his lantern in response. Galileo then timed how long it took for him to see the return signal from his assistant’s lantern, most likely using a water clock, or possibly his pulse. “If not instantaneous, it is extraordinarily rapid,” Galileo concluded, estimating that light travels at about ten times the speed of sound.

    Over the ensuing centuries, many other scientists improved upon Galileo’s work by devising ingenious new methods for measuring the speed of light. Their results fell between 200,000 kilometers per second, recorded in 1675 by Ole Roemer, who made his measurement by studying eclipse patterns in Jupiter’s moons, and 313,000 kilometers per second, recorded in 1849 by Hippolyte Louis Fizeau, who sent light through a rotating toothed wheel and then reflected it back with a mirror. The current accepted value is 299,792.458 kilometers per second, or 669,600,000 miles per hour. Physicists represent this value with the constant c, and it is broadly understood to be the cosmic speed limit: all observers, no matter how fast they are going, will agree on it, and nothing can go faster.
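
    For a sense of scale, here is a quick back-of-envelope conversion of the figures above, written as a short Python snippet. It is purely illustrative and not part of the original article; the five-micrometer lag is an assumed stand-in for the “few millionths of a meter” mentioned earlier.

        # Rough unit conversions for the speed-of-light figures quoted above.
        c_m_per_s = 299_792_458                  # defined value of c in meters per second
        meters_per_mile = 1609.344

        c_mph = c_m_per_s * 3600 / meters_per_mile
        print(f"c is about {c_mph:,.0f} mph")    # ~670,616,629 mph; the article's
                                                 # 669,600,000 figure rounds c down to
                                                 # 186,000 miles per second

        crossing_time_s = 1.0 / c_m_per_s        # time to cross the one-meter "track"
        print(f"1 m crossing time: {crossing_time_s * 1e9:.2f} ns")   # ~3.34 nanoseconds

        lag_m = 5e-6                             # assumed ~5 micrometer finishing gap
        print(f"lag: {lag_m / c_m_per_s * 1e15:.0f} femtoseconds")    # ~17 fs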

    This limit refers to the speed of light in a vacuum—empty space, with no “stuff” in it with which light can interact. Light traveling through air, water, or glass, for example, will move more slowly as it interacts with the atoms in that substance. In some cases, light will move so slowly that other particles shoot past it. This can create Cherenkov radiation, a “photonic boom” shockwave that can be seen as a flash of blue light. That telltale blue glow is common in certain types of nuclear reactors. (Doctor Manhattan, the ill-fated atomic scientist in Alan Moore’s classic “Watchmen” graphic novel, sports a Cherenkov-blue hue.) It is useful for radiation therapy and the detection of high-energy particles such as neutrinos and cosmic rays—and perhaps one day, dark matter particles—none of which would be possible without the ability of certain materials to slow down light.
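
    The threshold behind that “photonic boom” is the standard Cherenkov condition (basic textbook physics, not spelled out in the article): a charged particle radiates when its speed exceeds the phase velocity of light in the medium,

        v_{\text{particle}} > \frac{c}{n}

    so in water, with refractive index n ≈ 1.33, the threshold is roughly 0.75 c.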

    But just how slow can light go? In his 1933 novel Master of Light, French science fiction writer Maurice Renard imagined a special kind of “slow glass” through which light would take 100 years to pass. Slow glass is very much the stuff of fiction, but it has an intriguing real-world parallel in an exotic form of matter known as a Bose-Einstein Condensate (BEC), which exploits the wave nature of matter to stop light completely. At normal temperatures atoms behave a lot like billiard balls, bouncing off one another and any containing walls. The lower the temperature, the slower they go. At billionths of a degree above absolute zero, if the atoms are densely packed enough, the matter waves associated with each atom will be able to “sense” one another and coordinate themselves as if they were one big “superatom.”

    First predicted in the 1920s by Albert Einstein and the Indian physicist Satyendra Bose, BEC wasn’t achieved in the lab until 1995. The Nobel Prize-winning research quickly launched an entirely new branch of physics, and in 1999, a group of Harvard physicists realized they could slow light all the way down to 17 meters per second (about 38 miles per hour) by passing it through a BEC made of ultracold sodium atoms. Within two years, the same group succeeded in stopping light completely in a BEC of rubidium atoms.

    What was so special about the recent Glasgow experiments, then? Usually, once light exits a medium and enters a vacuum, it speeds right back up again, because the reduced velocity is due to changes in what’s known as phase velocity. Phase velocity tracks the motion of a particular point, like a peak or trough, in a light wave, and it is related to a material’s refractive index, which determines just how much that material will slow down light.

    Padgett and his team found a way to keep the brakes on in their experiment by focusing on a property of light known as group velocity. Padgett likens the effect to a subatomic bicycle race, in which the photons are like riders grouped together in a peloton (light beam). As a group, they appear to be moving together at a constant speed. In reality, some individual riders slow down, while others speed up. The difference, he explained to BBC News, is that instead of using a light pulse made up of many photons, “We measure the speed of a single photon as it propagates, and we find it’s actually being slowed below the speed of light.”
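
    For reference, the textbook definitions of the two speeds (standard notation, not taken from the article): for a wave of angular frequency ω and wavenumber k in a medium of refractive index n,

        v_{\text{phase}} = \frac{\omega}{k} = \frac{c}{n}, \qquad v_{\text{group}} = \frac{d\omega}{dk}

    A pulse’s envelope, and with it the arrival time of a photon, travels at the group velocity; that is the quantity the Glasgow team measured.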

    The Glasgow researchers used a special liquid crystal mask to impose a pattern on one of two photons in a pair. Because light can act like both a particle and a wave—the famous wave-particle duality—the researchers could use the mask to reshape the wavefront of that photon, so instead of spreading out like an ocean wave traveling to the shore, it was focused onto a point. That change in shape corresponded to a slight decrease in speed. To the researchers’ surprise, the light continued to travel at the slightly slower speed even after leaving the confines of the mask. Because the two photons were produced simultaneously from the same light source, they should have crossed the finish line simultaneously; instead, the reshaped photon lagged just a few millionths of a meter behind its partner, evidence that it continued to travel at the slower speed even after passing through the medium of the mask.

    Padgett and his colleagues are still pondering the next step in this intriguing line of research. One possibility is looking for a similar slow-down in light randomly scattered off a rough surface.

    If they find one, it would be one more bit of evidence that the speed of light, so often touted as an unvarying fundamental constant, is more malleable than physicists previously thought. University of Rochester physicist Robert Boyd, while impressed with the group’s ingenuity and technical achievement, calmly took the news in stride. “I’m not surprised the effect exists,” he told Science News. “But it’s surprising that the effect is so large and robust.”

    His nonchalance might strike non-physicists as strange: Shouldn’t this be momentous news poised to revolutionize physics? As always, there are caveats. When it comes to matters of light speed, it’s important to read the fine print. In this case, one must be careful not to confuse the speed at which light travels, which is just a feature of light, with its central role in special relativity, which holds that the speed of light is constant in all frames of reference. If Galileo measures the speed of light, he gets the same answer whether he is lounging at home in Pisa or cruising in a horse-drawn carriage. The same goes for his trusty assistant. This still holds true, centuries later, despite the exciting news out of Glasgow last month.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 4:54 am on February 23, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “In Once-Mysterious Epigenome, Scientists Find What Turns Genes On” 

    PBS NOVA

    NOVA

    19 Feb 2015
    R.A. Becker

    A handful of new studies provide epigenetic roadmaps to understanding the human genome in action. (No image credit)

    Over a decade ago, the Human Genome Project deciphered the “human instruction book” of our DNA, but how cells develop vastly different functions using the same genetic instructional text has remained largely a mystery.

    As of yesterday, it became a bit less mysterious. A massive NIH consortium called the Roadmap Epigenomics Program published eight papers in the journal Nature which report on their efforts to map epigenetic modifications, or the changes to DNA that don’t alter its code. These subtle modifications make genes more or less likely to be expressed, and the collection of epigenetic modifications is called the epigenome.

    One of the eight studies mapped over 100 reference epigenomes, characterizing the epigenetic modifications found across a wide range of human cell and tissue types. “These 111 reference epigenome maps are essentially a vocabulary book that helps us decipher each DNA segment in distinct cell and tissue types,” Roadmap researcher Bing Ren, a professor of cellular and molecular medicine at the University of California, San Diego, said in a news release. “These maps are like snapshots of the human genome in action.”

    This kind of mapping has challenged the field because of the huge amount of data needed to make sense of the chaotic arrangements of genes and their regulators. “The genome hasn’t nicely arranged the regulatory elements to be cheek by jowl with the elements they regulate,” Broad Institute director Eric Lander told Gina Kolata at The New York Times. “It can be very hard to figure out which regulator lines up with which genes.”

    Here’s how Lander described the detective process to Kolata:

    If you knew when service on the Red Line was disrupted and when various employees were late for work, you might be able to infer which employees lived on the Red Line, he said. Likewise, when a genetic circuit was shut down, certain genes would be turned off. That would indicate that those genes were connected, like the employees who were late to work when the Red Line shut down.
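
    As a toy illustration of that logic (entirely hypothetical; the names and data below are not from the Roadmap papers), a gene can be tentatively linked to a regulator when the gene is silent in exactly the samples where that regulator is disrupted:

        # Hypothetical binary calls across six samples: 1 = disrupted / silent.
        disrupted = {
            "regA": [1, 0, 1, 0, 0, 1],
            "regB": [0, 1, 0, 0, 1, 0],
        }
        gene_off = {
            "gene1": [1, 0, 1, 0, 0, 1],   # silent exactly when regA is disrupted
            "gene2": [0, 1, 0, 0, 1, 0],   # silent exactly when regB is disrupted
        }

        def agreement(a, b):
            """Fraction of samples where two binary calls match."""
            return sum(x == y for x, y in zip(a, b)) / len(a)

        for reg, d in disrupted.items():
            for gene, g in gene_off.items():
                if agreement(d, g) >= 0.9:     # arbitrary threshold for the toy example
                    print(f"{gene} tentatively linked to {reg}")

    Real epigenome mapping works from far richer signals, but the inference pattern is the same: correlate disruptions with downstream silencing.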

    Diseases can be linked to epigenetic variations as well. For example, another of the eight papers published yesterday proposed that the roots of Alzheimer’s disease lie in immune cell genetic dysfunction and epigenetic alterations in brain cells.

    Creating an epigenetic road map is a huge step, but it’s just a first step. As Francis Collins wrote in 2001 when the human genome had been mostly mapped, “This is not even the beginning of the end. But it may be the end of the beginning.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 5:03 am on February 16, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “The New Power Plants That Could Actually Remove Carbon from the Atmosphere” 

    PBS NOVA

    NOVA

    12 Feb 2015
    Tim De Chant

    The Kemper County Energy Facility, seen here under construction, will use CCS, one of the two technologies proposed for negative-carbon power plants.

    What’s better than a zero-carbon source of electricity like solar or wind? One that removes carbon from the atmosphere—a negative-carbon source.

    It’s entirely possible, too. By combining two existing, though still not entirely proven, technologies, researchers have devised a strategy that would allow much of western North America to go carbon negative by 2050. In just a few short decades, we could scrub carbon dioxide from the air and reverse the emissions trend that’s causing climate change.

    The trick involves pairing power plants that burn biomass with carbon capture and sequestration equipment, also known as CCS. While politicians and engineers in the U.S. have been trying—unsuccessfully—to build commercial-scale, coal-fired CCS power plants for more than a decade, the technology is well understood. Originally envisioned as a way to keep dirty coal plants in operation, CCS may be even better suited for biomass power plants, which burn plant material, essentially turning them into carbon dioxide scrubbers that also happen to produce useful amounts of electricity.

    Schematic showing both terrestrial and geological sequestration of carbon dioxide emissions from a coal-fired plant.

    The power plants would take excess biomass, burn it just as they would coal, and then concentrate and inject the emitted carbon dioxide deep into the earth, where it would remain sequestered for generations, if not millennia. (Technically, it’s the plants in this scenario that are scrubbing carbon from the atmosphere, but the CCS equipment ensures it doesn’t return.)

    John Timmer, writing for Ars Technica:

    The authors estimate that it would be economically viable to put up to 10GW of biomass powered plants onto the grid, depending on the level of emissions limits; that corresponds to a bit under 10 percent of the expected 2050 demand for electricity. The generating plants would be supplied with roughly 2,000 PetaJoules of energy in the form of biomass, primarily from waste and residue from agriculture, supplemented by municipal and forestry waste. In all low-emissions scenarios, over 90 percent of the available biomass supply ended up being used for electricity generation.

    Dedicated bioenergy crops are more expensive than simply capturing current waste, and they therefore account for only about seven percent of the biomass used, which helpfully ensures that the transition to biomass would come with minimal land-use changes.
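
    A rough consistency check on those quoted figures (a back-of-envelope sketch, not from the study; the ~20% net electrical efficiency after the CCS energy penalty is an assumed number):

        # Does ~2,000 PJ of biomass per year roughly support ~10 GW of generation?
        biomass_supply_pj = 2000                 # quoted annual biomass energy input
        assumed_net_efficiency = 0.20            # assumption: combustion minus CCS penalty

        electricity_pj = biomass_supply_pj * assumed_net_efficiency      # ~400 PJ/yr
        seconds_per_year = 3.156e7
        average_power_gw = electricity_pj * 1e15 / seconds_per_year / 1e9

        print(f"~{electricity_pj:.0f} PJ of electricity per year")
        print(f"~{average_power_gw:.0f} GW average output")              # ~13 GW, the same
                                                                         # ballpark as "up to 10GW"

    So the supply and capacity figures hang together, give or take capacity factor and efficiency assumptions.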

    The tidy proposal suggests that we could add these power plants to actively remove carbon from the atmosphere while, as Timmer points out, still allowing us to use fossil fuels like natural gas to help stabilize the grid. In fact, the biomass plants equipped with CCS could begin their lives burning coal while the market for biomass waste collection and distribution develops, smoothing the transition.

    There’s still the matter of shifting the current system, which favors fossil fuels, over to this more diverse mix. But it’s a sign that, with the right investments, we could achieve some very audacious reductions in carbon dioxide emissions in a very short time.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 11:55 am on February 12, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “Does Science Need Falsifiability?” 

    PBS NOVA

    NOVA

    11 Feb 2015
    Kate Becker

    If a theory doesn’t make a testable prediction, it isn’t science.

    It’s a basic axiom of the scientific method, dubbed “falsifiability” by the 20th century philosopher of science Karl Popper. General relativity passes the falsifiability test because, in addition to elegantly accounting for previously-observed phenomena like the precession of Mercury’s orbit, it also made predictions about as-yet-unseen effects—how light should bend around the Sun, the way clocks should seem to run slower in a strong gravitational field, and others that have since been borne out by experiment. On the other hand, theories like Marxism and Freudian psychoanalysis failed the falsifiability test—in Popper’s mind, at least—because they could be twisted to explain nearly any “data” about the world. As Wolfgang Pauli is said to have put it, skewering one student’s apparently unfalsifiable idea, “This isn’t right. It’s not even wrong.”

    Some theorists propose that our universe is just one bubble in a multiverse. Will falsifiability burst the balloon? Credit: Flickr user Steve Jurvetson, adapted under a Creative Commons license.

    Now, some physicists and philosophers think it is time to reconsider the notion of falsifiability. Could a theory that provides an elegant and accurate account of the world around us—even if its predictions can’t be tested by today’s experiments, or tomorrow’s—still “count” as science?

    As theory pulls further and further ahead of the capabilities of experiment, physicists are taking this question seriously. “We are in various ways hitting the limits of what will ever be testable, unless we have misunderstood some essential point about the nature of reality,” says theoretical cosmologist George Ellis. “We have now seen all the visible universe (i.e., back to the visual horizon) and only gravitational waves remain to test further; and we are approaching the limits of what particle colliders it will ever be feasible to build, for economic and technical reasons.”

    Case in point: String theory. The darling of many theorists, string theory represents the basic building blocks of matter as vibrating strings. The strings take on different properties depending on their modes of vibration, just as the strings of a violin produce different notes depending on how they are played. To string theorists, the whole universe is a boisterous symphony performed upon these strings.

    It’s a lovely idea. Lovelier yet, string theory could unify general relativity with quantum mechanics, solving what is perhaps the most stubborn problem in fundamental physics. The trouble? To put string theory to the test, we may need experiments that operate at energies far higher than any modern collider. It’s possible that experimental tests of the predictions of string theory will never be within our reach.

    Meanwhile, cosmologists have found themselves at a similar impasse. We live in a universe that is, by some estimations, too good to be true. The fundamental constants of nature and the cosmological constant [usually denoted by the Greek capital letter lambda: Λ], which drives the accelerating expansion of the universe, seem “fine-tuned” to allow galaxies and stars to form. As Anil Ananthaswamy wrote elsewhere on this blog, “Tweak the charge on an electron, for instance, or change the strength of the gravitational force or the strong nuclear force just a smidgen, and the universe would look very different, and likely be lifeless.”

    Why do these numbers, which are essential features of the universe and cannot be derived from more fundamental quantities, appear to conspire for our comfort?

    One answer goes: If they were different, we wouldn’t be here to ask the question.

    This is called the “anthropic principle,” and if you think it feels like a cosmic punt, you’re not alone. Researchers have been trying to underpin our apparent stroke of luck with hard science for decades. String theory suggests a solution: It predicts that our universe is just one among a multitude of universes, each with its own fundamental constants. If the cosmic lottery has played out billions of times, it isn’t so remarkable that the winning numbers for life should come up at least once.

    In fact, you can reason your way to the “multiverse” in at least four different ways, according to MIT physicist Max Tegmark’s accounting. The tricky part is testing the idea. You can’t send or receive messages from neighboring universes, and most formulations of multiverse theory don’t make any testable predictions. Yet the theory provides a neat solution to the fine-tuning problem. Must we throw it out because it fails the falsifiability test?

    “It would be completely non-scientific to ignore that possibility just because it doesn’t conform with some preexisting philosophical prejudices,” says Sean Carroll, a physicist at Caltech, who called for the “retirement” of the falsifiability principle in a controversial essay for Edge last year. Falsifiability is “just a simple motto that non-philosophically-trained scientists have latched onto,” argues Carroll. He also bristles at the notion that this viewpoint can be summed up as “elegance will suffice,” as Ellis put it in a stinging Nature comment written with cosmologist Joe Silk.

    “Elegance can help us invent new theories, but does not count as empirical evidence in their favor,” says Carroll. “The criteria we use for judging theories are how good they are at accounting for the data, not how pretty or seductive or intuitive they are.”

    But Ellis and Silk worry that if physicists abandon falsifiability, they could damage the public’s trust in science and scientists at a time when that trust is critical to policymaking. “This battle for the heart and soul of physics is opening up at a time when scientific results—in topics from climate change to the theory of evolution—are being questioned by some politicians and religious fundamentalists,” Ellis and Silk wrote in Nature.

    “The fear is that it would become difficult to separate such ‘science’ from New Age thinking, or science fiction,” says Ellis. If scientists backpedal on falsifiability, Ellis fears, intellectual disputes that were once resolved by experiment will devolve into never-ending philosophical feuds, and both the progress and the reputation of science will suffer.

    But Carroll argues that he is simply calling for greater openness and honesty about the way science really happens. “I think that it’s more important than ever that scientists tell the truth. And the truth is that in practice, falsifiability is not a good criterion for telling science from non-science,” he says.

    Perhaps “falsifiability” isn’t up to shouldering the full scientific and philosophical burden that’s been placed on it. “Sean is right that ‘falsifiability’ is a crude slogan that fails to capture what science really aims at,” argues MIT computer scientist Scott Aaronson, writing on his blog Shtetl-Optimized. Yet, writes Aaronson, “falsifiability shouldn’t be ‘retired.’ Instead, falsifiability’s portfolio should be expanded, with full-time assistants (like explanatory power) hired to lighten falsifiability’s load.”

    “I think falsifiability is not a perfect criterion, but it’s much less pernicious than what’s being served up by the ‘post-empirical’ faction,” says Frank Wilczek, a physicist at MIT. “Falsifiability is too impatient, in some sense,” putting immediate demands on theories that are not yet mature enough to meet them. “It’s an important discipline, but if it is applied too rigorously and too early, it can be stifling.”

    So, where do we go from here?

    “We need to rethink these issues in a philosophically sophisticated way that also takes the best interpretations of fundamental science, and its limitations, seriously,” says Ellis. “Maybe we have to accept uncertainty as a profound aspect of our understanding of the universe in cosmology as well as particle physics.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 10:05 am on February 8, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “Powerful and Efficient ‘Neuromorphic’ Chip Works Like a Brain” 

    PBS NOVA

    NOVA

    08 Aug 2014
    Allison Eck

    Compared with biological computers—also known as brains—today’s computer chips are simplistic energy hogs. Which is why some computer scientists have been exploring neuromorphic computing, where they try to emulate neurons with silicon. Yesterday, researchers at IBM announced a new neuromorphic processor, dubbed TrueNorth, in an article published in the journal Science.

    At one million “neurons,” TrueNorth is about as complex as a bee’s brain. Experts are saying this little device (about the size of a postage stamp) is the newest and most promising development in “neuromorphic” computing. Despite its 5.4 billion transistors, the entire system consumes only 70 milliwatts of power, a strikingly low amount. The clock speed on the chip is slow, measured in megahertz—today’s computer chips zip along at the gigahertz level—but its vast parallel circuitry allows it to perform 46 billion synaptic operations per second per watt of energy.
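
    Taken at face value, those two numbers imply the following throughput and energy cost per operation (simple arithmetic on the quoted figures, not additional published specifications):

        # What 70 mW at 46 billion synaptic operations per second per watt works out to.
        power_w = 0.070
        ops_per_second_per_watt = 46e9

        total_ops_per_second = power_w * ops_per_second_per_watt     # ~3.2 billion ops/s
        joules_per_op = 1 / ops_per_second_per_watt                  # ~22 picojoules

        print(f"~{total_ops_per_second / 1e9:.1f} billion operations per second")
        print(f"~{joules_per_op * 1e12:.0f} pJ per operation")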

    At one million “neurons,” a computer chip dubbed TrueNorth mimics the organization of the brain and is the next step in “neuromorphic” computer programming.

    Here’s John Markoff, writing for The New York Times:

    The chip’s electronic “neurons” are able to signal others when a type of data — light, for example — passes a certain threshold. Working in parallel, the neurons begin to organize the data into patterns suggesting the light is growing brighter, or changing color or shape.

    The processor may thus be able to recognize that a woman in a video is picking up a purse, or control a robot that is reaching into a pocket and pulling out a quarter. Humans are able to recognize these acts without conscious thought, yet today’s computers and robots struggle to interpret them.

    Despite the promise, some scientists are skeptical about TrueNorth’s potential, claiming that it’s not that much more impressive than what a cell phone camera can already do. Still others see it as overhyped or just one of many possible neuromorphic strategies.

    Jonathan Webb, writing for BBC News:

    Prof Steve Furber is a computer engineer at the University of Manchester who works on a similarly ambitious brain simulation project called SpiNNaker. That initiative uses a more flexible strategy, where the connections between neurons are not hard-wired.

    He told BBC News that “time will tell” which strategy succeeds in different applications.

    Proponents argue that the chip is endlessly scalable, meaning additional units can be assembled into bigger, more powerful machines. And if its processing potential improves, as traditional silicon chips did in the past, then TrueNorth’s neuromorphic successors could lead to cell phones powered by extremely powerful yet energy-efficient processors, the sort that could make today’s smartphone CPUs look like those in early PCs.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 3:10 pm on February 5, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “Electric Fields Carrying Chemo Could Destroy Intractable Tumors” 

    PBS NOVA

    NOVA

    05 Feb 2015
    Tim De Chant

    There’s no “good” cancer, but some are certainly worse than others when it comes to prognosis. Pancreatic cancer, for example, has a dismal survival rate. It’s inoperable in many cases, and in general it’s hard to deliver chemo to the tumor because its internal pressure keeps drugs at bay.

    Researchers have been devising strategies to concentrate chemo in the most recalcitrant tumors, from injecting drugs directly into tumors themselves to directing chemo-coated magnetic particles to the site. The latest takes some of these ideas a step further while using existing drugs, a time-saving step. It comes in the form of a device that stores chemo and produces electric fields that carry the drugs directly into the tumor. Because many existing drugs are polar molecules, they are carried along with the electric current.

    Pancreatic cancer cells, seen here through a powerful microscope, are targeted by the new treatment.

    Inventor Joseph DeSimone, a professor of chemistry at the University of North Carolina, Chapel Hill, and his team have tested their device on mice and dogs, and the approach shows promise. Here’s Robert F. Service, reporting for Science:

    The team got several promising results. In one experiment, the researchers started with mice that had been implanted with human pancreatic cancer tumors. One group of mice was then implanted with the electrode setup and administered an anticancer drug called gemcitabine twice a week for 7 weeks. Control animals received either saline through the same electrode setup or intravenous (IV) doses of saline or gemcitabine. The researchers report online today in Science Translational Medicine that the animals in the experimental group had far higher gemcitabine concentrations in their tumors compared with mice that received the IV drug. That caused the tumors to shrink dramatically in the experimental animals, whereas tumors in mice that received IV gemcitabine or saline continued to grow.

    Another advantage of the approach is that it limits the distribution of chemo within the body. Though the drugs are highly toxic to cancer cells, they also are taxing to healthy cells, making treatment regimens grueling affairs.

    DeSimone and his team have yet to move the device into clinical trials involving humans, an often unsuccessful transition for many would-be cancer treatments. Still, the fact that the device relies on delivering known, existing drugs more directly to a tumor site should reduce some uncertainty.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 10:28 am on January 16, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “Oceans of Acid: How Fossil Fuels Could Destroy Marine Ecosystems”

    PBS NOVA

    NOVA

    12 Feb 2014
    Scott Doney

    In 2005, hatchery-grown oyster larvae in the Pacific Northwest began mysteriously dying by the millions. Then it happened again in 2006. And again in 2007 and 2008. Oceanographers and fisheries scientists raced to understand what was behind the catastrophe. Was it bacterial infections? Or something more sinister?

    By 2008, after billions of shellfish larvae had died, they had their answer. The waters of the Pacific Ocean had turned corrosively acidic. The changes were too subtle to be noticed by swimmers and boaters, but to oysters, they were lethal. Many oyster larvae never made it to adulthood. Those that did suffered from deformed shells or were undersized. A $110 million industry was on the brink of collapse.

    The oyster industry in the Pacific Northwest, worth over $110 million, is threatened by ocean acidification.

    The problem was with the water, of course—its pH had dropped too much—but the root cause was in the winds that blow above the Pacific Ocean. A shift in wind patterns had pushed surface waters aside, allowing acidic water from the deep to well up onto the shore. Even a few decades ago, such upwelling events weren’t as acidic and probably wouldn’t have been cause for concern. But the oceans absorb massive amounts of CO2—about one quarter of our excess emissions—and as we pump more of the greenhouse gas into the atmosphere, we are driving the pH of ocean water lower and lower. Today, ocean waters are up to 30% more acidic than in preindustrial times.

    In some ways, the problem of the Pacific oyster farms has been a success story. Working with scientists, hatchery operators quickly identified seawater chemistry as the primary culprit and took remedial steps, including placing sentinel buoys offshore that can warn of inflowing acidic water.

    Still, the fishery isn’t the same. One hatchery even decided to relocate to Hawaii where ocean chemistry remains more conducive to oyster larval growth. They ship the shellfish back to Washington State when they’re hardy enough to survive the acidic waters. But not everyone expects that arrangement to last forever. Larger drops in ocean pH are projected in coming decades as fossil fuel use expands, particularly in rapidly developing countries like China and India.

    Many people are familiar with the link between using fossil fuels as an energy source and climate change. Less appreciated is how burning fossil fuels changes ocean chemistry. Marine plants, animals, and microbes are bathed in seawater, and somewhat surprisingly, even relatively small alterations in seawater chemistry can have big effects. Oysters are the canary in the coal mine.

    A Watery Laboratory

    The basic principles of seawater carbon dioxide chemistry were well understood even as far back as the late 1950s when David Keeling started his now famous time-series of atmospheric carbon dioxide measurements in Hawaii. Then, levels were at 315 parts per million. Now, a little more than a half-century later, carbon dioxide levels are approaching 400 ppm and continuing to rise as we burn more fossil fuels.

    The potential for serious biological ramifications, however, only began to come to light in the late 1990s and early 2000s. Like other gases, carbon dioxide dissolves in water; but in contrast to other major atmospheric constituents—oxygen, nitrogen, argon—carbon dioxide (CO2) reacts with the water (H2O) to form bicarbonate (HCO3-) and hydrogen (H+) ions. The process is often called ocean acidification to reflect the increase in acidity—more H+ ions—and thus lower pH. The other part of the story has to do with the composition of salty seawater. Over geological time scales, weathering of rocks on land adds dissolved ions, or salts, to the ocean, including calcium (Ca2+) and carbonate (CO32-) from limestone. Seawater sits on the basic end of the pH scale, which greatly increases the amount of carbon dioxide that can dissolve in it.
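
    Written out as the standard seawater carbonate equilibria (textbook chemistry, not reproduced from the article), the chain looks like this:

        \mathrm{CO_2 + H_2O \;\rightleftharpoons\; H_2CO_3 \;\rightleftharpoons\; H^+ + HCO_3^- \;\rightleftharpoons\; 2\,H^+ + CO_3^{2-}}

        \mathrm{Ca^{2+} + CO_3^{2-} \;\rightleftharpoons\; CaCO_3}

    The second reaction is the one shell-builders rely on. And because pH is defined as -log10 of the hydrogen-ion concentration, a drop of roughly 0.1 pH units since preindustrial times corresponds to about 25 to 30 percent more H+ ions (10^0.1 ≈ 1.26), consistent with the figure quoted earlier.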

    Oyster larvae are particularly susceptible to the effects of ocean acidification.

    Oysters are just one of many organisms that depend on ocean water rich in carbonate ions, a building block that many marine plants and animals use to build hard calcium carbonate (CaCO3) shells. These include corals, shellfish, and some important types of plankton, the small floating organisms that form the base of the marine food web.

    Today, there are major research programs around the world that are tracking changes in seawater chemistry and testing how those shifts affect marine organisms and ecosystems. Early experiments involved growing these organisms in the laboratory where seawater chemistry can be easily manipulated. Many of the species tested under acidified conditions had a more difficult time building shell or skeleton material, sometimes even producing malformed shells. Together these factors could slow growth and lower survival of these species in the wild.

    The laboratory isn’t perfect, though. Unlike most lab settings, in the real ocean, organisms live in a community with many different species and a complex web of interactions. Some species are competitors for space and food; others are potential prey, predators, or even parasites.

    Acid Future

    We can get a glimpse of what a future acid ocean might look like by traveling to volcanic vents on the ocean floor, where carbon dioxide bubbles into shallow waters. These sites are ready-made laboratories for studying acidification effects on entire biological communities. And surveys of these regions largely validate the results from laboratory experiments. In the highly acidified water, corals, mollusks, and many crustacean species are absent, replaced with thick mats of photosynthetic algae.

    Scientists are also moving forward with the engineering equivalent of volcanic vents: deployable systems that can be used to deliberately acidify a small patch of coral reef or volume of the upper ocean. These purposeful manipulation experiments are important tools for moving research out of artificial lab conditions into the ocean.

    Volcanic vents in the ocean floor, like this white smoker, provide scientists with ready-made natural laboratories to study acidic waters.

    Preliminary results from ocean acidification experiments seemed rather dire. Some predicted that coral reefs would disappear and shellfish supplies would be decimated. But as with many new findings in science, reality may turn out to be more complex, more nuanced, and more interesting. For example, the sensitivity of shell and skeleton formation to carbon dioxide appears to vary widely across groups of biological species and even, in some cases, within closely related strains of the same species.

    Carbon dioxide and pH play a central role in biochemistry and physiology, and further research has shown that acidification may have a much wider range of possible biological impacts beyond simply decreasing shell and skeleton formation. In some plankton and seaweeds, elevated carbon dioxide speeds photosynthesis, the process used to convert carbon dioxide and light energy into organic matter and food.

    Acidification also affects the functioning of some of the small microbes that govern many aspects of ocean and atmosphere chemistry. It appears to alter the transformation of nitrogen gas into valuable nutrients used by plankton while inhibiting the conversion of other inorganic forms of nitrogen, potentially unbalancing a key ecosystem process. Seawater chemistry of dissolved trace metals could also be upended, for better or worse—some of those metals are essential for life while others are toxic. Still other experiments have even shown changes in fish behavior and their ability to smell approaching predators. Their olfactory nerve cells are stymied by subtle changes in the acid-base chemistry inside their bodies.

    Coping with Acidification

    So why the range of reactions? One possible explanation is that over evolutionary time scales, species have developed different strategies for coping with life in the ocean, for example differing in the way they form calcium carbonate minerals. In many organisms, biomineralization occurs inside special internal pockets that are not directly exposed to the surrounding ambient seawater. Crustaceans such as crabs and lobsters appear better able to control their internal fluid chemistry, and thus may fare better in acid ocean water than mollusks, such as clams and oysters.

    Organisms can often compensate for an external stress such as acidification, at least up to a point. But this requires them to expend extra energy and resources. As a result, larvae and juveniles are often more vulnerable than adults. That’s what happened to oysters in the Pacific Northwest. It’s not that their calcium carbonate shells dissolved in the acidic waters; it’s that they expended too much energy trying to coax enough carbonate out of the water. Essentially, they died of exhaustion.

    The Pacific Northwest is among the first regions to experience the effects of ocean acidification, but it likely won’t be the last.

    Another reason some organisms have escaped is that seawater pH and carbonate ion concentrations vary geographically and temporally across the surface ocean. Productive coastal waters and estuaries can have both much higher and much lower pH levels than open-ocean waters, and water chemistry at a specific coastal location can change rapidly over only a few hours to days. Therefore, some coastal species may already be adapted, through natural selection, to more acidic or more variable seawater conditions.

    In contrast, open-ocean species may be more sensitive because they are accustomed to a more stable environment. Small, fast-growing species with short generation times may be able to evolve in response to a changing world. As with many environmental shifts, acidification may threaten some species while being relatively inconsequential or even beneficial to others. If acidification were the only threat to marine life, perhaps I wouldn’t be so worried about our oceans. But there are many other environmental stresses that stack on top of it, including climate change, pollution, overfishing, and the destruction of valuable coastal wetlands.

    Coral reefs are at particular risk from rising atmospheric carbon dioxide’s two faces—acidification and global warming. Tropical corals are sensitive to even relatively small increases in summer temperatures. High temperatures can cause coral bleaching, when the coral animals expel the colorful symbiotic algae that usually live inside each coral polyp. Extended heat spells leave the coral vulnerable to disease and, if severe enough, death. Acidification exacerbates coral bleaching. Coral reefs could be greatly diminished once atmospheric carbon dioxide reaches levels expected by the middle of this century, some researchers say.

    Acidic waters can cause coral to expel their symbiotic algae, a phenomenon known as bleaching.

    Nutrient pollution is another threat. Runoff into coastal waters can be rich in nutrients from fertilizers used in agriculture and landscaping. These excess nutrients cause large plankton blooms that can’t sustain themselves. When they eventually collapse, their decaying bodies use up dissolved oxygen and release carbon dioxide into seawater. Low oxygen stresses many marine organisms, and it’s made only worse by lower pH. Nutrient pollution creates localized pockets of acidified water that add to the ocean acidification driven by global fossil fuel emissions of carbon dioxide. Poor water quality and land-based nutrient inputs are contributing to the Pacific Northwest oyster hatchery problem and may currently be the dominant acidification factor in many estuaries and enclosed bays.

    Rising atmospheric carbon dioxide is ratcheting up its pressure on marine life. From ice cores, we know that the present-day rate of atmospheric carbon dioxide rise is unprecedented over at least the past 800,000 years. One needs to look back tens of millions of years or more in the geological record for a few time periods with possibly comparable rapid acidification.

    While imperfect analogues of today, those geological events were often marked by large-scale extinction of marine species. Combined with what we’re seeing in the laboratory and at natural volcanic vents, there is good reason to be concerned that ocean acidification will affect marine life in the decades to come. We still aren’t sure exactly how, when, or where. But we can bet it will happen.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 4:07 am on January 15, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “From Discovery to Dust” 

    PBS NOVA

    NOVA

    Wed, 29 Oct 2014
    Amanda Gefter

    The idea was too beautiful to be wrong.

    That you could start with nothing, apply some basic laws of physics, and get a universe out of it—a universe that was uniform on the largest scales but replete with the lumps and bumps we call stars and galaxies, a universe, that is, that looks like ours—well, it didn’t matter that the theory didn’t quite work at first. It was just too beautiful to be wrong.

    Inflation

    In Alan Guth’s original version of the theory in 1980, the nothingness at the beginning of time wasn’t really nothing at all. It was a field, the inflaton, and it teetered at the edge of a cliff, momentarily stable but not in its most stable, or lowest energy, state. This gave spacetime a negative pressure, creating a kind of anti-gravitational force that would push outward, sending the inflaton—that nascent field that would give birth to inflation—plummeting toward stability, causing the universe to expand exponentially, growing a million trillion trillion times bigger in the blink of an eye.

    Barred spiral galaxy NGC 1672
    Inflation explains how everything from galaxies to dust may have come about.

    It was creation nearly ex nihilo—all you needed was the tiniest speck of a universe and inflation would transform it into something truly cosmic. There was just one problem: the plunge to the lowest energy state was a kind of phase transition, like water vapor condensing to liquid, and the transition would dissolve the inflaton into a sea of bubbles—pockets of lowest-energy regions—which would eventually collide and merge, collisions that would leave astronomical upheavals more disfiguring than anything we see on the sky today.

    Then, in 1981, Andrei Linde saved inflation from itself. He suggested that we didn’t have to worry about those bubbles because inflation could make them so big that our entire universe could fit inside just one of them. It didn’t matter what happened out at the edges or beyond—we’d never see it anyway.

    There was just one problem. The smooth, scarless space inside the bubble was too smooth, the density of matter so perfectly uniform that nothing so lumpy as stars or galaxies could ever form. It was Linde’s friend and fellow physicist Slava Mukhanov who had the solution: quantum fluctuations.

    Quantum fluctuations are born of [Werner] Heisenberg’s uncertainty principle, which says that certain pairs of physical characteristics—position and momentum, time and energy—are bound together by a fundamental elusiveness, wherein the more accurately we can specify one, the more wildly the value of the other fluctuates. The universe cannot be perfectly uniform—uncertainty will not allow it. At a precise moment in time, energy varies recklessly; at a well-defined position, momentum soars and swerves. Precise moments and well-defined positions normally mean tiny scales of time and space, but inflation blows all that up. Inflation, Mukhanov told Linde, could take these tiny quantum fluctuations on the order of 10⁻³³ cm and stretch them to astronomical proportions, creating slight peaks and valleys throughout space and laying a gravitational blueprint for what would eventually become a network of stars and galaxies.
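
    The two relations at work here, written in standard notation (for reference; not spelled out in the article):

        \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}, \qquad \Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2}

    Squeeze a fluctuation into a region about 10⁻³³ cm across and the momentum and energy uncertainties become enormous; inflation then stretches those microscopic ripples to cosmic size.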

    Still, Linde wasn’t satisfied. Getting inflation to start and end in just the right way required the whole thing to be improbably fine-tuned. It was beautiful, but unnatural. There would be two more years of work before he found the solution: chaos. Inflation didn’t require fine-tuning, he realized; it didn’t need to teeter on a cliff’s edge. If the inflaton started off in a highly probable and totally random state, then somewhere amongst the mess, there was bound to be a region with the right properties to spark inflation. From a sea of chaos, a vast island of order would emerge.

    That’s where the universe stood in the cold Moscow winter of 1986. Gorbachev had recently taken office as the General Secretary of the Communist Party and had just set into motion the perestroika—the restructuring of the Russian political, economic, and educational systems. For physicists like Linde, this engendered a strange silence. The old system for getting academic papers published abroad had been scrapped, but it hadn’t yet been replaced by a new one. So while inflation was being developed in the U.S., Russian physicists were forced to wait.

    Linde waited in bed. The doctors told him he was perfectly healthy, but he felt awful nonetheless. He was passing the time reading detective stories when the phone rang. It was the administration from the Lebedev Physical Institute, where he worked. They told him he was to travel to Italy to give a public lecture. He didn’t want to go. Under Gorbachev, Linde was allowed only one trip abroad each year, and he wasn’t about to waste it on a public lecture where he wouldn’t be working with other physicists or learning anything new. He told them he was too ill to travel. You are ill today, they said, but you’ll likely be healthy again soon, no? Or are you saying you are unable to go abroad at all?

    Linde grew scared. He knew if he said that he was unable to go abroad, they might never let him leave again—ever. He needed to prove that he could make the trip, and quickly. It was a Friday. He needed to get to the Hospital of the Academy of Sciences in order to obtain a certificate of health, but he was just learning how to drive and couldn’t risk a battle with the Moscow ice. He decided to pay for a taxi, a financial decision that didn’t come easy. Over the weekend he prepared the necessary travel documentation, and on Monday invested in another taxi ride to the Institute. He paid secretaries to immediately type up his paperwork, which he then ran to every corner of the Institute to get every last signature required. That bureaucratic nightmare ought to have taken a month and a half, and he accomplished it in four days. He dropped off the papers, went home, and collapsed into bed. He didn’t get up for two days.

    Soon the phone was ringing again. The trip was set, they told him, but the Italians wanted to see his lecture ahead of time—the day after tomorrow. Suddenly, Linde realized he had a golden opportunity. He could get around the systemless system and publish abroad! Instead of handing over his public lecture, he could write a new paper, give it to the powers that be and they would send it abroad for him—by diplomatic mail, no less. There was just one catch: he had half an hour to do it. It was the only way to get it typed up in time.

    Linde sat with his head in his hands, rolling it from side to side. Think, think. He felt like a compressed spring—he would either bounce to new heights or break under the stress. He knew that theorists can’t simply order up good ideas at will—physics doesn’t work that way. But today, he thought, it was going to have to.

    Thirty minutes later, he had come up with the theory of the chaotic self-reproducing inflationary multiverse. It was his greatest piece of work.

    Linde’s new theory reached beyond the bounds of the bubble. In his earlier version, our little patch of inflationary universe would arise from some small stretch of chaos. But while our universe was growing, what was happening behind the scenes? Surely there would be other regions where inflation could crop up. They’d be rare, but it didn’t matter—they would grow so big so rapidly that they would soon dominate the landscape. Each inflationary region creates more of itself—it’s self-reproducing. The process ends locally within each island universe, but on the largest scales it carries on, producing universe after universe after universe. In a half hour, Linde had taken our single universe, once the whole of everything there ever was or would be, and duplicated it, multiplied it, mutated it, sent it through a sequence of funhouse mirrors until it emerged on the other side a mere speck again, a humble, lone bubble in an infinite and growing multiverse.

    When he first developed the idea of inflation, Linde never for a second thought that it would be technologically feasible to test it. In principle, there were ways—you could look for the tiniest temperature fluctuations in the remnant heat from the Big Bang, those tiny quantum fluctuations that seeded the stars and galaxies, but that was a precision measurement he could barely fathom at the time. And if you wanted to dream even bigger, well, there ought to be something even more fundamental—quantum fluctuations of spacetime itself, primordial gravity waves. Seeing gravity waves…it would be like a fish seeing water. And seeing primordial gravity waves…well, it’s not just any water, it’s the first water, the origin of water, the origin of everything. But the technological skill that it would take to make that kind of measurement—it was downright unthinkable.

    On good days, he didn’t care. He knew the theory was right, he knew it in his bones. He knew it with the same kind of certainty that Einstein had about general relativity: When observations of the 1919 eclipse came in, proving that gravity bends light just as general relativity predicted, a reporter asked [Albert] Einstein how he would’ve felt had the experiment turned out differently. “I would have felt sorry for the dear Lord,” Einstein replied, “because the theory is correct.”

    The Device

    There was a problem with the antennas.

    When Chao-Lin Kuo arrived at NASA’s Jet Propulsion Laboratory in La Cañada Flintridge, California in 2003, the BICEP team was trying to implement Jamie Bock’s vision for a new polarization detector in their search for primordial gravity waves. Not that the old detectors didn’t work, but the things were unwieldy. Three copper feed horns, a handmade filter, and two detectors per pixel, all hand assembled. It’s not that they weren’t sensitive—they were nearly as sensitive as you can get. Rather, if they wanted better measurements, they didn’t need more sensitive detectors, they needed more detectors—quickly and cheaply. Bock’s vision was to digitize the whole assemblage and print it on circuit boards with microlithography, creating a kind of mass-producible polarimeter-on-a-chip. If it worked, it would change everything. It would be like upgrading from vacuum tubes to integrated circuits. But the team was stuck. They had designed a beautiful antenna array, but its readings kept coming out wrong.

    A single BICEP2 polarization detector

    The plan was to mount the detector to a radio telescope at the South Pole, where it would catch light that’s been traveling through an expanding cosmos for the last 13.8 billion years and measure its polarization, or the direction in which the photons are waving relative to the direction of their motion. If they could pin down each photon’s polarization with enough precision and map them across the sky, they’d have some hope of discerning a pattern known as a B-mode, the signature of primordial gravity waves. Kuo, a 30-year-old postdoc, set to work, putting the array through a host of tests until he figured out the problem: the feed lines were crossed. The array looked like a series of X’s, but at the center of each X, the antennas were picking up each other’s signals and screwing up the reading. He set to work on a new design.

    Kuo knew he had to keep the antennas at right angles to one another so they could subtract the horizontal polarization from the vertical and take the difference. And he had to keep them as symmetric as possible, because the difference they were looking for was one part in 30 million. One part in 30 million. All to find a B-mode. How exactly do you make something like that?
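
    A rough way to see why the symmetry matters is to difference a toy pair of detectors (a minimal sketch in Python, not the BICEP2 pipeline; the numbers and names are purely illustrative):

        # Toy model: two co-located detectors, one sensitive to vertical
        # polarization (V), one to horizontal (H). The sky is almost all
        # unpolarized intensity I, plus a tiny polarized part P.
        I = 1.0                        # unpolarized intensity (arbitrary units)
        P = I / 30e6                   # polarized signal, ~1 part in 30 million
        V = 0.5 * (I + P)              # vertical detector: half of I, plus P
        H = 0.5 * (I - P)              # horizontal detector: half of I, minus P

        print(V - H)                   # perfectly matched pair: the difference is P

        mismatch = 1e-4                # now give one detector a 0.01% gain error
        print(V * (1 + mismatch) - H)  # leaked intensity swamps the tiny P

    Even a 0.01 percent gain difference between the two antennas leaks more than a thousand times as much intensity into the difference as the polarization signal itself, which is why the pairs had to be kept so punishingly symmetric.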

    When he really thought about it, this thing they were trying to do, this thing they were trying to measure, it pushed the bounds of sanity. But Kuo already had a taste for pulling something like this off. As a grad student back at Berkeley, he had worked on the ACBAR experiment, which took measurements of the cosmic microwave background temperature fluctuations.

    The cosmic microwave background as mapped by ESA’s Planck satellite

    The idea that you could build something with your own two hands, point it at the sky, and see the faintest details of the nascent universe some 14 billion years in the past…well, you had to see it to believe it. You see the pattern. It’s not an image in a textbook or an idea in your mind—it’s on the sky. You look at it and suddenly you realize that you are one of a handful of human beings who has ever cast his eyes on the Big Bang. Well, it’s not exactly the Big Bang; it’s 380,000 years later–a mere eyeblink in the cosmic course of things, but still. To see back to the very beginning, the very first fraction of the very first second, you need something better than light. You need gravity.

    Kuo tried design after design. On some level, the antennas weren’t all that different from the kind you’d find in a cell phone, except this cell phone needed to answer calls from the beginning of time. The antenna array would shuffle the incoming photons down to the focal plane, where electromagnetism would be converted to heat and measured by an ultrasensitive thermometer. If you want to capture a signal that’s been steadily weakened over 14 billion years, you’d better make sure there’s virtually no heat and zero polarization coming from the instrument itself or it will totally swamp the measurement. That means keeping the detectors cooled to 0.25 Kelvin, just the slightest shiver above absolute zero. In the old way of thinking, the signal had to be transported off the focal plane and out of the cooling element to be read out by some room-temperature electronics, but the transmission itself through heat-conducting wires could warm the focal plane enough to drown out the signal. So Bock’s idea was to have the signals read by superconducting electronics on the focal plane itself using quantum-scale magnetic sensors developed at the National Institute of Standards and Technology in Colorado.

    In the meantime, they deployed the old detector to the South Pole in an experiment they named BICEP1. For three years, from 2006 to 2008, it would collect that nascent light and look for the slightest patterns of polarization.

    The BICEP2 focal plane

    Back at JPL, it was Kuo’s fifth design that stuck. He built an antenna array that looked like a series of H’s, with spaces between the vertical and horizontal lines to avoid having the feed lines intersect. Once the array had been fabricated at JPL, it was time to put it to the test. Kuo placed it carefully in the cryostat. Then he waited.

    It would take several days to get things cold enough. First, liquid nitrogen would cool everything down to 77 Kelvin. Then the liquid helium would kick in, lowering the temperature to 4 Kelvin. Finally, a few cubic centimeters of helium-3, a rare isotope. With helium-3, you have to tread carefully. The stuff is expensive; as a byproduct of nuclear weapons production, it’s a controlled substance.

    While Kuo waited, he thought about inflation. If that exponential expansion really gave birth to the universe, it ought to have taken quantum fluctuations in spacetime and blown them up across the sky. Some 380,000 years later, the photons that make up the cosmic microwave background radiation would have navigated that same warped spacetime, a journey that would imprint itself uniquely in their polarization. Find the B-mode polarization and you’ve found inflation’s smoking gun. Looking around the lab, he wondered if he was the only one worrying about inflation. These guys were hardware wizards—they wanted to build cool things. Most of them didn’t have a lot of faith in theory. Kuo respected that. But he needed to understand why he was doing this. Yes, he wanted to build a kickass detector. But he also wanted to know how the universe began.

    Once the helium-3 had everything cooled to 0.25 Kelvin, Kuo had to test the things, to see if they worked and to diagnose any problems. Start by sticking something room temperature in front of it and see what temperature reads out. Then something cooled with liquid nitrogen. Shine a source of microwaves at it, rotate their polarization, watch what happens. He ran every calibration test he could think of. The antenna array worked.

    Kuo had transformed Bock’s vision into a groundbreaking—and more important, functioning—detector. Because they used lithography, they could pack 512 of them on the focal plane, which meant BICEP2 would achieve the same sensitivity as BICEP1 in one-tenth the detection time, much like a bigger camera sensor can capture more stars at night. Kuo’s timing couldn’t have been better. BICEP1 was going off-line and the new technology had to ship out on the first flights of the year to Antarctica in September.

    Despite the pioneering technology, the truth was, no one on the team seemed to believe that a detection was in the cards. Even if inflation were correct, there was a good chance that primordial gravity waves would be way too small to measure. They just thought they’d use the telescope as a proving ground for the technology so that later it could be confidently incorporated into a next-generation space satellite. Satellites are expensive, and if something breaks once it’s up in orbit, you’re out of luck. So the BICEP2 team figured they’d take the technology out for a terrestrial test drive; in the meantime, they could place more upper limits on the amplitude of gravity waves and constrain some inflationary models in the process.

    The physicist Andrew Lange had said that this was a wild goose chase. Still, Kuo couldn’t help hoping. Every once in a while, he figured, you catch a goose. When [Arno] Penzias and [Robert] Wilson first discovered the cosmic microwave background in 1964, they thought it was literally pigeon shit. At least the BICEP2 team knew what they were looking for.

    In the middle of all that, Kuo had moved up the coast, from Pasadena to Palo Alto. He took a position at Stanford University, where he recruited an eager young grad student named Jamie Tolan to work with him on the measurement. One day, Tolan approached his advisor—he was writing a proposal for a NASA graduate student fellowship, and he asked Kuo to read the draft. In the proposal, Tolan laid out the goal of BICEP2: to see just how elusive primordial gravity waves are. Kuo smiled at Tolan. That’s not it, he told him. The goal is to detect them.

    The Questions

    Linde had wanted to be a geologist. His father was a radio physicist, his mother an experimental physicist who studied cosmic rays. The younger Linde wanted to do something different, something tangible. Something like rocks. But during the summer vacation between 7th and 8th grade, the Linde family drove from Moscow to the Black Sea. For a week, Linde sat in the back seat reading. He had brought two books: one on stars and the universe, the other on Einstein’s theory of special relativity. When they arrived at the Black Sea, three physicists stepped out of that car.

    At Moscow State University, Linde sought his colleagues’ advice: should he be a theorist or an experimentalist? The truth was, he didn’t think he was that great at calculation. He did, however, possess a certain intuition coupled with an obsessive mind. Once he became interested in a question, he couldn’t stop thinking about it. Linde soon realized he wasn’t nearly as impressed by measurements as he was by explanatory power. He didn’t want data—he wanted answers. Answers to big questions, the biggest: What happened when he was born? What will happen when he dies? What is it to feel, to think, to live, to exist? But he figured he’d start with simpler questions, the kind with more straightforward answers, like, how does an airplane fly? He promised himself he’d get to the hard ones eventually. There was no denying it. He was a theorist through and through.

    Eventually the hard questions snuck back in. When Linde came up with chaotic eternal inflation in that fateful half hour, he immediately realized the implications. In an infinite multiverse where physical constants can vary from one universe to the next, everything that can happen will happen—an infinite number of times. Every possible world, every incarnation of reality, every possible version of you living every possible version of your life. What then does it mean to want something, to do something, to be something? It was a vertiginous thought, but Linde didn’t let it get to him. So what if there were infinite Andrei Lindes? If I killed myself, he figured, it’s not like I’d survive as a copy—my death would simply become the moment that I was no longer identical to my copy, because I, unlike him, would be dead.

    In any case, it wasn’t clear that the copies existed in any meaningful way. That was the thing about quantum mechanics—the very nature of things seems to be determined by what an observer can measure. In the world of classical physics, you could have two baseballs that were identical in every way, and yet it’s fair to say that there are two of them. In the quantum realm, if you have two indistinguishable particles, you only have one particle. Wheeler and Feynman had emphasized that—in a sense, they said, there’s only one electron in the universe. Linde could never quite shake that.

    Even those quantum fluctuations—the very fluctuations that gave rise to the stars, polarized the microwave light, and created universe after universe—they are determined directly by what an observer can measure. Position and momentum, time and energy—these partners bound by uncertainty are so bound because the accurate measurement of one precludes the accurate measurement of the other. A particle doesn’t have a simultaneous position and momentum because an observer can’t measure a simultaneous position and momentum. Gravity waves are waves of uncertainty—uncertainty not only of existence but of observation. It was a fact that seemed to suggest that observers play some deep role in the nature of reality, a fact that Linde kept tucked away in the back of his mind. What is it to feel, to think, to live, to exist? If there was no observer who could simultaneously observe more than one Andrei Linde, then on some level you might say there’s still only one.

    Despite this, Linde was convinced that the existence of all those parallel universes held great explanatory power. While the multiverse was ultimately governed by the same laws of physics—by quantum mechanics and relativity, by inflation itself—each universe would be born with its own local sub-laws, a set of accidents that would determine its geometry, its physical constants, its particles, its forces, its own unique history. Inflation meant diversity. And diversity, Linde realized, was its own kind of explanation.

    So many features of our universe appear inexplicably fine-tuned for the existence of biological life. Change the strength of a force here or the mass of a particle there and poof!—no stars, no carbon, no life. Such coincidences demand explanation, and inflation had one: the strengths and masses vary from universe to universe, and we just happen to find ourselves in the one in which we can live. The inflationary multiverse may not have been predictive or observable, but it was explanatory. It could explain the illusion of design, the comprehensibility of the cosmos, the unreasonable effectiveness of mathematics. It could explain why the cosmological constant is so small and why the universe is so big. It could explain why we are here, why anything is here, because at the end of the day, Linde knew, physics isn’t really about the universe. It’s about us.

    The electron is about 2,000 times lighter than the proton. Why? Well, if it were ten times heavier or ten times lighter, we wouldn’t be here to ask. Spacetime has four large dimensions. Why? Well, any more dimensions and the gravitational force between two objects would fall off faster than 1/r²; any fewer and general relativity couldn’t support such forces at all. Either way, you’ve got no stable planetary systems and no life.
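
    (The scaling behind that claim is a standard textbook argument, sketched here in Newtonian terms rather than taken from the article: in d spatial dimensions the gravitational force between two masses falls off as F ∝ 1/r^(d−1). The observed d = 3 gives the familiar inverse square; for d ≥ 4 the force drops off more steeply and circular orbits become unstable, while for d ≤ 2 the Newtonian limit of general relativity provides no attractive force between distant bodies at all.)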

    Such explanations are called “anthropic,” and they made people nervous, the theoretical physics equivalent of “just because.” Colleagues told Linde he shouldn’t think about such things, but he didn’t like being told what to think. When he decided to include a section on the anthropic principle in the cosmology book he wrote, his editor in Moscow told him to take it out. If you leave it in, she said, you’ll lose the respect of your colleagues. Yes, Linde replied, but if I take it out, I’ll lose my respect for myself.

    As far as he was concerned, the metaphysical is always brought into the fold of physics in the end, and inflation meant that the burden of proof was on those who wished to believe in a single universe. Einstein had once said, “What really interests me is whether God had any choice in the creation of the world.” He wanted the universe to be a singular specimen of logical perfection and uniqueness. Not Linde. Linde wanted diversity, choice. In Russia, they only had one choice of cheese.

    At the Bottom of the World

    It was Kuo’s fourth visit here, at the bottom of the world, but he still wasn’t used to the whiteness of it all. Everything, everywhere—just white. A blank spot on the world, like someone forgot to fill it in. An endless white that makes you think about infinity. He must’ve been ten years old the first time he thought about it, whether the universe was infinite or finite. That was back in Taiwan—some 8,000 miles from here—where the sun still sets on a summer’s night. It hadn’t made sense to him, as a boy, that reality would just come to an end, that there was a place beyond which there is no more place. What if you sat there at the edge and threw a ball? Where would it go? Someone else had made the same argument, he remembered. A philosopher? Now, as a physicist, he knew it wasn’t so simple— that the universe could be curved and closed, like the surface of a sphere, finite but without an edge. He supposed he had always been a physicist. Funny how all this white makes you think of that. Of all the colors, he missed green the most. Green and the smell of humidity. He had never realized what humidity smelled like until it was gone.

    An LC-130 takes off from the Amundsen-Scott South Pole Station.

    It was hard to say how many days he had been here—hard to differentiate time when the scenery never changes, the weather never shifts, and the sun never goes down. Getting here had been an adventure, as usual. He had flown some 15 hours from California to Christchurch, New Zealand, for a stopover at the International Antarctic Center, where he traded his belongings for extreme cold weather gear before boarding an Air Force aircraft and flying another 14 hours to McMurdo Station here in Antarctica. From McMurdo it was another three-hour flight to the Amundsen-Scott South Pole Station on a plane that landed on skis. Stepping out onto the ice sheet, he had marveled again at the sky, so perfectly blue—the clearest sky on the planet.

    That’s why they were here. Antarctica is the largest desert on Earth. The altitude gets you up above most of the problematic parts of the atmosphere and the biting cold takes care of the rest—any stray water vapor in the air is frozen out of the sky, leaving microwave light from the early universe to stream through unimpeded. It also helps that the sun only rises and sets once a year.

    It was December now; he would be here until Valentine’s Day. The sun would set in March. He didn’t know how the “winter overs”—the people who stayed here past March—did it, not when -20°F was a warm summer day. Of course, the science station had grown more comfortable lately. It had a sauna now and a greenhouse for growing hydroponic fruits and vegetables. Earlier, they used to give you this weird yellow powder, and you’d mix it with water, fry it up, and call it a meal. Now, you could enjoy fresh produce in the cafeteria then go play on the basketball court or relax in the library or game room.

    Between the porthole windows in the doors and the firemen’s lockers lining the corridors, the place looked like the perfect combination of a research ship and a high school. Ship was more accurate—the Amundsen-Scott station, perched on Antarctica’s high plateau, stands on stilts to avoid the snow that never thaws, atop a glacier some 9,000 feet thick that ever so slowly drifts.

    The Dark Sector Lab

    To get to work, Kuo would walk along the ice sheet, across the airplane runway, upwind to the Dark Sector lab, so-named because all white light and radio transmission is forbidden there. The lab was hardly a mile away, but cold, wind, and altitude have a funny way of stretching distance. By the time he reached the telescope, he was queasy and out of breath.

    BICEP2 was a refracting telescope with a small aperture—just 26 centimeters. It could afford to be small because the features it was looking for were the size of the full moon on the sky. All of its moving parts were kept inside where it’s warm. Only its head poked out through a hole they had cut in the roof. The telescope was focused on a 20° patch of the so-called Southern Hole, the cleanest stretch of sky available with a clear view straight out of our Milky Way. At the South Pole, the same patch of sky just keeps spinning in circles above you; it never slips behind the horizon or disappears from sight. The telescope can stare it down for years and never blink.

    BICEP2 observed only photons with a frequency of 150 GHz, filtering everything else out. They had opted for a single frequency because it was the only way to optimize every part of the instrument, and 150 GHz is the sweet spot, the place where you’re most likely to see a clean signal of gravity waves. The two possible impostors, magnetized radiation from extreme astronomical phenomena and interstellar dust, both of which can polarize the light and mimic gravity waves, rise at low and high frequencies respectively. But 150 GHz is right in the Goldilocks middle. It also happens to sit close to the peak frequency of the cosmic microwave background, the photons that flew out of the dense early universe 380,000 years ago.
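
    Two quick back-of-the-envelope checks on the numbers in the last two paragraphs (a sketch only, in Python; the constants are standard physical values, not figures from the BICEP2 papers):

        import math

        c = 2.998e8      # speed of light, m/s
        h = 6.626e-34    # Planck constant, J*s
        k = 1.381e-23    # Boltzmann constant, J/K

        # 1. A 2.725 K blackbody peaks near 160 GHz in frequency units
        #    (Wien's displacement law in its frequency form), so the
        #    150 GHz band sits close to the brightest part of the CMB.
        print(2.821 * k * 2.725 / h / 1e9)             # ~160 GHz

        # 2. A 26-centimeter aperture observing 150 GHz light (wavelength
        #    about 2 mm) has a diffraction-limited beam of roughly
        #    1.22 * wavelength / diameter -- about half a degree, the
        #    angular size of the full moon.
        wavelength = c / 150e9
        print(math.degrees(1.22 * wavelength / 0.26))  # ~0.5 degrees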

    The telescope had two lenses that focused the light, a design similar to Galileo’s, except that this one fed the light into the most sensitive superconducting detectors ever built. Kuo and his team were here to assemble the thing and then take some calibrations, but even turning a screw was proving to be difficult in the cold.

    Once the telescope was up and running it would start collecting data, which it would store temporarily on the computers at the South Pole. But soon a low Earth orbit communications satellite would appear above the horizon and relay the data from the South Pole station to NASA’s White Sands complex in New Mexico. From there it would bounce around the U.S. until it landed in a cluster of computers at Harvard University, which the BICEP2 team could later access from California.

    California. Kuo wondered what his wife and children were doing back home in Stanford. They were probably enjoying the green, green grass and the warmth of a more fleeting sun.

    The Observer and the Observed

    California. Linde moved here in 1990 with his wife, Renata Kallosh, and their two sons. A year earlier they had left Moscow for Switzerland, intending to spend a year at CERN before heading back to the Soviet Union. But offers came in while they were there, including a double offer from Stanford University for both Linde and Kallosh, who is a string theorist, and so they changed course and immigrated to the U.S.

    In the two decades that followed, evidence for inflation mounted, and, in 2003, cosmologists hit the jackpot. NASA’s Wilkinson Microwave Anisotropy Probe (WMAP)—an 1,800-pound spacecraft that orbited the sun nearly a million miles out— had produced an unprecedented map of the microwave sky, measuring temperature differences in the near-uniform radiation down to one part in 100,000.

    NASA’s WMAP satellite and its map of the cosmic microwave background

    Those slight hot and cold spots traced quantum fluctuations in the density of matter 380,000 years after the Big Bang, when the microwave light was first emitted. The pattern in the map bore out several key predictions of inflation with astounding precision. Even the inflation doubters were coming around. Now there was just one piece of evidence missing: B-mode polarization, the mark of primordial gravity waves.

    Linde wasn’t worried about B-modes. Most versions of inflation predicted them at amplitudes way too small to measure, which meant that even a non-detection could be a strange kind of confirmation, at least for those who already believed. As far as he was concerned, the experimental evidence was already overwhelming. Still, he supposed, on the off-chance they did discover B-modes—well, it would just drive home the fact that quantum mechanics needs to be taken seriously, even at cosmic scales. The beauty of inflation was that it provided the missing link between the tiny quantum world and the largest scales of the universe. We are the great-great-great-grandchildren of quantum fluctuations, he liked to say.

    When you try to apply the laws of quantum mechanics to the universe as a whole, you hit a paradox: all things quantum are defined in terms of what an observer can measure, but no one can measure the universe as a whole because, by definition, you can’t be outside the universe. The issue was captured most perfectly in the famous Wheeler-DeWitt equation, which showed that the quantum state of the universe could not evolve in time, stuck, as it were, in a frozen, eternal moment. As Linde often put it, without observers, the universe is dead.

    Linde knew that the only way to get time flowing was to observe the universe from here, on the inside. When we look out at the cosmos through a telescope, he thought, we don’t see ourselves in the picture. And so we split the world in two: observer and observed. We make a measurement, and the universe comes to life. It sounded awfully solipsistic, but there it was.

    Everyone assumed you can talk about “observers” without talking about consciousness—things like Geiger counters or space telescopes—but Linde wasn’t so sure. If you remove subjective experience from the picture, he thought, there’s no more picture. He couldn’t help wondering whether consciousness was the missing ingredient that would make the ultimate theory of physics consistent. The idea was inspired by gravity waves.

    Back in the day, physicists thought of space and time as tools that we use to describe the motion of matter—not as things in their own right. It was Einstein who realized that even if you emptied the universe of matter, spacetime itself would remain and could exhibit a behavior all of its own: it could wave. Gravity waves meant that spacetime was every bit as real and fundamental as matter itself. Later theoretical developments—namely supergravity—extended the symmetries of this spacetime so that matter turned out to be nothing deeper than excitations of the geometry of superspace. In other words, it was spacetime that was fundamental and matter was derived, a tool for describing the excitations of spacetime.

    Linde wondered if consciousness awaited the same vindication. Today we think of it as a tool we use to describe the external world, and not as an entity on its own. But what if the external world were empty? What if consciousness was fundamental and the universe derived? Could space, time, and matter together be nothing more than excitations, the gravity waves of consciousness?

    What is it to feel, to think, to live, to exist? It was still the only question he really cared to answer. The rest was just details.

    The Signal

    They must have made a mistake.

    They had screwed up the analysis or there was some design flaw they hadn’t accounted for yet. A signal this bright—it had to be coming from the instrument itself. There was no way this thing was coming from the sky.

    BICEP2 had collected data for three years and now the team had set out to scour it for B-modes. But they barely had to scour. The B-modes were glaring.

    They couldn’t figure out what they’d done wrong. They could have sworn they’d accounted for any spurious polarization, any stray morsel of heat. The detectors had passed every last performance test with flying colors. Where was this thing coming from?

    They split the data in half, made a map from the first year and a half of observation and a map from the second year and a half. Then they subtracted them. They figured if the signal went away, they’d know it had been in both halves equally, that it hadn’t changed over time. But if it had changed over time—well then it wasn’t cosmological, it was an engineering blip. They ran the test. The signal canceled out. It wasn’t a blip.
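
    The logic of that null test is simple enough to sketch (a hypothetical illustration in Python, not the team’s pipeline; the function and variable names are invented):

        import numpy as np

        def jackknife_passes(map_first_half, map_second_half, noise_sigma):
            """Split-data null test: difference two half-dataset maps.
            A sky signal that is constant in time cancels in the difference;
            whatever remains should look like pure noise."""
            diff = map_first_half - map_second_half
            chi2 = np.sum((diff / noise_sigma) ** 2)  # noise_sigma: per-pixel noise of the difference map
            dof = diff.size
            # A chi-squared variable has mean dof and variance 2*dof, so ask
            # that the difference map sit within ~3 sigma of pure noise.
            return abs(chi2 - dof) < 3 * np.sqrt(2 * dof)

    If the difference map is consistent with noise, the signal was present in both halves equally and cannot be a drifting instrumental artifact.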

    They split and recombined the data in every which way, pushed themselves to imagine even the most unlikely scenarios that would have the signal originating in the instrument. Again and again they came up empty handed. Eventually there was no alternative left standing: the signal was coming from the sky.

    Of course, there was always the issue of the dust. Everyone knew that interstellar dust in the Milky Way could polarize the photons and mimic the effect of gravity waves. Obviously the dust contributed to the signal, but the question was, how much? The Southern Hole at 150 GHz ought to be pretty clean. That’s why they chose it. But you never know.

    The team didn’t have access to any full-sky maps with a decent signal-to-noise ratio for polarized emission from dust—but they knew exactly who did. ESA’s Planck satellite had been mapping the dust from space and ought to be able to tell them exactly how much of it was contributing to their signal. The BICEP2 team submitted a request to share data. Request denied. They waited, then tried again. Request denied. Was the Planck team being competitive or did they simply feel the data wasn’t ready? Who could say. Either way, Kuo and his team were simply going to have to make do with whatever data they could get their hands on.

    As the Milky Way passes overhead, charged particles of the aurora australis billow over the Dark Sector.

    They combed the literature for the leading dust models and fed the results of five of them into their own model. Unfortunately, the models were all built from observations of unpolarized dust at various points on the sky, which were then extrapolated. But without Planck’s actual data, it was their best shot.

    They used the models to create fake maps of dust, and they put in 3 million CPU hours on the Harvard supercomputer simulating the results 500 times. The signal wasn’t going away. Even after they subtracted the dust signal, the B-modes were still sitting there in plain sight.
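
    In spirit, each of those trials looks something like the sketch below (a toy illustration only; the function names, inputs, and statistics are invented, not the collaboration’s code):

        import numpy as np

        rng = np.random.default_rng(seed=42)

        def dust_subtraction_trial(observed_bb, dust_model_bb, dust_model_err):
            """One Monte Carlo trial: draw a plausible dust spectrum from a
            model's quoted uncertainty, subtract it from the observed B-mode
            band powers, and return the residual."""
            dust_draw = rng.normal(dust_model_bb, dust_model_err)
            return observed_bb - dust_draw

        def residual_summary(observed_bb, dust_model_bb, dust_model_err, n_trials=500):
            """Repeat the trial many times; if the residual stays well above
            zero across the trials, the dust model alone cannot account for
            the measured signal."""
            trials = np.array([dust_subtraction_trial(observed_bb, dust_model_bb, dust_model_err)
                               for _ in range(n_trials)])
            return trials.mean(axis=0), trials.std(axis=0)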

    That’s when they noticed that a member of the Planck team, J.P. Bernard, had given a public lecture on the dust data. His presentation contained a slide with an image of the dust map. The BICEP team figured it was time to get creative. They digitized the image, reverse engineering it to extract their best guess at the raw data. They knew it was an uncertain procedure, but that was ok—they weren’t staking their claim on it. They were just going to use it as model #6.

    Again they subtracted the dust, and again the B-modes remained visible, bright as day.

    They had to strike the right balance between being careful and being quick. A signal this bright—someone else was bound to see it. They could feel the competition nipping at their heels. They all agreed not to say a word about it to anyone. Not until they were sure. They were at three-sigma certainty—that meant there was a 1 in 740 chance that the signal was a statistical fluke. In physics, three sigma is considered evidence. Five sigma, a 1 in 3.5 million chance…well, that’s a discovery.
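
    Those odds come straight from the tail of a normal distribution (a quick check in Python using SciPy; the probabilities are one-sided, as is conventional for quoting detection significance):

        from scipy.stats import norm

        for sigma in (3, 5):
            p = norm.sf(sigma)              # one-sided tail probability
            print(f"{sigma} sigma -> 1 in {1 / p:,.0f}")
        # 3 sigma -> roughly 1 in 740
        # 5 sigma -> roughly 1 in 3.5 million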

    The B-mode pattern from BICEP2

    For a year they sat on the result. Kuo was hoping to hell it was real, though if you asked him to bet on it, he wouldn’t risk the money. He had a nagging fear that the B-modes were nothing more than mathematical contamination, just mundane E-mode polarization leaking out. The problem was that BICEP2 had only studied a small patch of sky. Each fragment of data is just a little line segment—it’s only when you look at the way those lines are drawn across the entire sky that a pattern emerges. If the line segments form a series of symmetric shapes, like circular ripples, that’s an E-mode: the standard pattern produced by the same old density fluctuations that create the hot and cold spots in the CMB. But if the pattern looks asymmetric, like pinwheels turning in a given direction, that’s the jackpot. Only primordial gravity waves can turn those pinwheels.

    They had data from a 20° patch of sky, which is to say, not a lot. What do you do with the line segments out toward the edges? You see hints of pattern there, perhaps a slight arc, a suggestion of a pinwheel. But what if it’s a circle? Your statistics start to break down. So you throw away some signal, a sacrifice to the gods of error bars. But how to strike just the right balance between signal and certainty was far from clear.

    One evening in Stanford, after he’d had dinner and helped put the kids to bed, Kuo noticed an e-mail from his grad student, Tolan.

    Two years earlier, Kuo had urged Tolan to find a better way to distinguish the E-modes from the B-modes out at the edges. Tolan began working on the problem on the side, “off pipeline.” They were told again and again, stick to the pipeline, it’s the only way to keep things running smoothly, and it was. Everyone treated Tolan’s work as a kind of side hobby, so he just kept at it, posting updates now and then to the team’s internal website.

    Kuo opened the e-mail. I’ve got a preliminary posting of the matrix estimator. Tolan had done it. He had found a way to cleanly separate the B-modes from the E-modes, and he had run their data. Kuo prepared himself for disappointment. He was sure the signal had disappeared. He clicked on the link to the internal website and scanned Tolan’s results.

    The signal hadn’t disappeared.

    The signal had gotten stronger.

    The error bars had shrunk, and the certainty had risen—from three sigma to five sigma. A discovery.

    That night, the sun went down, but Kuo couldn’t sleep.

    In the morning he e-mailed Tolan: If this signal is real, this is the home run of all home runs…

    If it were real, it would be the closest anyone had ever come to seeing the beginning of time. It would be the smoking gun proof of inflation. It would be a direct look at the quantum mechanical underpinnings of the universe, probing physics at energies a trillion times greater than what particle physicists could achieve in the hallowed tunnels of the LHC. If it were real, Kuo could finally tell his ten-year-old self the answer: if the universe isn’t infinite, it is really damn big.

    Funny, the difference between experiment and theory. Theory is the stuff of great drama, littered with “aha” moments. It’s Archimedes shouting, “eureka!” in the bathtub, it’s Guth writing, “spectacular realization” in his notebook, it’s Linde waking his wife to tell her, “I think I know how the universe was created.” But experiment—experiment is more like life. It’s messy and it happens gradually after a good amount of soldering and shivering and the turning of screws. Sometimes the results are null—and sometimes the results are dust—but little by little it adds up to something tangible and true.

    Never Again?

    Linde and his wife were packing their things for a Caribbean vacation.

    They needed it. They’d been working together again, writing paper after paper, producing a whirlwind of work. Linde couldn’t believe how much they’d done. Every time he had a good idea, he was convinced it would be his last.

    As people, and as physicists, they were a perfect match. Where Linde had physical intuition, Kallosh had mathematical intuition. What was difficult for one came easy for the other. They saw the universe differently, and while the process was painful, they each raised the other up in their thinking. Not that it seemed so grand in the moment. Every time they were finishing yet another paper, they’d end up shouting, “Never again!” But they’d take a break, perhaps a vacation, and then they’d start all over again. That’s just how it was in their household. Ideas were nourishment. Physics was air.

    Linde thought back to his younger days. It was funny now to think he’d ever wondered exactly what he ought to be. Now he understood that he was a theorist for the same reason an artist is an artist or a poet is a poet—because it’s too painful not to be.

    At the Door

    Kuo walked up the long driveway, the cameraman keeping pace behind him. For Kuo, the B-mode measurement was a technological achievement, the end of a marathon, the feeling of knowing that he had played an indelible part in the grand unfolding of science. But he knew that for Linde it would be something different: a moral victory, the triumph of reason and intuition, a validation 30 years coming. He was itching to tell him, he was rehearsing it in his mind. Five sigma. Clear as day. r equals 0.2. He raised his hand to knock on Linde’s door.

    Epilogue

    As of October 2014, maps made by Planck suggest that there is far more polarized dust in the Milky Way than theoretical models had predicted and that the entire B-mode signal measured by BICEP2 may be due to dust. Physicists and astronomers still need more data to determine the source of the signal and to figure out whether gravity waves are lurking behind the dust. Kuo is gearing up to head back down to the South Pole in December to set up BICEP3. The new instrument’s field of view will be three times larger than BICEP2’s and will measure light at a frequency of 95 GHz. By comparing its results with BICEP2’s, Kuo and his team say they will be able to differentiate gravity waves from dust. As for Linde, he is hard at work incorporating inflationary theory into theories of fundamental physics, satisfied that the experimental evidence for inflation is overwhelming even in the absence of gravity waves and motivated, as ever, by the theory’s explanatory power and beauty. Science carries on.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 2:09 pm on December 11, 2014 Permalink | Reply
    Tags: , , NOVA   

    From NOVA: “Volcanoes May Be Masking the Severity of Global Warming” 

    PBS NOVA

    NOVA

    Thu, 11 Dec 2014
    Christina Couch

    Global warming continues to heat up the earth, but volcanoes are keeping us just a little cooler.

    A new paper published in Geophysical Research Letters shows that volcanic eruptions may be part of the reason why the earth isn’t heating up quite as fast as climate models predict. Sneaky sulfur dioxide emissions from smaller volcanoes that weren’t previously factored into climate models are temporarily cooling surface temperatures, according to research led by MIT atmospheric scientist David Ridley.

    Alaska’s Augustine Volcano, January 12, 2006

    “If an eruption is powerful enough, the sulfur dioxide can reach the upper atmosphere, the stratosphere, where it forms literally liquid sulfuric acid droplets,” said Benjamin Santer, a research scientist at Lawrence Livermore National Laboratory and co-author on the study. “Those droplets reflect some fraction of incoming sunlight back to space, preventing that sunlight from penetrating deeper into the atmosphere. That’s the primary cooling mechanism.”

    According to Santer and Ridley’s research, that light-reflecting cooling effect is strong enough to bring global temperatures down anywhere from 0.09˚ to 0.22˚F since 2000. Unfortunately the cooling won’t do much to counteract global warming in the long term—Ridley said that the amount of sulfur dioxide released in a small eruption generally dissipates after about one year. But these emissions may be part of the reason why over the last ten to 15 years, average global temperatures haven’t increased as rapidly as they have in decades past. The Intergovernmental Panel on Climate Change estimates that average worldwide temperatures are currently increasing at about one-third the rate that they were between 1951 and 2012.
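
    For readers who think in Celsius, those offsets convert directly (a trivial check; temperature differences scale by 5/9, with no 32-degree offset):

        for delta_f in (0.09, 0.22):
            print(round(delta_f * 5 / 9, 2), "degrees C")
        # roughly 0.05 to 0.12 degrees C of cooling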

    “I think there’s quite a good case now that volcanoes are at least able to explain about a third of that,” Ridley said.

    On top of providing volcano emissions data, Ridley’s study also offers scientists a new way to explore the lower stratosphere. Both current research and climate models rely on data derived from satellite observations to measure what’s happening in the stratosphere. That works well until around nine to ten miles above the earth’s surface, where clouds contaminate the data and make it difficult to discern exactly what’s happening. The problem becomes even more complex around the poles, where the stratosphere dips lower than it does in the tropics and creates “this kind of wedge of stratosphere that we’re missing when just using the satellites,” Ridley said.

    Instead of making estimates based on satellite observations alone, Ridley’s team also used data from a balloon-borne particle counter and from measurement devices on the ground. These included four lidar systems, which measure atmospheric particles using laser light pulses, and a network of robotic solar photometers called AERONET that uses sunlight to measure how effective aerosol particles are at blocking light. The ground- and air-based measurements gave researchers a clearer picture of the chemical makeup of the lower stratosphere.

    “Even though it’s a small part of the atmosphere that we were able to include that hadn’t been included before, it probably has a majority of the aerosols that are important” in the short term, said Ryan R. Neely III, a co-author on the study and lecturer of observational atmospheric science at the University of Leeds.

    Alan Robock, a climate scientist who was not involved in the study but was quoted in the journal’s press release, commended Ridley’s team for using ground and air-based instruments to examine the lower stratosphere in a way that satellite data simply can’t. He said that the new observational methods can potentially help scientists make better climate predictions and create more accurate models in the future.

    Creating accurate climate models hasn’t been easy in the middle of a so-called global warming pause or “hiatus,” especially one that’s controversial among scientists. While some attribute the slowdown to the ocean storing heat, others chalk it up to solar cycles or temperature fluctuations from El Niño and La Niña weather patterns.

    “The hiatus, the pause, it’s a little misleading,” said Todd Sanford, a climate scientist with Climate Central. “We’re still setting global [temperature] records. Really what this is talking about is how quickly temperatures are increasing, not that they have stopped increasing.”

    Even with the pause, global warming is still a major environmental problem, one so large that some researchers are investigating whether a strategy like spraying sulfur dioxide into the stratosphere to mimic the cooling effects from volcanoes is a viable temporary solution.

    “We know that if this were to be done, we could get fairly rapid reductions in temperatures but there are issues with it,” Sanford said. “You’re masking the effect of CO2 in some ways. That’s good as long as you’re doing it, but if for any reason you stopped injecting these particles up into the atmosphere, you’re now very quickly unmasking all of that CO2 warming. You’d get all of that warming back. It’s one of these things where if you start it and you’re not doing anything else on CO2, you’ve got to keep it going.”

    Besides, Sanford added, simply cooling the atmosphere without reducing CO2 won’t address other problems caused by carbon, like increasing ocean acidification.

    Sulfur dioxide injections could also deplete the ozone layer, produce uneven temperature and precipitation patterns, completely obscure our view of the sky, and create global political issues as the world decides “what temperature to set the thermostat,” Robock said. He added that the technology to execute this type of geoengineering doesn’t yet exist, though others like Harvard climate scientist David Keith argue that it does. Even if we were able to get a critical mass of sulfuric acid into the stratosphere, there’s no way to control it once it’s there.

    “If you have an existing cloud up there and you start spraying more sulfur, theory tells us the particles will grow and you’ll get larger particles rather than more particles and they’ll be much less effective at scattering sunlight,” Robock said. “They’ll also fall out faster so you have to put a lot more up there.”

    Kicking our carbon habit is the real solution to global warming, Robock added, but until that happens, creating more accurate climate models can help us better understand how the atmosphere is changing.

    Ridley warns against the dangers of placing too much emphasis on volcanic cooling. While volcanoes are playing a small but significant role in keeping rising temperatures a little in check, sulfur dioxide cooling isn’t a safeguard against the effects of global warming. “This is really just a bit of an offset on the warming rather than a change in the expected trend on warming,” he said. Besides, he added, no good global hiatus lasts forever. “We’ve got no reason to believe that that will continue.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     