Tagged: NOVA

  • richardmitnick 10:49 am on August 23, 2015
    Tags: NOVA

    From NOVA: “The Shadow of a Black Hole” 

    PBS NOVA

    21 Aug 2015
    Matthew Francis

    Part of Event Horizon Telescope [EHT]

    Event Horizon Telescope map

    The invisible manifests itself through the visible: so say many of the great works of philosophy, poetry, and religion. It’s also true in physics: we can’t see atoms or electrons directly and dark matter seems to be entirely transparent, yet this invisible stuff makes and shapes the universe as we know it.

    Then there are black holes: though they are the most extreme gravitational powerhouses in the cosmos, they are invisible to our telescopes. Black holes are the unseen hand steering the evolution of galaxies, sometimes encouraging new star formation, sometimes throttling it. The material they send jetting away changes the chemistry of entire galaxies. When they take the form of quasars and blazars, black holes are some of the brightest single objects in the universe, visible billions of light-years away. The biggest supermassive black holes are billions of times as massive as the Sun. They are engines of creation and destruction that put the known laws of physics to their most extreme test. Yet, we can’t actually see them.

    A simulation of superheated material circling the black hole at the center of the Milky Way. Credit: Scott C. Noble, The University of Tulsa

    Black holes are a concentration of mass so dense that anything that gets too close—stars, planets, atoms, light—becomes trapped by the force of gravity. The point of no return is called the event horizon, and it forms a sort of imaginary shell around the black hole itself. But event horizons are very small: the event horizon of a supermassive black hole could fit comfortably inside the solar system (comfortably for the black hole, that is, not for us). That might sound big, but on cosmic scales, it’s tiny: the black hole at the center of the Milky Way spans just 10 billionths of a degree on the sky. (For comparison, the full Moon is about half a degree across, and the Hubble Space Telescope can see objects as small as 13 millionths of a degree.)

    NASA/ESA Hubble Space Telescope
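
    For readers who want to check those numbers, here’s a quick back-of-the-envelope calculation in Python. The mass (~4.3 million Suns) and distance (~26,000 light-years) of the Milky Way’s central black hole are commonly cited approximate values assumed here for illustration, not figures from the article:

    ```python
    import math

    G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
    C = 2.998e8       # speed of light, m/s
    M_SUN = 1.989e30  # one solar mass, kg
    LY = 9.461e15     # one light-year, m

    mass = 4.3e6 * M_SUN    # assumed mass of the Milky Way's central black hole
    distance = 26_000 * LY  # assumed distance from Earth

    r_s = 2 * G * mass / C**2                      # Schwarzschild radius, ~1.3e10 m
    diameter = 2 * r_s                             # horizon diameter, ~25 million km
    angle_deg = math.degrees(diameter / distance)  # small-angle approximation

    print(f"horizon: {angle_deg:.1e} degrees")  # ~6e-9: a few billionths of a degree
    print("full Moon: ~0.5 deg; Hubble limit: ~1.3e-5 deg")
    ```

    That lands on the same order as the 10 billionths of a degree quoted above; the lensed “shadow” the EHT hunts for appears somewhat larger than the horizon itself.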

    Both the size and nature of the event horizon make it difficult to observe black holes directly, though indirect observations abound. In fact, though black holes themselves are strictly invisible, their surrounding regions can be extremely bright. Many luminous astronomical objects produce so much light from such a small region of space that they can’t be anything other than black holes, even though our telescopes aren’t powerful enough to pick out the details. In addition, the stars at the center of the Milky Way loop close enough to the central object to show they’re orbiting something millions of times the mass of the Sun, yet smaller than the solar system. No single object, other than a black hole, can be so small and yet so massive. Even though we know black holes are common throughout the universe—nearly every galaxy has at least one supermassive black hole in it, plus thousands of smaller specimens—we haven’t confirmed that these objects have event horizons. Since event horizons are a fundamental prediction of general relativity (and make black holes what they are), demonstrating their existence is more than just a formality.

    However, confirming event horizons would take a telescope the size of the whole planet. The solution: the Event Horizon Telescope (EHT), which links observatories around the world to mimic the pinpoint resolution of an Earth-sized scope. The EHT currently includes six observatories, many of which consist of multiple telescopes themselves, and two more observatories will be joining soon, so that the EHT will have components in far-flung places from California to Hawaii to Chile to the South Pole. With new instruments and new observations, EHT astronomers will soon be able to study the fundamental physics of black holes for the first time. Yet even with such a powerful team of telescopes, the EHT’s vision will only be sharp enough to make out two supermassive black holes: the one at the center of our own Milky Way, dubbed Sagittarius A*, and the one in the M87 galaxy, which weighs in at nearly seven billion times the mass of the Sun.

    The theory of general relativity predicts that the intense gravity at the event horizon should bend the paths of matter and light in distinct ways. If the light observed by the EHT matches those predictions, we’ll know there’s an event horizon there, and we’ll also be able to learn something new about the black hole itself.

    The “gravitational topography” of spacetime near the event horizon depends on just two things: the mass of the black hole and how fast it is spinning. The event horizon diameter of a non-spinning black hole is roughly six kilometers for each solar mass. In other words, a black hole the mass of the Sun (which is smaller than any we’ve yet found) would be six kilometers across, and one that’s a million times the mass of the Sun would be six million kilometers across.

    If the black hole is spinning, its event horizon will be flattened at the poles and bulging at the equator and it will be surrounded by a region called the ergosphere, where gravity drags matter and light around in a whirlpool. Everything crossing the border into the ergosphere orbits the black hole, no matter how fast it tries to move, though it still conceivably can escape without crossing the event horizon. The ergosphere will measure six kilometers across the equator for each solar mass inside the black hole, and the event horizon will be smaller, depending on just how fast the black hole is rotating. If the black hole has maximum spin, dragging matter near the event horizon at close to light speed, the event horizon will be half the size of that of a non-spinning black hole. (Spinning black holes are smaller because they convert some of their mass into rotational energy.)
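
    These size rules translate directly into code. The sketch below uses the standard textbook formula for the horizon of a spinning (Kerr) black hole, r+ = (GM/c²)(1 + √(1 − a²)) for dimensionless spin a between 0 and 1; the formula isn’t spelled out in the article, so treat this as an illustration of the numbers above:

    ```python
    import math

    GM_OVER_C2_KM = 1.475  # GM/c^2 for one solar mass, in kilometers

    def horizon_diameter_km(mass_solar, spin=0.0):
        """Event horizon diameter for a black hole of `mass_solar` solar
        masses and dimensionless spin in [0, 1]. Textbook Kerr result:
        r+ = (GM/c^2) * (1 + sqrt(1 - spin^2)), which equals the full
        Schwarzschild radius with no spin and half of it at maximal spin."""
        r_plus = GM_OVER_C2_KM * mass_solar * (1 + math.sqrt(1 - spin**2))
        return 2 * r_plus

    def ergosphere_equator_km(mass_solar):
        """The ergosphere's outer edge at the equator sits at 2GM/c^2 for
        any spin: the 'six kilometers per solar mass' rule in the text."""
        return 4 * GM_OVER_C2_KM * mass_solar

    print(horizon_diameter_km(1))         # ~5.9 km: one solar mass, non-spinning
    print(horizon_diameter_km(1e6))       # ~5.9 million km
    print(horizon_diameter_km(1e6, 1.0))  # ~3 million km: half the size at max spin
    print(ergosphere_equator_km(1e6))     # ~5.9 million km across the equator
    ```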

    When the EHT astronomers point their telescopes toward the black hole at the center of the Milky Way, they will be looking for a faint ring of light around a region of darkness, called the black hole’s “shadow.” That light is produced by matter that is circling at the very edge of the event horizon, and its shape and size are determined by the black hole’s mass and spin. Light traveling to us from the black hole will also be distorted by the extreme gravitational landscape around the black hole. General relativity predicts how these effects should combine to create the image we see at Earth, so the observations will provide a strong test of the theory.

    If observers can catch sight of a blob of gas caught in the black hole’s pull, that would be even more exciting. As the blob orbits the black hole at nearly the speed of light, we can watch its motion and disintegration in real time. As with the ring, the fast-moving matter emits light, but from a particular place near the black hole rather than from all around the event horizon. The emitted photons are also influenced by the black hole, so timing their arrival from various parts of the blob’s orbit would give us a measure of how both light and matter are affected by gravity. The emission would even vary in a regular way: “We’d be able to see it as kind of a heartbeat structure on a stripchart recorder,” says Shep Doeleman, one of the lead researchers on the EHT project.

    Event Horizon Telescope astronomers have already achieved resolutions nearly good enough to see the event horizon of the black hole at the center of the Milky Way. With the upgrades and addition of more telescopes in the near future, the EHT should be able to see if the event horizon size corresponds to what general relativity predicts. In addition, observations of supermassive black holes show that at least some may be spinning at close to the maximum rate, and the EHT should be able to tell that too.

    Black holes were long considered a theorist’s toy, ripe for speculation but possibly not existing in nature. Even after discovering real black holes, many doubted we would ever be able to observe any of their details. The EHT will bring us as close as possible to seeing the invisible.

    Contributing institutions

    Some contributing institutions are:

    ALMA
    APEX
    Academia Sinica Institute for Astronomy and Astrophysics
    Arizona Radio Observatory, University of Arizona
    Caltech Submillimeter Observatory
    Combined Array for Research in Millimeter-wave Astronomy
    European Southern Observatory
    Georgia State University
    Goethe-Universität Frankfurt am Main
    Greenland Telescope
    Harvard–Smithsonian Center for Astrophysics
    Haystack Observatory, MIT
    Institut de Radio Astronomie Millimetrique
    Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE)
    Joint Astronomy Centre – James Clerk Maxwell Telescope
    Large Millimeter Telescope
    Max Planck Institut für Radioastronomie
    National Astronomical Observatory of Japan
    National Radio Astronomy Observatory
    National Science Foundation
    University of Massachusetts, Amherst
    Onsala Space Observatory
    Perimeter Institute
    Radio Astronomy Laboratory, UC Berkeley
    Radboud University
    Shanghai Astronomical Observatory (SHAO)
    Universidad de Concepción
    Universidad Nacional Autónoma de México (UNAM)
    University of California – Berkeley (RAL)
    University of Chicago (South Pole Telescope)
    University of Illinois Urbana-Champaign
    University of Michigan

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 9:51 pm on August 19, 2015
    Tags: Dams in the US, NOVA

    From NOVA: “The Undamming of America” 

    PBS NOVA

    12 Aug 2015
    Anna Lieb

    Gordon Grant didn’t really get excited about the dam he blew up until the night a few weeks later when the rain came. It was October of 2007, and the concrete carnage of the former Marmot Dam had been cleared. A haphazard mound of earth was the only thing holding back the rising waters of the Sandy River. But not for long. Soon the river punched through, devouring the earthen blockade within hours. Later, salmon would swim upstream for the first time in 100 years.

    Grant, a hydrologist with the U.S. Forest Service, was part of the team of scientists and engineers who orchestrated the removal of Marmot Dam. Armed with experimental predictions, Grant was nonetheless astonished by the reality of the dam’s dramatic ending. For two days after the breach, the river moved enough gravel and sand to fill up a dump truck every ten seconds. “I was literally quivering,” Grant says. “I got to watch what happens when a river gets its teeth into a dam, and in the course of about an hour, I saw what would otherwise be about 10,000 years of river evolution.”

    Over 3 million miles of rivers and streams have been etched into the geology of the United States, and many of those rivers flow into and over somewhere between 80,000 and two million dams. “We as a nation have been building, on average, one dam per day since the signing of the Declaration of Independence,” explains Frank Magilligan, a professor of geography at Dartmouth College. Just writing out the names of inventoried dams gives you more words than Steinbeck’s novel East of Eden.

    Some of the names are charming: Lake O’ the Woods Dam, Boys & Girls Camp # 3 Dam, Little Nirvana Dam, Fawn Lake Dam. Others are vaguely sinister: Dead Woman Dam, Mad River Dam, Dark Dam. There’s the unappetizing Kosciusko Sewage Lagoon Dam, the fiercely specific Mrs. Roland Stacey Lake Dam and the disconcertingly generic Tailings Pond #3 Dam. There’s a touch of deluded grandeur in the Kingdom Bog Dam and an oddly suggestive air to the River Queen Slurry Dam.

    The names arose over the course of a long and tumultuous relationship. We’ve built a lot of dams, in a lot of places, for a lot of reasons—but lately, we’ve gone to considerable lengths to destroy some of them. Marmot Dam is just one of a thousand that have been removed from U.S. rivers over the last 70 years. Over half the demolitions occurred in the last decade. To understand this flurry of dynamiting and digging—and whether it will continue—you have to understand why dams went up in the first place and how the world has transformed around them.

    The Dams We Love

    A sedate pool of murky water occupies the space between a pizzeria, a baseball field, and the oldest dam in the United States, built in 1640 in what is now Scituate, Massachusetts.

    When a group of settlers arrived in the New World, the first major structure they built was usually a church. Next, they built a dam. The dams plugged streams and set them to work, turning gears to grind corn, saw lumber, and carve shingles. During King Philip’s War in 1676, the Wampanoag tribe attacked colonists’ dams and millhouses, recognizing that without them, settlers could not eat or put roofs over their heads.

    Robert Chessia of the Scituate Historical Society shows me a map of the area, circa 1795. On every winding line indicating a stream, there is a triangle and a curly script label: “gristmill.”

    Map of area surrounding Scituate, Massachusetts, circa 1795.

    In the 19th century, dams controlled the rivers that powered the mills that produced goods like flour and textiles. Some dams are historical structures, beautiful relics of centuries past. Not far from Scituate stands a dam owned by Mordecai Lincoln, great-great grandfather of Abraham Lincoln. Some dams have been incorporated into local identity—as in the town of LaValle, Wisconsin, which dubbed itself the “best dam town in Wisconsin.”

    Before refrigerators, frozen, dammed streams offered up chunks of ice to be sawed out and saved for the summer. Before skating rinks, we skated over impounded waters.

    In the 20th century, the pace accelerated. We completed 10,000 new dams between 1920 and 1950 and 40,000 between 1950 and 1980. Some were marvels. Grand Coulee Dam contains enough concrete to cover the entirety of Manhattan with four inches of pavement. Hoover Dam is tall enough to dwarf nearly every building in San Francisco. Glen Canyon Dam scribbled a 186-mile-long lake in the arid heart of a desert.

    Grand Coulee Dam
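
    The Manhattan comparison survives a quick sanity check. In this sketch, the concrete volume (about 12 million cubic yards, a commonly cited figure) and Manhattan’s land area are assumptions for illustration, not numbers from the article:

    ```python
    # Sanity-checking the Grand Coulee claim with assumed figures.
    concrete_m3 = 12e6 * 0.7646  # ~12 million cubic yards of concrete, in m^3
    manhattan_m2 = 59.1e6        # Manhattan's land area, ~59 km^2, in m^2

    depth_inches = concrete_m3 / manhattan_m2 * 39.37
    print(f"{depth_inches:.0f} inches")  # ~6 inches: four inches, comfortably
    ```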

    Behind those big new dams were big new dreams. A 1926 dam on the Susquehanna River produced so much hydroelectric power that the owners needed to set up a network of wires to sell the electricity far and wide. This became the PNJ Interchange, “the seed of the electricity grid as we know it,” explains Martin Doyle, a professor of river science and policy at Duke University. Grand Coulee Dam, which stopped the Columbia River in 1942, supplied vast quantities of electrical power that turned aluminum into airplanes and uranium into plutonium. President Harry Truman said that power from Grand Coulee turned the tide of World War II.

    Yet only 3% of dams in the US are hydropower facilities—together supplying just under 7% of U.S. power demand. Most dams were built for other reasons. They restrained rivers to control floods and facilitate shipping. They stored enormous volumes of water for irrigating the desert and in doing so reshaped the landscape of half the country. “The West developed through the construction of dams because it allowed the control of water for development,” says Emily Stanley, a limnologist at the University of Wisconsin, Madison.

    But for most dams, none of these is the primary purpose. Nearly one-third of dams in the national inventory list “recreation” as their raison d’être, a rather vague description. I inquired about this with the Army Corps of Engineers, which maintains the inventory, and their reply merely offered a cursory explanation of “purpose” codes in the database. Mark Ogden, the project manager for the Association of State Dam Safety Officials, says many small private dams were indeed built for recreational activities like fishing.

    Grant, Magilligan, and Doyle have a different theory, however. Dams may get the recreational label, Doyle says, “when we have no idea what they are for now, and we can’t stitch together what they were for when they were built.” But while many of the original uses have disappeared, the dams have not.

    The Dams We Love to Hate

    In the very center of conservationist hell, mused John McPhee [Encounters With the Archdruid, Part 3, A River, 1971, about David Brower and Floyd Dominy], surrounded by chainsaws and bulldozers and stinking pools of DDT, stands a dam. He’s not the only one to feel that way. “They take away the essence of what a river is,” Stanley says.

    A dam fragments a watershed, Magilligan explains. A flowing river carries sediment and nutrients downstream and allows flora and fauna to move freely along its length. When a dam slices through this moving ecosystem, it slows and warms the water. In the reservoir behind the dam, lake creatures and plants start to replace the former riverine occupants. Sediment eddies and drops to the bottom, rather than continuing downstream.

    Migratory fish can be visceral reminders of how a dam changes a river. Salmon hatch in freshwater rivers, swim out to sea, and then return to their birthplace to reproduce, a circle-of-life story that has captured people’s imaginations for generations. At the Elwha Dam in Washington state, Martin Doyle recalls looking down to see salmon paddling against the base of the dam, trying in vain to reach their spawning grounds upriver. Roughly 98% of the salmon population on the Elwha River disappeared after the dam went up, says Amy East, a research geologist at the U.S. Geological Survey (USGS). Doyle points out that salmon are just one of many species affected by dams. Migratory shad, mussels, humpback chub, herring—the list goes on. He notes that the charismatic salmon are a more popular example than the “really butt-ugly fish we’ve got on the East Coast.”

    Dams not only upend ecosystems, they also erase portions of our culture and history. Gordon Grant points out that on the Columbia River, people fished at Celilo Falls for thousands of years, making it one of the oldest continually inhabited places in the country. The falls are now covered in 100 feet of water at the bottom of the reservoir behind the Dalles Dam.

    Hundreds of archaeological sites, going back 10,000 years, dot the riverbanks and the walls of the Grand Canyon. For millennia, East explains, many of these potsherds, dwellings, and other artifacts had been protected by a covering of sand. But that sand is disappearing because the upstream Glen Canyon Dam traps most of the would-be replacement sand coming down the Colorado River. Furthermore, snowmelt used to swell the river with monstrous spring floods, redistributing sediment throughout the canyon. Now, demand for power in Las Vegas and Phoenix regulates the flow. “They turn the river on when people are awake and turn the river off when people go to sleep,” explains Jack Schmidt, a river geomorphologist at Utah State University. Without “gangbuster” spring floods, he says, the sandbars are disappearing and the archaeological sites are increasingly exposed. “There’s a lot of human history in the river corridor, and unfortunately a lot of it is being eroded away in the modern era,” East says.

    As the ecological and cultural toll of dams became clearer, our relationship with them started to show its cracks. Fights over dams grew increasingly loud. At the turn of the century, John Muir and a small band of hirsute outdoorsmen opposed construction of the O’Shaughnessy Dam in the Hetch Hetchy Valley of Yosemite. They failed. By the 1960s, pricey full-page ads in the New York Times opposed the Echo Park Dam on a tributary of the Colorado. They succeeded. Echo Park Dam was never built—but downstream, Glen Canyon Dam went up instead, inspiring new levels of resentment and vitriol among dam opponents. In a 1975 novel by cantankerous conservationist Edward Abbey, environmental activists blow up Glen Canyon Dam. The novel’s title entered the popular lexicon as a term for destructive activism: “monkeywrenching.”

    Abbey once described his enemies as “desk-bound men and women with their hearts in a safe deposit box, and their eyes hypnotized by desk calculators.” Now, 40 years later, Abbey might be surprised to learn that it’s men and women crunching numbers at desks who actually incite the dynamiting of dams.

    O’Shaughnessy Dam in Hetch Hetchy Valley, California

    Why They’re Coming Down

    The decision to remove a dam is surprisingly simple. Ultimately, it comes down to dollars. “The bottom line is usually the bottom line,” says Jim O’Connor, a research geologist at USGS. As dams age, they often require expensive maintenance to comply with safety regulations or just to continue functioning. Sometimes, environmental issues drive up the cost; for example, the Endangered Species Act may require the owner to provide a way for fish to get past the dam. Consideration for Native American tribal rights may also influence decisions over whether to keep or kill a dam. “In my experience, economics lurks behind virtually all decisions to take dams off or to keep ’em. But the nature of what’s driving the economics is changing,” says Grant, the Forest Service hydrologist. Dam owners—who are overwhelmingly private, but also include state, local, and federal governments—have to weigh repair costs against the benefits the dam provides.

    In some cases, those benefits don’t exist. The age of waterwheel-powered looms and saws is long gone, but thousands of forlorn mill ponds still linger. “You’re left with a structure that isn’t doing anything for anybody and is quietly and happily rotting in place,” says Gordon Grant. Others, like Kendrick Dam in Vermont, supplied blocks of ice. “We’ve got refrigerators now,” Magilligan says. “This one should probably come out.”

    This old mill in Tennessee is now a restaurant.

    Other dams don’t live long enough to become obsolete. The designers of California’s Matilija Dam, which was completed in 1948, said it would last for 900 years, says Toby Minear, a USGS geologist. But the reservoir behind Matilija silted up so quickly that within 50 years it was 95% full of sediment. Though the surrounding community still wanted its water, the dam could no longer provide storage. Congress approved a removal plan in 2007, but the estimated $140 million project has stalled after proving more expensive and technically challenging than anticipated.

    For most dams, the story is more complicated. Two dams on the Elwha River generated hydropower, but when the owner was legally required to add fish ladders—a series of small waterfalls that salmon can use to easily scale the dam—future sales of hydroelectricity paled in comparison to the repair cost. Furthermore, the neighboring Elwha Tribe had fought for decades to restore the salmon catch—half of which legally belonged to them. The owner opted to sell the dams to the federal government in 1992, and after nearly two decades of study and negotiation, the Department of the Interior, the Elwha Tribe, and the surrounding community agreed on a removal plan. In September 2011, construction crews began breaking up the two largest dams ever removed from U.S. rivers.

    Beginning of the End

    “Removing these big, concrete riverine sarcophagi, and salmon swimming past that gaping hole—that is the mental image that people will always have of dam removal,” Doyle says. But in reality, not all rivers host salmon and not all dams are removed with explosives. Each river, each dam, and each removal is totally different, says Laura Wildman, an engineer at a firm specializing in dam removal.

    Doyle remembers one particularly dramatic example of a “blow and go” removal, where the US Marines exploded a small dam slated for removal as part of a training exercise. When a dam disappears suddenly, the river responds violently. O’Connor was at the “blow and go” removal of the Condit Dam on the White Salmon River in Washington. “At first it was like a flash flood of water—just mostly water, definitely dirty water. It came up fast, it was turbulent, it was noisy,” he says. “Then it was a brown, stinky mud flow, chock full of organic material.”

    Even the slower removals, which take place over months or years, can have dramatic moments. Doyle describes how a backhoe slowly taking out the Rockdale Dam in Wisconsin “looked kind of prehistoric, like a long-necked dinosaur reaching out and eating away at the dam.”

    Jennifer Bountry, a Bureau of Reclamation hydrologist who helped plan the Elwha Dam removal, explains that initially the engineers would gingerly shave a foot of concrete off the dam and wait to see what happened. But as the removal progressed, the river was changing so fast that she had to keep a close eye on the currents as she was recording her observations. “You had to be careful where you parked your boat,” Bountry says. The freed Elwha River rapidly carved out a new channel, carrying with it roughly the same volume of sediment as Mount St. Helens belched out during the infamous 1980 eruption.

    Aftermath of the End

    A small stream trickles through the YMCA’s Camp Gordon Clark in Hanover, Massachusetts. Freshly tie-dyed t-shirts hanging from the chain link fence sway in the breeze. Summer camp is in full swing, and Samantha Woods, director of the North and South Rivers Watershed Association, a nonprofit, walks me down a shallow slope to an ox-bow stream curling through a wide plain covered in cattails. The heavy, humid air is thick with buzzing cicadas and singing birds. Less than a year ago, this plain was a blank, wet canvas. Where the cattails stand now was submerged beneath several feet of water impounded by a 10-foot-tall earthen dam that had stood for at least 300 years. In 2001, the state determined that the dam could catastrophically collapse in a flood and required the owner—the YMCA—to fix or remove it.

    The dam hung in limbo for nearly a decade until storm damage reignited fears of collapse. By then, the public had started to embrace the idea that removing the dam could be a good thing for the river. Plus, repairing the dam would have cost an estimated $1 million. Taking it out would cost half that amount. So in October of 2014, crews tore down the earthen blockade, drained the pond, and planted native plant seeds in the newly exposed earth. Less than a year later, the transformation to wetlands is well underway. Woods is optimistic that if one more downstream dam comes out, herring would swim up this creek for the first time in centuries.

    But no one knows for sure if the herring will come back. In general, scientists are just beginning to unravel what happens when a dam is removed after tens or hundreds of years. “Dam removals help us understand how rivers behave,” Magilligan says. Magilligan, along with Bountry, East, Grant, O’Connor, and Schmidt, is part of a group called the Powell Center, which is studying how rivers respond when they’re set free.

    In the hundred or so dam removals for which data is available, fish, lamprey, and eel populations are rebounding, and more sediment and nutrients are heading downstream, both expected outcomes. But the Powell Center scientists are surprised at just how fast recovery takes place. Formerly trapped sediment clears out within weeks or months. For example, a recent study showed that this freed sediment is quickly rebuilding the Elwha River delta. Some fish populations revive within a few years, not a few decades as many had expected. Many rivers are starting to resemble their pre-dam selves.

    But the Powell Center members also point out that dam removals may sometimes have undesirable consequences, like allowing non-native species formerly trapped upstream to colonize the rest of the river, or releasing contaminated sediment downstream. They agree there’s much more to figure out.

    Dam New World

    Few people may be more emblematic of the subtle shift in attitudes about dam removal in recent years than Gordon Grant. A much younger Grant spent a dozen years as a rafting guide. Back then, he’d sat around campfires singing “Damn the man who dams the river!” with people who chained themselves to boulders at the bottom of a valley slated to become a reservoir. One day Grant got curious enough about the forces shaping the rapids he ran that he went to graduate school. For nearly 30 years now, he’s been conducting research in fluvial geomorphology—the study of how rivers reshape the surface of the earth. I asked Gordon Grant if a dam is still, for him, at the inner circle of hell. “It used to be more than it is now,” he says. “It may be slippage, it may be gray hair, it may be something else, but I see dams in a somewhat different light now.”

    “I’ve seen dams that provide nothing for anybody and I’ve seen dams that provide a lot of power that otherwise would have been generated by coal,” Grant says of his research career. Both building and demolishing dams have tradeoffs, Grant argues, and as a scientist he’s interested in how economics, ecology, and hydrogeology each play a role. Emily Stanley says, “I’ve learned that it’s not enough to say ‘Yeah, we should blow ’em all up!’ We can’t just wave the wand and take them away. There will be huge consequences. But yeah, there’s too many dams.”

    Even Daniel Beard, Commissioner of Reclamation during the first Clinton administration, agrees there are too many dams. He has been calling loudly and unequivocally for taking out one of the largest in the country, the Glen Canyon Dam. “Do I think that’s controversial? Absolutely. Do I think it’s politically realistic? Eh…not really. But somebody has to speak up,” he says.

    Glen Canyon Dam on the Colorado River

    Most scientists and engineers are skeptical that any dam as large as Glen Canyon will go anywhere, anytime soon. Drought has brought the reservoir down to as low as one-third of its designed capacity in recent years, but even so, it currently stores 12 million acre-feet—roughly the average volume of water that goes through the Grand Canyon in a year—and generates enough power for 300,000 homes. And even if the economics change dramatically, the dam itself is a formidable structure and one not easily removed. “I can’t imagine getting dropped into Glen Canyon and having the audacity to start wanting to plug that thing with concrete,” Doyle says. “If we really want to start removing Western dams, then we need an audacity to match that with which they went after building them.”

    Even removing the dam in Scituate, which is 370 years old and a mere 10 feet tall, is a tough sell. “This dam isn’t coming down,” David Ball, president of the Scituate Historical Society, told me on two occasions. The pond still provides about half the town’s drinking water.

    For some dams that still serve a purpose, like those at Scituate and Glen Canyon, dam owners, conservation groups, and government agencies have worked to manage them more holistically. In Scituate, fish ladders and timed water releases are beginning to restore herring to the upstream watershed.

    At the Glen Canyon Dam, operators now create a simulacrum of spring floods by releasing extra water to help restore sediment in the Grand Canyon. The first artificial flood stormed through the Grand Canyon in the spring of 1996, and by 2012, a supportive Bureau of Reclamation had helped clear the way for nearly annual restoration floods. Though these floods surge with less than half of the flow of pre-dam torrents, they were still highly controversial at first, says Jack Schmidt, the Utah State professor. Releasing extra water in the spring means lost revenue, he explains, because it generates electricity that no one is interested in buying.

    And at Shasta Dam in California, water releases are now carefully controlled in order to keep the water temperature low enough for downstream Chinook salmon to survive, according to Deputy Interior Secretary Michael Connor. He expects that drought, exacerbated by climate change, will alter our relationship with dams. “There is nothing necessarily permanent. We should be relooking and rethinking the costs and the benefits of our infrastructure,” he says.

    Many dams will remain—and as climate change alters precipitation patterns, some new ones will be built. Dams shaped this country and the rivers they divide, and they don’t go down quietly. But time and economics will sweep more dams away.

    It’s hard to forget the moment when a once-restrained river breaks free. Connor, for one, vividly recalls the removal of the Elwha dams. “You know, you count on your one hand those days that really stand out, and those events that you really participate in. That is easily, for me, one of those days that I’ll always remember.”

    Some had hoped for that moment for a very long time. East, the USGS geologist, recalls meeting an 80-year-old Elwha woman who had never before seen the river untrammeled. The woman had said, joyfully, “I’ve been waiting for these dams to come out my whole life!”

    [See the original article for an interactive map of dams in the US, where you can find any dam in which you might be interested, and an animation of the number of dam completions in the U.S. between 1800 and 2000.]

    See the full article here.


     
  • richardmitnick 2:20 pm on August 17, 2015
    Tags: NOVA, Prime Meridian

    From NOVA: “Trash Bin Marks the True Location of the Greenwich Meridian, 334 Feet to the East” 

    PBS NOVA

    17 Aug 2015
    Allison Eck

    This is not the correct location.

    There are three different prime meridian lines in Greenwich, England, and none of them are accurate.

    There are the Halley Meridian and the Bradley Meridian, both used before the current marker, the famed Greenwich Meridian. But the genesis of each new day, longitude 0º, belongs to none of these. The line demarcating Earth’s hemispheres actually lies 334 feet to the east of the official Greenwich Meridian, cutting through a pathway near a garbage receptacle.

    Here’s Sara Knapton, writing for the Telegraph:

    The Prime Meridian was set in 1884 using the large Transit Circle telescope built by Sir George Biddell Airy, the 7th Astronomer Royal. The telescope tracked the movement of ‘clock stars’—circumpolar stars which never rise or set. Because these stars are always present in the sky and transit the meridian twice each day, their appearance in the telescope cross hairs can be used to set time and longitude.

    The northern circumpolar stars revolving around the north celestial pole. Note that Polaris, the bright star near the center, is almost stationary. Polaris is circumpolar and can be seen at all times of the year. (The graphic shows how the apparent positions of the stars move over a 24-hour period, but in practice they are not visible when the Sun is also in the sky.)

    A basin of mercury was used to make sure that Airy’s telescope was kept exactly vertical so that it could align with the clock stars. But astronomers failed to take into account that subtle changes in gravity would impact the telescope alignment and give a wonky reading.

    Global positioning systems take such gravitational effects, which arise from Earth’s irregular shape and varied local terrain, into account. So when GPS was introduced in 1984, the true location of longitude 0º was revealed—but the marker at Greenwich wasn’t changed. A newly published paper in the Journal of Geodesy confirms that this “deflection of the vertical” is merely a local effect due to the direction of gravity in Greenwich, and not a universal change in our planet’s longitude system.
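
    The geometry behind that 334-foot shift is simple to estimate. The sketch below treats the Earth as a sphere and asks how large an east-west deflection of the vertical is needed to displace astronomical longitude by about 102 meters on the ground; the specific deflection values tried are illustrative assumptions:

    ```python
    import math

    R_EARTH_M = 6_371_000  # mean Earth radius, spherical approximation

    def ground_offset_m(deflection_arcsec):
        """East-west ground shift produced by an east-west deflection of
        the vertical: offset ~= R * angle (small-angle approximation)."""
        return R_EARTH_M * math.radians(deflection_arcsec / 3600.0)

    for arcsec in (1.0, 3.3, 5.5):
        print(f"{arcsec:.1f} arcsec -> {ground_offset_m(arcsec):.0f} m")
    # About 3.3 arcseconds of east-west deflection accounts for ~102 m,
    # i.e. roughly the 334 feet separating the marked and true meridians.
    ```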

    The real meridian also runs through a café around the corner from Greenwich Observatory (you can see this if you type “Prime Meridian” into Google Maps on your iPhone). At the very least, this coffee shop may become a wildly successful business in the near future.

    See the full article here.


     
  • richardmitnick 2:20 pm on August 13, 2015
    Tags: NOVA, RNA Spray

    From NOVA: “RNA Spray Could Make GMOs Obsolete” 

    PBS NOVA

    Thu, 13 Aug 2015
    Abbey Interrante

    One challenge is confirming that every crop in the field is sprayed.

    As Scotland moves forward to ban genetically modified crops, Monsanto is developing a way to alter crops without touching their genes.

    Through RNA interference, or the process of temporarily barring gene expression, Monsanto scientists have been able to stop the Colorado potato beetle from eating crops. Instead of modifying the crop’s genes, they’ve sprayed RNA that shuts down a gene the insects need to survive directly onto the crops. When the beetles eat the plant, the ingested RNA eventually kills them by inhibiting that essential gene.

    Antonio Regalado, reporting at MIT Technology Review, explains RNA interference further:
    The mechanism is a natural one: it appears to have evolved as a defense system against viruses. It is triggered when a cell encounters double-stranded RNA, or two strands zipped together—the kind viruses create as they try to copy their genetic material. To defend itself, the cell chops the double-stranded RNA molecule into bits and uses the pieces to seek out and destroy any matching RNA messages. What scientists learned was that if they designed a double-stranded RNA corresponding to an animal or plant cell’s own genes, they could get the cells to silence those genes, not only those of a virus.
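
    To make that “matching” step concrete, here is a toy sketch in Python. Everything in it is invented for illustration: the sequences are made up, real silencing fragments (siRNAs) are about 21 nucleotides long, and actual matching happens through base-pairing machinery rather than substring search:

    ```python
    def chop(double_stranded_rna, size=8):
        """Dice a dsRNA trigger into short overlapping fragments."""
        return {double_stranded_rna[i:i + size]
                for i in range(len(double_stranded_rna) - size + 1)}

    def silenced(mrna, fragments):
        """A messenger RNA is destroyed if any fragment matches part of it."""
        return any(fragment in mrna for fragment in fragments)

    trigger = "AUGGCUUACGAUCGUAC"           # dsRNA aimed at a beetle gene (made up)
    beetle_mrna = "CCAUGGCUUACGAUCGUACGGA"  # contains the targeted stretch
    bee_mrna = "GGUUACCGGAAUUCCGGAAUU"      # unrelated gene, no matching stretch

    fragments = chop(trigger)
    print(silenced(beetle_mrna, fragments))  # True: the beetle gene is shut down
    print(silenced(bee_mrna, fragments))     # False: bystander genes are untouched
    ```

    The same toy logic also illustrates the specificity point made below: only messages containing the targeted stretch are affected.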

    Other companies, all of which are hoping to avoid the controversy they face when they genetically modify crops directly, are exploring the genetic spray alternative to GMOs. These sprays can be created and applied quickly, providing protection if the plants are infested by a never-before-seen virus or insect. They could even be used to endow plants with advantageous, temporary traits. For example, farmers could spray RNAi that endows corn plants with drought resistance, saving a harvest during hot, dry weather.

    Such sprays can only turn off genes for a few days or weeks at a time, so all effects would be temporary. If a new set of insect invaders enters the field weeks after the last RNA spray, the plants would no longer be protected. But the approach has its benefits, too: the plant genes altered to help a crop survive a water shortage would revert to their original states when the shortage ends, meaning the plants could thrive in both conditions. In addition, if insects evolve to survive the RNA spray, scientists could switch which gene they target. Monsanto is hoping to improve the sprays to last for months—some scientists have already succeeded in creating these longer-lasting sprays.

    Since the spray targets specific genes that only certain insects have, it wouldn’t affect beneficial bugs, such as bees, that currently suffer from pesticide use. This differentiates the spray from traditional insecticides, which are indiscriminate killers.

    Despite the lack of evidence of harmful effects of the spray, it will most likely face stiff opposition. Some worry the spray will be hard to control, and wind could blow it to surrounding areas. Others argue that the RNA interference might silence important genes in humans when we eat the crops, but no trustworthy studies so far have shown that to be true.

    See the full article here.


     
  • richardmitnick 5:07 pm on August 10, 2015
    Tags: Infection, NOVA

    From NOVA: “One Drop Of Blood Can Reveal Almost Every Virus A Person Has Ever Had” 

    PBS NOVA

    08 Jun 2015
    Allison Eck

    A single drop of blood may contain nearly all the information you need to know about a person’s viral past.

    The new experimental test, called VirScan, opens up a world of possibilities, so much so that its development has been compared to the advent of the electron microscope. Able to detect 1,000 strains of viruses from 206 species, the test analyzes antibodies that the body has made in response to previous viruses.

    The result is a nearly comprehensive record of the human “virome,” and it could eventually give researchers insight into whether or not viruses contribute to chronic diseases and cancer. In other words, scientists may find out which viruses antagonize the immune system by eliciting antibodies that subvert it—or they could discover why chemotherapy works well for some people but not for others.

    A very small amount of blood could betray a person’s entire history of viral infection.

    Stephen J. Elledge, senior author of the report published in Science, and his team administered the test to 569 people in the United States, South Africa, Thailand, and Peru. The VirScan results indicated that most people tested had been exposed to about 10 different species of virus, though others had been exposed to as many as 25. People outside the United States had higher rates of exposure, which could be due to a number of factors: sanitation levels, genetic variation, population density, and so on. In the long term, more thorough comparisons between countries’ viral histories could lead to better epidemiological practices across the globe.

    The test can take up to two months to perform right now, but if a company were to acquire it, the whole process may be completed in as few as two or three days, Elledge told The New York Times. And with expedited testing, scientists could study everything from the age at which children acquire various illnesses to how disease has changed throughout history. They may even encounter some unexpected results—in fact, they already have.

    Here’s Denise Grady, writing for The New York Times:

    The initial study had some surprises, Dr. Elledge said. One was “that the immune response is so similar from person to person.” Different people made very similar antibodies that targeted the same region on a virus, he explained.

    Another surprise came from people infected with H.I.V. Dr. Elledge expected their immune responses to other viruses to be diminished. “Instead, they have exaggerated responses to almost every virus,” he said. The researchers do not know why.

    The test has some limitations, but this is certainly a major step forward in scientists’ goal to track the progress and potency of illness and disease all over the world.

    See the full article here.


     
  • richardmitnick 6:40 am on August 7, 2015
    Tags: Friction, NOVA

    From NOVA: “Friction Fighters” 

    PBS NOVA

    05 Aug 2015
    Anna Lieb

    Is friction real? Once, with the quiet certainty of someone who just stayed up all night in the company of equations describing concrete, my college roommate told me that friction was made up.

    Now, I’m pondering her words as I stare at six ytterbium atoms. They are blue and dancing, projected on the wall of a small room off a long hallway at MIT. Lasers and electronics march all over a wide tabletop, climbing up into the ceiling and slithering down to the floor. I’m about to learn that everything I thought I knew about friction is a centuries-old work of fiction—and that the truth is stranger by far.

    I’m in the lab of Vladan Vuletic, a professor of physics here, where two of his graduate students are feeding electrical current through a circlet of aggressively coiled wires into a shoebox-sized, airless vault, instructing the ytterbium atoms to move in unison—in time with swing music, even.


    Ytterbium ions move on command in Vladan Vuletic’s lab at MIT.
    Video can be downloaded from the original article.

    Vuletic and his lab group spent years setting up this maze of a room in order to study technology so new we’re not quite sure if it really exists yet: quantum computing. But when he realized that their experiment could do much more, his curiosity sent him on an unexpected detour. “We could study friction in a way that was not possible before, namely, have direct access to looking at each atom individually.”

    Friction is a simple word that glosses over a complex phenomenon arising from a dizzying array of interactions. “Friction is a very elusive thing. It’s not something you can touch, but you always feel the effect,” says Ali Erdemir, a senior scientist at Argonne National Laboratory who has spent decades figuring out how to reduce friction losses in transportation. Friction gives and friction takes away. It ensures that our shoes don’t slip and our vehicles stop on command. But friction also eats up roughly one third of all the fuel we burn in our cars, and deep underground, friction between bits of the earth’s crust decides when and where an earthquake will occur.

    Friction in engines and other mechanical parts wastes massive amounts of energy in transportation.

    Tribologists, the clan of scientists and engineers who study interacting surfaces, think on scales ranging from atoms to airplane wings, and their efforts have huge potential payoffs. In transportation alone, researchers think that reducing the energy lost by surfaces rubbing against each other in engines could save 1% of all the energy used in the U.S., says Robert Carpick, a professor of mechanical engineering at the University of Pennsylvania.

    For tribologists, the experiments going on right now in Vuletic’s lab could offer a fresh window into a force that’s almost as poorly understood as it is ubiquitous. “In some ways there’s more fundamental physics in our understanding of black holes light years away from us than there is about the friction between our feet and the ground,” Carpick says.

    Laws and Loopholes

    Most people’s first encounters with the scientific side of friction are brief and quickly forgotten. Carpick, for example, had no idea friction was a subject of active research until he started working in a tribology lab as a graduate student. Jacqueline Krim, who heads a nanotribology lab at North Carolina State University, says that just two basic laws about friction, wedged into an introductory physics course, comprised “almost 100% of what I learned up to my Ph.D.”

    In fact, those two basic laws go back a long way. “Leonardo da Vinci and the other guy—whose name I don’t actually remember—wrote down the laws by 1700,” says George Smith, a historian of science at Tufts University and mechanical engineer. Da Vinci worked out his rules 200 years before the other guy—Guillaume Amontons—but da Vinci never published. Amontons printed up his laws in 1699, died shortly thereafter at the age of 42, and all but disappeared from history.

    Here’s what Amontons’s laws say: Imagine dragging a reluctant elephant across a parking lot. Suppose this hypothetical pachyderm is stubborn enough to keep all four legs locked in place and all four feet touching the pavement. Once you overcome inertia to get the beast moving, all your effort goes into fighting the friction between the elephant’s hooves and the asphalt. Amontons’s first law says that the friction force is proportional to the force of the pavement pushing against the weight of the elephant. (In physics, this is called the “normal force” because it’s normal—that is, perpendicular—to the surfaces in question.) So if you stack a second elephant on top of the first, you get twice as much friction because you have twice the normal force. (Though the normalcy of the stacked elephant situation is admittedly debatable.) The second law states that friction doesn’t depend on how much area is in contact. So if your elephant daintily lifts up one of its front legs, and one of its back legs, the friction doesn’t change, even though there’s only half as much hoof area touching the ground.
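
    Put in code, Amontons’s laws are strikingly short. Here is a minimal sketch, with a made-up friction coefficient and an assumed elephant mass:

    ```python
    def friction_force(normal_force, mu):
        """Amontons's first law: friction is proportional to the normal
        force, with coefficient mu. The second law is implicit in what's
        missing: contact area never enters the formula."""
        return mu * normal_force

    g = 9.81            # gravitational acceleration, m/s^2
    elephant_kg = 5000  # an illustrative elephant; the mass is assumed
    mu = 0.6            # made-up coefficient for hoof-on-asphalt

    one = friction_force(elephant_kg * g, mu)
    two = friction_force(2 * elephant_kg * g, mu)  # one elephant stacked on another
    print(two / one)  # 2.0: twice the normal force, twice the friction
    ```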

    Amontons’s laws do a reasonably good job of describing many everyday situations, but they are nonetheless fiction. They fall short because they don’t really tell us anything about what’s going on between two sliding surfaces. The closer we look, the more loopholes tribologists are finding in Amontons’s laws.

    For example, let’s take a second look at the second law. If friction comes from interactions between two surfaces, then wouldn’t more surface mean more opportunities for things to catch and snag against each other and thus more friction? “This is something that always intrigued me, you know,” Vuletic says. “It turns out that even this is not perfectly well understood.”

    Or take this other example: Which is easier, dragging a box across an ice rink or a soccer field? You might expect that smoother surfaces like ice always slide more easily than rough ones like grass. But this is not always true. If you take two copper surfaces and polish them to perfection, then the copper refuses to slide at all. “When the atoms in contact are all of the same kind,” explained the physicist Richard Feynman in one of his lectures, “there is no way for the atoms to ‘know’ that they are in different pieces of copper.”

    Yet another unsolved tribology mystery involves a Soviet physicist named J.W. Obreimoff, who in 1929 was using a Gillette razor to slice rock the hard way. He cut into a thin sheet of mica, blade parallel to the glittery surface. As he sliced, Obreimoff saw what he described as a “splash of light.” To this day, neither Amontons’s laws nor any other description of interacting surfaces can explain the phenomenon, says Seth Putterman, a professor of physics at UCLA. Yet it’s everywhere. The same physics is at work when you crunch down on a wintergreen-flavored Lifesaver candy and see sparks or when a cat’s fur crackles with static electricity after it walks across carpet. “For sure we don’t understand the cat’s fur,” Putterman says.

    A Rough Place

    Our partial ignorance may really be an issue of scale. If you zoom in enough, the seemingly smooth surface of an ice sheet or mirror would resemble a mountain range. “Atomically speaking, there’s no such thing as a flat surface,” says Michael Strano, a professor of chemical engineering at MIT. When you slide one surface over another, it’s like you’ve turned the Himalayas upside down and started dragging them across the Rocky Mountains. The peaks, called “asperities” in tribology lingo, bump into each other, and each time they stretch, compress, or break off, they sap energy from the motion.

    The rough nature of smooth-looking surfaces could help explain why the second law of friction suffices for macroscopic objects but breaks down if we zoom close enough. Most of what we measure as an object’s surface area (say, the elephant foot) doesn’t interact with the other surface (say, the pavement). In fact, only a few atoms at the tops of the asperities in the foot get close to the tops of the asperities in the pavement. These are the only atoms that “actually see each other,” Erdemir says. “They are intimately interacting.”

    If we could master those interactions, we might be able to get rid of friction.

    Up close, even the smoothest surfaces resemble mountain ranges.

    Researchers theorized in the late 1980s about how to eliminate one type of friction, known as stick-slip. Stick-slip friction happens when the peaks of one surface nestle down into the valleys of the other and get stuck—until you apply enough force to coax them up and out. In many cases, it’s the dominant frictional effect at atomic scales.

    The trick to overcoming stick-slip is to induce apathy, convincing the two surfaces not to give a damn if you move them across one another. Such surfaces are called “incommensurate.” To picture incommensurate surfaces, suppose we papier-mâchéd over one-inch round marbles spaced exactly one inch apart. To make the second surface incommensurate with the first, we papier-mâché over more marbles to make a surface that can’t mesh with the first one. That means making the space between the marbles in the second surface different. (Not just any spacing will do—if the new bumps are exactly two inches apart, then the surfaces will still fit together, with every other peak corresponding to a valley. For the surfaces to be incommensurate, the ratio of the spacing must be an irrational number, which cannot be written as a ratio of integers. A ratio of π would work, but ratios of 17 or ⅓ would not, because then every 17th atom or every third atom would line up with atoms in the other surface.)

    When the marbles are equally spaced as in (a), or when one spacing is an integer multiple of the other as in (b), the surfaces can interlock. When the ratio of the spacing is an irrational number like pi (c), the surfaces are incommensurate.

    If you build two incommensurate surfaces, no matter how you shove them around, “you’ll always have some fitting and some not fitting,” says James Hone, a professor of mechanical engineering at Columbia University. If the surfaces can interlock, they’ll prefer the stuck-together arrangement. But for incommensurate surfaces, apathy sets in. “Then the system doesn’t care if it’s moving sideways,” he says, so you don’t lose energy as you move. If done right, the two incommensurate surfaces might slide past one another with vanishingly low friction. Such surfaces are especially intriguing to the materials scientists, physicists, and engineers who have spent the last 25 years trying to observe frictionless sliding, a phenomenon known as superlubricity. Some argue they’ve already found it.
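
    A few lines of Python make the apathy vivid. This sketch slides a rigid chain of atoms across a toy sinusoidal surface potential and measures how much the total energy depends on position: a large spread means the surfaces lock together, a spread near zero means they don’t care where they sit. (The rigid-chain model is an illustrative assumption, not the MIT group’s actual physics; real, deformable surfaces can also lock at simple rational ratios like ⅓ because atoms relax into the valleys.)

    ```python
    import math

    def corrugation(spacing_ratio, n_atoms=200):
        """Spread between the highest- and lowest-energy positions of a
        rigid chain of atoms (spacing in units of the surface period)
        sliding over a sinusoidal potential. Large = pinned, ~0 = free."""
        def energy(shift):
            return sum(math.cos(2 * math.pi * (i * spacing_ratio + shift))
                       for i in range(n_atoms)) / n_atoms
        energies = [energy(s / 100.0) for s in range(100)]
        return max(energies) - min(energies)

    print(corrugation(1.0))                     # ~2.0: commensurate, firmly pinned
    print(corrugation((1 + math.sqrt(5)) / 2))  # ~0.01: incommensurate, nearly free
    ```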

    Vanishing Act

    In an airless chamber in the center of the lab, Alexei Bylinskii, a graduate student in Vuletic’s group, uses electric fields to corral a handful of ytterbium ions into a space the size of a matchbox. By changing the electric current flowing through the maze of wires, they can carefully pull these atoms over a surface below and measure how much friction the atoms feel.

    This lower surface, which is designed to be incommensurate with the string of ytterbium ions above, is called an optical lattice. It is made out of light but that doesn’t mean the surface is an illusion. By bouncing light between two mirrors, the group creates a standing wave of light—imagine the peaks and troughs of a frozen ocean wave. These peaks and troughs correspond to points of higher and lower energy for the ytterbium ions, which want to move down into troughs and away from peaks. From the ion’s point of view, this landscape resembles the high and low points on the surface of a material like copper—but the scientists can control the shape and size of the optical lattice far more precisely than they can control the surface of a physical chunk of metal. When Vuletic’s lab rigged the spacing of the ions to be incommensurate with the spacing of the optical lattice below, they observed a dramatic reduction in friction.

    Even outside this pristine vacuum chamber, researchers have created systems with incredibly low friction. Ali Erdemir and colleagues at Argonne National Lab recently created a surface coating that resembles minuscule ball bearings. The “ball bearings” are actually tiny diamonds, wrapped up in a wispy layer of graphene to produce two incommensurate—and incredibly slippery—surfaces. Erdemir calls the work a clear example of superlubricity.

    But some tribologists argue the term is misapplied—“you might call it very good lubricity, not superlubricity,” Carpick says. In physics, the prefix “super-” typically applies only in extreme situations. Sokoloff explains that when researchers observe superconductivity, current flows unhindered because “electrical resistance really does go down to zero.” Similarly, when liquid helium exhibits superfluidity, its viscosity vanishes, allowing the stuff to eerily climb up and over the walls of its container. So far, superlubricity experiments have demonstrated very low—but not actually zero—friction.

    The terminology dispute hints at something deeper than a quibble over nomenclature. The theoretical picture of superconductivity relies on quantum mechanics. Superfluidity also defies classical physics. So will quantum mechanics help us understand where to look for “true” superlubricity?

    Not necessarily, Sokoloff argues. “Right now, according to the way we understand things…you’re probably not going to see truly zero friction,” he says.

    Scientists don’t yet know what role quantum mechanics might play in friction on the atomic scale. Vuletic’s lab is working on cooling their experiment down to just a hair above absolute zero, where they hope to see ytterbium ions quantum tunneling—moving through the peaks, rather than over them. They want to see how this quantum tunneling affects friction, an observation that may help us understand friction at larger scales. But it’s not a done deal, Bylinskii says. “Whether friction in the real world depends on quantum mechanical effects, that’s an open question.”

    Sliding Forward

    Answering that and other questions would be helpful to traditional mechanical engineers; for nano engineers, it would be a breakthrough. At large scales, we’ve come up with shortcuts to make friction less destructive. Take a car tire. On average, every revolution on pavement wears off one layer of atoms, Vladan Vuletic says. “For a tire it doesn’t matter, because there’s billions and billions of layers until you have a millimeter or centimeter of loss of profile.”
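
    Some rough arithmetic shows why a tire can shrug that off. The layer thickness and circumference below are generic assumptions for illustration, not figures from the article:

        # Order-of-magnitude sketch: how far a tire rolls before losing one
        # millimeter of rubber, if each revolution sheds one atomic layer.
        layer_m = 3e-10          # one atomic layer, roughly 0.3 nanometers
        circumference_m = 2.0    # a typical passenger-car tire

        layers_per_mm = 1e-3 / layer_m
        km_per_mm = layers_per_mm * circumference_m / 1000
        print(f"{layers_per_mm:,.0f} atomic layers in a millimeter of tread")  # ~3.3 million
        print(f"~{km_per_mm:,.0f} km of driving per millimeter of wear")       # ~6,700 km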

    But as researchers design devices that are only 100 or even ten atoms thick, losing even one layer of atoms is a pretty big deal. “Nanoscale stuff is all surface,” says Hone, the Columbia nano engineer, “and so once you contaminate the surface, you’ve changed what it is.”

    Back at MIT, Strano’s group is interested in scaling up nanoscale discoveries about friction and other phenomena to make exotic materials for safer, lighter cars and airplanes. The potential applications are huge: Ali Erdemir of Argonne estimates that mitigating friction losses in transportation alone could save $500 billion in fuel costs and 800 million tons of CO2 annually.

    Mastering friction could also help make cars safer. When a car crashes, several thousand pounds of mass that were moving suddenly aren’t anymore. As the energy of motion dissipates, the frame of the car—not to mention the occupants—often crumples up. “You’d like to flow that energy in a certain way, and you’d like it not to go to you,” Strano says. Controlling atomic-level friction could help design a material that’s rigid in most cases but bends easily when pushed from a certain direction, allowing designers to carefully orchestrate how the frame of a car deforms in the event of an accident.
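
    A quick kinetic-energy estimate, with illustrative numbers rather than anything from Strano, gives a sense of how much energy a crumple zone has to redirect:

        # Kinetic energy of a mid-size car at highway speed
        mass_kg = 1500            # roughly 3,300 pounds
        speed_m_s = 65 * 0.447    # 65 mph converted to meters per second
        energy_kj = 0.5 * mass_kg * speed_m_s ** 2 / 1000
        print(f"about {energy_kj:.0f} kJ to dissipate in a fraction of a second")  # ~630 kJ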

    All of the researchers I talked to say they’re a long way from completely eliminating or even expertly controlling friction in the messy world outside the laboratory. Strano points out that, in general, researchers observe amazing properties on atomic scales, but they have made slower progress in advancing tantalizing technologies like ultra-efficient engines or futuristic airplane wings.

    But that hasn’t stopped them. “It used to be that friction was okay if your car wasn’t wearing away,” says Krim, the nanotribologist. “Our world is less tolerant now of waste.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 5:48 pm on August 3, 2015 Permalink | Reply
    Tags: , , NOVA,   

    From NOVA: “New Theory Could Tell Us If Life Came From an Alien Planet” 

    PBS NOVA

    NOVA

    03 Aug 2015
    Abbey Interrante

    1
    An artist’s rendition of exoplanets

    Life is thought to have originated spontaneously on Earth about 3.5 billion years ago, but some scientists think life may have come to Earth from elsewhere in the universe. But if finding the origins of life on Earth has been difficult, searching for them in the sky seems nearly impossible.

    For supporters of the panspermia hypothesis, which says that life could have started on one planet and jumped to another, a new model proposed by Henry Lin and Abraham Loeb, both of Harvard University, is an exciting prospect not because it proves the theory, but because it makes it testable.

    Scientists look for evidence of panspermia by searching for biosignatures, or evidence of past or present life, on objects in space. However, such objects are so numerous that checking all of them is impractical. So Lin and Loeb suggest that if panspermia were to occur, its traces would appear in clusters of solar systems. For example, if Earth sat at the edge of one of these clusters, half of what’s viewed in the sky from the planet could be inhabited while the other half would be uninhabited.

    According to Lin and Loeb, if 25 exoplanets on one side of the sky showed signs of biological activity, and 25 on the other side showed no biological activity, this would be a smoking gun for panspermia. However, if the Earth were in the center of a panspermic cluster, then it would be surrounded by biosignatures. If that were the case, panspermia would be harder to confirm.
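
    How strong a smoking gun would that be? A toy calculation (not from Lin and Loeb's paper) treats each inhabited world as a coin flip between the two halves of the sky:

        import math

        def hemisphere_pvalue(k, n):
            """Two-sided binomial test: the chance of a split at least this
            lopsided if each of n inhabited worlds independently landed in
            either half of the sky with probability 1/2."""
            extreme = min(k, n - k)
            tail = sum(math.comb(n, i) for i in range(extreme + 1)) / 2 ** n
            return min(1.0, 2 * tail)

        # All 25 detected biosignatures on one side of the sky, none on the other:
        print(hemisphere_pvalue(25, 25))  # ~6e-8: essentially impossible by chance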

    Joshua Sokol, reporting for New Scientist, explains further:

    Future probes like NASA’s James Webb Space Telescope will scrutinise the atmospheres of planets in other solar systems for possible signs of biological activity.

    NASA Webb Telescope
    Webb

    If life spreads between planets, inhabited worlds should clump in space like colonies of bacteria on a Petri dish. Otherwise, Lin says, its signature would be seen on just a few, randomly scattered planets.

    Studies show that regions of large stellar density are more likely to have higher transfer rates of rocky material, and therefore a higher chance of spreading life. In regions of small stellar density, panspermia would be less likely to occur, leaving few or even zero biosignatures. At the same time, with an even higher stellar density, the chances that an area is inhospitable to life also rise because of an increased number of stellar encounters.

    Some question whether panspermia has already occurred, resulting in life on Earth, or whether humans will be the first to bring it about through the colonization of other planets. As our technological prowess increases, spacecraft could eventually transport humans successfully through space. But it’s also possible that primitive life could evolve to survive the harsh environment of space, piggyback on debris from, say, a meteor collision with Earth, and colonize a new world. The question is: who will get there first?

    [Someone tell me where this article describes any test methods.]

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 8:04 am on July 28, 2015 Permalink | Reply
    Tags: , , NOVA   

    From NOVA: “Fossil Fuels Are Destroying Our Ability to Study the Past” 

    PBS NOVA

    NOVA

    21 Jul 2015
    Tim De Chant

    It’s been used to date objects tens of thousands of years old, from fossil forests to the Dead Sea Scrolls, but in just a few decades, a tool that revolutionized archaeology could turn into little more than an artifact of a bygone era.

    Radiocarbon dating may be the latest unintended victim of our burning of fossil fuels for energy. By 2020, carbon emissions will start to affect the technique, and by 2050, new organic material could be indistinguishable from artifacts from as far back as AD 1050, according to research by Heather Graven, a lecturer at Imperial College London.

    1
    The Great Isaiah Scroll, one of the seven Dead Sea Scrolls, has been dated using the radiocarbon technique.

    The technique relies on the fraction of radioactive carbon relative to total carbon. Shortly after World War II, Willard Libby discovered that, with knowledge of carbon-14’s predictable decay rate, he could accurately date objects that contained carbon by measuring the ratio of carbon-14 to all carbon in the sample. The lower the ratio of carbon-14 to total carbon, the older the artifact. Since only living plants and animals can incorporate new carbon-14, the technique became a reliable measure for historical artifacts. The problem is, as we’ve pumped more carbon dioxide into the atmosphere, we’ve unwittingly increased the total carbon side of the equation.
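
    The arithmetic behind both the dating and the dilution is the standard exponential decay law. The sketch below is a textbook illustration, not Graven's model, and the 2% dilution figure is invented for the example:

        import math

        HALF_LIFE = 5730.0  # carbon-14 half-life, in years

        def radiocarbon_age(ratio_sample_to_modern):
            """Age implied by the surviving fraction of carbon-14."""
            return -HALF_LIFE / math.log(2) * math.log(ratio_sample_to_modern)

        # An artifact retaining 78% of the modern carbon-14 fraction:
        print(f"{radiocarbon_age(0.78):.0f} years old")  # ~2,050 years

        # Fossil-fuel CO2 contains no carbon-14, so it dilutes the atmospheric
        # ratio, making brand-new material read as if it were already old:
        print(f"{radiocarbon_age(0.98):.0f} years old")  # ~170 years, for new growth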

    Here’s Matt McGrath, reporting for BBC News:

    At current rates of emissions increase, according to the research, a new piece of clothing in 2050 would have the same carbon date as a robe worn by William the Conqueror 1,000 years earlier.

    “It really depends on how much emissions increase or decrease over the next century, in terms of how strong this dilution effect gets,” said Dr Graven.

    “If we reduce emissions rapidly we might stay around a carbon age of 100 years in the atmosphere but if we strongly increase emissions we could get to an age of 1,000 years by 2050 and around 2,000 years by 2100.”

    Scientists have been anticipating the diminished accuracy of radiocarbon dating as we’ve continued to burn more fossil fuels, but they didn’t have a firm grasp of how quickly it could go south. In the worst case scenario, we would no longer be able to date artifacts younger than 2,000 years old. Put another way, by the end of the century, a test of the Shroud of Turin wouldn’t be able to definitively distinguish the famous piece of linen from a forgery made today.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 5:57 pm on July 27, 2015 Permalink | Reply
    Tags: , , NOVA   

    From NOVA: “Agriculture May Have Started 11,000 Years Earlier Than We Thought” 

    PBS NOVA

    NOVA

    Mon, 27 Jul 2015

    The technology that allowed us to build cities and develop specialized vocations may have first appeared 23,000 years ago in present-day Israel—some 11,000 years earlier than expected—but then mysteriously disappeared from later settlements.

    Archaeologists found evidence of farming—including sickles, grinding stones, domesticated seeds, and, yes, weeds—in a sedentary camp that was flooded by the Sea of Galilee until the 1980s when drought and water pumping shrank the lake’s footprint. The 150,000 seeds found at the site represent 140 plant species, including wild oat, barley, and emmer wheat along with 13 weed species that are common today. The find not only illustrates humanity’s initial forays into farming, but it also provides the earliest evidence that weeds evolved alongside human ecological disturbances like farms and settlement clearings.

    1
    Archaeologists found wild barley seeds buried at the site.

    Mysteriously, the lessons learned from those early trials were either forgotten or proved a failure. The study’s authors point out that neither sickles nor similar seeds have been found at settlements dating to just after the Sea of Galilee site, which is known as Ohalo II.

    The settlement was composed of a number of huts covered with tree branches, leaves, and grasses. Archaeologists also found a variety of flint and ground stone tools, several hearths, beads, animal remains, and an adult male gravesite. They suspect Ohalo II was occupied year round based on the remains of various migratory birds, which are known to visit the area during different times of year.

    The seeds that made up much of the settlers’ diets are surprisingly familiar. Here’s Ainit Snir and colleagues, writing in their paper published in PLoS One:

    Some of the plants are the progenitors of domesticated crop species such as emmer wheat, barley, pea, lentil, almond, fig, grape, and olive. Thus, about 11,000 years before what had been generally accepted as the onset of agriculture, people’s diets relied heavily on the same variety of plants that would eventually become domesticated.

    While Snir and coauthors think that Ohalo II’s fields were simply early trials and that plants weren’t fully domesticated until 11,000 years later, they do suspect that future discoveries could flesh out the long, trial-and-error development of agriculture.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     
  • richardmitnick 1:51 pm on July 20, 2015 Permalink | Reply
    Tags: , , NOVA, Reed-Solomon codes   

    From NOVA: “The Codes of Modern Life” 

    PBS NOVA

    NOVA

    15 Jul 2015
    Alex Riley

    On August 25, 2012, the spacecraft Voyager 1 exited our Solar System and entered interstellar space, set for eternal solitude among the stars. Its twin, Voyager 2, isn’t far behind. Since their launch from Cape Canaveral, Florida, in 1977, their detailed reconnaissance of the Jovian planets—Jupiter, Saturn, Uranus, Neptune—and over 60 moons has extended the human senses beyond Galileo’s wildest dreams.

    After Voyager 1 passed Neptune, the late astrophysicist Carl Sagan proposed that the spacecraft turn around and capture the first portrait of our planetary family. As he wrote in his 1994 book, Pale Blue Dot, “It had been well understood by the scientists and philosophers of classical antiquity that the Earth was a mere point in a vast encompassing Cosmos, but no one had ever seen it as such. Here was our first chance (and perhaps our last for decades to come).”

    1
    Earth, as seen from Voyager 1 more than 4 billion miles away.

    Indeed, our planet can be seen as a fraction of a pixel against a backdrop of darkness that’s broken only by a few scattered beams of sunlight reflected off the probe’s camera. The precious series of images was radioed back to Earth at the speed of light, taking five and a half hours to reach the huge conical receivers in California, Spain, and Australia more than 4 billion miles away. Over such astronomical distances, one pixel out of 640,000 can easily be replaced by another or lost entirely in transmission. It wasn’t, thanks in part to a single mathematical breakthrough published decades earlier.

    In 1960, Irving Reed and Gustave Solomon published a paper in the Journal of the Society for Industrial and Applied Mathematics entitled “Polynomial Codes Over Certain Finite Fields,” a string of words that neatly conveys the arcane nature of their work. “Almost all of Reed and Solomon’s original paper doesn’t mean anything to most people,” says Robert McEliece, a mathematician and information theorist at the California Institute of Technology. But within those five pages was the basic recipe for the most efficacious error-correction codes yet created. By adding just the right levels of redundancy to data files, this family of algorithms can correct errors that often occur during transmission or storage without taking up too much precious space.

    Today, Reed-Solomon codes go largely unnoticed, but they are everywhere, reducing errors in everything from mobile phone calls to QR codes, computer hard drives, and data beamed from the New Horizons spacecraft as it zoomed by Pluto. As demand for digital bandwidth and storage has soared, Reed-Solomon codes have followed. Yet curiously, they’ve been absent in one of the most compact, longest-lasting, and most promising of storage mediums—DNA.

    2
    From Voyager to DNA

    3
    The structure of the DNA double helix. The atoms in the structure are colour-coded by element and the detailed structure of two base pairs are shown in the bottom right.

    Several labs have investigated nature’s storage device to archive our ever-increasing mountain of digital information, encoding small amounts of data in DNA and, more importantly, reading it back. But those trials lacked sophisticated error correction, which DNA data systems will need if they are to become our storage medium of choice. Fortunately, a team of scientists, led by Robert Grass, a lecturer at ETH Zurich, rectified that omission earlier this year when they stored a duo of files in DNA using Reed-Solomon codes. It’s a mash-up that could help us reliably store our fragile digital data for generations to come.

    Life’s Storage

    DNA is best known as the information storage device for life on Earth. Only four molecules—adenine, cytosine, thymine, and guanine, commonly referred to by their first letters—make up the rungs on the famous double helix of DNA. These sequences are the basis of every animal, plant, fungus, archaeon, and bacterium that has ever lived in the roughly 4 billion years that life has existed on Earth.

    “It’s not a form of information that’s likely to be outdated very quickly,” says Sriram Kosuri, a geneticist at the University of California, Los Angeles. “There’s always going to be a reason for studying DNA as long as we’re still around.”

    It is also incredibly compact. Since it folds in three dimensions, we could store all of the world’s current data—everyone’s photos, every Facebook status update, all of Wikipedia, everything—using less than an ounce of DNA. And, with its propensity to replicate given the right conditions, millions of copies of DNA can be made in the lab in just a few hours. Such favorable traits make DNA an ideal candidate for storing lots of information, for a long time, in a small space.

    A Soviet scientist named Mikael Nieman recognized DNA’s potential back in 1964, when he first proposed the idea of storing data in natural biopolymers. In 1988, his theory was finally put into practice when the first messages were stored in DNA. Those strings were relatively simple. Only in recent years have laboratories around the world started to convert large amounts of the binary code that’s spoken by computers into genetic code.

    In 2012, by converting the ones of binary code into As or Cs, and the zeros into Ts or Gs, Kosuri along with George Church and Yuan Gao stored an entire book called Regenesis, totaling 643 kilobytes, in genetic code. A year later, Ewan Birney, Nick Goldman, and their colleagues from the European Bioinformatics Institute added a slightly more sophisticated way of translating binary to nucleic acid that reduced the number of repeated bases.
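
    A minimal sketch of that one-bit-per-base idea follows. It is a simplification for illustration, not the published teams' actual schemes, which differ in detail; the free choice between two bases is used here to sidestep the repeats discussed next:

        def encode_bits(bits):
            """Map 1 -> A or C and 0 -> T or G, always picking the option that
            differs from the previous base so no base ever repeats."""
            out = []
            for b in bits:
                options = "AC" if b == "1" else "TG"
                base = options[0] if not out or out[-1] != options[0] else options[1]
                out.append(base)
            return "".join(out)

        def decode_bases(bases):
            return "".join("1" if base in "AC" else "0" for base in bases)

        bits = "110100111100"
        dna = encode_bits(bits)
        print(dna)                        # ACTATGACACTG: no repeated bases
        print(decode_bases(dna) == bits)  # True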

    Such repeats are a common problem when writing and reading DNA (synthesizing and sequencing, as they’re called). Although Birney, Goldman, and team stored a similar amount of information to Kosuri, Church, and Gao—739 kilobytes—it was spread over a range of media types: 154 Shakespearean sonnets, Watson and Crick’s famous 1953 paper that described DNA’s molecular structure, an audio file of Martin Luther King Jr.’s “I Have a Dream” speech, and a photograph of the building they were working in near Cambridge, UK.

    The European team also integrated a deliberate error-correction system: distributing their data over more than 153,000 short, overlapping sequences of DNA. Like shouting a drink order multiple times in a noisy bar, the regions of overlap increased the likelihood that the message would be understood at the other end. Indeed, after a Californian company called Agilent Technologies manufactured the team’s DNA sequences, packaged them, and sent them to the U.K. via Germany, the team was able to remove any errors that had occurred “by hand” using their overlapping regions. In the end, they recovered their files with complete fidelity. The text had no spelling mistakes, the photo was high-res, and the speech was clear and eloquent.

    “But that’s not what we do,” says Grass, the lecturer at the Swiss Federal Institute of Technology. After seeing Church and colleagues’ publication in the news in 2012, he wanted to compare how competent different storage media were over long periods of time.

    “The original idea was to do a set of tests with various storage formats,” he says, “and torture them with various conditions.” Hot and cold, wet and dry, at high pressure, and in an oxygen-rich environment, for example. He contacted Reinhard Heckel, a friend he had met at Belvoir Rowing Club in Zurich, for advice. Heckel, who was a PhD student in communication theory at the time, voiced concern that such an experiment would be unfair, since DNA didn’t have the same error-correction systems as other storage devices such as CDs and computer hard drives.

    To make it a fair fight, they implemented Reed-Solomon codes into their DNA storage method. “We quickly found out that we could ‘beat’ traditional storage formats in terms of long term reliability by far,” Grass says. When stored on most conventional storage devices—USB pens, DVDs, or magnetic tapes—data starts to degrade after 50 years or so. But, early on in their work, Grass and his colleagues estimated that DNA could hold data error-free for millennia, thanks to the inherent stability of its double helix and that breakthrough in mathematical theory from the mid-20th century.

    Out from Obscurity

    When storing and sending information from one place to another, you almost always run the risk of introducing errors. As in the “telephone” game, key parts may be modified or lost entirely. There is a rich history of reducing such errors, and few things have propelled the field more than the development of information theory. In 1948, Claude Shannon, an ardent blackjack player and mathematician, showed that by breaking files or transmissions into numerous smaller components—yes-or-no questions—and combining them with error-correcting codes, the risk of error could be made very low. Using the 1s and 0s of binary, he hushed the noise of telephone switching circuits.

    Using this binary foundation, Reed and Solomon attempted to shush those whispers even further. But their error-correction codes weren’t put into use straight away. They couldn’t be, in fact—the decoding algorithms needed to unscramble them weren’t invented until 1968. Plus, there wasn’t anything to use them on; the technology that could utilize them hadn’t been invented. “They are very clever theoretical objects, but no one ever imagined they were going to be practical until the digital electronics became so sophisticated,” says McEliece, the Caltech information theorist.

    Once technology did catch up, one of the codes’ first uses was in transmitting data back from Voyager 1 and 2. Since the redundancy provided by these codes (together with another type, known as convolutional codes) cleaned up mistakes—the loss or alteration of pixels, for example—the space probes didn’t have to send the same image again and again. That meant more high-resolution images could be radioed back to Earth as Voyager passed the outer planets of our solar system.

    3
    Reed-Solomon codes correct for common transmission errors, including missing pixels (white), false signals (black), and paused transmissions (the white stripe).

    Reed-Solomon codes weren’t widely used until October 1982, when compact discs were commercialized by the music industry. To manufacture discs in huge quantities, factories used a master version of the CD to stamp out new copies, but subtle imperfections in the process, along with inevitable scratches when the discs were handled, all but guaranteed errors would creep into the data. But, by adding redundancy to accommodate errors and minor scratches, Reed-Solomon codes made sure that every disc, when played, was as flawless as the next. “This and the hard disk was the absolute distribution of Reed-Solomon codes all over the world,” says Martin Bossert, director of the Institute of Telecommunications and Applied Information Theory at the University of Ulm, Germany.

    At a basic level, here’s how Reed-Solomon codes work. Suppose you wanted to send a simple piece of information like the equation for a parabola (a symmetrical curved line). Such an equation is defined by three numbers, its coefficients: 4 + 5x + 7x², for example. By adding incomplete redundancy in the form of two extra numbers—a 4 and a 7, for example—a total of five numbers is sent in the transmission. As a result, any transposition or loss of information can be corrected for by feeding the additional numbers through the Reed-Solomon algorithm. “You still have an overrepresentation of your system,” Grass says. “It doesn’t matter which one you lose, you can still get back to the original information.”
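
    One standard way to picture that oversampling idea is to send extra values of the curve itself: five points instead of the three needed to pin down a parabola. The Python sketch below illustrates the principle with exact fractions; real Reed-Solomon codes work over finite fields and can also correct errors at unknown positions, which plain interpolation cannot.

        from fractions import Fraction

        def recover_coefficients(points):
            """Lagrange interpolation: recover the unique polynomial of degree
            < len(points) through the (x, y) points, constant term first."""
            n = len(points)
            total = [Fraction(0)] * n
            for i, (xi, yi) in enumerate(points):
                basis = [Fraction(1)]  # running product of (x - xj), as coefficients
                denom = Fraction(1)
                for j, (xj, _) in enumerate(points):
                    if j != i:
                        new = [Fraction(0)] * (len(basis) + 1)
                        for k, c in enumerate(basis):
                            new[k + 1] += c   # c * x^(k+1)
                            new[k] -= c * xj  # -xj * c * x^k
                        basis = new
                        denom *= xi - xj
                scale = Fraction(yi) / denom
                for k, c in enumerate(basis):
                    total[k] += scale * c
            return total

        def parabola(x):
            return 4 + 5 * x + 7 * x ** 2

        sent = [(x, parabola(x)) for x in range(5)]  # five values, two redundant

        survivors = [sent[0], sent[2], sent[4]]      # two values lost in transit
        print(recover_coefficients(survivors))       # [4, 5, 7]: the original curve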

    Using similar formulae, Grass and his colleagues converted two files—the Swiss Federal Charter from 1291 and an English translation of The Methods of Mechanical Theorems by Archimedes—into DNA. The redundant information, in the form of extra bases placed over 4,991 short sequences according to the Reed-Solomon algorithm, provided the basis for error-correction when the DNA was read and the data retrieved later on.

    That is, instead of wastefully overlapping large chunks of sequences as the EBI researchers did, “you just add a small amount of redundancy and still you can correct errors at any position, which seemed very strange at the beginning because it’s somehow illogical,” Grass says. As well as using fewer base pairs per kilobyte of data, this tack has the added bonus of automated, algorithmic error-correction.

    Indeed, with a low error-rate—less than three base changes per 117-base sequence—the overrepresentation in their sequences meant that the Reed-Solomon codes could still get back to the original information.

    The same basic principle is used in written language. In fact, you are doing something very similar right now. Even when text contains spelling errors or even when whole words are missing, you can still perfectly read the message and reconstruct the sentence accordingly. The reason? Language is inherently redundant. Not all combinations of letters—including spaces as a 27th option—give a meaningful word, sentence, or paragraph.

    On top of this “inner” redundancy, Grass and colleagues installed another genetic safety net. On the ends of the original sequences, they added large chunks of redundancy. “So if we lose whole sequences or if one is completely screwed and it can’t be corrected with the inner [redundancy], we still have the outer codes,” Grass says. It’s similar to how CDs safeguard against scratches.

    It may sound like overkill, but so much redundancy is warranted, at least for now. There simply isn’t enough information on the rate and types of errors that occur during DNA synthesis and sequencing. “We have an inkling of the error-rate, but all of this is very crude at this point,” Kosuri says. “We just don’t have a good feeling for that, so everyone just overdoes the corrections.” Further, given that the field of genomics is moving so fast, with new ways to write and read DNA, errors might differ depending on what technologies are being used. The same was true for other storage devices while still in their infancy. After further testing, the error-correction codes could be more attuned to the expected error rates and the redundancy reduced, paving the way for higher bandwidth and greater storage capacity.

    Into the Future

    Compared with the previous studies, storing two files totaling 83 kilobytes in DNA isn’t groundbreaking. The image below is roughly five times larger. But Grass and his colleagues really wanted to know just how much better DNA was at long-term storage. With their Reed-Solomon coding in place, Grass and colleagues mimicked nature to find out.

    “The idea was always to make an artificial fossil, chemically,” Grass says. They tried impregnating their DNA sequences in filter paper, they used a biopolymer to simulate the dry conditions within spores and seeds of plants, and they encapsulated them in microscopic beads of glass. Compared with DNA that hasn’t been modified chemically, all three trials led to markedly lower rates of DNA decomposition.

    4
    Grass and colleagues’ glass DNA storage beads

    The glass beads were the best option, however. Water, when unimpeded, destroys DNA. If there are too many breaks and errors in the sequences, no error-correction system can help. The beads, however, protected the DNA from the damaging effects of humidity.

    With their layers of error-correction and protective coats in place, Grass and his colleagues then exposed the glass beads to three heat treatments—140˚, 149˚, and 158˚ F—for up to a month “to simulate what would happen if you store it for a long time,” he says. Indeed, after unwrapping their DNA from the beads using a fluoride solution and then re-reading the sequences, they found that slight errors had been introduced similar to those which appear over long timescales in nature. But, at such low levels, the Reed-Solomon codes healed the wounds.

    Using the rate at which errors arose, the researchers were able to extrapolate how long the data could remain intact at lower temperatures. If kept in the clement European air outside their laboratory in Zurich, for example, they estimate a ballpark figure of around 2,000 years. But place these glass beads in the dark at –0.4˚ F, the conditions of the Svalbard Global Seed Bank on the Norwegian island of Spitsbergen, and you could save your photos, music, and eBooks for two million years. That’s roughly ten times as long as our species has been on Earth.
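
    The extrapolation itself is standard accelerated-aging chemistry: assume the error rate follows the Arrhenius law and project it down to colder temperatures. The constants below are invented and merely tuned to land near the article's ballpark figures, so treat this as an illustration of the method, not the team's actual fit:

        import math

        R = 8.314     # gas constant, J/(mol*K)
        EA = 1.33e5   # assumed activation energy for DNA decay, J/mol
        A = 1.2e21    # assumed pre-exponential factor, per year

        def data_half_life_years(celsius):
            """Arrhenius extrapolation of the decay-rate constant."""
            k = A * math.exp(-EA / (R * (celsius + 273.15)))
            return math.log(2) / k

        for label, t in [("oven, 158 F", 70),
                         ("Zurich air, ~50 F", 10),
                         ("Svalbard vault, -0.4 F", -18)]:
            print(f"{label}: ~{data_half_life_years(t):,.1f} years")
        # prints roughly 0.1 years, 2,000 years, and 1,000,000 years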

    Using heat treatments to mimic the effects of age isn’t foolproof, Grass admits; a month at 158˚ F certainly isn’t the same as millennia in the freezer. But his conclusions aren’t unsupported. In recent years, palaeogenetic research into long-dead animals has revealed that DNA can persist long after death. And when conditions are just right—cold, dark, and dry—these molecular strands can endure long after the extinction of an entire species. In 2012, for instance, the genome of an extinct human relative that died around 80,000 years ago was reconstructed from a finger bone. A year later, that record was shattered when scientists sequenced the genome of an extinct horse that died in Canadian permafrost around 700,000 years ago. “We already have long-term data,” Grass says. “Real long-term data.”

    But despite its inherent advantages, there are still some major hurdles to surmount before DNA becomes a viable storage option. For one, synthesis and sequencing are still too costly. “We’re still on the order of a million-fold too expensive on both fronts,” Kosuri says. Plus, it’s still slow to read and write, and it’s neither rewritable nor random access. Today’s DNA data storage techniques are similar to magnetic tape—the whole memory has to be read to retrieve a piece of information.

    Such caveats limit DNA to archival data storage, at least for the time being. “The question is if it’s going to drop fast enough and low enough to really compete in terms of dollars per gigabyte,” Grass says. It’s likely that DNA will continue to be of interest to medical and biological laboratories, which will help to speed up synthesis and sequencing and drive down prices.

    Whatever new technologies are on the horizon, history has taught us that Reed-Solomon-based coding will probably still be there, behind the scenes, safeguarding our data against errors. Like the genes within an organism, the codes have been passed down to subsequent generations, slightly adjusted and optimized for their new environment. They have a proven track record that starts on Earth and extends ever further into the Milky Way. “There cannot be a code that can correct more errors than Reed-Solomon codes…It’s mathematical proof,” Bossert says. “It’s beautiful.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

     