Tagged: NOVA

  • richardmitnick 4:18 pm on October 7, 2015 Permalink | Reply
    Tags: NOVA

    From DON Lincoln (FNAL) for NOVA: “Neutrino Physicists win Nobel, but Neutrino Mysteries Remain” 



    07 Oct 2015
    FNAL Don Lincoln
    Don Lincoln

    Neutrinos are the most enigmatic of the subatomic fundamental particles. Ghosts of the quantum world, neutrinos interact so weakly with ordinary matter that it would take a wall of solid lead five light-years deep to stop the neutrinos generated by the sun. In awarding this year’s Nobel Prize in physics to Takaaki Kajita (Super-Kamiokande Collaboration/University of Tokyo) and Arthur McDonald (Sudbury Neutrino Observatory Collaboration/Queen’s University, Canada) for their neutrino research, the Nobel committee affirmed just how much these “ghost particles” can teach us about fundamental physics. And we still have much more to learn about neutrinos.

    Super-Kamiokande experiment, Japan

    Sudbury Neutrino Observatory

    View from the bottom of the SNO acrylic vessel and photomultiplier tube array with a fish-eye lens. This photo was taken immediately before the final, bottom-most panel of photomultiplier tubes was installed. Photo courtesy of Ernest Orlando Lawrence Berkeley National Laboratory.

    Neutrinos are quantum chameleons, able to change their identity between the three known species (called electron-, muon– and tau-neutrinos). It’s as if a duck could change itself into a goose and then a swan and back into a duck again. Takaaki Kajita and Arthur B. McDonald received the Nobel for finding the first conclusive proof of this identity-bending behavior.

    In 1970, chemist Ray Davis built a large experiment designed to detect neutrinos from the sun. This detector was made up of a 100,000-gallon tank filled with a chlorine-containing compound. When a neutrino hit a chlorine nucleus, it would convert it into argon. In spite of a flux of about 100,000 trillion solar neutrinos per second, neutrinos interact so rarely that he expected to see only about a couple dozen argon atoms after a week’s running.
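
    The scale of that mismatch between an enormous flux and a handful of atoms can be checked with a back-of-the-envelope estimate. The Python sketch below uses illustrative, textbook-level inputs rather than the experiment's actual figures: the tank size quoted above, the density and composition of a chlorine-bearing fluid (tetrachloroethylene is assumed), and a capture rate of roughly 8 "solar neutrino units" (10⁻³⁶ captures per target atom per second), which is approximately the theoretical prediction of that era.

        # Rough order-of-magnitude estimate of the expected argon production rate.
        # All inputs are illustrative assumptions, not the experiment's actual numbers.
        AVOGADRO = 6.022e23

        tank_gallons = 1.0e5                      # tank size quoted above
        volume_cm3   = tank_gallons * 3785.0      # US gallons -> cm^3
        mass_g       = volume_cm3 * 1.62          # tetrachloroethylene, ~1.62 g/cm^3
        moles_c2cl4  = mass_g / 165.8             # molar mass of C2Cl4 (g/mol)
        cl37_atoms   = 4 * moles_c2cl4 * AVOGADRO * 0.242   # 4 Cl per molecule, ~24.2% is Cl-37

        snu = 8e-36    # ~8 SNU: captures per target atom per second (rough prediction)
        per_day = cl37_atoms * snu * 86400.0
        print(f"Cl-37 target atoms: {cl37_atoms:.2e}")
        print(f"expected argon atoms per day:  {per_day:.1f}")
        print(f"expected argon atoms per week: {per_day * 7:.0f}")

    Even with roughly 2×10³⁰ chlorine-37 targets in the tank, this crude estimate yields only of order ten argon atoms per week, the same ballpark as the figure quoted above, which is what makes the deficit described next so remarkable to have measured at all.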

    But the experiment found even fewer argon atoms than predicted, and Davis concluded that the flux of electron-type neutrinos hitting his detector was only about a third of that emitted by the sun. This was an incredible scientific achievement and, for it, Davis was awarded a part of the 2002 Nobel Prize in physics.

    Explaining how these neutrinos got “lost” in their journey to Earth would take nearly three decades. The correct answer was put forth by the Italian-born physicist Bruno Pontecorvo, who hypothesized that the electron-type neutrinos emitted by the sun were morphing, or “oscillating,” into muon-type neutrinos. (Note that the tau-type neutrino was postulated in 1975 and observed in 2000; Pontecorvo was unaware of its existence when he proposed the idea.) This also meant that neutrinos must have mass—a surprise, since even in the Standard Model of particle physics, our most modern theory of the behavior of subatomic particles, neutrinos are treated as massless.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    So, if neutrinos could really oscillate, we would know that our current theory is wrong, at least in part.

    In 1998, a team of physicists led by Takaaki Kajita was using the Super-Kamiokande (SuperK) experiment in Japan to study neutrinos created when cosmic rays from space hit the Earth’s atmosphere. SuperK was an enormous cavern, filled with 50,000 tons of water and surrounded by 11,000 light-detecting devices called phototubes. When a neutrino collided with a water molecule, the resulting debris from the interaction would fly off in the direction that the incident neutrino was traveling. This debris would emit a form of light called Cherenkov radiation, so scientists could determine the direction the neutrino was traveling.
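
    The directional trick works because Cherenkov light is emitted on a cone around the particle’s track, with an opening angle set by cos θ = 1/(nβ), where n is the refractive index of the medium and β = v/c. Here is a minimal sketch in Python; the numbers are standard values for water and a fast particle, not figures taken from the article.

        import math

        def cherenkov_angle_deg(n=1.33, beta=0.999):
            """Opening angle of the Cherenkov cone: cos(theta) = 1 / (n * beta)."""
            cos_theta = 1.0 / (n * beta)
            if cos_theta > 1.0:
                return None   # particle is below threshold: no Cherenkov light at all
            return math.degrees(math.acos(cos_theta))

        print(cherenkov_angle_deg())            # relativistic particle in water: ~41 degrees
        print(cherenkov_angle_deg(beta=0.70))   # too slow: None (no ring to reconstruct)

    The ring of light that this cone projects onto the wall of phototubes points back along the direction the debris, and hence the neutrino, was traveling.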

    Cherenkov radiation glowing in the core of the Advanced Test Reactor [Idaho National Laboratory].

    By comparing the neutrinos created overhead, about 12 miles from the detector, to those created on the other side of the Earth, about 8,000 miles away, the researchers were able to demonstrate that muon-type neutrinos created in the atmosphere were disappearing, and that the rate of disappearance was related to the distance that the neutrinos traveled before being detected. This was clear evidence for neutrino oscillations.
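
    In the simplest two-flavor picture, that distance dependence is captured by the survival probability P(νμ→νμ) = 1 − sin²(2θ)·sin²(1.27 Δm² L / E), with Δm² in eV², the baseline L in kilometers and the energy E in GeV. The sketch below uses representative atmospheric-oscillation parameters (near-maximal mixing and Δm² ≈ 2.4×10⁻³ eV²); these values are standard in the field but are not quoted in the article.

        import math

        def numu_survival(L_km, E_GeV, dm2_eV2=2.4e-3, sin2_2theta=1.0):
            """Two-flavor muon-neutrino survival probability.
            The factor 1.27 absorbs hbar, c and the unit conversions."""
            return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

        # A ~1 GeV atmospheric neutrino made overhead vs. one made on the far side of the Earth
        for L_km in (20.0, 12800.0):   # roughly 12 miles and 8,000 miles
            print(f"L = {L_km:7.0f} km -> P(survive) = {numu_survival(L_km, 1.0):.2f}")

        # For the long baseline the oscillation phase winds around many times; averaged
        # over the real spread of energies and path lengths, roughly half of the
        # upward-going muon neutrinos disappear.

    The short overhead path gives essentially no disappearance, while the through-the-Earth path gives a large deficit, which is exactly the distance dependence described above.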

    Just a few years later, in 2001, the Sudbury Neutrino Observatory (SNO) experiment, led by Arthur B. McDonald, was looking at neutrinos originating in the sun. Unlike previous experiments, the SNO could identify all three neutrino species, thanks to its giant tank of heavy water (i.e. D2O, two deuterium atoms combined with oxygen). SNO first used ordinary water to measure the flux of electron-type neutrinos and then heavy water to observe all three types. The SNO team was able to demonstrate that the combined flux of all three neutrino types agreed with that emitted by the sun, but that the flux of electron-type neutrinos alone was lower than would be expected in a no-oscillation scenario. This experiment was a definitive demonstration of the oscillation of solar neutrinos.

    With the achievements of both the SuperK and SNO experiments, it is entirely fitting that Kajita and McDonald share the 2015 Nobel Prize in physics. They demonstrated that neutrinos oscillate and, therefore, that neutrinos have mass. This is a clear crack in the impressive façade of the Standard Model of particle physics and may well lead to a better and more complete theory.

    The neutrino story didn’t end there, though. To understand the phenomenon in greater detail, physicists are now generating beams of neutrinos at many sites around the world, including Fermilab, Brookhaven, CERN and the KEK laboratory in Japan. Combined with studies of neutrinos emitted by nuclear reactors, these efforts have produced significant progress in understanding the nature of neutrino oscillation.

    Real mysteries remain. Our measurements have shown that the mass of each neutrino species is different. That’s why we know that some must have mass: if they are different, they can’t all be zero. However, we don’t know the absolute mass of the neutrino species—just the mass differences. We don’t even know which species is the heaviest and which is the lightest.

    The biggest question in neutrino oscillation physics, though, is whether neutrinos and antineutrinos oscillate the same way. If they don’t, this could explain why our universe is composed solely of matter even though we believe that matter and antimatter existed in equal quantities right after the Big Bang.

    Accordingly, Fermilab, America’s premier particle physics laboratory, has launched a multi-decade effort to build the world’s most intense beam of neutrinos, aimed at a distant detector located 800 miles away in South Dakota.

    Sanford Underground Research Facility

    Named the Deep Underground Neutrino Experiment (DUNE), it will dominate the neutrino frontier for the foreseeable future.
    FNAL Dune & LBNF

    This year’s Nobel Prize acknowledged a great step forward in our understanding of these ghostly, subatomic chameleons, but their entire story hasn’t been told. The next few decades will be a very interesting time.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 2:02 pm on October 6, 2015 Permalink | Reply
    Tags: NOVA, Sterile neutrinos

    From NOVA: “Sterile Neutrinos: The Ghost Particle’s Ghost” July 2014. Old, but Worth It for the Details 



    11 Jul 2014

    FNAL Don Lincoln
    Don Lincoln, FNAL

    What do you call the ghost of a ghost?

    If you’re a particle physicist, you might call it a sterile neutrino. Neutrinos, known more colorfully as “ghost particles,” can pass through (almost) anything. If you surrounded the Sun with five light years’ worth of solid lead, a full half of the Sun’s neutrinos would slip right on through. Neutrinos have this amazing penetrating capability because they do not interact by the electromagnetic force, nor do they feel the strong nuclear force. The only forces they feel are the weak nuclear force and the even feebler tug of gravity.

    The Perseus galaxy cluster, one of 73 clusters from which mysterious x-rays, possibly produced by sterile neutrinos, were observed. Credit: Chandra: NASA/CXC/SAO/E.Bulbul, et al.; XMM-Newton: ESA

    NASA Chandra Telescope

    ESA XMM Newton

    When Wolfgang Pauli first postulated neutrinos in 1930, he thought that his proposed particles could never be detected. In fact, it took more than 25 years for physicists to confirm that neutrinos—Italian for “little neutral ones”—were real. Now, physicists are hunting for something even harder to spot: a hypothetical ghostlier breed of neutrinos called sterile neutrinos.

    Today, we know of three different “flavors” of neutrinos: electron neutrinos, muon neutrinos and tau neutrinos (and their antimatter equivalents). In the late 1960s, studies of the electron-type neutrinos emitted by the Sun led scientists to suspect that they were somehow disappearing or morphing into other forms. Measurements made in 1998 by the Super Kamiokande experiment strongly supported this hypothesis, and in 2001, the Sudbury Neutrino Observatory clinched it.

    Super-Kamiokande Detector
    Super-Kamiokande Detector

    Sudbury Neutrino Observatory
    Sudbury Neutrino Observatory

    One of the limitations of studying neutrinos from the Sun and other cosmic sources is that experimenters don’t have control over them. However, scientists can make beams of neutrinos in particle accelerators and also study neutrinos emitted by man-made nuclear reactors. When physicists studied neutrinos from these sources, a mystery presented itself. It looked like there weren’t three kinds of neutrinos, but rather four or perhaps more.

    Ordinarily, this wouldn’t be cause for alarm, as the history of particle physics is full of the discovery of new particles. However, in 1990, researchers using the LEP accelerator demonstrated convincingly that there were exactly three kinds of ordinary neutrinos. Physicists were faced with a serious puzzle.

    LEP at CERN

    There were some caveats to the LEP measurement. It was only capable of finding neutrinos if they were low mass and interacted via the weak nuclear force. This led scientists to hypothesize that perhaps the fourth (and fifth and…) forms of neutrinos were sterile, a word coined by Russian physicist Bruno Pontecorvo to describe a form of neutrino that didn’t feel the weak nuclear force.

    Searching for sterile neutrinos is a vibrant experimental program and a confusing one. Researchers pursuing some experiments, such as LSND and MiniBooNE, have published measurements consistent with the existence of these hypothetical particles, while others, like the Fermilab MINOS team, have ruled out sterile neutrinos with the same properties. Inconsistencies abound in the experimental world, leading to great consternation among scientists.

    LANL/LSND Experiment

    FNAL MiniBooNE

    FNAL Minos Far Detector

    In addition, theoretical physicists have been busy. There are many different ways to imagine a particle that doesn’t experience the strong, weak, or electromagnetic forces (and is therefore very difficult to make and detect); proposals for a variety of different kinds of sterile neutrinos have proliferated wildly, and sterile neutrinos are even a credible candidate for dark matter.

    Perhaps the only general statement we can make about sterile neutrinos is that they are spin ½ fermions, just like neutrinos, but unlike “regular” neutrinos, they don’t experience the weak nuclear force. Beyond that, the various theoretical ideas diverge. Some predict that sterile neutrinos have right-handed spin, in contrast to ordinary neutrinos, which have only left-handed spin. Some theories predict that sterile neutrinos will be very light, while others have them quite massive. If they are massive, that could explain why ordinary neutrinos have such a small mass: perhaps the mathematical product of the masses of these two species of neutrinos equals a constant, say proponents of what scientists call the “see-saw mechanism”; as one mass goes up, the other must go down, resulting in low-mass ordinary neutrinos and high-mass sterile ones.
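
    One common way to write the see-saw relationship described above is m_light ≈ m_D² / M_heavy: a “Dirac” mass scale m_D paired with a very heavy sterile partner M_heavy pushes the ordinary neutrino mass down. The numbers in the sketch below are purely illustrative scales, not measured values.

        # Toy type-I see-saw estimate: m_light ~ m_D**2 / M_heavy (all masses in eV).
        # The inputs are illustrative assumed scales, not measurements.
        m_dirac = 100e9    # ~100 GeV, an electroweak-scale Dirac mass
        M_heavy = 1e23     # ~10^14 GeV, a hypothetical very heavy sterile partner

        m_light = m_dirac ** 2 / M_heavy
        print(f"light neutrino mass ~ {m_light:.2f} eV")   # ~0.1 eV

    Push M_heavy up and m_light drops further, which is the “as one mass goes up, the other must go down” behavior the proponents describe.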

    Now, some astronomers have proposed sterile neutrinos could be the source of a mysterious excess of x-rays coming from certain clusters of galaxies. Both NASA’s Chandra satellite and the European Space Agency’s XMM-Newton have spotted an excess of x-ray emission at 3.5 keV. It is brighter than could immediately be accounted for by known x-ray sources, but it could be explained by sterile neutrinos decaying into photons and regular neutrinos. However, one should be cautious. There are tons of atomic emission lines in this part of the x-ray spectrum. One such line, an argon emission line, happens to be at 3.62 keV. In fact, if the authors allow a little more of this line than predicted, the possible sterile neutrino becomes far less convincing.

    Thus the signal is a bit sketchy and could easily disappear with a better understanding of more prosaic sources of x-ray emission. This is not a criticism of the teams who have made the announcement, but an acknowledgement of the difficulty of the measurement. Many familiar elements emit x-rays in the 3.5 keV energy range, and though the researchers attempted to remove those expected signals, they may find that a fuller accounting negates the “neutrino” signal. Still, the excess was seen by more than one facility and in more than one cluster of galaxies, and the people involved are smart and competent, so it must be regarded as a possible discovery.

    It is an incredibly long shot that the excess of 3.5 keV x-rays from galaxy clusters comes from sterile neutrinos but, if it does, it will be a really big deal. The first order of business is a more detailed understanding of more ordinary emission lines. Unfortunately, only time will tell if we’ve truly seen a ghost.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 7:41 am on October 6, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “Are Space and Time Discrete or Continuous?” 



    01 Oct 2015
    Sabine Hossenfelder

    Split a mile in half, you get half a mile. Split the half mile, you get a quarter, and on and on, until you’ve carved out a length far smaller than the diameter of an atom. Can this slicing continue indefinitely, or will you eventually reach a limit: a smallest hatch mark on the universal ruler?

    The success of some contemporary theories of quantum gravity may hinge on the answer to this question. But the puzzle goes back at least 2500 years, to the paradoxes thought up by the Greek philosopher Zeno of Elea, which remained mysterious from the 5th century BC until the early 1800s. Though the paradoxes have now been solved, the question they posed—is there a smallest unit of length, beyond which you can’t divide any further?—persists.

    Credit: Flickr user Ian Muttoo, adapted under a Creative Commons license.

    The most famous of Zeno’s paradoxes is that of Achilles and the Tortoise in a race. The tortoise gets a head start on the faster-running Achilles. Achilles should quickly catch up—at least that’s what would happen in a real-world footrace. But Zeno argued that Achilles will never overtake the tortoise, because in the time it takes for Achilles to reach the tortoise’s starting point, the tortoise too will have moved forward. While Achilles pursues the tortoise to cover this additional distance, the tortoise moves yet another bit. Try as he might, Achilles only ever reaches the tortoise’s position after the animal has already left it, and he never catches up.

    Obviously, in real life, Achilles wins the race. So, Zeno argued, the assumptions underlying the scenario must be wrong. Specifically, Zeno believed that space is not indefinitely divisible but has a smallest possible unit of length. This allows Achilles to make a final step surpassing the distance to the tortoise, thereby resolving the paradox.

    It took more than two thousand years to develop the necessary mathematics, but today we know that Zeno’s argument was plainly wrong. After mathematicians understood how to sum an infinite number of progressively smaller steps, they calculated the exact moment Achilles surpasses the tortoise, proving that it does not take forever, even if space is indefinitely divisible.
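
    The resolution can be made concrete with a made-up race: give the tortoise a head start d and let Achilles run at speed v against the tortoise’s u. Each of Zeno’s “steps” then takes a fraction u/v as long as the one before, so the step times form a geometric series whose sum is the finite catch-up time d/(v − u). A quick numeric check in Python, with arbitrary illustrative speeds:

        # Zeno's race as a geometric series; the numbers are arbitrary illustrations.
        d, v, u = 100.0, 10.0, 1.0     # head start (m), Achilles speed, tortoise speed (m/s)

        t_partial, gap = 0.0, d
        for _ in range(60):            # sum the first 60 "Zeno steps"
            step = gap / v             # time for Achilles to cross the current gap
            t_partial += step
            gap = u * step             # smaller gap the tortoise opens meanwhile

        t_exact = d / (v - u)          # closed form: the sum of the whole infinite series
        print(t_partial, t_exact)      # both ~11.11 seconds

    After a few dozen steps the partial sums are already indistinguishable from the exact answer: infinitely many steps, finite time.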

    Zeno’s paradox is solved, but the question of whether there is a smallest unit of length hasn’t gone away. Today, some physicists think that the existence of an absolute minimum length could help avoid another kind of logical nonsense: the infinities that arise when physicists attempt a quantum version of [Albert] Einstein’s General Relativity, that is, a theory of “quantum gravity.” When physicists attempted to calculate probabilities in the new theory, the integrals just returned infinity, a result that couldn’t be more useless. In this case, the infinities were not mistakes but demonstrably a consequence of applying the rules of quantum theory to gravity. But by positing a smallest unit of length, just like Zeno did, theorists can reduce the infinities to manageable finite numbers. And one way to get a finite length is to chop up space and time into chunks, thereby making it discrete: Zeno would be pleased.

    He would also be confused. While almost all approaches to quantum gravity bring in a minimal length one way or the other, not all approaches do so by means of “discretization”—that is, by “chunking” space and time. In some theories of quantum gravity, the minimal length emerges from a “resolution limit,” without the need of discreteness. Think of studying samples with a microscope, for example. Magnify too much, and you encounter a resolution-limit beyond which images remain blurry. And if you zoom into a digital photo, you eventually see single pixels: further zooming will not reveal any more detail. In both cases there is a limit to resolution, but only in the latter case is it due to discretization.

    In these examples the limits could be overcome with better imaging technology; they are not fundamental. But a resolution-limit due to quantum behavior of space-time would be fundamental. It could not be overcome with better technology.

    So, a resolution-limit seems necessary to avoid the problem with infinities in the development of quantum gravity. But does space-time remain smooth and continuous even on the shortest distance scales, or does it become coarse and grainy? Researchers cannot agree.

    Artist’s concept of Gravity Probe B orbiting the Earth to measure space-time, a four-dimensional description of the universe including height, width, length, and time. Credit: NASA (http://www.nasa.gov/mission_pages/gpb/gpb_012.html)

    In string theory, for example, resolution is limited by the extension of the strings (roughly speaking, the size of the ball that you could fit the string inside), not because there is anything discrete. In a competing theory called loop quantum gravity, on the other hand, space and time are broken into discrete blocks, which gives rise to a smallest possible length (expressed in units of the Planck length, about 10⁻³⁵ meters), area and volume of space-time—the fundamental building blocks of our universe. Another approach to quantum gravity, “asymptotically safe gravity,” has a resolution-limit but no discretization. Yet another approach, “causal sets,” explicitly relies on discretization.
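
    The Planck length quoted above follows directly from the fundamental constants, l_P = √(ħG/c³). A one-line check in Python, using standard published values for the constants:

        import math

        hbar = 1.054571817e-34   # reduced Planck constant, J*s
        G    = 6.67430e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
        c    = 2.99792458e8      # speed of light, m/s

        planck_length = math.sqrt(hbar * G / c ** 3)
        print(f"Planck length ~ {planck_length:.2e} m")   # ~1.6e-35 m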

    And that’s not all. Einstein taught us that space and time are joined in one entity: space-time. Most physicists honor Einstein’s insight, and so most approaches to quantum gravity take space and time to either both be continuous or both be discrete. But some dissidents argue that only space or only time should be discrete.

    So how can physicists find out whether space-time is discrete or continuous? Directly measuring the discrete structure is impossible because it is too tiny. But according to some models, the discreteness should affect how particles move through space. It is a minuscule effect, but it adds up for particles that travel over very long distances. If true, this would distort images from far-away stellar objects, either by smearing out the image or by tearing apart the arrival times of particles that were emitted simultaneously and would otherwise arrive on Earth simultaneously. Astrophysicists have looked for both of these signals, but they haven’t found the slightest evidence for graininess.

    Even if the direct effects on particle motion are unmeasurable, defects in the discrete structure could still be observable. Think of space-time like a diamond. Even rare imperfections in atomic lattices spoil a crystal’s ability to transport light in an orderly way, which will ruin a diamond’s clarity. And if the price tags at your jewelry store tell you one thing, it’s that perfection is exceedingly rare. It’s the same with space-time. If space-time is discrete, there should be imperfections. And even if rare, these imperfections will affect the passage of light through space. No one has looked for this yet, and I’m planning to start such a search in the coming months.

    In addition to guiding the development of a theory of quantum gravity, finding evidence for space-time discreteness—or ruling it out!—would also be a big step towards solving a modern-day paradox: the black hole information loss problem, posed by Stephen Hawking in 1974. We know that black holes can only store so much information, which is another indication of a resolution-limit. But we do not know exactly how black holes encode the information of what fell inside. A discrete structure would provide us with elementary storage units.

    Black hole information loss is a vexing paradox that Zeno would have appreciated. Let us hope we will not have to wait 2000 years for a solution.

    Editor and author’s picks for further reading

    arXiv: Minimal Length Scale Scenarios for Quantum Gravity

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 10:56 am on September 29, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “$20 Million Xprize Wants to Eliminate Waste Carbon Dioxide” 



    29 Sep 2015
    Tim De Chant

    Five out of five climatologists agree—we’re probably going to emit more CO2 than we should if we want to prevent the worst effects of climate change.

    Fortunately, there’s a solution—capturing that CO2 and doing something with it. Unfortunately, the “somethings” that we know of are both costly and not that profitable. A new Xprize announced this morning aims to change that. Funded by energy company NRG and COSIA, an industry group representing Canadian oil sands companies, the prize will fund the teams that develop the most valuable ways to turn the most CO2 into something useful.

    A smokestack vents emissions to the atmosphere.

    “It’s the second largest prize we’ve ever launched,” Paul Bunje, senior scientist of energy and environment at Xprize, told NOVA Next. “It’s a recognition of a couple of things: One is the scale of the challenge at hand—dealing with carbon dioxide emissions is obviously an epic challenge for the entire planet. Secondly, it also recognizes just how difficult, technologically, this challenge is.”

    Starting today, teams have nine months to register, and by late 2016, they’ll need to submit technical documentation in support of their plans. A panel of judges will then pick the best 15 in each “track”—one for capturing emissions from a coal-fired power plant, the other from a natural gas-fired plant.

    The 30 semifinalists will then have to develop laboratory-scale versions of their plan. The best five from each track will receive a $500,000 grant to help fund the next stage, where teams will have to build demonstration-scale facilities that will be attached to working power plants. Four and a half years from now, a winner from each track will be chosen and be awarded $7.5 million.

    Bunje, who is leading this Xprize, hopes the prize will show that “CO2 doesn’t just have to be a waste product that drives climate change—rather, that you can make money off of the products from converted CO2,” he said. “That kind of a perception shift will be pretty remarkable.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 8:18 pm on September 28, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “Could the Universe Be Lopsided?” 



    28 Sep 2015
    Paul Halpern

    One hundred years ago, [Albert] Einstein re-envisioned space and time as a rippling, twisting, flexible fabric called spacetime. His theory of general relativity showed how matter and energy change the shape of this fabric. One might expect, therefore, that the fabric of the universe, strewn with stars, galaxies, and clouds of particles, would be like a college student’s dorm room: a mess of rumpled, crumpled garments.

    Indeed, if you look at the universe on the scale of stars, galaxies, and even galaxy clusters, you’ll find it puckered and furrowed by the gravity of massive objects. But take the wider view—the cosmologists’ view, which encompasses the entire visible universe—and the fabric of the universe is remarkably smooth and even, no matter which direction you turn. Look up, down, left, or right and count up the galaxies you see: you’ll find it’s roughly the same from every angle. The cosmic microwave background [CMB], the cooled-down relic of radiation from the early universe, demonstrates the same remarkable evenness on the very largest scale.

    CMB per ESA/Planck

    ESA/Planck satellite

    A computer simulation of the ‘cosmic web’ reveals the great filaments, made largely of dark matter, located in the space between galaxies. By NASA, ESA, and E. Hallman (University of Colorado, Boulder), via Wikimedia Commons

    Physicists call a universe that appears roughly similar in all directions isotropic. Because the geometry of spacetime is shaped by the distribution of matter and energy, an isotropic universe must possess a geometric structure that looks the same in all directions as well. The only three such possibilities for three-dimensional spaces are positively curved (the surface of a hypersphere, like a beach ball but in a higher dimension), negatively curved (the surface of a hyperboloid, shaped like a saddle or potato chip), or flat. Russian physicist [Alexander] Friedmann, Belgian cleric and mathematician Georges Lemaître and others incorporated these three geometries into some of the first cosmological solutions of Einstein’s equations. (By solutions, we mean mathematical descriptions of how the three spatial dimensions of the universe behave over time, given the type of geometry and the distribution of matter and energy.) Supplemented by the work of American physicist Howard Robertson and British mathematician Arthur Walker, this class of isotropic solutions has become the standard for descriptions of the universe in the Big Bang theory.

    However, in 1921 Edward Kasner—best known for coining the term “googol” for the number 1 followed by 100 zeroes—demonstrated that there was another class of solutions to Einstein’s equations: anisotropic, or “lopsided,” solutions.

    Known as the Kasner solutions, these cosmic models describe a universe that expands in two directions while contracting in the third. That is clearly not the case with the actual universe, which has grown over time in all three directions. But the Kasner solutions become more intriguing when you apply them to a kind of theory called a Kaluza-Klein model, in which there are unseen extra dimensions beyond space and time. Thus space could theoretically have three expanding dimensions and a fourth, hidden, contracting dimension. Physicists Alan Chodos and Steven Detweiler explored this concept in their paper Where has the fifth dimension gone?
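
    In vacuum, the Kasner solutions have a remarkably simple form: each spatial direction scales as a power of time, t^p1, t^p2, t^p3, with exponents satisfying p1 + p2 + p3 = 1 and p1² + p2² + p3² = 1. These constraints are standard textbook general relativity rather than something spelled out in the article, and the classic triple (2/3, 2/3, −1/3) gives two expanding directions and one contracting one, exactly the behavior described above. A quick numeric check:

        # Verify the Kasner constraints for the textbook exponent triple (2/3, 2/3, -1/3):
        # two positive exponents (expanding directions), one negative (a contracting direction).
        p = (2 / 3, 2 / 3, -1 / 3)
        print(sum(p))                    # ~1.0
        print(sum(x * x for x in p))     # ~1.0  (4/9 + 4/9 + 1/9)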

    Kasner’s is far from the only anisotropic model of the universe. In 1951, physicist Abraham Taub applied the shape-shifting mathematics of Italian mathematician Luigi Bianchi to general relativity and revealed even more baroque classes of anisotropic solutions that expand, contract or pulsate differently in various directions. The most complex of these, categorized as Bianchi type-IX, turned out to have chaotic properties and was dubbed by physicist Charles Misner the “Mixmaster Universe” for its resemblance to the whirling, twirling kitchen appliance.

    Like a cake rising in a tray, while bubbling and quivering on the sides, the Mixmaster Universe expands and contracts, first in one dimension and then in another, while a third dimension just keeps expanding. Each oscillation is called a Kasner epoch. But then, after a certain number of pulses, the direction of pure expansion abruptly switches. The formerly uniformly expanding dimension starts pulsating, and one of those formerly pulsating starts uniformly expanding. It is as if the rising cake were suddenly turned on its side and another direction started rising instead, while the other directions, including the one that was previously rising, just bubbled.

    One of the weird things about the Mixmaster Universe is that if you tabulate the number of Kasner epochs in each era, before the behavior switches, it appears as random as a dice roll. For example, the universe might oscillate in two directions five times, switch, oscillate in two other directions 17 times, switch again, pulsate another way twice, and so forth—without a clear pattern. While the solution stems from deterministic general relativity, it seems unpredictable. This is called deterministic chaos.

    Could the early moments of the universe have been chaotic, and then somehow regularized over time, like a smoothed-out pudding? Misner initially thought so, until he realized that the Mixmaster Universe couldn’t smooth out on its own. However, it could have started out “lopsided,” then been stretched out during an era of ultra-rapid expansion called inflation until its irregularities were lost from sight.

    As cosmologists have collected data from instruments such as the Hubble Space Telescope, Planck Satellite, and WMAP satellite (now retired), the bulk of the evidence supports the idea that our universe is indeed isotropic.

    NASA/ESA Hubble


    But a minority of researchers have used measurements of the velocities of galaxies and other observations, such as an odd alignment of temperature fluctuations in the cosmic microwave background dubbed the “Axis of Evil,” to assert that the universe could be slightly irregular after all.

    For example, starting in 2008, Alexander Kashlinsky, a researcher at NASA’s Goddard Space Flight Center, and his colleagues have statistically analyzed cosmic microwave background data gathered first by the WMAP satellite and then by the Planck satellite to show that, in addition to their motion due to cosmic expansion, many galaxy clusters seem to be heading toward a particular direction on the sky. He dubbed this phenomenon “dark flow,” and suggested that it is evidence of a previously-unseen cosmic anisotropy known as a “tilt.” Although the mainstream astronomical community has disputed Kashlinsky’s conclusion, he has continued to gather statistical evidence for dark flow and the idea of tilted universes.

    Whether or not the universe really is “lopsided,” it is intriguing to study the rich range of solutions of Einstein’s general theory of relativity. Even if the preponderance of evidence today points to cosmic regularity, who knows when a new discovery might call that into question, and compel cosmologists to dust off alternative ideas. Such is the extraordinary flexibility of Einstein’s masterful theory: a century after its publication, physicists are still exploring its possibilities.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 9:33 am on September 23, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “Why Doesn’t Everyone Believe Humans Are Causing Climate Change?” 2014 But Important 



    19 Nov 2014
    Brad Balukjian

    Last week during his tour of Asia, President Barack Obama struck a new global warming deal with China. It was a landmark agreement that many expect could break the logjam that has kept the world’s two largest emitters largely on the sidelines of talks to curb greenhouse gas emissions. Both countries agreed to reduce carbon dioxide emissions, with the U.S. ramping up reductions starting in 2020 and China beginning cuts in 2030.

    Yet back home, President Obama still faces an electorate that doesn’t believe climate change is caused by humans. Only 40% of Americans attribute global warming to human activity, according to a recent Pew Research Center poll. This, despite decades of scientific evidence and the fact that Americans generally trust climate scientists.

    Despite decades of evidence, most Americans don’t believe that humans are causing climate change.

    That apparent cognitive dissonance has vexed two scientists in particular: Michael Ranney, a professor of education at the University of California, Berkeley, and Dan Kahan, a professor of law at Yale University. According to both, we haven’t been asking the right questions. But they disagree on what, exactly, those questions should be. If one or both of them are right, the shift in tone could transform our society’s debate over climate change.

    The Wisdom Deficit

    In the 1990s, Michael Ranney started informally asking people what they perceived to be the world’s biggest problem. He hadn’t set out to tackle environmental issues—he was first trained in applied physics and materials science before turning to cognitive psychology. But time and again, he heard “climate change” as an answer.

    Ranney had also noticed that while the scientific community had converged on a consensus, the general public had not, at least not in the U.S. The Climategate controversy in late 2009 over leaked e-mails between climate scientists and Oklahoma Senator James Inhofe’s insistence that anthropogenic global warming is a hoax are just two examples of the widespread conflict among the American public over what is causing the planet to warm.

    Ranney and his team say that a “wisdom deficit” is driving the wedge. Specifically, it’s a lack of understanding of the mechanism of global warming that’s been retarding progress on the issue. “For many Americans, they’re caught between a radio talk show host—of the sort that Rush Limbaugh is—and maybe a professor who just gave them a lecture on global warming. And if you don’t understand the mechanism, then you just have competing authorities, kind of like the Pope and Galileo,” he says. “Mechanism turns out to be a tie-breaker when there’s a contentious issue.”

    Despite the fact that the general public has been inundated with scientific facts related to global warming, Ranney says that our climate literacy is still not very high. In other words, though we may hear a lot about climate change, we don’t really understand it. It’s similar to how lots of people follow the ups and downs of the Dow Jones Industrial Average but don’t understand how those fluctuations relate to macroeconomic trends.

    Climate illiteracy isn’t just limited to the general public, either. Ranney recalls a scientist’s presentation at a recent conference which said that many university professors teaching global warming barely had a better understanding of its mechanism than the undergraduates they were teaching. “Even one of the most highly-cited climate change communicators in the world didn’t know the mechanism over dinner,” he says.

    One of the most common misconceptions, according to Ranney, is that light energy “bounces” off the surface of the Earth and then is trapped or “bounced back” by greenhouse gases. The correct mechanism is subtly different. Ranney’s research group has boiled it down to 35 words: “Earth transforms sunlight’s visible light energy into infrared light energy, which leaves Earth slowly because it is absorbed by greenhouse gases. When people produce greenhouse gases, energy leaves Earth even more slowly—raising Earth’s temperature.”

    When Ranney surveyed 270 visitors to a San Diego park on how global warming works, he found that exactly zero could provide the proper mechanism. In a second experiment, 79 psychology undergraduates at UC Berkeley scored an average of 3.8 out of 9 possible points when tested on mechanistic knowledge of climate change. In a third study, 41 people recruited through Amazon’s Mechanical Turk, an online marketplace for freelance labor, scored an average of 1.9 out of 9. (Study participants in Japan and Germany had a similarly poor showing, meaning it’s not just an American problem.) With every new experiment, Ranney found consistently low levels of knowledge.

    At least, he did at first. In his experiments, after the first round of questions, Ranney included a brief lecture or a written explanation on the correct mechanism behind global warming. He then polled the same people to see whether they understood it better and whether they accepted that humans are causing climate change. In the UC Berkeley study, acceptance rose by 5.4%; in the Mechanical Turk study, it increased by 4.7%. Perhaps most notably, acceptance increased among both conservatives and liberals. There was no evidence for political polarization.

    That doesn’t mean polarization doesn’t exist. It’s certainly true that liberals are more likely to accept anthropogenic global warming than conservatives. Myriad studies and surveys have found that. But political affiliation doesn’t always overwhelm knowledge when it becomes available—Ranney found no evidence for a difference between conservatives’ and liberals’ change in willingness to accept climate change after his “knowledge intervention.”

    Convinced that the key to acceptance is understanding the mechanism, Ranney created a series of no-frills videos of varying lengths in multiple languages explaining just that. More than 130,000 page views later, Ranney is not shy about his aims: “Our goal is to garner 7 billion visitors,” he says.

    Depolarizing Language

    Meanwhile, Dan Kahan says that it’s not a wisdom gap that’s preventing acceptance of human’s role in climate change, but the cultural politicization of the topic. People don’t need a sophisticated understanding of climate change, he says. “They only need to be able to recognize what the best available scientific evidence signifies as a practical matter: that human-caused global warming is initiating a series of very significant dynamics—melting ice, rising sea levels, flooding, heightened risk of serious diseases, more intense hurricanes and other extreme weather events—that put us in danger.”

    According to Kahan, the problem lies in the discourse around the issue. When people are asked about their acceptance of anthropogenic global warming, he says the questions tend to confound what people know with who they are and the cultural groups they identify with. In those circumstances, declaring a position on the issue becomes more a statement of cultural identity than one of scientific understanding.

    Kahan’s ideas are based on his own surveys of the American public. In one recent study of 1,769 participants recruited through the public opinion firm YouGov, he assessed people’s “ordinary climate science intelligence” with a series of climate change knowledge questions. He also collected demographic data, including political orientation. Kahan found no correlation between one’s understanding of climate science and his or her acceptance of human-caused climate change. Some people who knew quite a bit on the topic still didn’t accept the premise of anthropogenic climate change, and vice versa. He also found that, as expected, conservatives are less likely to accept that humans are changing the climate.

    Unlike Ranney, Kahan did find strong evidence for polarization. The more knowledgeable a conservative, for example, the more likely they are to not accept human-caused global warming. Kahan suggests that these people use their significant analytical skills to seek evidence that aligns with their political orientation.

    Still, despite many people’s strong reluctance to accept anthropogenic global warming, cities and counties in places like southeast Florida have gone ahead and supported practices to deal with global warming anyway. Kahan relates one anecdote in which state and local officials in Florida have argued for building a nuclear power generator higher than planned because of sea-level rise and storm surge projections. But if you ask these same people if they believe in climate change, they’ll say, “no, that’s something entirely different!” Kahan says.

    Kahan’s not exactly sure why some people act in ways that directly contradict their own beliefs—he laughs and verbally shrugs when asked—but he has some ideas. The leading one is the notion of dualism, when someone mentally separates two apparently conflicting ideas and yet feels no need to reconcile them. This happens on occasion with religious medical doctors, he says, who reject evolution but openly admit to using the principles of evolution in their work life.

    Whatever the cause, Kahan thinks the case of southeast Florida is worth studying. There, the community has been able to examine the scientific evidence for climate change and take action despite widespread disagreement on whether humans are actually driving climate change. The key, Kahan says, is that they have kept politics out of the room.

    Two Sides of the Same Coin

    Ranney and Kahan, much like the skeptics and supporters of human-caused climate change, question each other’s conclusions. Kahan is skeptical that Ranney’s approach can be very effective on a large scale. “I don’t think it makes sense to believe that if you tell people in five-minute lectures about climate science, that it’s going to solve the problem,” he says. He also questions the applicability of Ranney’s experiments, which have mostly included students and Mechanical Turk respondents. “The people who are disagreeing in the world are not college students,” he says. “You’re also not in a position to give every single person a lecture. But if you did, do you think you’d be giving that lecture to them with Rush Limbaugh standing right next to them pointing out that they’re full of shit? Because in the world, that’s what happens.”

    Hundreds of millions of in-person lectures would certainly be impossible, but Ranney has high hopes for his online videos. Plus, Ranney points out that Kahan’s studies are correlative, while his are controlled experiments where causation can be more strongly inferred. In addition, most of the measures of climate science knowledge that Kahan uses in his research focus on factual knowledge rather than mechanism. (For example, the multiple choice question, “What gas do most scientists believe causes temperatures in the atmosphere to rise?”). Ranney’s work, on the other hand, is all about mechanism.

    Despite their apparent disagreement, Ranney thinks the debate is a bit of a false dichotomy. “It’s certainly the case that one’s culture has a significant relationship to whether or not you accept [anthropogenic global warming], but that doesn’t mean your global warming knowledge isn’t also related to it. And it doesn’t mean you can’t overcome a cultural predilection with more information,” Ranney says. “There were a lot of things that were culturally predicted, like thinking we were in a geocentric universe or that smoking was fine for you or that the Earth was flat—all manner of things that eventually science overcame.”

    Perhaps Ranney and Kahan are on the same team after all—they would probably agree that, at the end of the day, both knowledge and culture matter, and that we’d be well-served to focus our energy on how to operationally increase acceptance of anthropogenic global warming. “Whatever we can do now will be heroic for our great-grandchildren, and whatever we do not do will be infamous,” Ranney says.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 11:13 am on September 6, 2015 Permalink | Reply
    Tags: NOVA, Quantum entanglement

    From NOVA: “Quantum Physics: How Big Can Entanglement Get?” 



    04 Sep 2015
    Andrew Zimmerman Jones

    Our intuition has evolved to deal with the macroscopic world: the world of things you can hold in your hand and see with your naked eyes. But many of the discoveries of the last century, particularly those in quantum physics, have called into question virtually all of those physical intuitions. Even Albert Einstein, whose intuitions were often spot-on, couldn’t bridge the gap between his intuition and the predictions of quantum theory, particularly when it came to the notion of quantum entanglement. Yet we’ve been able to make some peace with quantum mechanics because, for most intents and purposes, its strangest effects are only felt on the micro scale. For everyday interactions with ordinary objects, our intuition still works just fine.

    Image: Flickr user Domiriel, adapted under a Creative Commons license.

    Now, though, physicists are entangling bigger and bigger objects—not just single particles but collections of thousands of atoms. This seemingly-esoteric research could have real technological implications, potentially doubling the accuracy of atomic clocks used in applications such as GPS. But it also challenges the artificial barrier we’ve set up between the microscopic scale, where quantum mechanics rules, and the macroscopic world, where we can count on our intuition. Quantum weirdness is going big.

    Entanglement 101

    What is entanglement, anyway, and why did it get Einstein tied in knots? For a mundane analogy, imagine you put a red piece of paper in one opaque envelope and a green piece of paper in an identical envelope. Now, randomly hand an envelope to each of two kids, Peter and Macy, and have them walk in opposite directions. There is no way to know which kid has which color, but you can say with certainty that one of the following two “states” describes the situation.

    State 1: Peter has the red paper and Macy has the green paper
    State 2: Peter has the green paper and Macy has the red paper

    Since the state of each piece of paper is absolutely tied into the state of the other one, the red paper and the green paper are entangled with each other. If Peter looks into his envelope and finds the red paper, then you instantly know that Macy has the green paper, because you must be in State 1. The papers represent an entangled system, because they can’t be fully described independently of each other. If you describe Peter’s paper as “either red or green” and Macy’s paper as “either green or red,” but don’t connect their two situations together, then you have an incomplete description of the system.

    At this point, you’re likely thinking: “So what?” And rightly so. As with most things in the universe, this entanglement gets a lot stranger when you stick the word “quantum” in front of it. In the mundane example, the entanglement came about purely because of our ignorance. We didn’t know for sure which envelope each paper was in, but we were certain that they were really in those envelopes.

    Quantum mechanics, however, does not seem to work if you try to hold this level of certainty. So let’s try the same scenario, but instead of regular paper, imagine that we are instead using some “quantum paper” that (though not real) obeys the traditional rules of quantum mechanics. In such a quantum system, Peter’s unseen quantum paper exists in a bizarre state where it is both red and green at the same time. Macy’s quantum paper is similarly in such an undetermined state. This isn’t to say that the paper is a color that is a mix of red and green, but rather that each piece of paper exists in a superposition of states where it is both “a red paper” and “a green paper,” even though it is not in a state that makes it “a red and green paper.”

    That is, of course, until someone actually looks in the envelope to determine the state (called “collapsing the wavefunction” in quantum terminology). If Peter looks in his new quantum envelope and sees a green paper, quantum physics would say that his paper has collapsed into the “green” state. But remember that his paper is entangled with Macy’s paper, so when his collapses into the “green” state the whole entangled system collapses into State 2.

    If you’re thinking that something sounds fishy here, you’re in good company, since that’s exactly what Albert Einstein thought when he and colleagues came up with this challenge to quantum mechanics. (Their version of this Einstein-Podolsky-Rosen paradox or EPR paradox involved decaying particles rather than hypothetical quantum paper.) The idea that, by looking at his quantum paper, Peter could have any effect on Macy’s quantum paper struck Einstein as bizarre, and he ultimately dubbed it “spooky action at a distance” because it seemed to violate the rule that nothing could communicate faster than the speed of light.

    Spooky or not, a century of physics research has shown that this does appear to be what happens. At the moment Peter observes the color of his quantum paper, Macy’s quantum paper ceases to be both red and green and instantly becomes definitely one or the other. Because the two pieces of quantum paper are entangled, this would be true no matter where in the universe Macy went with her paper.
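
    In the standard formalism, the quantum-paper pair is just a two-qubit entangled state, (|red, green⟩ + |green, red⟩)/√2. The sketch below (Python with NumPy, using my own red/green labeling of the basis states; none of this code comes from the article) samples joint measurements of that state and shows the anticorrelation described above: the two papers always come out opposite colors, even though each color on its own is random.

        import numpy as np

        # Two-qubit "quantum paper" state: (|red,green> + |green,red>) / sqrt(2),
        # with |red> = |0> and |green> = |1>.
        red_green = np.kron([1, 0], [0, 1])    # |0>|1>
        green_red = np.kron([0, 1], [1, 0])    # |1>|0>
        psi = (red_green + green_red) / np.sqrt(2)

        # Joint measurement in the color basis: outcome k in 0..3 encodes
        # (Peter's bit, Macy's bit) = (k // 2, k % 2).
        probs = np.abs(psi) ** 2
        rng = np.random.default_rng(0)
        colors = {0: "red", 1: "green"}
        for k in rng.choice(4, size=10, p=probs):
            print(colors[k // 2], colors[k % 2])   # always opposite colors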

    What Can Entangle?

    Of the many deep and profound questions related to quantum entanglement, the size of the entangled system is one that has always been of interest to physicists. The original EPR paradox only described pairs of individual particles, not pieces of paper or even molecules. So, what happens when you try to scale entanglement up to bigger objects? Maintaining a superposition of states is a very delicate operation. Most particle interactions cause the superposition of states to collapse into a single state, a process called “decoherence.” Even a stray light particle, a photon, could knock the whole entangled system out of its superposition state and into a single definitive state. This is why we don’t experience this quantum behavior in our everyday life, because pretty much everything we experience has already undergone decoherence.

    Or has it? One worldview, called the “many worlds interpretation” of quantum physics, takes the superposition of states as seriously as possible. It suggests that decoherence never actually happens, that the array of possible states never collapses into one single state. Each possible state is “real,” though they don’t all manifest themselves in the reality that we experience. We experience merely one limb on a branching tree of possibilities. If Macy looks in her envelope first and finds a green paper (State 1), there exists another branch where she finds a red one (State 2), and because her paper is entangled with his, Peter will always find his paper in the corresponding state.

    While many physicists find the many-worlds interpretation an intriguing prospect, it doesn’t actually solve the question of how big we can make a system that exhibits this bizarre superposition behavior in a way that is perceptible to us. Some things seem to be in a superposition and some things don’t, even if the many worlds interpretation applies. How far can we push that boundary in our experiments? Is it possible for non-microscopic objects to demonstrate quantum behaviors?

    Creating entangled systems has always been tricky. Though the EPR paradox was proposed in 1935, it wasn’t until the early 1980s that scientists were able to actually test it with a real physical system. Entangling more than a handful of particles was incredibly difficult, but technology gradually improved. In 2005, when researchers created an entanglement among six atoms, it was considered a major breakthrough.

    Because of the delicate nature of quantum systems, it is key to keep the entanglement safe from random motion of the particles, which can cause a collapse. This has traditionally involved cooling the atoms to limit motion, but in recent years scientists have even been able to entangle objects at room temperature. In a 2011 paper, physicists described an experiment where two tiny diamonds released vibrational energy in an entangled system. The fact that these larger systems can display properties of entanglement has highlighted the challenge in drawing clear lines between the “quantum” and “non-quantum” worlds.

    Only a decade after six-atom entangled systems were considered cutting-edge, the number to beat was 100 atoms entangled together. That record seems to have now been blown out of the water, as a March 2015 paper in the journal Nature indicated a record of 3,000 cooled atoms entangled together, with the researchers stating with confidence that they thought they could scale their process up to millions of atoms.

    More significantly, being able to create complex, stable entangled systems is an essential component in the development of quantum computers. First proposed by Nobel laureate Richard Feynman in the 1980s, quantum computers would exploit the bizarre behavior of quantum superposition to perform calculations exponentially more quickly than classical computers. It would represent an astounding revolution in information technology, if the technical hurdles can be overcome to make it a reality.

    When all is said and done, one thing seems clear: there is more to quantum reality than was dreamt of in even Einstein’s philosophy.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 3:35 pm on September 2, 2015 Permalink | Reply
    Tags: , , NOVA   

    From NOVA: “Venom of Aggressive Brazilian Wasp Rips Holes in Cancer Cells” 



    Polybia paulista

    Until a decade ago, Polybia paulista wasn’t well known to anyone other than entomologists and the hapless people it stung in its native Brazil. But then, a number of research groups discovered a series of remarkable qualities all concentrated in the aggressive wasp’s venom.

    One compound in particular has stood out for its antimicrobial and anti-cancer properties. Polybia-MP1, a peptide, or a string of amino acids, is different from most antibacterial peptides in that it’s toxic only to bacteria and not to red blood cells. MP1 punches through bacteria’s cell membranes, causing them to die a leaky death. Scientists had also discovered that MP1 was good at inhibiting the spread of bladder and prostate cancer cells and could kill leukemia cells, but they didn’t know why it was so toxic only to tumor cells.

    Well, now they think they have an idea. How MP1 kills cancer cells turns out to be very similar to how it kills bacteria—by causing them to leak to death. MP1 targets two lipids—phosphatidylserine, or PS, and phosphatidylethanolamine, or PE—that cancer cells display on the outer surface of their membranes. Here’s Kiona Smith-Strickland, writing for Discover:

    MP1’s destruction of a cancer cell, researchers say, has two stages. First, MP1 bonds to the outer surface of the cell, and then it opens holes or pores in the membrane big enough to let the cell’s contents leak out. PS is crucial for the first part: seven times more MP1 molecules bound to membranes with PS in their outer layer. And PE is crucial for the second: Once the MP1 molecules worked their way into the membrane, they opened pores twenty to thirty times larger than in membranes without PE.

    Even better, healthy cells have neither PS nor PE on the outside of their membranes. Rather, they keep these lipids on the inside, a key difference from cancer cells that should shield healthy tissue from MP1’s damaging effects. In other words, MP1 could make an ideal chemotherapy agent.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 10:49 am on August 23, 2015 Permalink | Reply
    Tags: , , , NOVA   

    From NOVA: “The Shadow of a Black Hole” 



    21 Aug 2015
    Matthew Francis

    Event Horizon Telescope
    Part of Event Horizon Telescope [EHT]

    Event Horizon Telescope map
    EHT map

    The invisible manifests itself through the visible: so say many of the great works of philosophy, poetry, and religion. It’s also true in physics: we can’t see atoms or electrons directly and dark matter seems to be entirely transparent, yet this invisible stuff makes and shapes the universe as we know it.

    Then there are black holes: though they are the most extreme gravitational powerhouses in the cosmos, they are invisible to our telescopes. Black holes are the unseen hand steering the evolution of galaxies, sometimes encouraging new star formation, sometimes throttling it. The material they send jetting away changes the chemistry of entire galaxies. When they take the form of quasars and blazars, black holes are some of the brightest single objects in the universe, visible billions of light-years away. The biggest supermassive black holes are billions of times as massive as the Sun. They are engines of creation and destruction that put the known laws of physics to their most extreme test. Yet, we can’t actually see them.

    A simulation of superheated material circling the black hole at the center of the Milky Way. Credit: Scott C. Noble, The University of Tulsa

    A black hole is a concentration of mass so dense that anything that gets too close—stars, planets, atoms, light—becomes trapped by its gravity. The point of no return is called the event horizon, and it forms a sort of imaginary shell around the black hole itself. But event horizons are very small: the event horizon of a supermassive black hole could fit comfortably inside the solar system (comfortably for the black hole, that is, not for us). That might sound big, but on cosmic scales, it’s tiny: the black hole at the center of the Milky Way spans just 10 billionths of a degree on the sky. (For comparison, the full Moon is about half a degree across, and the Hubble Space Telescope can see objects as small as 13 millionths of a degree.)
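    To see roughly where a figure like 10 billionths of a degree comes from, you can combine the Schwarzschild radius formula, r_s = 2GM/c^2, with a small-angle estimate (apparent angle ≈ physical size / distance). The sketch below uses standard textbook values that are not given in this article, roughly four million solar masses for the Milky Way’s central black hole and about 26,000 light-years to the galactic center, so treat the output as an order-of-magnitude check rather than a measurement.

    import math

    # Assumed values (not from the article): Sgr A* mass and distance.
    G, c = 6.674e-11, 2.998e8        # gravitational constant, speed of light (SI)
    M_sun, ly = 1.989e30, 9.461e15   # solar mass in kg, light-year in meters

    M = 4.0e6 * M_sun                # ~4 million solar masses
    d = 26_000 * ly                  # ~26,000 light-years

    r_s = 2 * G * M / c**2             # Schwarzschild radius
    shadow_diam = math.sqrt(27) * r_s  # photon-ring "shadow" is ~5.2 r_s across

    for label, size in [("event horizon", 2 * r_s), ("shadow", shadow_diam)]:
        theta_deg = math.degrees(size / d)   # small-angle approximation
        print(f"{label}: {size / 1e9:.1f} million km, {theta_deg:.1e} degrees")
    # Both come out in the billionths-of-a-degree range, consistent
    # with the figure quoted above.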

    NASA Hubble Telescope
    NASA/ESA Hubble

    Both the size and nature of the event horizon make it difficult to observe black holes directly, though indirect observations abound. In fact, though black holes themselves are strictly invisible, their surrounding regions can be extremely bright. Many luminous astronomical objects produce so much light from such a small region of space that they can’t be anything other than black holes, even though our telescopes aren’t powerful enough to pick out the details. In addition, the stars at the center of the Milky Way loop in tightly enough to show they’re orbiting an object millions of times the mass of the Sun, yet smaller than the solar system. No single object, other than a black hole, can be so small and yet so massive. Even though we know black holes are common throughout the universe—nearly every galaxy has at least one supermassive black hole in it, plus thousands of smaller specimens—we haven’t confirmed that these objects have event horizons. Since event horizons are a fundamental prediction of general relativity (and make black holes what they are), demonstrating their existence is more than just a formality.

    However, confirming event horizons would take a telescope the size of the whole planet. The solution: the Event Horizon Telescope (EHT), which links observatories around the world to mimic the pinpoint resolution of an Earth-sized scope. The EHT currently includes six observatories, many of which consist of multiple telescopes themselves, and two more observatories will be joining soon, so that EHT will have components in far-flung places from California to Hawaii to Chile to the South Pole. With new instruments and new observations, EHT astronomers will soon be able to study the fundamental physics of black holes for the first time. Yet even with such a powerful team of telescopes, the EHT’s vision will only be sharp enough to make out two supermassive black holes: the one at the center of our own Milky Way, dubbed Sagittarius A*, and the one in the M87 galaxy, which weighs in at nearly seven billion times the mass of the sun.
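    The reason linking dishes across the planet helps is the familiar diffraction limit: an interferometer’s finest resolvable angle is roughly the observing wavelength divided by its longest baseline. Here is a back-of-the-envelope check, assuming the roughly 1.3 mm wavelength the EHT observes at and an Earth-diameter baseline (both standard figures, neither stated in this article):

    import math

    wavelength = 1.3e-3      # meters; EHT observing wavelength (assumed ~1.3 mm)
    baseline = 1.27e7        # meters; roughly the diameter of the Earth

    theta_rad = wavelength / baseline       # diffraction-limited resolution
    theta_deg = math.degrees(theta_rad)
    theta_uas = theta_rad * 206_265 * 1e6   # radians -> microarcseconds

    print(f"~{theta_deg:.1e} degrees (~{theta_uas:.0f} microarcseconds)")
    # ~5.9e-09 degrees, or about 20 microarcseconds: comparable to the
    # apparent size of the Sgr A* and M87 black holes, and far finer
    # than Hubble's resolution quoted earlier.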

    The theory of general relativity predicts that the intense gravity at the event horizon should bend the paths of matter and light in distinct ways. If the light observed by the EHT matches those predictions, we’ll know there’s an event horizon there, and we’ll also be able to learn something new about the black hole itself.

    The “gravitational topography” of spacetime near the event horizon depends on just two things: the mass of the black hole and how fast it is spinning. The event horizon diameter of a non-spinning black hole is roughly six kilometers for each solar mass. In other words, a black hole the mass of the sun (which is smaller than any we’ve yet found) would be six kilometers across, and one that’s a million times the mass of the Sun would be six million kilometers across.

    If the black hole is spinning, its event horizon will be flattened at the poles and bulging at the equator, and it will be surrounded by a region called the ergosphere, where gravity drags matter and light around in a whirlpool. Everything crossing the border into the ergosphere orbits the black hole, no matter how fast it tries to move, though it can still, in principle, escape without crossing the event horizon. The ergosphere will measure six kilometers across the equator for each solar mass inside the black hole, and the event horizon will be smaller, depending on just how fast the black hole is rotating. If the black hole has maximum spin, dragging matter near the event horizon at close to light speed, the event horizon will be half the size of that of a non-spinning black hole. (Spinning black holes are smaller because they convert some of their mass into rotational energy.)
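    Both rules of thumb, six kilometers of horizon per solar mass and the factor-of-two shrinkage at maximal spin, follow from the Kerr horizon formula of general relativity, r_+ = (GM/c^2)(1 + sqrt(1 - a^2)), where a is the dimensionless spin (0 for no spin, 1 for maximal). A quick numerical check, with the formula taken from the standard theory rather than from this article:

    import math

    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

    def horizon_diameter_km(mass_solar, spin=0.0):
        """Kerr event-horizon (coordinate) diameter in kilometers.

        spin is the dimensionless spin parameter a, from 0 (non-spinning)
        to 1 (maximal spin)."""
        M = mass_solar * M_sun
        r_plus = (G * M / c**2) * (1 + math.sqrt(1 - spin**2))
        return 2 * r_plus / 1e3

    print(horizon_diameter_km(1))           # ~5.9 km: the "6 km per solar mass" rule
    print(horizon_diameter_km(1e6))         # ~5.9 million km for a million solar masses
    print(horizon_diameter_km(1, spin=1))   # ~3.0 km: half the non-spinning size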

    When the EHT astronomers point their telescopes toward the black hole at the center of the Milky Way, they will be looking for a faint ring of light around a region of darkness, called the black hole’s “shadow.” That light is produced by matter that is circling at the very edge of the event horizon, and its shape and size are determined by the black hole’s mass and spin. Light traveling to us from the black hole will also be distorted by the extreme gravitational landscape around the black hole. General relativity predicts how these effects should combine to create the image we see at Earth, so the observations will provide a strong test of the theory.

    If observers can catch sight of a blob of gas caught in the black hole’s pull, that would be even more exciting. As the blob orbits the black hole at nearly the speed of light, we could watch its motion and disintegration in real time. As with the ring, the fast-moving matter emits light, but from a particular place near the black hole rather than from all around the event horizon. The emitted photons are also influenced by the black hole, so timing their arrival from various parts of the blob’s orbit would give us a measure of how both light and matter are affected by gravity. The emission would even vary in a regular way: “We’d be able to see it as kind of a heartbeat structure on a stripchart recorder,” says Shep Doeleman, one of the lead researchers on the EHT project.

    Event Horizon Telescope astronomers have already achieved resolutions nearly good enough to see the event horizon of the black hole at the center of the Milky Way. With the upgrades and addition of more telescopes in the near future, the EHT should be able to see if the event horizon size corresponds to what general relativity predicts. In addition, observations of supermassive black holes show that at least some may be spinning at close to the maximum rate, and the EHT should be able to tell that too.

    Black holes were long considered a theorist’s toy, ripe for speculation but possibly not existing in nature. Even after real black holes were discovered, many doubted we would ever be able to observe any of their details. The EHT will bring us as close as possible to seeing the invisible.

    Contributing institutes

    Some contributing institutions are:

    Academia Sinica Institute for Astronomy and Astrophysics
    Arizona Radio Observatory, University of Arizona
    Caltech Submillimeter Observatory
    Combined Array for Research in Millimeter-wave Astronomy
    European Southern Observatory
    Georgia State University
    Goethe-Universität Frankfurt am Main
    Greenland Telescope
    Harvard–Smithsonian Center for Astrophysics
    Haystack Observatory, MIT
    Institut de Radio Astronomie Millimetrique
    Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE)
    Joint Astronomy Centre – James Clerk Maxwell Telescope
    Large Millimeter Telescope
    Max Planck Institut für Radioastronomie
    National Astronomical Observatory of Japan
    National Radio Astronomy Observatory
    National Science Foundation
    University of Massachusetts, Amherst
    Onsala Space Observatory
    Perimeter Institute
    Radio Astronomy Laboratory, UC Berkeley
    Radboud University
    Shanghai Astronomical Observatory (SHAO)
    Universidad de Concepción
    Universidad Nacional Autónoma de México (UNAM)
    University of California – Berkeley (RAL)
    University of Chicago (South Pole Telescope)
    University of Illinois Urbana-Champaign
    University of Michigan

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 9:51 pm on August 19, 2015 Permalink | Reply
    Tags: , Dams in the US, NOVA   

    From NOVA: “The Undamming of America” 



    12 Aug 2015
    Anna Lieb

    Gordon Grant didn’t really get excited about the dam he blew up until the night a few weeks later when the rain came. It was October of 2007, and the concrete carnage of the former Marmot Dam had been cleared. A haphazard mound of earth was the only thing holding back the rising waters of the Sandy River. But not for long. Soon the river punched through, devouring the earthen blockade within hours. Later, salmon would swim upstream for the first time in 100 years.

    Grant, a hydrologist with the U.S. Forest Service, was part of the team of scientists and engineers who orchestrated the removal of Marmot Dam. Armed with experimental predictions, Grant was nonetheless astonished by the reality of the dam’s dramatic ending. For two days after the breach, the river moved enough gravel and sand to fill up a dump truck every ten seconds. “I was literally quivering,” Grant says. “I got to watch what happens when a river gets its teeth into a dam, and in the course of about an hour, I saw what would otherwise be about 10,000 years of river evolution.”

    Over 3 million miles of rivers and streams have been etched into the geology of the United States, and many of those rivers flow into and over somewhere between 80,000 and two million dams. “We as a nation have been building, on average, one dam per day since the signing of the Declaration of Independence,” explains Frank Magilligan, a professor of geography at Dartmouth College. Just writing out the names of inventoried dams gives you more words than Steinbeck’s novel East of Eden.

    Some of the names are charming: Lake O’ the Woods Dam, Boys & Girls Camp # 3 Dam, Little Nirvana Dam, Fawn Lake Dam. Others are vaguely sinister: Dead Woman Dam, Mad River Dam, Dark Dam. There’s the unappetizing Kosciusko Sewage Lagoon Dam, the fiercely specific Mrs. Roland Stacey Lake Dam and the disconcertingly generic Tailings Pond #3 Dam. There’s a touch of deluded grandeur in the Kingdom Bog Dam and an oddly suggestive air to the River Queen Slurry Dam.

    The names arose over the course of a long and tumultuous relationship. We’ve built a lot of dams, in a lot of places, for a lot of reasons—but lately, we’ve gone to considerable lengths to destroy some of them. Marmot Dam is just one of a thousand that have been removed from U.S. rivers over the last 70 years. Over half the demolitions occurred in the last decade. To understand this flurry of dynamiting and digging—and whether it will continue—you have to understand why dams went up in the first place and how the world has transformed around them.

    The Dams We Love

    A sedate pool of murky water occupies the space between a pizzeria, a baseball field, and the oldest dam in the United States, built in 1640 in what is now Scituate, Massachusetts.

    When a group of settlers arrived in the New World, the first major structure they built was usually a church. Next, they built a dam. The dams plugged streams and set them to work, turning gears to grind corn, saw lumber, and carve shingles. During King Philip’s War in 1676, the Wampanoag tribe attacked colonists’ dams and millhouses, recognizing that without them, settlers could not eat or put roofs over their heads.

    Robert Chessia of the Scituate Historical Society shows me a map of the area, circa 1795. On every winding line indicating a stream, there is a triangle and a curly script label: “gristmill.”

    Map of area surrounding Scituate, Massachusetts, circa 1795.

    In the 19th century, dams controlled the rivers that powered the mills that produced goods like flour and textiles. Some dams are historical structures, beautiful relics of centuries past. Not far from Scituate stands a dam owned by Mordecai Lincoln, great-great grandfather of Abraham Lincoln. Some dams have been incorporated into local identity—as in the town of LaValle, Wisconsin, which dubbed itself the “best dam town in Wisconsin.”

    Before refrigerators, frozen, dammed streams offered up chunks of ice to be sawed out and saved for the summer. Before skating rinks, we skated over impounded waters.

    In the 20th century, the pace accelerated. We completed 10,000 new dams between 1920 and 1950 and 40,000 between 1950 and 1980. Some were marvels. Grand Coulee Dam contains enough concrete to cover the entirety of Manhattan with four inches of pavement. Hoover Dam is tall enough to dwarf nearly every building in San Francisco. Glen Canyon Dam scribbled a 186-mile-long lake in the arid heart of a desert.

    Grand Coulee Dam

    Behind those big new dams were big new dreams. A 1926 dam on the Susquehanna River produced so much hydroelectric power that the owners needed to set up a network of wires to sell the electricity far and wide. This became the PNJ Interchange, “the seed of the electricity grid as we know it,” explains Martin Doyle, a professor of river science and policy at Duke University. Grand Coulee Dam, which stopped the Columbia River in 1942, supplied vast quantities of electrical power that turned aluminum into airplanes and uranium into plutonium. President Harry Truman said that power from Grand Coulee turned the tide of World War II.

    Yet only 3% of dams in the US are hydropower facilities—together supplying just under 7% of U.S. power demand. Most dams were built for other reasons. They restrained rivers to control floods and facilitate shipping. They stored enormous volumes of water for irrigating the desert and in doing so reshaped the landscape of half the country. “The West developed through the construction of dams because it allowed the control of water for development,” says Emily Stanley, a limnologist at the University of Wisconsin, Madison.

    But for most dams, none of these are their primary purpose. Nearly one-third of dams in the national inventory list “recreation” as their raison d’être, a rather vague description. I inquired about this with the Army Corps of Engineers, which maintains the inventory, and their reply merely offered a cursory explanation of “purpose” codes in the database. Mark Ogden, the project manager for the Association of State Dam Safety Officials, says many small private dams were indeed built for recreational activities like fishing.

    Grant, Magilligan, and Doyle have a different theory, however. Dams may get the recreational label, Doyle says, “when we have no idea what they are for now, and we can’t stitch together what they were for when they were built.” But while many of the original uses have disappeared, the dams have not.

    The Dams We Love to Hate

    In the very center of conservationist hell, mused John McPhee [Encounters With the Archdruid, Part 3, A River, 1971, about David Brower and Floyd Dominy], surrounded by chainsaws and bulldozers and stinking pools of DDT, stands a dam. He’s not the only one to feel that way. “They take away the essence of what a river is,” Stanley says.

    A dam fragments a watershed, Magilligan explains. A flowing river carries sediment and nutrients downstream and allows flora and fauna to move freely along its length. When a dam slices through this moving ecosystem, it slows and warms the water. In the reservoir behind the dam, lake creatures and plants start to replace the former riverine occupants. Sediment eddies and drops to the bottom, rather than continuing downstream.

    Migratory fish can be visceral reminders of how a dam changes a river. Salmon hatch in freshwater rivers, swim out to sea, and then return to their birthplace to reproduce, a circle-of-life story that has captured people’s imaginations for generations. At the Elwha Dam in Washington state, Martin Doyle recalls looking down to see salmon paddling against the base of the dam, trying in vain to reach their spawning grounds upriver. Roughly 98% of the salmon population on the Elwha River disappeared after the dam went up, says Amy East, a research geologist at the U.S. Geological Survey (USGS). Doyle points out that salmon are just one of many species affected by dams. Migratory shad, mussels, humpback chub, herring—the list goes on. He notes that the charismatic salmon are a more popular example than the “really butt-ugly fish we’ve got on the East Coast.”

    Dams not only upend ecosystems, they also erase portions of our culture and history. Gordon Grant points out that on the Columbia River, people fished at Celilo Falls for thousands of years, making it one of the oldest continually inhabited places in the country. The falls are now covered in 100 feet of water at the bottom of the reservoir behind the Dalles Dam.

    Hundreds of archaeological sites, going back 10,000 years, dot the riverbanks and the walls of the Grand Canyon. For millennia, East explains, many of these potsherds, dwellings, and other artifacts had been protected by a covering of sand. But that sand is disappearing because the upstream Glen Canyon Dam traps most of the would-be replacement sand coming down the Colorado River. Furthermore, snowmelt used to swell the river with monstrous spring floods, redistributing sediment throughout the canyon. Now, demand for power in Las Vegas and Phoenix regulates the flow. “They turn the river on when people are awake and turn the river off when people go to sleep,” explains Jack Schmidt, a river geomorphologist at Utah State University. Without “gangbuster” spring floods, he says, the sandbars are disappearing and the archaeological sites are increasingly exposed. “There’s a lot of human history in the river corridor, and unfortunately a lot of it is being eroded away in the modern era,” East says.

    As the ecological and cultural toll dams take became clearer, our relationship with them started to show its cracks. Fights over dams grew increasingly loud. At the turn of the century, John Muir and a small band of hirsute outdoorsmen opposed construction of the O’Shaughnessy Dam in the Hetch Hetchy Valley of Yosemite. They failed. By the 1960s, pricy full-page ads in the New York Times opposed the Echo Park Dam on a tributary of the Colorado. They succeeded. Echo Park Dam was never built—but downstream, Glen Canyon Dam went up instead, inspiring new levels of resentment and vitriol among dam opponents. In a 1975 novel by cantankerous conservationist Edward Abbey, environmental activists scheme to blow up Glen Canyon Dam. The novel’s title entered the popular lexicon as a term for destructive activism: “monkeywrenching.”

    Abbey once described his enemies as “desk-bound men and women with their hearts in a safe deposit box, and their eyes hypnotized by desk calculators.” Now, 40 years later, Abbey might be surprised to learn that it’s men and women crunching numbers at desks who actually incite the dynamiting of dams.

    O’Shaughnessy Dam in Hetch Hetchy Valley, California

    Why They’re Coming Down

    The decision to remove a dam is surprisingly simple. Ultimately, it comes down to dollars. “The bottom line is usually the bottom line,” says Jim O’Connor, a research geologist at USGS. As dams age, they often require expensive maintenance to comply with safety regulations or just to continue functioning. Sometimes, environmental issues drive up the cost; for example, the Endangered Species Act may require the owner to provide a way for fish to get past the dam. Consideration for Native American tribal rights may also influence decisions over whether to keep or kill a dam. “In my experience, economics lurks behind virtually all decisions to take dams off or to keep ’em. But the nature of what’s driving the economics is changing,” says Grant, the Forest Service hydrologist. Dam owners—who are overwhelmingly private, but also include state, local, and federal governments—have to weigh repair costs against the benefits the dam provides.

    In some cases, those benefits don’t exist. The age of waterwheel-powered looms and saws is long gone, but thousands of forlorn mill ponds still linger. “You’re left with a structure that isn’t doing anything for anybody and is quietly and happily rotting in place,” says Gordon Grant. Others, like Kendrick Dam in Vermont, supplied blocks of ice. “We’ve got refrigerators now,” Magilligan says. “This one should probably come out.”

    This old mill in Tennessee is now a restaurant.

    Other dams don’t live long enough to become obsolete. The designers of California’s Matilija Dam, which was completed in 1948, said it would last for 900 years, says Toby Minear, a USGS geologist. But the reservoir behind Matilija silted up so quickly that within 50 years it was 95% full of sediment. Though the surrounding community still wanted its water, the dam could no longer provide storage. Congress approved a removal plan in 2007, but the estimated $140 million project has stalled after proving more expensive and technically challenging than anticipated.

    For most dams, the story is more complicated. Two dams on the Elwha River generated hydropower, but when the owner was legally required to add fish ladders—a series of small waterfalls that salmon can use to easily scale the dam—future sales of hydroelectricity paled in comparison to the repair cost. Furthermore, the neighboring Elwha Tribe had fought for decades to restore the salmon catch—half of which legally belonged to them. The owner opted to sell the dams to the federal government in 1992, and after nearly two decades of study and negotiation, the Department of the Interior, the Elwha Tribe, and the surrounding community agreed on a removal plan. In September 2011, construction crews began breaking up the two largest dams ever removed from U.S. rivers.

    Beginning of the End

    “Removing these big, concrete riverine sarcophagi, and salmon swimming past that gaping hole—that is the mental image that people will always have of dam removal,” Doyle says. But in reality, not all rivers host salmon and not all dams are removed with explosives. Each river, each dam, and each removal is totally different, says Laura Wildman, an engineer at a firm specializing in dam removal.

    Doyle remembers one particularly dramatic example of a “blow and go” removal, where the US Marines exploded a small dam slated for removal as part of a training exercise. When a dam disappears suddenly, the river responds violently. O’Connor was at the “blow and go” removal of the Condit Dam on the White Salmon River in Washington. “At first it was like a flash flood of water—just mostly water, definitely dirty water. It came up fast, it was turbulent, it was noisy,” he says. “Then it was brown, stinky, and chock full of organic material mud flow.”

    Even the slower removals, which take place over months or years, can have dramatic moments. Doyle describes how a backhoe slowly taking out the Rockdale Dam in Wisconsin “looked kind of prehistoric, like a long-necked dinosaur reaching out and eating away at the dam.”

    Jennifer Bountry, a Bureau of Reclamation hydrologist who helped plan the Elwha Dam removal, explains that initially the engineers would gingerly shave a foot of concrete off the dam and wait to see what happened. But as the removal progressed, the river was changing so fast that she had to keep a close eye on the currents as she was recording her observations. “You had to be careful where you parked your boat,” Bountry says. The freed Elwha River rapidly carved out a new channel, carrying with it roughly the same volume of sediment as Mt. St. Helens belched out during the infamous 1980 eruption.

    Aftermath of the End

    A small stream trickles through the YMCA’s Camp Gordon Clark in Hanover, Massachusetts. Freshly tie-dyed t-shirts hanging from the chain link fence sway in the breeze. Summer camp is in full swing, and Samantha Woods, director of the North and South Rivers Watershed Association, a nonprofit, walks me down a shallow slope to an ox-bow stream curling through a wide plain covered in cattails. The heavy, humid air is thick with buzzing cicadas and singing birds. Less than a year ago, this plain was a blank, wet canvas. Where the cattails stand now was submerged beneath several feet of water impounded by a 10-foot-tall earthen dam that had stood for at least 300 years. In 2001, the state determined that the dam could catastrophically collapse in a flood and required the owner—the YMCA—to fix or remove it.

    The dam hung in limbo for nearly a decade until storm damage reignited fears of collapse. By then, the public had started to embrace the idea that removing the dam could be a good thing for the river. Plus, repairing the dam would have cost an estimated $1 million. Taking it out would cost half that amount. So in October of 2014, crews tore down the earthen blockade, drained the pond, and planted native plant seeds in the newly exposed earth. Less than a year later, the transformation to wetlands is well underway. Woods is optimistic that if one more downstream dam comes out, herring will swim up this creek for the first time in centuries.

    But no one knows for sure if the herring will come back. In general, scientists are just beginning to unravel what happens when a dam is removed after tens or hundreds of years. “Dam removals help us understand how rivers behave,” Magilligan says. Magilligan, along with Bountry, East, Grant, O’Connor, and Schmidt, is part of a group called the Powell Center, which is studying how rivers respond when they’re set free.

    In the hundred or so dam removals for which data is available, fish, lamprey, and eel populations are rebounding, and more sediment and nutrients are heading downstream, both expected outcomes. But the Powell Center scientists are surprised at just how fast recovery takes place. Formerly trapped sediment clears out within weeks or months. For example, a recent study showed that this freed sediment is quickly rebuilding the Elwha River delta. Some fish populations revive within a few years, not a few decades as many had expected. Many rivers are starting to resemble their pre-dam selves.

    But the Powell Center members also point out that dam removals may sometimes have undesirable consequences, like allowing non-native species formerly trapped upstream to colonize the rest of the river, or releasing contaminated sediment downstream. They agree there’s much more to figure out.

    Dam New World

    Few people may be more emblematic of the subtle shift in attitudes about dam removal in recent years than Gordon Grant. A much younger Grant spent a dozen years as a rafting guide. Back then, he’d sat around campfires singing “Damn the man who dams the river!” with people who chained themselves to boulders at the bottom of a valley slated to become a reservoir. One day Grant got curious enough about the forces shaping the rapids he ran that he went to graduate school. For nearly 30 years now, he’s been conducting research in fluvial geomorphology—the study of how rivers reshape the surface of the earth. I asked Gordon Grant if a dam is still, for him, at the inner circle of hell. “It used to be more than it is now,” he says. “It may be slippage, it may be gray hair, it may be something else, but I see dams in a somewhat different light now.”

    “I’ve seen dams that provide nothing for anybody and I’ve seen dams that provide a lot of power that otherwise would have been generated by coal,” Grant says of his research career. Both building and demolishing dams have tradeoffs, Grant argues, and as a scientist he’s interested in how economics, ecology, and hydrogeology each play a role. Emily Stanley says, “I’ve learned that it’s not enough to say ‘Yeah, we should blow ’em all up!’ We can’t just wave the wand and take them away. There will be huge consequences. But yeah, there’s too many dams.”

    Even Daniel Beard, Commissioner of Reclamation under the first Clinton Administration, agrees there are too many dams. He has been calling loudly and unequivocally for taking out one of the largest in the country, the Glen Canyon Dam. “Do I think that’s controversial? Absolutely. Do I think it’s politically realistic? Eh…not really. But somebody has to speak up,” he says.

    Glen Canyon Dam on the Colorado River

    Most scientists and engineers are skeptical that any dam as large as Glen Canyon will go anywhere, anytime soon. Drought has brought the reservoir down to as low as one-third of its designed capacity in recent years, but it still stores 12 million acre-feet—roughly the average volume of water that goes through the Grand Canyon in a year—and generates enough power for 300,000 homes. And even if the economics change dramatically, the dam itself is a formidable structure and one not easily removed. “I can’t imagine getting dropped into Glen Canyon and having the audacity to start wanting to plug that thing with concrete,” Doyle says. “If we really want to start removing Western dams, then we need an audacity to match that with which they went after building them.”

    Even removing the dam in Scituate, which is 370 years old and a mere 10 feet tall, is a tough sell. “This dam isn’t coming down,” David Ball, president of the Scituate Historical Society, told me on two occasions. The pond still provides about half the town’s drinking water.

    For some dams that still serve a purpose, like those at Scituate and Glen Canyon, dam owners, conservation groups, and government agencies have worked to manage them more holistically. In Scituate, fish ladders and timed water releases are beginning to restore herring to the upstream watershed.

    At the Glen Canyon Dam, operators now create a simulacrum of spring floods by releasing extra water to help restore sediment in the Grand Canyon. The first artificial flood stormed through the Grand Canyon in the spring of 1996, and by 2012, a supportive Bureau of Reclamation had helped clear the way for nearly annual restoration floods. Though these floods surge with less than half of the flow of pre-dam torrents, they were still highly controversial at first, says Jack Schmidt, the Utah State professor. Releasing extra water in the spring means lost revenue, he explains, because it generates electricity that no one is interested in buying.

    And at Shasta Dam in California, water releases are now carefully controlled in order to keep the water temperature low enough for downstream Chinook salmon to survive, according to Deputy Interior Secretary Michael Connor. He expects that drought, exacerbated by climate change, will alter our relationship with dams. “There is nothing necessarily permanent. We should be relooking and rethinking the costs and the benefits of our infrastructure,” he says.

    Many dams will remain—and as climate change alters precipitation patterns, some new ones will be built. Dams shaped the country and the rivers they divide, and they don’t go down quietly. But time and economics will sweep more dams away.

    It’s hard to forget the moment when a once-restrained river breaks free. Connor, for one, vividly recalls the removal of the Elwha dams. “You know, you count on your one hand those days that really stand out, and those events that you really participate in. That is easily, for me, one of those days that I’ll always remember.”

    Some had hoped for that moment for a very long time. East, the USGS geologist, recalls meeting an 80-year-old Elwha woman who had never before seen the river untrammeled. The woman had said, joyfully, “I’ve been waiting for these dams to come out my whole life!”

    [See the original article for an interactive map of dams in the US, where you can find any dam in which you might be interested, and an animation of the number of dam completions in the U.S. between 1800 and 2000.]

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.
