Tagged: NOVA

  • richardmitnick 8:04 am on July 28, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “Fossil Fuels Are Destroying Our Ability to Study the Past” 



    21 Jul 2015
    Tim De Chant

    It’s been used to date objects tens of thousands of years old, from fossil forests to the Dead Sea Scrolls, but in just a few decades, a tool that revolutionized archaeology could turn into little more than an artifact of a bygone era.

    Radiocarbon dating may be the latest unintended victim of our burning of fossil fuels for energy. By 2020, carbon emissions will start to affect the technique, and by 2050, new organic material could be indistinguishable from artifacts from as far back as AD 1050, according to research by Heather Graven, a lecturer at Imperial College London.

    The Great Isaiah Scroll, one of the seven Dead Sea Scrolls, has been dated using the radiocarbon technique.

    The technique relies on the fraction of radioactive carbon relative to total carbon. Shortly after World War II, Willard Libby discovered that, with knowledge of carbon-14’s predictable decay rate, he could accurately date objects that contained carbon by measuring the ratio of carbon-14 to total carbon in the sample. The lower the ratio, the older the artifact. Since only living plants and animals can incorporate new carbon-14, the technique became a reliable way to date organic artifacts. The problem is that, as we’ve pumped more carbon dioxide into the atmosphere, we’ve unwittingly increased the total-carbon side of the equation.
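    The decay arithmetic behind Libby's method can be sketched in a few lines (this sketch is not from the article; it uses the conventional Libby half-life of 5,568 years, which radiocarbon labs still use when quoting ages):

```python
import math

# Conventional radiocarbon ages are computed with the Libby half-life of
# 5,568 years, giving a mean life of 5568 / ln(2), about 8,033 years.
LIBBY_MEAN_LIFE = 5568 / math.log(2)

def radiocarbon_age(ratio_sample_to_modern):
    """Age in years, from a sample's C-14/C ratio relative to the
    'modern' (pre-industrial) atmospheric standard."""
    return -LIBBY_MEAN_LIFE * math.log(ratio_sample_to_modern)

# A sample retaining half its original C-14 dates to one half-life:
print(round(radiocarbon_age(0.5)))  # -> 5568
```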

    Here’s Matt McGrath, reporting for BBC News:

    At current rates of emissions increase, according to the research, a new piece of clothing in 2050 would have the same carbon date as a robe worn by William the Conqueror 1,000 years earlier.

    “It really depends on how much emissions increase or decrease over the next century, in terms of how strong this dilution effect gets,” said Dr Graven.

    “If we reduce emissions rapidly we might stay around a carbon age of 100 years in the atmosphere but if we strongly increase emissions we could get to an age of 1,000 years by 2050 and around 2,000 years by 2100.”

    Scientists have been anticipating the diminished accuracy of radiocarbon dating as we’ve continued to burn more fossil fuels, but they didn’t have a firm grasp of how quickly it could go south. In the worst-case scenario, we would no longer be able to date artifacts younger than 2,000 years old. Put another way, by the end of the century, a test of the Shroud of Turin wouldn’t be able to definitively distinguish the famous piece of linen from a forgery made today.
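    The dilution effect Graven describes follows from the same arithmetic: fossil carbon contains essentially no carbon-14, so adding it to the atmosphere lowers the ratio without any decay having occurred. A minimal sketch, assuming a well-mixed atmosphere and the simplest possible dilution model:

```python
import math

MEAN_LIFE = 5568 / math.log(2)  # Libby mean life, about 8,033 years

def apparent_age_from_dilution(fossil_fraction_added):
    """Apparent radiocarbon age of brand-new material grown in air whose
    total carbon has been inflated by C-14-free fossil carbon.
    fossil_fraction_added = fossil carbon added / original carbon."""
    diluted_ratio = 1.0 / (1.0 + fossil_fraction_added)
    return -MEAN_LIFE * math.log(diluted_ratio)

# Roughly 13% extra fossil carbon already makes fresh organic material
# look about 1,000 years old: the William the Conqueror scenario.
print(round(apparent_age_from_dilution(0.1325)))
```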

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    STEM Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 5:57 pm on July 27, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “Agriculture May Have Started 11,000 Years Earlier Than We Thought” 



    Mon, 27 Jul 2015

    The technology that allowed us to build cities and develop specialized vocations may have first appeared 23,000 years ago in present-day Israel—some 11,000 years earlier than expected—but then mysteriously disappeared from later settlements.

    Archaeologists found evidence of farming—including sickles, grinding stones, domesticated seeds, and, yes, weeds—in a sedentary camp that was flooded by the Sea of Galilee until the 1980s when drought and water pumping shrank the lake’s footprint. The 150,000 seeds found at the site represent 140 plant species, including wild oat, barley, and emmer wheat along with 13 weed species that are common today. The find not only illustrates humanity’s initial forays into farming, but it also provides the earliest evidence that weeds evolved alongside human ecological disturbances like farms and settlement clearings.

    Archaeologists found wild barley seeds buried at the site.

    Mysteriously, the lessons learned from those early trials were either forgotten or failed to take hold. The study’s authors point out that neither sickles nor similar seeds have been found at settlements dating to just after the Sea of Galilee site, which is known as Ohalo II.

    The settlement was composed of a number of huts covered with tree branches, leaves, and grasses. Archaeologists also found a variety of flint and ground stone tools, several hearths, beads, animal remains, and an adult male gravesite. They suspect Ohalo II was occupied year round based on the remains of various migratory birds, which are known to visit the area during different times of year.

    The seeds that made up much of the settlers’ diets are surprisingly familiar. Here’s Ainit Snir and colleagues, writing in their paper published in PLoS One:

    Some of the plants are the progenitors of domesticated crop species such as emmer wheat, barley, pea, lentil, almond, fig, grape, and olive. Thus, about 11,000 years before what had been generally accepted as the onset of agriculture, people’s diets relied heavily on the same variety of plants that would eventually become domesticated.

    While Snir and coauthors think that Ohalo II’s fields were simply early trials and that plants weren’t fully domesticated until 11,000 years later, they do suspect that future discoveries could flesh out the long, trial-and-error development of agriculture.

    See the full article here.


  • richardmitnick 1:51 pm on July 20, 2015 Permalink | Reply
    Tags: NOVA, Reed-Solomon codes

    From NOVA: “The Codes of Modern Life” 



    15 Jul 2015
    Alex Riley

    On August 25, 2012, the spacecraft Voyager 1 exited our Solar System and entered interstellar space, set for eternal solitude among the stars. Its twin, Voyager 2, isn’t far behind. Since their launch from Cape Canaveral, Florida, in 1977, their detailed reconnaissance of the Jovian planets—Jupiter, Saturn, Uranus, and Neptune—and more than 60 moons has extended the human senses beyond Galileo’s wildest dreams.

    After passing Neptune, the late astrophysicist Carl Sagan proposed that Voyager 1 should turn around and capture the first portrait of our planetary family. As he wrote in his 1994 book, Pale Blue Dot, “It had been well understood by the scientists and philosophers of classical antiquity that the Earth was a mere point in a vast encompassing Cosmos, but no one had ever seen it as such. Here was our first chance (and perhaps our last for decades to come).”

    Earth, as seen from Voyager 1 more than 4 billion miles away.

    Indeed, our planet can be seen as a fraction of a pixel against a backdrop of darkness that’s broken only by a few scattered beams of sunlight reflected off the probe’s camera. The precious series of images were radioed back to Earth at the speed of light, taking five and a half hours to reach the huge conical receivers in California, Spain, and Australia more than 4 billion miles away. Over such astronomical distances, one pixel out of 640,000 can easily be replaced by another or lost entirely in transmission. It wasn’t, in part due to a single mathematical breakthrough published decades before.

    In 1960, Irving Reed and Gustave Solomon published a paper in the Journal of the Society for Industrial and Applied Mathematics entitled Polynomial Codes Over Certain Finite Fields, a string of words that neatly conveys the arcane nature of their work. “Almost all of Reed and Solomon’s original paper doesn’t mean anything to most people,” says Robert McEliece, a mathematician and information theorist at the California Institute of Technology. But within those five pages was the basic recipe for the most efficacious error-correction codes yet created. By adding just the right levels of redundancy to data files, this family of algorithms can correct for errors that often occur during transmission or storage without taking up too much precious space.

    Today, Reed-Solomon codes go largely unnoticed, but they are everywhere, reducing errors in everything from mobile phone calls to QR codes, computer hard drives, and data beamed from the New Horizons spacecraft as it zoomed by Pluto. As demand for digital bandwidth and storage has soared, Reed-Solomon codes have followed. Yet curiously, they’ve been absent in one of the most compact, longest-lasting, and most promising of storage mediums—DNA.

    From Voyager to DNA

    The structure of the DNA double helix. The atoms in the structure are colour-coded by element and the detailed structure of two base pairs are shown in the bottom right.

    Several labs have investigated nature’s storage device as a way to archive our ever-increasing mountain of digital information, encoding small amounts of data in DNA and, more importantly, reading it back. But those trials lacked sophisticated error correction, which DNA data systems will need if they are to become our storage medium of choice. Fortunately, a team of scientists, led by Robert Grass, a lecturer at ETH Zurich, rectified that omission earlier this year when they stored a pair of files in DNA using Reed-Solomon codes. It’s a mash-up that could help us reliably store our fragile digital data for generations to come.

    Life’s Storage

    DNA is best known as the information storage device for life on Earth. Only four molecules—adenine, cytosine, thymine, and guanine, commonly referred to by their first letters—make up the rungs on the famous double helix of DNA. These sequences are the basis of every animal, plant, fungus, archaeon, and bacterium that has ever lived in the roughly 4 billion years that life has existed on Earth.

    “It’s not a form of information that’s likely to be outdated very quickly,” says Sriram Kosuri, a geneticist at the University of California, Los Angeles. “There’s always going to be a reason for studying DNA as long as we’re still around.”

    It is also incredibly compact. Since it folds in three dimensions, we could store all of the world’s current data—everyone’s photos, every Facebook status update, all of Wikipedia, everything—using less than an ounce of DNA. And, with its propensity to replicate given the right conditions, millions of copies of DNA can be made in the lab in just a few hours. Such favorable traits make DNA an ideal candidate for storing lots of information, for a long time, in a small space.
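    The compactness claim survives a back-of-envelope check. Assuming an idealized two bits per nucleotide and an average single-stranded nucleotide mass of roughly 330 grams per mole (both simplifications that ignore error-correction and packaging overhead), a gram of DNA holds hundreds of exabytes:

```python
AVOGADRO = 6.022e23        # molecules per mole
GRAMS_PER_MOLE_NT = 330.0  # rough average mass of one nucleotide (single-stranded)
BITS_PER_NT = 2.0          # four bases encode 2 bits each, before any redundancy

nucleotides_per_gram = AVOGADRO / GRAMS_PER_MOLE_NT
bytes_per_gram = nucleotides_per_gram * BITS_PER_NT / 8

# Hundreds of exabytes per gram (1 EB = 1e18 bytes):
print(f"{bytes_per_gram / 1e18:.0f} EB/gram")  # -> 456 EB/gram
```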

    A Soviet scientist named Mikhail Neiman recognized DNA’s potential back in 1964, when he first proposed the idea of storing data in natural biopolymers. In 1988, his idea was finally put into practice when the first messages were stored in DNA. Those strings were relatively simple. Only in recent years have laboratories around the world started to convert large amounts of the binary code that’s spoken by computers into genetic code.

    In 2012, by converting the ones of binary code into As or Cs, and zeros into Ts and Gs, Kosuri, along with George Church and Yuan Gao, encoded an entire book, Regenesis, totaling 643 kilobytes, in genetic code. A year later, Ewan Birney, Nick Goldman, and their colleagues from the European Bioinformatics Institute added a slightly more sophisticated way of translating binary to nucleic acid that reduced the number of repeated bases.

    Such repeats are a common problem when writing and reading DNA—or synthesizing and sequencing it, as those steps are called. Although Birney, Goldman, and team stored a similar amount of information as Kosuri, Church, and Gao—739 kilobytes—it was spread over a range of media types: 154 Shakespearean sonnets, Watson and Crick’s famous 1953 paper that described DNA’s molecular structure, an audio file of Martin Luther King Jr.’s “I Have a Dream” speech, and a photograph of the building they were working in near Cambridge, UK.
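    The bit-per-base idea can be sketched as follows. This is an illustration, not the exact Church or EBI scheme: each 1 becomes A or C and each 0 becomes T or G, and swapping to the alternative whenever the previous base would repeat is the simple trick that avoids the homopolymer runs that make synthesis and sequencing error-prone:

```python
# 1 -> A or C, 0 -> T or G; pick the alternative when the previous base
# would repeat, so the output never contains two identical bases in a row.
ONE_BASES, ZERO_BASES = "AC", "TG"

def encode_bits(bits):
    out = []
    for b in bits:
        options = ONE_BASES if b == "1" else ZERO_BASES
        base = options[0] if (not out or out[-1] != options[0]) else options[1]
        out.append(base)
    return "".join(out)

def decode_bases(seq):
    # Decoding ignores which of the two bases was chosen for each bit.
    return "".join("1" if base in ONE_BASES else "0" for base in seq)

message = "110100111"
dna = encode_bits(message)
assert decode_bases(dna) == message
print(dna)  # -> ACTATGACA
```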

    The European team also integrated a deliberate error-correction system: distributing their data over more than 153,000 short, overlapping sequences of DNA. Like shouting a drink order multiple times in a noisy bar, the regions of overlap increased the likelihood that the message would be understood at the other end. Indeed, after a Californian company called Agilent Technologies manufactured the team’s DNA sequences, packaged them, and sent them to the U.K. via Germany, the team was able to remove any errors that had occurred “by hand” using their overlapping regions. In the end, they recovered their files with complete fidelity. The text had no spelling mistakes, the photo was high-res, and the speech was clear and eloquent.

    “But that’s not what we do,” says Grass, the lecturer at the Swiss Federal Institute of Technology. After seeing Church and colleagues’ publication in the news in 2012, he wanted to compare how competent different storage media were over long periods of time.

    “The original idea was to do a set of tests with various storage formats,” he says, “and torture them with various conditions.” Hot and cold, wet and dry, at high pressure, and in an oxygen-rich environment, for example. He contacted Reinhard Heckel, a friend he had met at Belvoir Rowing Club in Zurich, for advice. Heckel, who was a PhD student in communication theory at the time, voiced concern that such an experiment would be unfair since DNA didn’t have the same error-correction systems as other storage devices such as CDs and computer hard drives.

    To make it a fair fight, they implemented Reed-Solomon codes into their DNA storage method. “We quickly found out that we could ‘beat’ traditional storage formats in terms of long term reliability by far,” Grass says. When stored on most conventional storage devices—USB pens, DVDs, or magnetic tapes—data starts to degrade after 50 years or so. But, early on in their work, Grass and his colleagues estimated that DNA could hold data error-free for millennia, thanks to the inherent stability of its double helix and that breakthrough in mathematical theory from the mid-20th century.

    Out from Obscurity

    When storing and sending information from one place to another, you almost always run the risk of introducing errors. Like in the “telephone” game, key parts may be modified or lost entirely. There has been a rich history of reducing such errors, and few things have propelled the field more than the development of information theory. In 1948, Claude Shannon, an ardent blackjack player and mathematician, proposed that by simplifying files or transmissions into numerous smaller components—yes or no questions—combined with error-correcting codes, the relative risk of error becomes very low. Using the 1s and 0s of binary, he hushed the noise of telephone switching circuits.

    Using this binary foundation, Reed and Solomon attempted to shush these whispers even further. But their error-correction codes weren’t put into use straight away. They couldn’t be, in fact—efficient algorithms for decoding them weren’t invented until 1968. Plus, the digital technology that could make use of them didn’t exist yet. “They are very clever theoretical objects, but no one ever imagined they were going to be practical until the digital electronics became so sophisticated,” says McEliece, the Caltech information theorist.

    Once technology did catch up, one of the codes’ first uses was in transmitting data back from Voyager 1 and 2. Since the redundancy provided by these codes (together with another type, known as convolutional codes) cleaned up mistakes—the loss or alteration of pixels, for example—the space probes didn’t have to send the same image again and again. That meant more high-resolution images could be radioed back to Earth as Voyager passed the outer planets of our solar system.

    Reed-Solomon codes correct for common transmission errors, including missing pixels (white), false signals (black), and paused transmissions (the white stripe).

    Reed-Solomon codes weren’t widely used until October 1982, when compact discs were commercialized by the music industry. To manufacture discs en masse, factories used a master version of the CD to stamp out new copies, but subtle imperfections in the process, along with inevitable scratches when the discs were handled, all but guaranteed that errors would creep into the data. But by adding redundancy to accommodate those errors and minor scratches, Reed-Solomon codes made sure that every disc, when played, was as flawless as the next. “This and the hard disk was the absolute distribution of Reed-Solomon codes all over the world,” says Martin Bossert, director of the Institute of Telecommunications and Applied Information Theory at the University of Ulm, Germany.

    At a basic level, here’s how Reed-Solomon codes work. Suppose you wanted to send a simple piece of information like the equation for a parabola (a symmetrical curved line). Such an equation has three defining coefficients: 4 + 5x + 7x². By adding redundancy in the form of two extra numbers—a 4 and a 7, for example—a total of five numbers is sent in the transmission. As a result, any transposition or loss of information can be corrected for by feeding the additional numbers through the Reed-Solomon algorithm. “You still have an overrepresentation of your system,” Grass says. “It doesn’t matter which one you lose, you can still get back to the original information.”
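    The parabola example can be made concrete. A quadratic is fixed by three coefficients, so its values at five points carry two redundant numbers, and any three surviving values pin the curve back down. Real Reed-Solomon codes do this arithmetic over finite fields rather than the rationals, but the principle is the same:

```python
from fractions import Fraction

COEFFS = [4, 5, 7]  # the parabola 4 + 5x + 7x^2 from the example above

def evaluate(coeffs, x):
    """Value of the polynomial with the given coefficients at x."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at `x`, exactly."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Transmit the parabola's values at five points: two more than needed.
codeword = [(x, evaluate(COEFFS, x)) for x in range(5)]

# Suppose the values at x = 0 and x = 3 are lost in transit. Any three
# surviving points still determine the parabola exactly:
survivors = [codeword[1], codeword[2], codeword[4]]
recovered = [int(lagrange_eval(survivors, x)) for x in range(5)]
print(recovered)  # -> [4, 16, 42, 82, 136]
```

    This sketch recovers erasures, i.e. losses at known positions; the full Reed-Solomon machinery can also locate and fix values that arrive silently corrupted.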

    Using similar formulae, Grass and his colleagues converted two files—the Swiss Federal Charter from 1291 and an English translation of The Methods of Mechanical Theorems by Archimedes—into DNA. The redundant information, in the form of extra bases placed over 4,991 short sequences according to the Reed-Solomon algorithm, provided the basis for error-correction when the DNA was read and the data retrieved later on.

    That is, instead of wastefully overlapping large chunks of sequences as the EBI researchers did, “you just add a small amount of redundancy and still you can correct errors at any position, which seemed very strange at the beginning because it’s somehow illogical,” Grass says. As well as using fewer base pairs per kilobyte of data, this tack has the added bonus of automated, algorithmic error-correction.

    Indeed, with a low error-rate—less than three base changes per 117-base sequence—the overrepresentation in their sequences meant that the Reed-Solomon codes could still get back to the original information.

    The same basic principle is used in written language. In fact, you are doing something very similar right now. Even when text contains spelling errors or even when whole words are missing, you can still perfectly read the message and reconstruct the sentence accordingly. The reason? Language is inherently redundant. Not all combinations of letters—including spaces as a 27th option—give a meaningful word, sentence, or paragraph.

    On top of this “inner” redundancy, Grass and colleagues installed another genetic safety net. On the ends of the original sequences, they added large chunks of redundancy. “So if we lose whole sequences or if one is completely screwed and it can’t be corrected with the inner [redundancy], we still have the outer codes,” Grass says. It’s similar to how CDs safeguard against scratches.

    It may sound like overkill, but so much redundancy is warranted, at least for now. There simply isn’t enough information on the rate and types of errors that occur during DNA synthesis and sequencing. “We have an inkling of the error-rate, but all of this is very crude at this point,” Kosuri says. “We just don’t have a good feeling for that, so everyone just overdoes the corrections.” Further, given that the field of genomics is moving so fast, with new ways to write and read DNA, errors might differ depending on what technologies are being used. The same was true for other storage devices while still in their infancy. After further testing, the error-correction codes could be more attuned to the expected error rates and the redundancy reduced, paving the way for higher bandwidth and greater storage capacity.

    Into the Future

    Compared with the previous studies, storing two files totaling 83 kilobytes in DNA isn’t groundbreaking. The image below is roughly five times larger. But Grass and his colleagues really wanted to know just how much better DNA was at long-term storage. With their Reed-Solomon coding in place, Grass and colleagues mimicked nature to find out.

    “The idea was always to make an artificial fossil, chemically,” Grass says. They tried impregnating filter paper with their DNA sequences, they used a biopolymer to simulate the dry conditions within the spores and seeds of plants, and they encapsulated the DNA in microscopic beads of glass. Compared with DNA that hasn’t been chemically modified, all three treatments led to markedly lower rates of DNA decomposition.

    Grass and colleagues’ glass DNA storage beads

    The glass beads were the best option, however. Water, when unimpeded, destroys DNA. If there are too many breaks and errors in the sequences, no error-correction system can help. The beads, however, protected the DNA from the damaging effects of humidity.

    With their layers of error-correction and protective coats in place, Grass and his colleagues then exposed the glass beads to three heat treatments—140˚, 149˚, and 158˚ F—for up to a month “to simulate what would happen if you store it for a long time,” he says. Indeed, after unwrapping their DNA from the beads using a fluoride solution and then re-reading the sequences, they found that slight errors had been introduced similar to those which appear over long timescales in nature. But, at such low levels, the Reed-Solomon codes healed the wounds.

    Using the rate at which errors arose, the researchers were able to extrapolate how long the data could remain intact at lower temperatures. If kept in the clement European air outside their laboratory in Zurich, for example, they estimate a ballpark figure of around 2,000 years. But place these glass beads in the dark at –0.4˚ F, the conditions of the Svalbard Global Seed Vault on the Norwegian island of Spitsbergen, and you could save your photos, music, and eBooks for two million years. That’s roughly ten times as long as our species has been on Earth.
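    Extrapolating from hot, short experiments to cold, long storage is typically done with Arrhenius kinetics, in which a reaction rate scales as exp(−Ea/RT). The activation energy below is a hypothetical placeholder, not a value reported by Grass's team; the sketch only shows how steeply decay rates fall with temperature:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_scale(ea_j_per_mol, t_hot_k, t_cold_k):
    """Arrhenius factor by which a decay rate changes when moving from
    t_hot_k to t_cold_k, for an assumed activation energy ea_j_per_mol."""
    return math.exp(-ea_j_per_mol / R * (1 / t_cold_k - 1 / t_hot_k))

# Hypothetical numbers: a degradation rate measured at 158 F (343 K),
# extrapolated to -0.4 F (about 255 K) with an assumed 100 kJ/mol barrier.
slowdown = 1 / rate_scale(100e3, 343.0, 255.0)
print(f"decay is roughly {slowdown:.0f}x slower in the freezer")
```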

    Using heat treatments to mimic the effects of age isn’t foolproof, Grass admits; a month at 158˚ F certainly isn’t the same as millennia in the freezer. But his conclusions aren’t unsupported. In recent years, palaeogenetic research into long-dead animals has revealed that DNA can persist long after death. And when conditions are just right—cold, dark, and dry—these molecular strands can endure long after the extinction of an entire species. In 2012, for instance, the genome of an extinct human relative that died around 80,000 years ago was reconstructed from a finger bone. A year later, that record was shattered when scientists sequenced the genome of an extinct horse that died in Canadian permafrost around 700,000 years ago. “We already have long-term data,” Grass says. “Real long-term data.”

    But despite its inherent advantages, there are still some major hurdles to surmount before DNA becomes a viable storage option. For one, synthesis and sequencing are still too costly. “We’re still on the order of a million-fold too expensive on both fronts,” Kosuri says. It’s also slow to read and write, and it’s neither rewritable nor random-access: today’s DNA data storage techniques are similar to magnetic tape, in that the whole memory has to be read to retrieve a piece of information.

    Such caveats limit DNA to archival data storage, at least for the time being. “The question is if it’s going to drop fast enough and low enough to really compete in terms of dollars per gigabyte,” Grass says. It’s likely that DNA will continue to be of interest to medical and biological laboratories, which will help to speed up synthesis and sequencing and drive down prices.

    Whatever new technologies are on the horizon, history has taught us that Reed-Solomon-based coding will probably still be there, behind the scenes, safeguarding our data against errors. Like the genes within an organism, the codes have been passed down to subsequent generations, slightly adjusted and optimized for their new environment. They have a proven track record that starts on Earth and extends ever further into the Milky Way. “There cannot be a code that can correct more errors than Reed-Solomon codes…It’s mathematical proof,” Bossert says. “It’s beautiful.”

    See the full article here.


  • richardmitnick 1:19 pm on July 20, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “Black Holes Could Turn You Into a Hologram, and You Wouldn’t Even Notice” 



    01 Jul 2015
    Tim De Chant

    Black holes may not have event horizons, but fuzzy surfaces.

    Few things are as mysterious as black holes. Except, of course, what would happen to you if you fell into one.

    Physicists have been debating what might happen to anyone unfortunate enough to slip toward the singularity, and so far, they’ve come up with approximately 2.5 ways you might die, from being stretched like spaghetti to being burnt to a crisp.

    The fiery hypothesis is a product of Stephen Hawking’s firewall theory, which also says that black holes eventually evaporate, destroying everything inside. But this violates a fundamental principle of physics—that information cannot be destroyed—so other physicists, including Samir Mathur, have been searching for ways to address that error.

    Here’s Marika Taylor, writing for The Conversation:

    The general relativity description of black holes suggests that once you go past the event horizon, the surface of a black hole, you can go deeper and deeper. As you do, space and time become warped until they reach a point called the “singularity” at which point the laws of physics cease to exist. (Although in reality, you would die pretty early on in this journey as you are pulled apart by intense tidal forces).

    In Mathur’s universe, however, there is nothing beyond the fuzzy event horizon.

    Mathur’s take on black holes suggests that they aren’t surrounded by a point-of-no-return event horizon or a firewall that would incinerate you, but a fuzzball with small variations that maintain a record of the information that fell into it. What does touch the fuzzball is converted into a hologram. It’s not a perfect copy, but a doppelgänger of sorts.

    Perhaps more bizarrely, you wouldn’t even be aware of the transformation. Say you were sucked toward a black hole. At the point where you’d normally hit the event horizon, Mathur says, you’d instead touch the fuzzy surface. But you wouldn’t notice anything: the fuzzy surface would appear like any other part of space immediately around you. Everything would seem the same as it was, except that you’d be a hologram.

    See the full article here.


  • richardmitnick 10:31 am on July 18, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “How Time Got Its Arrow” 



    15 Jul 2015

    Lee Smolin, Perimeter Institute for Theoretical Physics

    I believe in time.

    I haven’t always believed in it. Like many physicists and philosophers, I had once concluded from general relativity and quantum gravity that time is not a fundamental aspect of nature, but instead emerges from another, deeper description. Then, starting in the 1990s and accelerated by an eight-year collaboration with the Brazilian philosopher Roberto Mangabeira Unger, I came to believe instead that time is fundamental. (How I came to this is another story.) Now, I believe that by taking time to be fundamental, we might be able to understand how general relativity and the standard model emerge from a deeper theory, why time only goes one way, and how the universe was born.

    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    Flickr user Robert Couse-Baker, adapted under a Creative Commons license.

    The story starts with change. Science, most broadly defined, is the systematic study of change. The world we observe and experience is constantly changing. And most of the changes we observe are irreversible. We are born, we grow, we age, we die, as do all living things. We remember the past and our actions influence the future. Spilled milk is hard to clean up; a cool drink or a hot bath tend towards room temperature. The whole world, living and non-living, is dominated by irreversible processes, as captured mathematically by the second law of thermodynamics, which holds that the entropy of a closed system usually increases and seldom decreases.

    It may come as a surprise, then, that physics regards this irreversibility as a cosmic accident. The laws of nature as we know them are all reversible when you change the direction of time. Film a process described by those laws, and then run the movie backwards: the rewound version is also allowed by the laws of physics. To be more precise, you may have to change left for right and particles for antiparticles, along with reversing the direction of time, but the standard model of particle physics predicts that the original process and its reverse are equally likely.

    The same is true of Einstein’s theory of general relativity, which describes gravity and cosmology. If the whole universe were observed to run backwards in time, so that it heated up while it collapsed, rather than cooled as it expanded, that would be equally consistent with these fundamental laws, as we currently understand them.

    This leads to a fundamental question: Why, if the laws are reversible, is the universe so dominated by irreversible processes? Why does the second law of thermodynamics hold so universally?

    Gravity is one part of the answer. The second law tells us that the entropy of a closed system, which is a measure of disorder or randomness in the motions of the atoms making up that system, will most likely increase until a state of maximum disorder is reached. This state is called equilibrium. Once it is reached, the system is as mixed as possible, so all parts have the same temperature and all the elements are equally distributed.
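    The link between disorder, probability, and equilibrium can be illustrated with Boltzmann-style state counting. This toy model is not from the essay: N gas particles each sit in the left or right half of a box, and entropy is the logarithm of the number of arrangements consistent with a given split:

```python
from math import comb, log

# Number of microstates with k of N particles in the left half is C(N, k);
# entropy (in units of Boltzmann's constant) is the log of that count.
N = 100
entropy = [log(comb(N, k)) for k in range(N + 1)]

# The even split has vastly more microstates than a lopsided one, which is
# why a closed system drifts toward it and tends to stay there:
print(f"S(50/50) - S(90/10) = {entropy[50] - entropy[90]:.1f} (in units of k_B)")
```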

    But on large scales, the universe is far from equilibrium. Galaxies like ours are continually forming stars, turning nuclear potential energy into heat and light, as they drive the irreversible flows of energy and materials that characterize galactic disks. On these large scales, gravity fights the decay to equilibrium by causing matter to clump, creating subsystems like stars and planets. This is beautifully illustrated in some recent papers by Barbour, Koslowski, and Mercati.

But this is only part of the answer to why the universe is out of equilibrium. There remains the mystery of why the universe was not in equilibrium at the big bang to begin with: the picture of the universe given to us by observations requires that it began in an extremely improbable state, very far from equilibrium. Why?

    So when we say that our universe started off in a state far from equilibrium, we are saying that it started off in a state that would be very improbable, were the initial state chosen randomly from the set of all possible states. Yet we must accept this vast improbability to explain the ubiquity of irreversible processes in our world in terms of the reversible laws we know.

In particular, the conditions present in the early universe, being far from equilibrium, are highly irreversible. Run the early universe backwards to a big crunch and those conditions look nothing like the late universe that might be in our future.

    In 1979 Roger Penrose proposed a radical answer to the mystery of irreversibility. His proposal concerned quantum gravity, the long-searched-for unification of all the known laws, which is believed to govern the processes that created the universe in the big bang—or transformed it from whatever state it was in before the big bang.

    Penrose hypothesized that quantum gravity, as the most fundamental law, will be unlike the laws we know in that it will be irreversible. The known laws, along with their time-reversibility, emerge as approximations to quantum gravity when the universe grows large and cool and dilute, Penrose argued. But those approximate laws will act within a universe whose early conditions were set up by the more fundamental, irreversible laws. In this way the improbability of the early conditions can be explained.

    In the intervening years our knowledge of the early universe has been dramatically improved by a host of cosmological observations, but these have only deepened the mysteries we have been discussing. So a few years ago, Marina Cortes, a cosmologist from the Institute for Astronomy in Edinburgh, and I decided to revive Penrose’s suggestion in the light of all the knowledge gained since, both observationally and theoretically.

    Dr. Cortes argued that time is not only fundamental but fundamentally irreversible. She proposed that the universe is made of processes that continuously generate new events from present events. Events happen, but cannot unhappen. The reversal of an event does not erase that event, Cortes says: It is a new event, which happens after it.

    In December of 2011, Dr. Cortes began a three-month visit to Perimeter Institute, where I work, and challenged me to collaborate with her on realizing these ideas. The first result was a model we developed of a universe created by events, which we called an energetic causal set model.

This is a version of a kind of model called a causal set model, in which the history of the universe is considered to be a discrete set of events related only by cause-and-effect. Our model was different from earlier models, though. In it, events are created by a process which maximizes their uniqueness. More precisely, the process produces a universe created by events, each of which is different from all the others. Space is not fundamental; only the events and the causal process that creates them are fundamental. But while space is not fundamental, energy is. The events each have a quantity of energy, which they gain from their predecessors and pass on to their successors. Everything else in the world emerges from these events and the energy they convey.

    We studied the model universes created by these processes and found that they generally pass through two stages of evolution. In the first stage, they are dominated by the irreversible processes that create the events, each unique. The direction of time is clear. But this gives rise to a second stage in which trails of events appear to propagate, creating emergent notions of particles. Particles emerge only when the second, approximately reversible stage is reached. These emergent particles propagate and appear to interact through emergent laws which seem reversible. In fact, we found, there are many possible models in which particles and approximately reversible laws emerge after a time from a more fundamental irreversible, particle-free system.

    This might explain how general relativity and the standard model emerged from a more fundamental theory, as Penrose hypothesized. Could we, we wondered, start with general relativity and, staying within the language of that theory, modify it to describe an irreversible theory? This would give us a framework to bridge the transition between the early, irreversible stage and the later, reversible stage.

In a recent paper, Marina Cortes, PI postdoc Henrique Gomes and I showed one way to modify general relativity so that it introduces a preferred direction of time, and we explored the possible consequences for the cosmology of the early universe. In particular, we showed that there are analogues of dark matter and dark energy that introduce a preferred direction of time, so that a contracting universe is no longer the time-reverse of an expanding universe.

To do this we had to first modify general relativity to include a physically preferred notion of time. Without that there is no notion of reversing time. Fortunately, such a modification already existed. Called shape dynamics, it had been proposed in 2011 by three young researchers, including Gomes. Their work was inspired by Julian Barbour, who had proposed that general relativity could be reformulated so that a relativity of size substituted for a relativity of time.

    Using the language of shape dynamics, Cortes, Gomes and I found a way to gently modify general relativity so that little is changed on the scale of stars, galaxies and planets. Nor are the predictions of general relativity regarding gravitational waves affected. But on the scale of the whole universe, and for the early universe, there are deviations where one cannot escape the consequences of a fundamental direction of time.

    Very recently I found still another way to modify the laws of general relativity to make them irreversible. General relativity incorporates effects of two fixed constants of nature, Newton’s constant, which measures the strength of the gravitational force, and the cosmological constant [usually denoted by the Greek capital letter lambda: Λ], which measures the density of energy in empty space. Usually these both are fixed constants, but I found a way they could evolve in time without destroying the beautiful harmony and consistency of the Einstein equations of general relativity.

    These developments are very recent and are far from demonstrating that the irreversibility we see around us is a reflection of a fundamental arrow of time. But they open a way to an understanding of how time got its direction that does not rely on our universe being a consequence of a cosmic accident.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 9:37 am on July 18, 2015 Permalink | Reply
    Tags: Buckminsterfullerene, NOVA

    From NOVA: “Long-Lasting “Spaceballs” Solve Century-Old Astronomy Puzzle” 



    17 Jul 2015
    Anna Lieb


    Nearly 100 years ago, a graduate student named Mary Lea Heger observed contaminated starlight. As the light traveled to her telescope, it was interacting with great clouds of—something—in the spaces between stars. No one could figure out what that something was, until now.

When you pass sunlight through a prism, it separates into different colors, which correspond to different wavelengths. Astronomers like Heger often analyze incoming light by looking at something called a spectrum, which is a bit like looking at the colorful end of a prism. A spectrum shows you the relative strengths of the different wavelengths of the light comprising your sample. If the light interacts with something before it gets to you—say, a cloud of gas—the spectrum will change, because the gas absorbs some wavelengths more than others.

Heger’s spectra had an unusual pattern that didn’t match any known substance, so no one could figure out what those interstellar clouds were made of. The spectral features, referred to as “diffuse interstellar bands” (DIBs), remained mysterious for many decades.

    In 1983, scientists accidentally discovered a strange molecule called buckminsterfullerene. Commonly known as a “buckyball,” this substance consists of 60 carbon atoms in a soccer-ball shaped arrangement. John Maier, a chemist at the University of Basel in Switzerland, and his collaborators suspected that buckyballs might be part of the strange signal coming from Heger’s interstellar medium, but they needed to know how buckyballs behave in space—a challenging thing to measure here on Earth.

    Here’s Elizabeth Gibney, reporting for Nature News:

    Maier’s team analysed that behaviour by measuring the light-absorption of buckyballs at a temperature of near-absolute zero and in an extremely high vacuum, achieved by trapping the ions using electric fields, in a buffer of neutral helium gas. “It was so technically challenging to create conditions such as in interstellar space that it took 20 years of experimental development,” says Maier.

Maier’s team published the work in the journal Nature this week, helping to shed light on what the Nature commentary calls “one of the longest-standing mysteries of modern astronomy.” Not only do the results show that buckminsterfullerene makes up some of the mysterious interstellar medium, but they also suggest these “spaceballs” are stable enough to last millions of years as they wander far and wide through space.


  • richardmitnick 7:59 am on June 19, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “Do We Need to Rewrite General Relativity?” 



    18 Jun 2015
    Matthew Francis

    A cosmological computer simulation shows dark matter density overlaid with a gas velocity field. Credit: Illustris Collaboration/Illustris Simulation

    General relativity, the theory of gravity Albert Einstein published 100 years ago, is one of the most successful theories we have. It has passed every experimental test; every observation from astronomy is consistent with its predictions. Physicists and astronomers have used the theory to understand the behavior of binary pulsars, predict the black holes we now know pepper every galaxy, and obtain deep insights into the structure of the entire universe.

    Yet most researchers think general relativity is wrong.

    To be more precise: most believe it is incomplete. After all, the other forces of nature are governed by quantum physics; gravity alone has stubbornly resisted a quantum description. Meanwhile, a small but vocal group of researchers thinks that phenomena such as dark matter are actually failures of general relativity, requiring us to look at alternative ideas.


  • richardmitnick 7:09 am on June 16, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “A Window into New Physics” 



    10 Jun 2015
    Kate Becker

In 2007, David Narkevic was using a new algorithm to chug through 480 hours of archived data collected by the Parkes radio telescope in Australia. The data was already six years old and had been thoroughly combed for the repeating drumbeat signals that come from rapidly rotating dead stars called pulsars.

    But Narkevic, a West Virginia University undergrad working under the supervision of astrophysicist Duncan Lorimer, was scouring these leftovers for a different animal: single pulses of unusually bright radio waves that are known to punctuate the rhythm of the most energetic pulsars.

    The Parkes Observatory hosts a large radio telescope in central New South Wales, Australia.

Radio astronomers have an arsenal of well-honed tricks for teasing out faint signals, including correcting for “dispersion”: signals traveling through space arrive slightly earlier at high frequencies than at low frequencies, according to a precise formula that describes how electromagnetic radiation is delayed by free-floating electrons. The more interstellar stuff the signals have to traverse, the more dispersed they are, so “dispersion measure” functions as a rough proxy for distance.

Distant, and therefore highly dispersed, signals are difficult to pick up because their energy is smeared out across frequency and time. So, astrophysicists design search algorithms that apply one correction factor after another, with the hope that, by trial and error, they might hit on the right one and pluck a signal out from the noise. The process requires a lot of computing time, so astronomers typically only use common-sense dispersion corrections. But with all the common-sense results already wrung out from the data set, Narkevic was trying out correction factors corresponding to distances far beyond the Milky Way and its neighboring galaxies.
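The trial-and-error idea can be sketched concretely. In the standard cold-plasma convention, the extra delay at frequency ν (in MHz) for a dispersion measure DM (in pc cm⁻³) is about 4.149×10³ s × DM × ν⁻²; that constant is the textbook value, but everything else below (the channel layout, the toy burst) is invented purely for illustration:

```python
# Sketch of incoherent dedispersion: shift each frequency channel by the
# cold-plasma delay for a trial dispersion measure (DM), then sum channels.
# The correct trial DM re-aligns a dispersed pulse and maximizes the peak.
K_DM = 4.149e3  # seconds, for DM in pc/cm^3 and frequency in MHz

def dispersion_delay(dm, freq_mhz, ref_freq_mhz):
    """Extra arrival delay (s) at freq_mhz relative to ref_freq_mhz."""
    return K_DM * dm * (freq_mhz**-2 - ref_freq_mhz**-2)

def dedisperse_peak(channels, freqs_mhz, dt, trial_dm):
    """Shift each channel by its trial delay, sum, and return the peak."""
    n = len(channels[0])
    summed = [0.0] * n
    for chan, f in zip(channels, freqs_mhz):
        shift = int(round(dispersion_delay(trial_dm, f, freqs_mhz[0]) / dt))
        for i in range(n):
            summed[i] += chan[(i + shift) % n]
    return max(summed)

# Toy data: one dispersed pulse across 4 channels, 1 ms samples
freqs = [1500.0, 1400.0, 1300.0, 1200.0]
dt = 1e-3
true_dm = 300.0
n_samp = 2000
channels = []
for f in freqs:
    chan = [0.0] * n_samp
    delay_bins = int(round(dispersion_delay(true_dm, f, freqs[0]) / dt))
    chan[(100 + delay_bins) % n_samp] = 1.0  # pulse arrives later at low freq
    channels.append(chan)

# Search a grid of trial DMs; the true one wins
best = max(range(0, 601, 50),
           key=lambda dm: dedisperse_peak(channels, freqs, dt, dm))
print(best)  # 300
```

Real search pipelines do the same thing at vastly larger scale, which is why stepping far outside the "common-sense" DM range, as Narkevic did, is computationally expensive.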

    To his surprise, it worked: He discovered a bright burst of radio waves, lasting less than five milliseconds, coming from a point on the sky a few degrees away from the Small Magellanic Cloud but that seemed to originate from far beyond it.


    It was impossible to pin down its precise location and distance but, based on the dispersion, Lorimer and his team calculated that it had to be far: billions of light years beyond the Milky Way.

    Lorimer’s team trained the Parkes telescope on the site for 90 more hours but never picked up another burst. Whatever Narkevic had found, it didn’t look like one of the pulsar pulses Lorimer had originally set out to find.

    That left plenty of other possibilities. It could be some human-made interference masquerading as a mysterious cosmic object: military radar, microwave ovens, bug zappers, and even electric blankets all produce electromagnetic radiation that can confuse readings from radio telescopes. But the “Lorimer burst” didn’t look like it was coming from one of these sources. For one thing, the dispersion was by-the-book: that is, the signal “swept” in at high frequencies first, and low frequencies later. For another, it was picked up by just three of the telescope’s 13 “beams,” each of which corresponds to a single pixel on a sky map, suggesting that it was localized out there, somewhere in the sky, rather than coming from a nearby source of interference, which would swamp the whole telescope.

    “We couldn’t think of any radio-frequency interference that would mimic those characteristics,” says astrophysicist Maura McLaughlin, also of West Virginia University, who was part of the discovery team. The researchers also ruled out some of the usual cosmic suspects: The burst was too bright to be a spasmodic eruption from a pulsar and too high-frequency to be the radio counterpart to a gamma-ray burst. Magnetars, highly magnetized neutron stars that sizzle with X-rays and gamma-rays, remained a strong possibility. “I tend to go with the least exotic things,” McLaughlin says, citing Occam’s razor: “The simplest thing is always the best. But I wouldn’t be surprised if it was something really strange and exotic, too.”

    Such observational puzzles are candy for theorists, and fast radio bursts, or FRBs as they are called, present a particularly sweet mystery: Their extreme properties hint that they might be able to reveal phenomena that push the boundaries of known physics, perhaps probing the properties of dark matter or quantum gravity theories beyond the Standard Model.

    Standard Model of Particle Physics. The diagram shows the elementary particles of the Standard Model (the Higgs boson, the three generations of quarks and leptons, and the gauge bosons), including their names, masses, spins, charges, chiralities, and interactions with the strong, weak and electromagnetic forces. It also depicts the crucial role of the Higgs boson in electroweak symmetry breaking, and shows how the properties of the various particles differ in the (high-energy) symmetric phase (top) and the (low-energy) broken-symmetry phase (bottom).

    So while observational astronomers kept searching for more FRBs, theorists began speculating about what they might be.

    Imploding Neutron Stars

    There were three clues: The burst was short, powerful, and distant. To astrophysicists, a short signal points to a small source—in this case, one so small that a light beam could cross it in the duration of the burst, just a few milliseconds. That means that FRB “progenitors,” whatever they are, probably measure less than one thousandth the width of the sun. What could pack such a huge amount of energy into that tiny package? “The only things that can produce that much energy are neutron stars and black holes,” says Jim Fuller, a theorist at Caltech.

    Fuller started thinking seriously about fast radio bursts in 2014, just as they were enjoying a scientific comeback. Studies of the Lorimer burst had languished for years after a group led by Sarah Burke-Spolaor, then a postdoc at the Commonwealth Scientific and Industrial Research Organisation in Australia, detected 16 similar bursts and was able to unambiguously chalk them up to earthly interference. But then, in 2013, Burke-Spolaor found a Lorimer burst of her own. A handful more followed. FRBs were back from the dead.

    Meanwhile, Fuller had a different astronomical mystery on his mind: the apparent scarcity of pulsars near the center of the Milky Way. There should be plenty of pulsars within a few light years of the galactic center, Fuller says, but despite years of searching, astronomers have found just one. What happened to the rest of them? Astrophysicists call this the “missing pulsar problem.”

    The FRBs seemed to be coming from a few degrees away from the Small Magellanic Cloud.

    Last year, a pair of astronomers proposed an unconventional answer: those missing pulsars might have “imploded” under the weight of dark matter, which is abundant in the center of the galaxy. Though dark matter passes easily through planets and stars, it could get trapped in the dense meat of a neutron star, they argued. Once there, it would slowly sink down to the star’s center. Over time, dark matter would pile up in the core, eventually collapsing into a tiny black hole that would eat away at the neutron star from the inside out. The star would gradually erode over thousands or millions of years until, in one great slurp, the black hole would devour nearly the whole mass of the neutron star in a matter of milliseconds.

    “Probably, it will be a very violent event, where the magnetic field is totally expelled from the black hole and reconnects with itself,” Fuller says. Some of the energy of the ravaged magnetic field would be turned into electromagnetic radiation: a blast of radio waves that might look a lot like an FRB.

    “It’s a pretty crazy idea,” Fuller admits. But it does make some predictions that we can observe. If Fuller’s model is right, neutron star implosions should have left behind lots of small black holes near the center of the galaxy, each holding about one-and-a-half times the mass of our sun. Though astronomers can’t see a black hole directly, if the black hole happens to be drawing matter from a companion star, as is relatively common, it will give off characteristic bursts of X-rays. A different kind of X-ray burst, on the other hand, could signal the presence of a neutron star, not a black hole. If there are lots of neutron stars hanging out around the galactic center, that would challenge Fuller’s scenario. (Some recent X-ray observations point toward the existence of those neutron stars, though the evidence is not yet definitive, Fuller says.)

    Fuller’s argument also predicts that FRBs should be coming from very close to the center of other galaxies. So far, astronomers haven’t pinpointed the location of a single FRB, and localizing one within a galaxy is an added challenge.

If Fuller’s predictions hold up, they will yield fresh insight into the nature of dark matter, which is still almost totally a blank. First, it will mean that dark matter particles don’t annihilate each other, as some recent observations have hinted. It would also reveal dark matter’s “cross section”—that is, the likelihood that a particle of dark matter will interact with normal matter, as opposed to passing straight through it. For the neutron star implosion scenario to hold up, dark matter’s cross section must fall within a very narrow range of possibilities, Fuller says.

    Bouncing Black Holes

    Another possibility for what’s causing FRBs comes from the leading edge of black hole physics, where theorists are puzzling over the difficult answer to an apparently simple question: What happens to the stuff that falls into a black hole? Physicists once thought that it was inevitably compressed into an infinitely small, infinitely dense point called a singularity. But because the known laws of physics break down at this point, the singularity has always been a raw nerve for physicists.

Many physicists would like to find a way to sidestep the singularity, and theorists working on a theory called loop quantum gravity think they have found one. Loop quantum gravity proposes that the fabric of spacetime is woven of tiny—you guessed it—loops. These loops can’t be compressed indefinitely—push them too far, and they push back. In the universe of loop quantum gravity, a would-be black hole can collapse only until gravity is overcome by the outward pressure generated by the loops, which then hurls the black hole’s innards back out into space, transforming it into its mathematical opposite, a white hole.

    Abruptly, the contents of the black hole would be converted into a tremendous blast of energy concentrated at a wavelength of a few millimeters, according to Carlo Rovelli, a theorist at Aix-Marseille University, and his colleagues in France and the Netherlands. We might be able to pick up the first of these cosmic kabooms today, coming from some of the universe’s earliest black holes, Rovelli says, and they might look a lot like fast radio bursts. It’s not a perfect match: fast radio bursts emit at a lower frequency, corresponding to a wavelength of about 20 centimeters, and they don’t give off as much energy as the theorists predict for a “quantum bounce.” But, Rovelli says, the model’s predictions are still very crude and don’t account for the black hole’s motion, interactions between the matter it contains, or even the fact that the black hole has mass.

    Rovelli says the model does make one clear, testable prediction: a peculiar correlation between the wavelength at which the signal is received and the distance to the black hole. That’s because the wavelength of the emitted energy depends on two things: the size of the black hole and its distance from Earth. The most distant explosions should be coming from the youngest, and therefore smallest, black holes, meaning that their energy will be skewed toward shorter wavelengths. But as the radiation travels across the expanding universe, it will be stretched out, or “redshifted,” so that the signals we pick up on Earth register at a longer wavelength than they were emitted. Add up the effects and you should see the specific curve that Rovelli and his colleagues predict. As astronomers find more fast radio bursts, they will be able to test whether they match the predicted curve.
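The interplay of the two effects can be made concrete with a schematic toy calculation. The scalings below are illustrative assumptions of mine, not the published quantum-bounce formula: suppose the emitted wavelength shrinks for holes that explode earlier (at higher redshift z), while expansion stretches whatever is emitted by the usual factor of (1 + z).

```python
# Schematic only: toy scalings, not the actual prediction of Rovelli et al.
# Assume the emitted wavelength shrinks with redshift as lam0 / (1+z)**p
# (earlier-exploding holes are smaller), while cosmological expansion
# stretches the observed wavelength by (1+z). The two effects combine into
# a specific wavelength-versus-distance curve that a catalog of localized
# bursts could in principle be tested against.
def observed_wavelength(z, lam0=1.0, p=0.5):
    lam_emit = lam0 / (1.0 + z) ** p   # assumed size scaling (illustrative)
    return lam_emit * (1.0 + z)        # cosmological redshift stretch

for z in (0.5, 1.0, 2.0):
    print(z, round(observed_wavelength(z), 3))
```

Whatever the true exponent turns out to be, the qualitative point survives: the emitted-wavelength and redshift effects partly offset each other, producing a distinctive curve rather than a simple redshift trend.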

    It may sound like a long shot. But, if it’s right, the payoff would be huge: “If the observed Fast Radio Bursts are connected to this phenomenon, they represent the first known direct observation of a quantum gravity effect,” wrote Rovelli and his colleagues.

    It could also get physicists out of a theoretical jam called the black hole information paradox, which pits two unshakable tenets of physics against each other. On one side, the principle of unitarity holds that information can never be lost; on the other, according to the rules of black hole thermodynamics, the only thing that ever escapes from a black hole, Hawking radiation, is randomly scrambled and preserves no information. To solve the paradox, some physicists have proposed that the entanglement between incoming particles and those radiated out as Hawking radiation could be spontaneously broken, putting up a “firewall” of energy at the black hole’s horizon. But the concept is still controversial: plenty of ideas in modern physics violate our intuition about how the world is supposed to work, but a sizzling wall of energy floating around a black hole? Really?

    The quantum bounce effect could resolve the information paradox and neutralize the need for a firewall, argue Rovelli and his colleagues. The information inside the black hole isn’t lost: it just comes out later.

    Superconducting Cosmic Strings

    Fast radio bursts could also be a modern manifestation of something that happened 13.7 billion years ago, just after the Big Bang, when the baby universe was roiling with so much energy that all the fundamental physical forces acted as one. At this moment, the Higgs field had not yet switched on and nothing in the universe had mass. Then, on came the Higgs field, unfurling through space and pinging every particle it encountered with its magic wand, bestowing the gift of mass.

    Some theorists think that the field associated with the Higgs boson, discovered in 2012 at the Large Hadron Collider [LHC], is just one of many similar fields, each of which plays a role in giving particles mass.


But many models predict that these fields would not diffuse perfectly through all of space. Instead, they would miss a few spots. These gaps, the thinking goes, would become defects called cosmic strings, skinny tubes of space that, like springy rubber bands, are tense with stored energy. Extending over millions of light years and traveling close to the speed of light, these hypothetical strings would be so massive that a single centimeter-long snippet would contain a mountain’s worth of mass, says Tanmay Vachaspati, a physicist at Arizona State University who, along with Alexander Vilenkin at Tufts University, did early work on the formation and evolution of cosmic strings.

    Invisible to most telescopes, cosmic strings could be detected via the gravitational waves they emit as they shimmy through space and crash into other cosmic strings. So far, astronomers haven’t made any affirmative detection of these gravitational waves, though the fact that they haven’t shown up yet allows physicists to put some limits on the maximum mass of the strings.

    A still-more-exotic breed of cosmic strings called superconducting cosmic strings, which carry an electrical current, could turn out to be easier for astronomers to observe. First proposed by theorist Edward Witten, these electrified strings should give off detectable electromagnetic radiation as they move through space, Vachaspati says. The emission would look like a constant hum of very-low-frequency radio waves, occasionally spiked with brief, higher-frequency bursts from dramatic events called kinks and cusps. Kinks happen when two strings meet and reconnect at their point of intersection, Vachaspati says. Cusps are like the end of a whip, lashing out into space at close to the speed of light. What, exactly, their radio emission might look like depends on many still-unknown parameters of the strings, Vachaspati says. But it is possible that they would look very much like fast radio bursts.

There is one problem, though. Vachaspati and his colleagues predict that the radio emission from superconducting cosmic strings should be linearly polarized: that is, it should oscillate in a plane. So far, polarization has only been measured for one fast radio burst, but that burst was circularly polarized, meaning that its electric field traces out a spiral around the direction it’s traveling.

    Some theorists, including Vilenkin, think it might be possible for a superconducting cosmic string to produce a circularly polarized signal under certain conditions. And with polarization measured for just one FRB so far, it’s too soon to discount the hypothesis entirely.

    Future Observations

    Today, astronomers have detected about a dozen fast radio bursts. (A group of apparently similar signals, curiously clustered around lunchtime, were recently traced to a more mundane source: the Parkes observatory microwave oven.) But observers and theorists in every camp agree on this: to figure out what is causing FRBs, they need to find more of them.

“Right now, there are far more theories about what’s causing FRBs than FRBs themselves,” says Burke-Spolaor, who is now leading a search for FRBs with the Very Large Array (VLA), a network of radio telescopes in New Mexico.


    With more bursts in their catalog, astronomers will be able to draw more meaningful conclusions about how common FRBs are and how they are distributed across the sky. They will also be able to answer two critical questions: where the bursts are coming from, and what they look like in other parts of the electromagnetic spectrum.

So far, astronomers have localized each Parkes burst to a disc of sky that’s about a half-degree across—about the size of the full moon. To astronomers, that’s an enormous region: extend your vision out to the distance at which FRBs are expected to be going off, and that little patch of sky could contain hundreds of galaxies. Using the VLA, Burke-Spolaor should be able to pin down a burst’s location to a single galaxy. But first, she has to find one. Based on the number of FRBs that have been seen so far, she estimates that it will take about 600 hours of skywatching to have a solid chance of observing one. So far, she has a little under 200 hours down.

    Unlike the archival search that turned up the first FRB, Burke-Spolaor’s search campaign is attempting to catch FRBs in the act. That will give astronomers a chance to quickly swivel other telescopes to the same spot and potentially see the bursts giving off energy at other wavelengths. So far, only three FRBs have been caught in real time, including a May 14, 2014, burst observed at Parkes by a team of astronomers including Emily Petroff, a PhD student in astrophysics at Swinburne University of Technology in Melbourne, Australia. Within a few hours, a dozen other telescopes were watching the source of the burst at wavelengths ranging from X-rays to radio waves. But not one of them saw anything unusual. Papers on two more bursts, observed in February and April of this year, are currently being prepared for publication; astronomers followed up on those bursts with observations at multiple wavelengths, too, but haven’t yet announced the result of those studies.

    Meanwhile, Jayanth Chennamangalam, a former student of Lorimer’s who is now a post-doc at Oxford, is putting the finishing touches on a system that will scan each 100 microseconds of incoming radio data at the Arecibo dish in Puerto Rico for sudden, short pulses.


    The system, called ALFABURST, will piggyback on the latest iteration of SERENDIP, a spectrometer that has been tapping the Arecibo dish’s feed for years, listening for signals from extraterrestrial civilizations. Ultimately, it will be able to alert astronomers to unusual bursts within seconds—fast enough for rapid follow-up at other wavelengths.

    Will fast radio bursts turn out to be a window into new physics or just a new perspective on something more familiar? It’s too early to say. But for now, researchers can relish the moment of being maybe, just possibly, on the verge of finding something genuinely new to science.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 4:29 pm on May 28, 2015 Permalink | Reply
    Tags: Classical Mechanics, NOVA

    From NOVA: “Ultracold Experiment Could Solve One of Physics’s Biggest Contradictions” 



    28 May 2015
    Allison Eck

    A vortex structure emerges within a rotating Bose-Einstein condensate.

    There’s a mysterious threshold that’s predicted to exist beyond the limits of what we can see. It’s called the quantum-classical transition.

    If scientists were to find it, they’d be able to solve one of the most baffling questions in physics: why do a soccer ball and a ballet dancer obey Newtonian laws while the subatomic particles they’re made of behave according to quantum rules? Finding the bridge between the two could usher in a new era in physics.

    We don’t yet know how the transition from the quantum world to the classical one occurs, but a new experiment, detailed in Physical Review Letters, might give us the opportunity to learn more.

    The experiment involves cooling a cloud of rubidium atoms to the point that they become virtually motionless. Theoretically, if a cloud of atoms becomes cold enough, the wave-like (quantum) nature of the individual atoms will start to expand and overlap with one another. It’s sort of like circular ripples in a pond that, as they get bigger, merge to form one large ring. This phenomenon is more commonly known as a Bose-Einstein condensate, a state of matter in which subatomic particles are chilled to near absolute zero (0 Kelvin or −273.15° C) and coalesce into a single quantum object. That quantum object is so big (compared to the individual atoms) that it’s almost macroscopic—in other words, it’s encroaching on the classical world.

    The team of physicists cooled their cloud of atoms down to the nano-Kelvin range by trapping them in a magnetic “bowl.” To attempt further cooling, they then shot the cloud of atoms upward in a 10-meter-long pipe and let them free-fall from there, during which time the atom cloud expanded thermally. Then the scientists contained that expansion by sending another laser down onto the atoms, creating an electromagnetic field that kept the cloud from expanding further as it dropped. It created a kind of “cooling” effect, but not in the traditional way you might think—rather, the atoms have a lowered “effective temperature,” which is a measure of how quickly the atom cloud is spreading outward. At this point, then, the atom cloud can be described in terms of two separate temperatures: one in the direction of downward travel, and another in the transverse direction (perpendicular to the direction of travel).

    Here’s Chris Lee, writing for Ars Technica:

    “This is only the start though. Like all lenses, a magnetic lens has an intrinsic limit to how well it can focus (or, in this case, collimate) the atoms. Ultimately, this limitation is given by the quantum uncertainty in the atom’s momentum and position. If the lensing technique performed at these physical limits, then the cloud’s transverse temperature would end up at a few femtokelvin (10⁻¹⁵ K). That would be absolutely incredible.

    A really nice side effect is that combinations of lenses can be used like telescopes to compress or expand the cloud while leaving the transverse temperature very cold. It may then be possible to tune how strongly the atoms’ waves overlap and control the speed at which the transition from quantum to classical occurs. This would allow the researchers to explore the transition over a large range of conditions and make their findings more general.”

    Jason Hogan, assistant professor of physics at Stanford University and one of the study’s authors, told NOVA Next that you can understand this last part by using the Heisenberg Uncertainty Principle. As a quantum object’s uncertainty in momentum goes down, its uncertainty in position goes up. Hogan and his colleagues are essentially fine-tuning these parameters along two dimensions. If they can find a minimum uncertainty in the momentum (by cooling the particles as much as they can), then they could find the point at which the quantum-to-classical transition occurs. And that would be a spectacular discovery for the field of particle physics.
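Hogan’s point can be made quantitative with the uncertainty relation Δx·Δp ≥ ħ/2. Taking the momentum spread of a thermal cloud as roughly √(m·kB·T)—a back-of-the-envelope estimate for illustration, not a figure from the study—the femtokelvin regime mentioned above forces the minimum position spread up to around a millimeter, i.e., toward almost-classical scales:

```python
import math

HBAR = 1.054571817e-34                # reduced Planck constant, J*s
KB   = 1.380649e-23                   # Boltzmann constant, J/K
M_RB87 = 86.909 * 1.66053906660e-27   # mass of rubidium-87, kg

def min_position_spread(temp_kelvin):
    """Heisenberg-limited position uncertainty for a thermal momentum spread."""
    delta_p = math.sqrt(M_RB87 * KB * temp_kelvin)  # rough thermal estimate
    return HBAR / (2 * delta_p)

# At ~1 femtokelvin, the minimum quantum position spread is on the order
# of a millimeter -- large enough to start probing the quantum-classical
# divide with the naked eye, at least in principle.
print(min_position_spread(1e-15))  # ~1.2e-3 m
```

The trade-off runs both ways: squeezing the momentum uncertainty down (cooling) necessarily inflates the position uncertainty, which is exactly the knob Hogan and colleagues are tuning.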

    See the full article here.


  • richardmitnick 6:28 am on May 8, 2015 Permalink | Reply
    Tags: NOVA

    From NOVA: “Sneaking into the Brain with Nanoparticles” 



    12 Mar 2015
    Teal Burrell

    About a decade ago, Beverly Rzigalinski, a molecular biologist now at Virginia College of Osteopathic Medicine, was asked by a colleague to look into nanoparticles. “Nanoparticles? Yuck,” she thought. She off-handedly told a student to throw them on some neurons growing in the lab and take notes on what happened. She had no hope for the experiment, sure the nanoparticles would kill all the neurons, but at least she could say she tried.

    Rzigalinski was given cerium oxide nanoparticles to work with, ten-nanometer-wide particles derived from a rare earth metal. (A human hair, by comparison, is 100,000 nanometers wide.) No one had looked at their biological applications, and Rzigalinski was not particularly impressed with their résumé. Cerium oxide nanoparticles’ listed industrial uses included glass polishing and fuel combustion, nothing that seemed promising for neuroscience.

    A month and a half later, Rzigalinski noticed the dishes still sitting in the lab’s incubator. She marched straight over to the student, launching into a lecture about not wasting expensive resources on cells that were surely long dead. (Neurons in the lab typically stayed alive for only three weeks.) But the student assured her the cells treated with nanoparticles were still alive. Skeptically, she peered into the microscope and was surprised to find living, flourishing neurons. Rzigalinski has been studying nanoparticles ever since.

    Other neuroscientists are joining her, taking advantage of nanoparticles’ unique properties to identify new therapies, shuttle existing therapies into the brain, and examine the brain on a level and depth never before possible.

    Recyclable Antioxidants

    When treated with cerium oxide particles, Rzigalinski’s neurons survived for up to six months, nine times longer than usual. Cerium oxide nanoparticles may extend life in this way by neutralizing free radicals, unpaired electrons that are highly reactive and can damage DNA. The body’s defenses against free radicals may wear down with time; aging may be due in part to free radicals slowly accumulating unchecked.

    Damage induced by free radicals also contributes to a number of neurological diseases. Rzigalinski’s work is revealing how cerium oxide nanoparticles can prevent or reverse this destruction as well. Treating mouse models of Parkinson’s disease with cerium oxide nanoparticles prevented the loss of dopaminergic cells, whose death leads to the disease’s characteristic tremors and slow, shuffling gait. Cerium oxide nanoparticles also seemed to halt the free radical-triggered cascade of damage that typically follows traumatic brain injury; after injury, nanoparticle-treated mice had fewer signs of free radical damage and better memories than control-treated mice. Finally, when flies were administered nanoparticles following a stroke (in a timeframe analogous to receiving treatment upon arrival at a hospital), the treated flies not only lived longer but also performed better on fly-specific tasks, like quickly buzzing to the top of the cage.

    Antioxidants like vitamins C and E also sop up free radicals, but each antioxidant molecule only destroys one free radical. As Rzigalinski points out, the advantage of cerium oxide nanoparticles is that, “These nanoparticles are regenerative, so they’ll scavenge thousands, or hundreds of thousands, of free radicals.” Cerium oxide nanoparticles neutralize free radicals by snatching the electrons, shuffling them around, and eventually depositing them as water, restoring the nanoparticles to their original state, ready to abolish more free radicals. This recycling means the nanoparticles will continue working after a single dose. Rzigalinski found nanoparticles present as long as six months after injection in mice and, crucially, toxicity has not been an issue, since the dosage is so low. Single doses, or even low doses, can both prevent harmful side effects and keep costs down.

    Cerium oxide nanoparticles are effective because, after injection, they immediately get coated with proteins that help carry them into the heart, lungs, and brain—where they need to be to slash disease-causing free radicals. Not all drugs are so lucky.

    Trojan Horses

    The trouble with treating brain diseases is that the brain exists in a separate world, sealed off from the rest of the body. Cells are tightly packed around the brain’s blood vessels, forming the blood-brain barrier, a heavily guarded barricade separating the blood and its contents from the brain and spinal cord. This security system keeps bacterial infections and toxins in the blood from getting into the ultrasensitive brain. If small or fat-soluble enough, certain approved entities—like water, gases, alcohol, and some hormones—can leak across the border. Larger molecules require exclusive receptors to allow them through, a unique key that unlocks a particular door and grants them access.

    While it creates an extra level of protection against diseases originating outside the brain, the blood-brain barrier makes diseases within the brain far harder to treat. It’s a notorious nemesis of drug development, preventing an estimated 98% of potential treatments from getting in. Tomas Skrinskas of Precision NanoSystems—a biotechnology company specializing in delivering materials to cells—lamented, “The blood-brain barrier is probably the trickiest [challenge] there is.”

    In this image of the blood-brain barrier, green-stained glial cells surround the blood vessels (seen here in black), providing support for red-stained neurons. No image credit.

    To overcome this hurdle, one current solution involves flooding the blood with drugs, hoping a small proportion passes through by sheer force of will or strength in numbers. But ingesting lots of drugs can trigger nasty side effects elsewhere in the body. Another way to crack through the defenses is to hack into systems already in place for transporting small molecules. Enter nanoparticles.

    While some nanoparticles act as treatments, others play the role of Trojan horse: they pretend to be ordinary, recognized molecules, gain access through special receptors, and sneak the drugs with them as they pass through the restricted entry gates. Nate Vinzant, an undergraduate in Gina Forster’s lab at the University of South Dakota, is using iron oxide to smuggle anti-anxiety drugs into the brain.

    When injected directly into the brain, antisauvagine decreases anxiety in rats. However, direct injection into the brain isn’t a feasible treatment option for humans, and antisauvagine is incapable of passing from the blood to the brain on its own. To sneak it in, Vinzant attached antisauvagine to iron oxide nanoparticles, which are regularly taken into the brain via specific receptors. When hitched to iron, antisauvagine goes along for the ride because “the brain thinks it’s iron,” Vinzant says. Indeed, typically anxious rats given iron-bound antisauvagine displayed fewer signs of stress than untreated rats, confirming that the drug made its way from the injection site in the abdomen, through the blood, and across the barrier.

    More than just a boon for anxiety treatment, this research is a proof of principle. Other drugs can be tethered to nanoparticles like iron and use the same uptake mechanism.

    Remote Controls

    In addition to improving treatments, nanoparticles can also help researchers understand diseases and the brain in general. President Obama’s BRAIN Initiative, a program aiming to map the neurons and connections within the human brain, is initially focused on the development of novel technologies that may lead to future breakthroughs. This fall, Sarah Stanley, a post-doctoral researcher in Jeffrey Friedman’s lab at Rockefeller University, received one of the initiative’s first grants to develop technology that uses nanoparticles to control neurons.

    Stanley’s goal is to examine a diffuse network of neurons distributed throughout the brain. “We were really looking for a way of remotely modulating cells,” Stanley explains, but existing tools weren’t able to go deep or dispersed enough. For example, one popular new technique known as optogenetics, which uses light to activate neurons, wouldn’t work for Stanley’s project because light can’t penetrate very far into tissue. Another method involving uniquely designed drugs and receptors can’t be quickly turned on and off. So Stanley turned to nanoparticles.

    Ferritin nanoparticles bind and store iron, and Stanley genetically tweaked the nanoparticles to also associate with a temperature-sensitive channel. When the channel is heated, it opens, leading to the activation of certain genes.

    To generate heat, she used radio waves. Unlike light, radio waves freely penetrate tissue. They hit the ferritin nanoparticle, heating the iron core. The hot iron then heats the associated channel, causing it to open. Stanley tested the system by linking it to the production of insulin; when the radio waves heated the iron, the channel opened and the insulin gene was turned on, leading to a measurable increase in insulin. The nanoparticle is “basically acting as a sensor for radio waves,” says Stanley. It’s “transducing what would be entirely innocuous signals into enough energy to open the channel.”

    To optimize the system, Stanley first tested it in liver and stem cells of mice, but she is now moving into mouse neurons, intending to turn them on and off with her nanoparticle remote control. The radio waves’ penetration should allow researchers to use this technique to manipulate cells that are both deep and spread throughout the brain. “This tool will allow us to be able to modulate any cells in any [central nervous system] region at the same time in a freely moving mouse,” Stanley notes.

    For now, remotely controlling neurons in this way will only be used in research to discover more about these deep or dispersed networks. But eventually, it could potentially be combined with gene therapy to fine-tune protein levels. For example, in diseases with a mutated or dysfunctional gene, like Rett Syndrome, a developmental disorder causing movement and communication difficulties, gene therapy aims to replace the defective gene. Adding a functional gene isn’t always enough, however, as it must be adjusted to produce the appropriate amount of protein. Controlling the gene with radio waves and nanoparticles would allow doctors to carefully tweak the protein production.

    Although that’s a long way off, nanoparticles are claiming their spot in the future of neuroscience. In a press conference on innovative technologies at November’s Society for Neuroscience Conference in Washington, D.C., David Van Essen, a neuroscientist at Washington University in St. Louis, indirectly praised Stanley’s project. “It was really exciting to see earlier this fall that the [National Institutes of Health] has awarded about 50 new grants for some amazing, innovative ideas.” He then went on to introduce Rzigalinski’s research on Parkinson’s disease, mentioning how nanotechnology is a new tool providing hope for reversing devastating diseases.

    Neuroscientists may need to temper their excitement, however. Clinical trials for cancer treatments have stalled as some nanoparticles—including iron—have been found to generate free radicals, which can trigger cell death. But a compromise may be possible: iron nanoparticles are also being studied to enhance magnetic resonance imaging (MRI) signals and toxicity doesn’t seem to be an issue so long as the doses are kept low. If the drugs the nanoparticles carry with them are powerful enough, lower doses can be used and harmful side effects prevented.

    So far, cerium oxide nanoparticles have avoided this issue, but their relentless crusade against free radicals may lead to a different problem: free radicals are crucial to certain cellular processes, including the regulation of blood pressure and function of the immune system. The question of how much free radical scavenging is too much remains to be answered. But, considering the elevated levels of free radicals in disease, perhaps the nanoparticles will have their hands full lowering levels to a healthy range, let alone reducing them enough to cause trouble.

    It’s still too early to know whether nanoparticles will usher in a new wave of clinical treatments, but to many researchers, it’s clear that they show great promise. Rzigalinski, for example, has long since apologized to her student for her disbelieving rant. Small as they may be, nanoparticles have the ability to take on Goliath-sized tasks, bringing researchers deep inside the brain, past its defenses, ready to fight destructive forces in new ways.

    See the full article here.

