Tagged: Cosmology

  • richardmitnick 11:42 am on September 29, 2015 Permalink | Reply
Tags: Cosmology, THE Q CONTINUUM SIMULATION

    From AAS NOVA: “The Q Continuum Simulation” 


American Astronomical Society

    28 September 2015
    Susanna Kohler


Each frame in this image (click for the full view in the full article!) represents a different stage in the simulated evolution of our universe, ending at the present day in the rightmost panel. In a recently published paper, Katrin Heitmann (Argonne National Laboratory) and collaborators reveal the results from — and challenges inherent in — the largest cosmological simulation currently available: the Q Continuum simulation. Evolving a volume of (1300 Mpc)³, this massive N-body simulation tracks over half a trillion particles as they clump together as a result of their mutual gravity, imitating the evolution of our universe over the last 13.8 billion years. Cosmological simulations such as this one are important for understanding observations, testing analysis pipelines, investigating the capabilities of future observing missions, and much more. For more information and the original image (as well as several other awesome images!), see the paper below.

    Katrin Heitmann et al 2015 ApJS 219 34. doi:10.1088/0067-0049/219/2/34

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 8:18 pm on September 28, 2015 Permalink | Reply
Tags: Cosmology

    From NOVA: “Could the Universe Be Lopsided?” 



    28 Sep 2015
    Paul Halpern

    One hundred years ago, [Albert] Einstein re-envisioned space and time as a rippling, twisting, flexible fabric called spacetime. His theory of general relativity showed how matter and energy change the shape of this fabric. One might expect, therefore, that the fabric of the universe, strewn with stars, galaxies, and clouds of particles, would be like a college student’s dorm room: a mess of rumpled, crumpled garments.

    Indeed, if you look at the universe on the scale of stars, galaxies, and even galaxy clusters, you’ll find it puckered and furrowed by the gravity of massive objects. But take the wider view—the cosmologists’ view, which encompasses the entire visible universe—and the fabric of the universe is remarkably smooth and even, no matter which direction you turn. Look up, down, left, or right and count up the galaxies you see: you’ll find it’s roughly the same from every angle. The cosmic microwave background [CMB], the cooled-down relic of radiation from the early universe, demonstrates the same remarkable evenness on the very largest scale.

    Cosmic Background Radiation Planck
    CMB per ESA/Planck

    ESA Planck
    ESA/Planck satellite

    A computer simulation of the ‘cosmic web’ reveals the great filaments, made largely of dark matter, located in the space between galaxies. By NASA, ESA, and E. Hallman (University of Colorado, Boulder), via Wikimedia Commons

Physicists call a universe that appears roughly similar in all directions isotropic. Because the geometry of spacetime is shaped by the distribution of matter and energy, an isotropic universe must possess a geometric structure that looks the same in all directions as well. The only three such possibilities for three-dimensional spaces are positively curved (the surface of a hypersphere, like a beach ball but in a higher dimension), negatively curved (the surface of a hyperboloid, shaped like a saddle or potato chip), or flat. Russian physicist Alexander Friedmann, Belgian cleric and mathematician Georges Lemaître, and others incorporated these three geometries into some of the first cosmological solutions of Einstein’s equations. (By solutions, we mean mathematical descriptions of how the three spatial dimensions of the universe behave over time, given the type of geometry and the distribution of matter and energy.) Supplemented by the work of American physicist Howard Robertson and British mathematician Arthur Walker, this class of isotropic solutions has become the standard for descriptions of the universe in the Big Bang theory.
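For reference, this class of isotropic solutions is described by the Friedmann-Lemaître-Robertson-Walker line element; the standard textbook form (added here for clarity, not quoted in the article) is:

```latex
\[
  ds^2 = -c^2\,dt^2 + a(t)^2\left[\frac{dr^2}{1 - k r^2}
        + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right)\right],
  \qquad k \in \{+1,\, 0,\, -1\}
\]
```

Here a(t) is the scale factor that carries the expansion, and k selects the positively curved, flat, or negatively curved spatial geometry.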

However, in 1921 Edward Kasner—best known for coining the term “googol” for the number 1 followed by 100 zeroes—demonstrated that there was another class of solutions to Einstein’s equations: anisotropic, or “lopsided,” solutions.

Known as the Kasner solutions, these cosmic models describe a universe that expands in two directions while contracting in the third. That is clearly not the case with the actual universe, which has grown over time in all three directions. But the Kasner solutions become more intriguing when you apply them to a kind of theory called a Kaluza-Klein model, in which there are unseen extra dimensions beyond space and time. Thus space could theoretically have three expanding dimensions and a fourth, hidden, contracting dimension. Physicists Alan Chodos and Steven Detweiler explored this concept in their paper “Where has the fifth dimension gone?”
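For the record, the Kasner vacuum solution has a compact standard form (textbook notation, my addition rather than the article’s):

```latex
\[
  ds^2 = -dt^2 + t^{2p_1}\,dx^2 + t^{2p_2}\,dy^2 + t^{2p_3}\,dz^2,
  \qquad \sum_i p_i = \sum_i p_i^2 = 1
\]
```

The two constraints force one exponent to be negative (apart from the trivial case (1, 0, 0)), which is exactly why one direction contracts while the other two expand.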

    Kasner’s is far from the only anisotropic model of the universe. In 1951, physicist Abraham Taub applied the shape-shifting mathematics of Italian mathematician Luigi Bianchi to general relativity and revealed even more baroque classes of anisotropic solutions that expand, contract or pulsate differently in various directions. The most complex of these, categorized as Bianchi type-IX, turned out to have chaotic properties and was dubbed by physicist Charles Misner the “Mixmaster Universe” for its resemblance to the whirling, twirling kitchen appliance.

    Like a cake rising in a tray, while bubbling and quivering on the sides, the Mixmaster Universe expands and contracts, first in one dimension and then in another, while a third dimension just keeps expanding. Each oscillation is called a Kasner epoch. But then, after a certain number of pulses, the direction of pure expansion abruptly switches. The formerly uniformly expanding dimension starts pulsating, and one of those formerly pulsating starts uniformly expanding. It is as if the rising cake were suddenly turned on its side and another direction started rising instead, while the other directions, including the one that was previously rising, just bubbled.

    One of the weird things about the Mixmaster Universe is that if you tabulate the number of Kasner epochs in each era, before the behavior switches, it appears as random as a dice roll. For example, the universe might oscillate in two directions five times, switch, oscillate in two other directions 17 times, switch again, pulsate another way twice, and so forth—without a clear pattern. While the solution stems from deterministic general relativity, it seems unpredictable. This is called deterministic chaos.
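As a playful illustration of that dice-roll behavior, here is a toy sketch (my own, not from the article) of the Belinskii-Khalatnikov-Lifshitz “u-map” commonly used to model Mixmaster dynamics:

```python
# Toy BKL u-map (editorial sketch, not from the article): within an era,
# each Kasner epoch takes u -> u - 1; once u drops below 2, the era ends
# and a new one begins with u -> 1/(u - 1). The resulting epoch counts
# trace the continued-fraction expansion of the starting value, which is
# why a deterministic rule produces dice-roll-like output.
u = 3.141592653589793  # arbitrary irrational starting value

for era in range(5):
    epochs = 0
    while u >= 2:      # successive Kasner epochs within one era
        u -= 1
        epochs += 1
    print(f"era {era}: {epochs} epochs before the expanding direction switches")
    u = 1.0 / (u - 1)  # the bounce into a new era
```

Run it and the epoch counts per era come out looking as unpredictable as described above, even though every step is deterministic.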

    Could the early moments of the universe have been chaotic, and then somehow regularized over time, like a smoothed-out pudding? Misner initially thought so, until he realized that the Mixmaster Universe couldn’t smooth out on its own. However, it could have started out “lopsided,” then been stretched out during an era of ultra-rapid expansion called inflation until its irregularities were lost from sight.

    As cosmologists have collected data from instruments such as the Hubble Space Telescope, Planck Satellite, and WMAP satellite (now retired), the bulk of the evidence supports the idea that our universe is indeed isotropic.

    NASA Hubble Telescope
    NASA/ESA Hubble


But a minority of researchers have used measurements of the velocities of galaxies and other observations, such as an odd alignment of temperature fluctuations in the cosmic microwave background dubbed the “Axis of Evil,” to assert that the universe could be slightly irregular after all.

For example, starting in 2008, Alexander Kashlinsky, a researcher at NASA’s Goddard Space Flight Center, and his colleagues have statistically analyzed cosmic microwave background data gathered first by the WMAP satellite and later by the Planck satellite to show that, in addition to their motion due to cosmic expansion, many galaxy clusters seem to be heading toward a particular direction on the sky. He dubbed this phenomenon “dark flow” and suggested that it is evidence of a previously unseen cosmic anisotropy known as a “tilt.” Although the mainstream astronomical community has disputed Kashlinsky’s conclusion, he has continued to gather statistical evidence for dark flow and the idea of a tilted universe.

    Whether or not the universe really is “lopsided,” it is intriguing to study the rich range of solutions of Einstein’s general theory of relativity. Even if the preponderance of evidence today points to cosmic regularity, who knows when a new discovery might call that into question, and compel cosmologists to dust off alternative ideas. Such is the extraordinary flexibility of Einstein’s masterful theory: a century after its publication, physicists are still exploring its possibilities.

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 11:22 am on July 26, 2015 Permalink | Reply
Tags: Cosmology, Time Travel

    From RT: “Time-traveling photons connect general relativity to quantum mechanics” 

    RT Logo


    23 Jun, 2014
    No Writer Credit

    Space-time structure exhibiting closed paths in space (horizontal) and time (vertical). A quantum particle travels through a wormhole back in time and returns to the same location in space and time. (Photo credit: Martin Ringbauer)

    Scientists have simulated time travel by using particles of light acting as quantum particles sent away and then brought back to their original space-time location. This is a huge step toward marrying two of the most irreconcilable theories in physics.

Since traveling all the way to a black hole to see if an object you’re holding would bend, break or put itself back together in inexplicable ways is a bit of a trek, scientists have decided to look for a point of convergence between general relativity and quantum mechanics in lab conditions – and they have succeeded.

Australian researchers from the University of Queensland’s (UQ) School of Mathematics and Physics wanted to reconcile the discrepancies that exist between two of our most commonly accepted physics theories, which is no easy task: on the one hand, you have Einstein’s theory of general relativity, which predicts the behavior of massive objects like planets and galaxies; but on the other, you have something whose laws completely clash with Einstein’s – the theory of quantum mechanics, which describes our world at the molecular level. And this is where things get interesting: we still have no concrete idea of all the principles of movement and interaction that underpin this theory.

    Natural laws of space and time simply break down there.

    The light particles used in the study are known as photons, and in this University of Queensland study, they stood in for actual quantum particles for the purpose of finding out how they behaved while moving through space and time.

The team simulated the behavior of a single photon that travels back in time through a wormhole and meets its older self – an identical photon. “We used single photons to do this but the time-travel was simulated by using a second photon to play the part of the past incarnation of the time traveling photon,” said UQ Physics Professor Tim Ralph, as quoted by The Speaker.

    The findings were published in the journal Nature Communications and gained support from the country’s key institutions on quantum physics.

Some of the biggest examples of why the two approaches can’t be reconciled concern the so-called space-time loop. In 1949, Kurt Gödel showed that Einstein’s equations permit paths that travel back in time and return to their starting point in space and time. This presents a problem, commonly known as the ‘grandparents paradox’: if you were to travel back in time and prevent your grandparents from meeting, and in so doing prevent your own birth, the classical laws of physics would prevent you from being born.

But Tim Ralph pointed out that a 1991 proposal [by David Deutsch] showed such situations could be avoided by harnessing quantum mechanics’ flexible laws: “The properties of quantum particles are ‘fuzzy’ or uncertain to start with, so this gives them enough wiggle room to avoid inconsistent time travel situations,” he said.

    There are still ways in which science hasn’t tested the meeting points between general relativity and quantum mechanics – such as when relativity is tested under extreme conditions, where its laws visibly seem to bend, just like near the event horizon of a black hole.

    But since it’s not really easy to approach one, the UQ scientists were content with testing out these points of convergence on photons.

    “Our study provides insights into where and how nature might behave differently from what our theories predict,” Professor Ralph said.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 1:19 pm on July 20, 2015 Permalink | Reply
Tags: Cosmology

    From NOVA: “Black Holes Could Turn You Into a Hologram, and You Wouldn’t Even Notice” 



    01 Jul 2015
    Tim De Chant

Black holes may have fuzzy surfaces rather than event horizons.

    Few things are as mysterious as black holes. Except, of course, what would happen to you if you fell into one.

Physicists have been debating what might happen to anyone unfortunate enough to slip toward the singularity, and so far, they’ve come up with approximately 2.5 ways you might die, from being stretched like spaghetti to being burnt to a crisp.

The fiery hypothesis is a product of the firewall theory, which grew out of Stephen Hawking’s finding that black holes eventually evaporate, destroying everything inside. But this violates a fundamental principle of physics—that information cannot be destroyed—so other physicists, including Samir Mathur, have been searching for ways to resolve the paradox.

    Here’s Marika Taylor, writing for The Conversation:

The general relativity description of black holes suggests that once you go past the event horizon, the surface of a black hole, you can go deeper and deeper. As you do, space and time become warped until they reach a point called the “singularity,” at which point the laws of physics cease to exist. (Although in reality, you would die pretty early on in this journey as you are pulled apart by intense tidal forces.)

    In Mathur’s universe, however, there is nothing beyond the fuzzy event horizon.

    Mathur’s take on black holes suggests that they aren’t surrounded by a point-of-no-return event horizon or a firewall that would incinerate you, but a fuzzball with small variations that maintain a record of the information that fell into it. What does touch the fuzzball is converted into a hologram. It’s not a perfect copy, but a doppelgänger of sorts.

Perhaps more bizarrely, you wouldn’t even be aware of the transformation. Say you were to be sucked toward a black hole. At the point where you’d normally hit the event horizon, Mathur says, you’d instead touch the fuzzy surface. But you wouldn’t notice anything: the fuzzy surface would appear like any other part of space immediately around you. Everything would seem the same as it was, except that you’d be a hologram.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 4:24 pm on July 19, 2015 Permalink | Reply
Tags: Cosmology

    From WIRED: “Chemists Invent New Letters for Nature’s Genetic Alphabet” 

    Wired logo


    Emily Singer

    Olena Shmahalo/Quanta Magazine

    DNA stores our genetic code in an elegant double helix.

    The structure of the DNA double helix. The atoms in the structure are colour-coded by element and the detailed structure of two base pairs are shown in the bottom right.

    But some argue that this elegance is overrated. “DNA as a molecule has many things wrong with it,” said Steven Benner, an organic chemist at the Foundation for Applied Molecular Evolution in Florida.

Nearly 30 years ago, Benner sketched out better versions of both DNA and its chemical cousin RNA, adding new letters and other modifications that would expand their repertoire of chemical feats.

    A hairpin loop from a pre-mRNA. Highlighted are the nucleobases (green) and the ribose-phosphate backbone (blue). Note that this is a single strand of RNA that folds back upon itself.

    He wondered why these improvements haven’t occurred in living creatures. Nature has written the entire language of life using just four chemical letters: G, C, A and T. Did our genetic code settle on these four nucleotides for a reason? Or was this system one of many possibilities, selected by simple chance? Perhaps expanding the code could make it better.

    Benner’s early attempts at synthesizing new chemical letters failed. But with each false start, his team learned more about what makes a good nucleotide and gained a better understanding of the precise molecular details that make DNA and RNA work. The researchers’ efforts progressed slowly, as they had to design new tools to manipulate the extended alphabet they were building. “We have had to re-create, for our artificially designed DNA, all of the molecular biology that evolution took 4 billion years to create for natural DNA,” Benner said.

    Now, after decades of work, Benner’s team has synthesized artificially enhanced DNA that functions much like ordinary DNA, if not better. In two papers published in the Journal of the American Chemical Society last month, the researchers have shown that two synthetic nucleotides called P and Z fit seamlessly into DNA’s helical structure, maintaining the natural shape of DNA. Moreover, DNA sequences incorporating these letters can evolve just like traditional DNA, a first for an expanded genetic alphabet.

    The new nucleotides even outperform their natural counterparts. When challenged to evolve a segment that selectively binds to cancer cells, DNA sequences using P and Z did better than those without.

    “When you compare the four-nucleotide and six-nucleotide alphabet, the six-nucleotide version seems to have won out,” said Andrew Ellington, a biochemist at the University of Texas, Austin, who was not involved in the study.

    Benner has lofty goals for his synthetic molecules. He wants to create an alternative genetic system in which proteins—intricately folded molecules that perform essential biological functions—are unnecessary. Perhaps, Benner proposes, instead of our standard three-component system of DNA, RNA and proteins, life on other planets evolved with just two.

    Better Blueprints for Life

The primary job of DNA is to store information. Its sequence of letters contains the blueprints for building proteins. Our current four-letter alphabet encodes 20 amino acids, which are strung together to create millions of different proteins. But a six-letter alphabet could encode as many as 216 possible amino acids (three-letter codons drawn from six letters allow 6³ = 216 combinations, versus 4³ = 64 for four letters) and many, many more possible proteins.
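The arithmetic behind those numbers is simple combinatorics, sketched below (an editorial illustration, not code from the article):

```python
# Codons are 3-letter words, so an alphabet of n letters yields n**3 codons.
# Four letters give 4**3 = 64 codons, which nature maps (redundantly) onto
# 20 amino acids plus stop signals; six letters give 6**3 = 216 codons.
for letters in (4, 6):
    print(f"{letters}-letter alphabet -> {letters ** 3} possible codons")
```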

    Expanding the genetic alphabet dramatically expands the number of possible amino acids and proteins that cells can build, at least in theory. The existing four-letter alphabet produces 20 amino acids (small circle) while a six-letter alphabet could produce 216 possible amino acids. Olena Shmahalo/Quanta Magazine

    Why nature stuck with four letters is one of biology’s fundamental questions. Computers, after all, use a binary system with just two “letters”—0s and 1s. Yet two letters probably aren’t enough to create the array of biological molecules that make up life. “If you have a two-letter code, you limit the number of combinations you get,” said Ramanarayanan Krishnamurthy, a chemist at the Scripps Research Institute in La Jolla, Calif.

    On the other hand, additional letters could make the system more error prone. DNA bases come in pairs—G pairs with C and A pairs with T. It’s this pairing that endows DNA with the ability to pass along genetic information. With a larger alphabet, each letter has a greater chance of pairing with the wrong partner, and new copies of DNA might harbor more mistakes. “If you go past four, it becomes too unwieldy,” Krishnamurthy said.

    But perhaps the advantages of a larger alphabet can outweigh the potential drawbacks. Six-letter DNA could densely pack in genetic information. And perhaps six-letter RNA could take over some of the jobs now handled by proteins, which perform most of the work in the cell.

    Proteins have a much more flexible structure than DNA and RNA and are capable of folding into an array of complex shapes. A properly folded protein can act as a molecular lock, opening a chamber only for the right key. Or it can act as a catalyst, capturing and bringing together different molecules for chemical reactions.

    Adding new letters to RNA could give it some of these abilities. “Six letters can potentially fold into more, different structures than four letters,” Ellington said.

    Back when Benner was sketching out ideas for alternative DNA and RNA, it was this potential that he had in mind. According to the most widely held theory of life’s origins, RNA once performed both the information-storage job of DNA and the catalytic job of proteins. Benner realized that there are many ways to make RNA a better catalyst.

    “With just these little insights, I was able to write down the structures that are in my notebook as alternatives that would make DNA and RNA better,” Benner said. “So the question is: Why did life not make these alternatives? One way to find out was to make them ourselves, in the laboratory, and see how they work.”

    Steven Benner’s lab notebook from 1985 outlining plans to synthesize “better” DNA and RNA by adding new chemical letters. Courtesy of Steven Benner

    It’s one thing to design new codes on paper, and quite another to make them work in real biological systems. Other researchers have created their own additions to the genetic code, in one case even incorporating new letters into living bacteria. But these other bases fit together a bit differently from natural ones, stacking on top of each other rather than linking side by side. This can distort the shape of DNA, particularly when a number of these bases cluster together. Benner’s P-Z pair, however, is designed to mimic natural bases.

    One of the new papers by Benner’s team shows that Z and P are yoked together by the same chemical bond that ties A to T and C to G. (This bond is known as Watson-Crick pairing, after the scientists who discovered DNA’s structure.) Millie Georgiadis, a chemist at Indiana University-Purdue University Indianapolis, along with Benner and other collaborators, showed that DNA strands that incorporate Z and P retain their proper helical shape if the new letters are strung together or interspersed with natural letters.

    “This is very impressive work,” said Jack Szostak, a chemist at Harvard University who studies the origin of life, and who was not involved in the study. “Finding a novel base pair that does not grossly disrupt the double-helical structure of DNA has been quite difficult.”

    The team’s second paper demonstrates how well the expanded alphabet works. Researchers started with a random library of DNA strands constructed from the expanded alphabet and then selected the strands that were able to bind to liver cancer cells but not to other cells. Of the 12 successful binders, the best had Zs and Ps in their sequences, while the weakest did not.

    “More functionality in the nucleobases has led to greater functionality in nucleic acids themselves,” Ellington said. In other words, the new additions appear to improve the alphabet, at least under these conditions.

    But additional experiments are needed to determine how broadly that’s true. “I think it will take more work, and more direct comparisons, to be sure that a six-letter version generally results in ‘better’ aptamers [short DNA strands] than four-letter DNA,” Szostak said. For example, it’s unclear whether the six-letter alphabet triumphed because it provided more sequence options or because one of the new letters is simply better at binding, Szostak said.

    Benner wants to expand his genetic alphabet even further, which could enhance its functional repertoire. He’s working on creating a 10- or 12-letter system and plans to move the new alphabet into living cells. Benner’s and others’ synthetic molecules have already proved useful in medical and biotech applications, such as diagnostic tests for HIV and other diseases. Indeed, Benner’s work helped to found the burgeoning field of synthetic biology, which seeks to build new life, in addition to forming useful tools from molecular parts.

    Why Life’s Code Is Limited

    Benner’s work and that of other researchers suggests that a larger alphabet has the capacity to enhance DNA’s function. So why didn’t nature expand its alphabet in the 4 billion years it has had to work on it? It could be because a larger repertoire has potential disadvantages. Some of the structures made possible by a larger alphabet might be of poor quality, with a greater risk of misfolding, Ellington said.

    Nature was also effectively locked into the system at hand when life began. “Once [nature] has made a decision about which molecular structures to place at the core of its molecular biology, it has relatively little opportunity to change those decisions,” Benner said. “By constructing unnatural systems, we are learning not only about the constraints at the time that life first emerged, but also about constraints that prevent life from searching broadly within the imagination of chemistry.”

    The genetic code—made up of the four letters, A, T, G and C—stores the blueprint for proteins. DNA is first transcribed into RNA and then translated into proteins, which fold into specific shapes. Olena Shmahalo/Quanta Magazine

    Benner aims to make a thorough search of that chemical space, using his discoveries to make new and improved versions of both DNA and RNA. He wants to make DNA better at storing information and RNA better at catalyzing reactions. He hasn’t shown directly that the P-Z base pairs do that. But both bases have the potential to help RNA fold into more complex structures, which in turn could make proteins better catalysts. P has a place to add a “functional group,” a molecular structure that helps folding and is typically found in proteins. And Z has a nitro group, which could aid in molecular binding.

    In modern cells, RNA acts as an intermediary between DNA and proteins. But Benner ultimately hopes to show that the three-biopolymer system—DNA, RNA and proteins—that exists throughout life on Earth isn’t essential. With better-engineered DNA and RNA, he says, perhaps proteins are unnecessary.

    Indeed, the three-biopolymer system may have drawbacks, since information flows only one way, from DNA to RNA to proteins. If a DNA mutation produces a more efficient protein, that mutation will spread slowly, as organisms without it eventually die off.

    What if the more efficient protein could spread some other way, by directly creating new DNA? DNA and RNA can transmit information in both directions. So a helpful RNA mutation could theoretically be transformed into beneficial DNA. Adaptations could thus lead directly to changes in the genetic code.

    Benner predicts that a two-biopolymer system would evolve faster than our own three-biopolymer system. If so, this could have implications for life on distant planets. “If we find life elsewhere,” he said, “it would likely have the two-biopolymer system.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 10:31 am on July 18, 2015 Permalink | Reply
Tags: Cosmology

    From NOVA: “How Time Got Its Arrow” 



    15 Jul 2015

    Lee Smolin, Perimeter Institute for Theoretical Physics

    I believe in time.

I haven’t always believed in it. Like many physicists and philosophers, I had once concluded from general relativity and quantum gravity that time is not a fundamental aspect of nature, but instead emerges from another, deeper description. Then, starting in the 1990s and accelerated by an eight-year collaboration with the Brazilian philosopher Roberto Mangabeira Unger, I came to believe instead that time is fundamental. (How I came to this is another story.) Now, I believe that by taking time to be fundamental, we might be able to understand how general relativity and the standard model emerge from a deeper theory, why time only goes one way, and how the universe was born.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    Flickr user Robert Couse-Baker, adapted under a Creative Commons license.

    The story starts with change. Science, most broadly defined, is the systematic study of change. The world we observe and experience is constantly changing. And most of the changes we observe are irreversible. We are born, we grow, we age, we die, as do all living things. We remember the past and our actions influence the future. Spilled milk is hard to clean up; a cool drink or a hot bath tend towards room temperature. The whole world, living and non-living, is dominated by irreversible processes, as captured mathematically by the second law of thermodynamics, which holds that the entropy of a closed system usually increases and seldom decreases.

    It may come as a surprise, then, that physics regards this irreversibility as a cosmic accident. The laws of nature as we know them are all reversible when you change the direction of time. Film a process described by those laws, and then run the movie backwards: the rewound version is also allowed by the laws of physics. To be more precise, you may have to change left for right and particles for antiparticles, along with reversing the direction of time, but the standard model of particle physics predicts that the original process and its reverse are equally likely.

    The same is true of Einstein’s theory of general relativity, which describes gravity and cosmology. If the whole universe were observed to run backwards in time, so that it heated up while it collapsed, rather than cooled as it expanded, that would be equally consistent with these fundamental laws, as we currently understand them.

    This leads to a fundamental question: Why, if the laws are reversible, is the universe so dominated by irreversible processes? Why does the second law of thermodynamics hold so universally?

    Gravity is one part of the answer. The second law tells us that the entropy of a closed system, which is a measure of disorder or randomness in the motions of the atoms making up that system, will most likely increase until a state of maximum disorder is reached. This state is called equilibrium. Once it is reached, the system is as mixed as possible, so all parts have the same temperature and all the elements are equally distributed.

But on large scales, the universe is far from equilibrium. Galaxies like ours are continually forming stars, turning nuclear potential energy into heat and light, as they drive the irreversible flows of energy and materials that characterize the galactic disks. On these large scales, gravity fights the decay to equilibrium by causing matter to clump, creating subsystems like stars and planets. This is beautifully illustrated in some recent papers by Barbour, Koslowski and Mercati.

But this is only part of the answer to why the universe is out of equilibrium. There remains the mystery of why the universe at the big bang was not created in equilibrium to start with, for the picture of the universe given to us by observations requires that the universe be created in an extremely improbable state—very far from equilibrium. Why?

    So when we say that our universe started off in a state far from equilibrium, we are saying that it started off in a state that would be very improbable, were the initial state chosen randomly from the set of all possible states. Yet we must accept this vast improbability to explain the ubiquity of irreversible processes in our world in terms of the reversible laws we know.

    In particular, the conditions present in the early universe, being far from equilibrium, are highly irreversible. Run the early universe backwards to a big crunch and they look nothing like the late universe that might be in our future.

    In 1979 Roger Penrose proposed a radical answer to the mystery of irreversibility. His proposal concerned quantum gravity, the long-searched-for unification of all the known laws, which is believed to govern the processes that created the universe in the big bang—or transformed it from whatever state it was in before the big bang.

    Penrose hypothesized that quantum gravity, as the most fundamental law, will be unlike the laws we know in that it will be irreversible. The known laws, along with their time-reversibility, emerge as approximations to quantum gravity when the universe grows large and cool and dilute, Penrose argued. But those approximate laws will act within a universe whose early conditions were set up by the more fundamental, irreversible laws. In this way the improbability of the early conditions can be explained.

    In the intervening years our knowledge of the early universe has been dramatically improved by a host of cosmological observations, but these have only deepened the mysteries we have been discussing. So a few years ago, Marina Cortes, a cosmologist from the Institute for Astronomy in Edinburgh, and I decided to revive Penrose’s suggestion in the light of all the knowledge gained since, both observationally and theoretically.

    Dr. Cortes argued that time is not only fundamental but fundamentally irreversible. She proposed that the universe is made of processes that continuously generate new events from present events. Events happen, but cannot unhappen. The reversal of an event does not erase that event, Cortes says: It is a new event, which happens after it.

    In December of 2011, Dr. Cortes began a three-month visit to Perimeter Institute, where I work, and challenged me to collaborate with her on realizing these ideas. The first result was a model we developed of a universe created by events, which we called an energetic causal set model.

    This is a version of a kind of model called a causal set model, in which the history of the universe is considered to be a discrete set of events related only by cause-and-effect. Our model was different from earlier models, though. In it, events are created by a process which maximizes their uniqueness. More precisely, the process produces a universe created by events, each of which is different from all the others. Space is not fundamental, only the events and the causal process that creates them are fundamental. But if space is not fundamental, energy is. The events each have a quantity of energy, which they gain from their predecessors and pass on to their successors. Everything else in the world emerges from these events and the energy they convey.
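To make that bookkeeping concrete, here is a deliberately crude toy (my own sketch, emphatically not the Cortes-Smolin model): events are generated from existing events and hand part of their energy to their successors.

```python
import random

# Toy causal-set growth (editorial sketch, NOT the energetic causal set
# model of Cortes and Smolin): each new event has two parent events and
# receives half of each parent's energy, so total energy is conserved.
random.seed(1)
events = [{"id": 0, "energy": 1.0, "parents": []},
          {"id": 1, "energy": 1.0, "parents": []}]

for i in range(2, 10):
    parents = random.sample(events[-4:], 2)          # new events arise from recent ones
    energy = sum(p["energy"] for p in parents) / 2   # energy handed to the successor
    for p in parents:
        p["energy"] /= 2                             # each parent passes half of its energy on
    events.append({"id": i, "energy": energy,
                   "parents": [p["id"] for p in parents]})

for e in events:
    print(f'event {e["id"]} <- parents {e["parents"]}, energy {e["energy"]:.3f}')
```

The real model maximizes the uniqueness of events rather than sampling them at random; the toy only illustrates the cause-and-effect structure and the energy conveyed from predecessors to successors.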

    We studied the model universes created by these processes and found that they generally pass through two stages of evolution. In the first stage, they are dominated by the irreversible processes that create the events, each unique. The direction of time is clear. But this gives rise to a second stage in which trails of events appear to propagate, creating emergent notions of particles. Particles emerge only when the second, approximately reversible stage is reached. These emergent particles propagate and appear to interact through emergent laws which seem reversible. In fact, we found, there are many possible models in which particles and approximately reversible laws emerge after a time from a more fundamental irreversible, particle-free system.

    This might explain how general relativity and the standard model emerged from a more fundamental theory, as Penrose hypothesized. Could we, we wondered, start with general relativity and, staying within the language of that theory, modify it to describe an irreversible theory? This would give us a framework to bridge the transition between the early, irreversible stage and the later, reversible stage.

    In a recent paper, Marina Cortes, PI postdoc Henrique Gomes and I showed one way to modify general relativity in a way that introduces a preferred direction of time, and we explored the possible consequences for the cosmology of the early universe. In particular, we showed that there were analogies of dark matter and dark energy, but which introduce a preferred direction of time, so a contracting universe is no longer the time-reverse of an expanding universe.

    To do this we had to first modify general relativity to include a physically preferred notion of time. Without that there is no notion of reversing time. Fortunately, such a modification already existed. Called shape dynamics, it had been proposed in 2011 by three young people, including Gomes. Their work was inspired by Julian Barbour, who had proposed that general relativity could be reformulated so that a relativity of size substituted for a relativity of time.

    Using the language of shape dynamics, Cortes, Gomes and I found a way to gently modify general relativity so that little is changed on the scale of stars, galaxies and planets. Nor are the predictions of general relativity regarding gravitational waves affected. But on the scale of the whole universe, and for the early universe, there are deviations where one cannot escape the consequences of a fundamental direction of time.

    Very recently I found still another way to modify the laws of general relativity to make them irreversible. General relativity incorporates effects of two fixed constants of nature, Newton’s constant, which measures the strength of the gravitational force, and the cosmological constant [usually denoted by the Greek capital letter lambda: Λ], which measures the density of energy in empty space. Usually these both are fixed constants, but I found a way they could evolve in time without destroying the beautiful harmony and consistency of the Einstein equations of general relativity.

    These developments are very recent and are far from demonstrating that the irreversibility we see around us is a reflection of a fundamental arrow of time. But they open a way to an understanding of how time got its direction that does not rely on our universe being a consequence of a cosmic accident.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 2:23 pm on March 1, 2015 Permalink | Reply
Tags: Cosmology

    From Daily Galaxy: “Our Observed Universe is a Tiny Corner of an Enormous Cosmos –‘Ruled by Dark Energy'” 

The Daily Galaxy

    March 01, 2015
    No Writer Credit


“This new concept is, potentially, as drastic an enlargement of our cosmic perspective as the shift from pre-Copernican ideas to the realization that the Earth is orbiting a typical star on the edge of the Milky Way.” – Sir Martin Rees, physicist, Cambridge University, Astronomer Royal of Great Britain.

Our universe may be merely a part of an enormous cosmos containing diverse regions, each with the right amount of dark energy and each larger than the observed universe, according to Raphael Bousso, Professor of Theoretical Physics, University of California, Berkeley, and Leonard Susskind, Felix Bloch Professor of Physics, Stanford University. The two theorize that information can leak from our causal patch into others, allowing our part of the universe to “decohere” into one state or another, resulting in the universe that we observe.

The many worlds interpretation of quantum mechanics is the idea that all possible alternate histories of the universe actually exist. At every point in time, the universe splits into a multitude of existences in which every possible outcome of each quantum process actually happens. The reason many physicists love the many worlds idea is that it explains away all the strange paradoxes of quantum mechanics.

    Putting the many world interpretation aside for a moment, another strange idea in modern physics is the idea that our universe was born along with a large, possibly infinite, number of other universes. So our cosmos is just one tiny corner of a much larger multiverse.

Susskind and Bousso have put forward the idea that the multiverse and the many worlds interpretation of quantum mechanics are formally equivalent, but only if both quantum mechanics and the multiverse take special forms.

Let’s take quantum mechanics first. Susskind and Bousso propose that it is possible to verify the predictions of quantum mechanics. In theory, this could be done if an observer could perform an infinite number of experiments and observe the outcome of them all. That is possible, they argue, in a special kind of universe known as the supersymmetric multiverse with vanishing cosmological constant.

If the universe takes this form, then at each instant in time an infinite (or very large) number of experiments take place within the causal horizon of each other. As observers, we are capable of seeing the outcome of any of these experiments, but we actually follow only one.

    Bousso and Susskind argue that since the many worlds interpretation is possible only in their supersymmetric multiverse, they must be equivalent. “We argue that the global multiverse is a representation of the many-worlds in a single geometry,” they say, calling this new idea the multiverse interpretation of quantum mechanics.

But we have now entered the realm of what mathematical physicist Peter Woit of Columbia calls “Not Even Wrong,” because the theory lacks a testable prediction that would help physicists distinguish it experimentally from other theories of the universe. And without this crucial element, the multiverse interpretation of quantum mechanics is little more than philosophy, according to Woit.

What this new supersymmetric multiverse interpretation does have is simplicity: it’s neat and elegant that the many worlds and the multiverse are equivalent. Ockham’s razor is satisfied, and no doubt many quantum physicists delight in what appears to be an exciting, plausible interpretation of ultimate, if currently untestable, reality.

    Ref: arxiv.org/abs/1105.3796: The Multiverse Interpretation of Quantum Mechanics

    The Daily Galaxy via technologyreview.com

    Image credit: hellstormde.deviantart.com

    See the full article here.

Please help promote STEM in your local schools.


    STEM Education Coalition

  • richardmitnick 5:19 am on February 25, 2015 Permalink | Reply
Tags: Cosmology

    From NOVA: “Stephen Hawking Serves Up Scrambled Black Holes” 



    04 Feb 2014
    Greg Kestin

    Out of the firewall and into the frying pan? Credit: Flickr/Pheexies, under a Creative Commons license.

    Toast or spaghetti?

    That’s the question that physicists have been trying to answer for the last year and a half. After agreeing for decades that anything—or anyone—unlucky enough to fall into a black hole would be ripped and stretched into spaghetti-like strands by the overwhelming gravity, theorists are now contending with the possibility that infalling matter is instead incinerated by a “toasty” wall of fire at the black hole’s horizon. Now, Stephen Hawking has proposed a radical solution: nixing one of the most infamous characteristics of a black hole, its event horizon, or point of no return.

    Stephen Hawking

    The original “spaghetti” scenario follows directly from [Albert] Einstein’s theory of general relativity, which describes how gravity stretches the fabric of space and time. A black hole warps that fabric into a bottomless pit; if you get too close, you reach a point of no return called the horizon, where the slope becomes so steep that you can never climb back out. Inside, the gravity gets stronger and stronger until it tears you limb from limb.

    The first hint that there was a flaw in this picture of a black hole came in 1975, when Stephen Hawking came upon a paradox. He realized that, over a very long time, a black hole will “evaporate”—that is, its mass and energy will gradually leak out as radiation, revealing nothing of what the black hole once contained. This was a shocking conclusion because it suggested that black holes destroy information, a fundamental violation of quantum mechanics, which insists that information be conserved.

    How exactly does black hole evaporation imply that information is destroyed? Let’s say you are reading the last copy of “Romeo and Juliet,” and when you get to the end, grief overcomes you (sorry for the spoiler) and you throw the book into a black hole. After the book falls past the horizon, gravity shreds its pages, and finally it is violently compressed into the central point of the black hole. Then you wait as the black hole slowly evaporates by randomly shooting off particles from its glowing edges without any concern for Romeo or Juliet. As the black hole winks out of existence, only these random subatomic particles remain, floating in space. Where did the Montagues and Capulets go? They are lost forever. You could have thrown in “The Cat in The Hat” and the particles left after evaporation would be indistinguishable from the Shakespearian remnants.

    Hawking realized that something had to give. Either quantum mechanics had to change to accommodate information loss, or Einstein’s theory of gravity was flawed.

    Over the past 40 years theorists have battled in the “black hole wars,” trying to resolve this paradox. Two decades ago, most physicists declared a truce, agreeing to consider the inside and the outside of the black hole as separate spaces. If something falls into the black hole, it has gone to another realm, so just stop thinking about it and its fate, they counseled. This argument was largely accepted until July 2012, when UC Santa Barbara physicist Joseph Polchinski and his colleagues realized the paradox was even more puzzling.

    Polchinski began with a similar thought experiment, but instead of Shakespeare, he imagined tossing entangled particles (particles that are quantum mechanically linked) toward a black hole. What happens, he asked, if one particle falls in the black hole and the other flies out into space? This creates a big problem: We can’t think of the two realms (inside and outside of the black hole) separately because they are tied together by the entangled particles.

    Polchinski proposed a new solution that ripped apart Einstein’s idea of a black hole—literally. If there were something to prevent entanglement across the horizon, he thought, then there would be no problem. So he came up with something called a firewall: a wall of radiation at the black hole’s horizon that burns up anything that hits it. This wall is a tear in space-time that nothing can go through.

    Is incineration finally the solution to the black hole information paradox? The father of the paradox, Stephen Hawking, recently put in his two cents (two pages, actually) in a very brief paper in which he argues against not just firewalls, but also event horizons as an ultimate point-of-no-return. This argument relies on quantum fluctuations in space-time that prevent a horizon from existing at a sharp boundary. He instead proposes a temporary “apparent horizon” that stores matter/energy (and information), chaotically scrambles it, and radiates it back out. This means that, as far as quantum mechanics is concerned, information is not lost; it is just extremely garbled. As Polchinski describes it, “It almost sounds like he is replacing the firewall with a chaos-wall!”

    Are you skeptical? If so, you are in good company. Polchinski, for one, is hesitant, saying “It is not clear what [Hawking’s] picture is. There are no calculations.”

    Steve Giddings, a theoretical physicist at the University of California, Santa Barbara, shares in this reluctance:

    “The big question has been how information escapes a black hole, and what that tells us about faster-than-light signaling or a more serious breakdown of spacetime; the effects Hawking describes don’t appear sufficient to address this.”

    Hawking’s new idea will need some flesh on its bones before we can truly embrace it, but if you don’t like spaghetti or toast, at least you have a third option now: scrambled black holes.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 4:32 am on February 25, 2015 Permalink | Reply
Tags: Cosmology

    From phys.org: “How can space travel faster than the speed of light?” 


    Feb 23, 2015
    Vanessa Janek

    Light speed is often spoken of as a cosmic speed limit… but not everything plays by these rules. In fact, space itself can expand faster than a photon could ever hope to travel.

    Cosmologists are intellectual time travelers. Looking back over billions of years, these scientists are able to trace the evolution of our Universe in astonishing detail. 13.8 billion years ago, the Big Bang occurred. Fractions of a second later, the fledgling Universe expanded exponentially during an incredibly brief period of time called inflation. Over the ensuing eons, our cosmos has grown to such an enormous size that we can no longer see the other side of it.

    But how can this be? If light’s velocity marks a cosmic speed limit, how can there possibly be regions of spacetime whose photons are forever out of our reach? And even if there are, how do we know that they exist at all?

    The Expanding Universe

Like everything else in physics, our Universe strives to exist in the lowest possible energy state. But around 10⁻³⁶ seconds after the Big Bang, inflationary cosmologists believe, the cosmos found itself resting instead at a “false vacuum energy” – a low-point that wasn’t really a low-point. Seeking the true nadir of vacuum energy, over a minute fraction of a moment, the Universe is thought to have ballooned by a factor of 10⁵⁰.

    Since that time, our Universe has continued to expand, but at a much slower pace. We see evidence of this expansion in the light from distant objects. As photons emitted by a star or galaxy propagate across the Universe, the stretching of space causes them to lose energy. Once the photons reach us, their wavelengths have been redshifted in accordance with the distance they have traveled.

    This is why cosmologists speak of redshift as a function of distance in both space and time. The light from these distant objects has been traveling for so long that, when we finally see it, we are seeing the objects as they were billions of years ago.
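In standard notation (a textbook relation, added here for clarity rather than quoted from the article), the stretching is captured by the scale factor a(t):

```latex
\[
  1 + z = \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{emit}}}
        = \frac{a(t_{\mathrm{obs}})}{a(t_{\mathrm{emit}})}
\]
```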

    The Hubble Volume

Redshifted light allows us to see objects like galaxies as they existed in the distant past; but we cannot see all events that occurred in our Universe during its history. Because our cosmos is expanding, some objects are simply too far away for their light ever to reach us.

    The physics of that boundary rely, in part, on a chunk of surrounding spacetime called the Hubble volume. Here on Earth, we define the Hubble volume by measuring something called the Hubble parameter (H0), a value that relates the apparent recession speed of distant objects to their redshift. It was first calculated in 1929, when Edwin Hubble discovered that faraway galaxies appeared to be moving away from us at a rate that was proportional to the redshift of their light.

Dividing the speed of light by H0 gives the radius of the Hubble volume. This spherical bubble encloses a region where all objects move away from a central observer at speeds less than the speed of light. Correspondingly, all objects outside of the Hubble volume move away from the center faster than the speed of light.
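As a back-of-the-envelope check (my numbers, assuming H0 ≈ 70 km/s/Mpc, a value the article does not quote):

```python
# Hubble radius: the distance at which the recession speed v = H0 * d
# reaches the speed of light, i.e. r = c / H0.
c = 299_792.458            # speed of light, km/s
H0 = 70.0                  # assumed Hubble constant, km/s/Mpc
r_mpc = c / H0             # ~4,283 Mpc
r_gly = r_mpc * 3.2616e-3  # 1 Mpc is about 3.2616 million light years
print(f"Hubble radius ~ {r_mpc:,.0f} Mpc ~ {r_gly:.1f} billion light years")
```

The result is roughly 14 billion light years: anything farther away than that is, right now, receding from us faster than light.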

    Yes, “faster than the speed of light.” How is this possible?

    Two sources of redshift: Doppler and cosmological expansion; modeled after Koupelis & Kuhn. Bottom: Detectors catch the light that is emitted by a central star. This light is stretched, or redshifted, as space expands in between. Credit: Brews Ohare

    The answer has to do with the difference between special relativity and general relativity. Special relativity requires what is called an “inertial reference frame” – more simply, a backdrop. According to this theory, the speed of light is the same when compared in all inertial reference frames. Whether an observer is sitting still on a park bench on planet Earth or zooming past Neptune in a futuristic high-velocity rocketship, the speed of light is always the same. A photon always travels away from the observer at 300,000,000 meters per second, and he or she will never catch up.

General relativity, however, describes the fabric of spacetime itself. In this theory, there is no inertial reference frame. Spacetime is not expanding with respect to anything outside of itself, so the speed of light as a limit on its velocity doesn’t apply. Yes, galaxies outside of our Hubble sphere are receding from us faster than the speed of light. But the galaxies themselves aren’t breaking any cosmic speed limits. To an observer within one of those galaxies, nothing violates special relativity at all. It is the space in between us and those galaxies that is rapidly proliferating and stretching exponentially.

    The Observable Universe

    Now for the next bombshell: The Hubble volume is not the same thing as the observable Universe.

    To understand this, consider that as the Universe gets older, distant light has more time to reach our detectors here on Earth. We can see objects that have accelerated beyond our current Hubble volume because the light we see today was emitted when they were within it.

    Strictly speaking, our observable Universe coincides with something called the particle horizon. The particle horizon marks the distance to the farthest light that we can possibly see at this moment in time – photons that have had enough time to either remain within, or catch up to, our gently expanding Hubble sphere.

    And just what is this distance? A little more than 46 billion light years in every direction – giving our observable Universe a diameter of approximately 93 billion light years, or more than 500 billion trillion miles.
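That 46-billion-light-year figure can be reproduced with a short numerical integration over a flat ΛCDM expansion history (a sketch using assumed, roughly Planck-like parameters; this is not a calculation from the article):

```python
import numpy as np
from scipy.integrate import quad

# Comoving distance to the particle horizon:
#   chi = (c / H0) * integral_0^inf dz / E(z),
# with E(z) = sqrt(Or(1+z)^4 + Om(1+z)^3 + OL) in a flat Lambda-CDM universe.
c = 299_792.458        # km/s
H0 = 67.7              # assumed Hubble constant, km/s/Mpc
Om, Or = 0.31, 9.0e-5  # assumed matter and radiation densities
OL = 1.0 - Om - Or     # dark energy fills the rest (flatness)

E = lambda z: np.sqrt(Or * (1 + z)**4 + Om * (1 + z)**3 + OL)
integral, _ = quad(lambda z: 1.0 / E(z), 0.0, np.inf)

chi_gly = (c / H0) * integral * 3.2616e-3  # Mpc -> billions of light years
print(f"particle horizon ~ {chi_gly:.0f} billion light years")  # ~46
```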

    (A quick note: the particle horizon is not the same thing as the cosmological event horizon. The particle horizon encompasses all the events in the past that we can currently see. The cosmological event horizon, on the other hand, defines a distance within which a future observer will be able to see the then-ancient light our little corner of spacetime is emitting today.

In other words, the particle horizon deals with the distance to past objects whose ancient light we can see today; the cosmological event horizon deals with the distance that our present-day light will be able to travel as faraway regions of the Universe accelerate away from us.)

    Fit of redshift velocities to Hubble’s law. Credit: Brews Ohare

    Dark Energy

    Thanks to the expansion of the Universe, there are regions of the cosmos that we will never see, even if we could wait an infinite amount of time for their light to reach us. But what about those areas just beyond the reaches of our present-day Hubble volume? If that sphere is also expanding, will we ever be able to see those boundary objects?

    This depends on which region is expanding faster – the Hubble volume or the parts of the Universe just outside of it. And the answer to that question depends on two things: 1) whether H0 is increasing or decreasing, and 2) whether the Universe is accelerating or decelerating. These two rates are intimately related, but they are not the same.

    In fact, cosmologists believe that we are living at a time when H0 is decreasing; but because of dark energy, the expansion of the Universe is nonetheless accelerating.

    That may sound counterintuitive, but the two fit together: H0 is falling toward a constant floor set by dark energy rather than toward zero, while the distances to galaxies keep growing, so their recession speeds (speed = H0 × distance) keep increasing. Because the Hubble radius, c/H0, grows only modestly toward a fixed limit, cosmologists believe that the Universe’s expansion will outpace the more modest growth of the Hubble volume.
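
    To see why acceleration and a falling Hubble parameter coexist, write the scale factor of the universe as a(t) (a standard-notation sketch, not drawn from the article itself):

        H(t) \equiv \frac{\dot{a}}{a}, \qquad \dot{H} = \frac{\ddot{a}}{a} - H^2

    so the expansion can accelerate (\ddot{a} > 0) while H still falls, as long as \ddot{a}/a < H^2, which is exactly the regime cosmologists believe we are in today.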

    The observable universe, more technically known as the particle horizon.

    So even though our Hubble volume is expanding, dark energy appears to place a hard limit on what the ever-increasing observable Universe will ultimately encompass.

    Our Earthly Limitations

    Cosmologists seem to have a good handle on deep questions like what our observable Universe will someday look like and how the expansion of the cosmos will change. But ultimately, scientists can only theorize the answers to questions about the future based on their present-day understanding of the Universe. Cosmological timescales are so unimaginably long that it is impossible to say much of anything concrete about how the Universe will behave in the future. Today’s models fit the current data remarkably well, but the truth is that none of us will live long enough to see whether the predictions truly match all of the outcomes.

    Disappointing? Sure. But totally worth the effort to help our puny brains consider such mind-boggling science – a reality that, as usual, is just plain stranger than fiction.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About Phys.org in 100 Words

    Phys.org™ (formerly Physorg.com) is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004, Phys.org’s readership has grown steadily to include 1.75 million scientists, researchers, and engineers every month. Phys.org publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quantcast 2009 includes Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

  • richardmitnick 6:47 pm on January 8, 2015 Permalink | Reply
    Tags: , , , , , Cosmology   

    From Caltech: “Unusual Light Signal Yields Clues About Elusive Black Hole Merger” 

    Caltech Logo

    Ker Than

    The central regions of many glittering galaxies, our own Milky Way included, harbor cores of impenetrable darkness—black holes with masses equivalent to millions, or even billions, of suns. What is more, these supermassive black holes and their host galaxies appear to develop together, or “co-evolve.” Theory predicts that as galaxies collide and merge, growing ever more massive, so too do their dark hearts.

    Simulation of gravitational lensing by a black hole, which distorts the image of a galaxy in the background

    Astronomers using ESO’s Very Large Telescope have discovered a gas cloud with several times the mass of the Earth accelerating towards the black hole at the centre of the Milky Way. This is the first time ever that the approach of such a doomed cloud to a supermassive black hole has been observed. This ESOcast explains the new results and includes spectacular simulations of how the cloud will break up over the next few years.
    Credit: ESO.

    ESO VLT Interferometer
    ESO VLT Interior

    Black holes by themselves are impossible to see, but their gravity can pull in surrounding gas to form a swirling band of material called an accretion disk. The spinning particles are accelerated to tremendous speeds and release vast amounts of energy in the form of heat and powerful X-rays and gamma rays. When this process happens to a supermassive black hole, the result is a quasar—an extremely luminous object that outshines all of the stars in its host galaxy and that is visible from across the universe. “Quasars are valuable probes of the evolution of galaxies and their central black holes,” says George Djorgovski, professor of astronomy and director of the Center for Data-Driven Discovery at Caltech.

    In the January 7 issue of the journal Nature, Djorgovski and his collaborators report on an unusual repeating light signal from a distant quasar that they say is most likely the result of two supermassive black holes in the final phases of a merger—something that is predicted from theory but which has never been observed before. The discovery could help shed light on a long-standing conundrum in astrophysics called the “final parsec problem,” which refers to the failure of theoretical models to predict what the final stages of a black hole merger look like or even how long the process might take. “The end stages of the merger of these supermassive black hole systems are very poorly understood,” says the study’s first author, Matthew Graham, a senior computational scientist at Caltech. “The discovery of a system that seems to be at this late stage of its evolution means we now have an observational handle on what is going on.”

    Djorgovski and his team discovered the unusual light signal emanating from quasar PG 1302-102 after analyzing results from the Catalina Real-Time Transient Survey (CRTS), which uses three ground telescopes in the United States and Australia to continuously monitor some 500 million celestial light sources strewn across about 80 percent of the night sky. “There has never been a data set on quasar variability that approaches this scope before,” says Djorgovski, who directs the CRTS. “In the past, scientists who study the variability of quasars might only be able to follow some tens, or at most hundreds, of objects with a limited number of measurements. In this case, we looked at a quarter million quasars and were able to gather a few hundred data points for each one.”

    “Until now, the only known examples of supermassive black holes on their way to a merger have been separated by tens or hundreds of thousands of light years,” says study coauthor Daniel Stern, a scientist at NASA’s Jet Propulsion Laboratory. “At such vast distances, it would take many millions, or even billions, of years for a collision and merger to occur. In contrast, the black holes in PG 1302-102 are, at most, a few hundredths of a light year apart and could merge in about a million years or less.”

    Djorgovski and his team did not set out to find a black hole merger. Rather, they initially embarked on a systematic study of quasar brightness variability in the hopes of finding new clues about their physics. But after screening the data using a pattern-seeking algorithm that Graham developed, the team found 20 quasars that seemed to be emitting periodic optical signals. This was surprising, because the light curves of most quasars are chaotic—a reflection of the random nature by which material from the accretion disk spirals into a black hole. “You just don’t expect to see a periodic signal from a quasar,” Graham says. “When you do, it stands out.”
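
    The article doesn’t spell out Graham’s algorithm, but a standard first pass at finding periodicity in unevenly sampled light curves is a Lomb-Scargle periodogram. A minimal Python sketch, using astropy on a synthetic light curve in place of real CRTS data:

        # Minimal Lomb-Scargle sketch for unevenly sampled light curves.
        # NOT the CRTS pipeline -- a generic illustration on synthetic data.
        import numpy as np
        from astropy.timeseries import LombScargle

        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0, 9 * 365.25, 250))   # ~9 years of irregular sampling (days)
        true_period = 5.0 * 365.25                     # assumed ~5-year period (days)
        mag = (15.0 + 0.14 * np.sin(2 * np.pi * t / true_period)
               + rng.normal(0, 0.05, t.size))          # sinusoid plus noise

        freq, power = LombScargle(t, mag).autopower(
            minimum_frequency=1 / (15 * 365.25),       # longest period searched: 15 yr
            maximum_frequency=1 / 365.25,              # shortest period searched: 1 yr
            samples_per_peak=10)
        best_period_yr = 1 / freq[np.argmax(power)] / 365.25
        print(f"best period ≈ {best_period_yr:.1f} years")   # recovers a period near 5 yr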

    Of the 20 periodic quasars that CRTS identified, PG 1302-102 was the best example. It had a strong, clean signal that appeared to repeat every five years or so. “It has a really nice smooth up-and-down signal, similar to a sine wave, and that just hasn’t been seen before in a quasar,” Graham says.

    The team was cautious about jumping to conclusions. “We approached it with skepticism but excitement as well,” says study coauthor Eilat Glikman, an assistant professor of physics at Middlebury College in Vermont. After all, it was possible that the periodicity the scientists were seeing was just a temporary ordered blip in an otherwise chaotic signal. To help rule out this possibility, the scientists pulled in data about the quasar from previous surveys to include in their analysis. After factoring in the historical observations (the scientists had nearly 20 years’ worth of data about quasar PG 1302-102), the repeating signal was, encouragingly, still there.

    The team’s confidence increased further after Glikman analyzed the quasar’s light spectrum. The black holes that scientists believe are powering quasars do not emit light, but the gases swirling around them in the accretion disks are traveling so quickly that they become heated into glowing plasma. “When you look at the emission lines in a spectrum from an object, what you’re really seeing is information about speed—whether something is moving toward you or away from you and how fast. It’s the Doppler effect,” Glikman says. “With quasars, you typically have one emission line, and that line is a symmetric curve. But with this quasar, it was necessary to add a second emission line with a slightly different speed than the first one in order to fit the data. That suggests something else, such as a second black hole, is perturbing this system.”
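
    As a toy illustration of that kind of decomposition (a hedged Python sketch on made-up numbers, not Glikman’s actual fit), one can compare a one-Gaussian and a two-Gaussian model of a line profile:

        # Toy emission-line fit: one Gaussian vs. two velocity-offset Gaussians.
        # Synthetic data only -- illustrative, not the published analysis.
        import numpy as np
        from scipy.optimize import curve_fit

        def gauss(v, amp, center, width):
            return amp * np.exp(-0.5 * ((v - center) / width) ** 2)

        def two_gauss(v, a1, c1, w1, a2, c2, w2):
            return gauss(v, a1, c1, w1) + gauss(v, a2, c2, w2)

        v = np.linspace(-6000, 6000, 300)   # velocity offset from line center, km/s
        rng = np.random.default_rng(1)
        flux = two_gauss(v, 1.0, -500, 1500, 0.5, 2000, 1200) + rng.normal(0, 0.02, v.size)

        p1, _ = curve_fit(gauss, v, flux, p0=[1, 0, 2000])
        p2, _ = curve_fit(two_gauss, v, flux, p0=[1, -1000, 1500, 0.5, 1500, 1500])
        print("1-component residual:", np.sum((flux - gauss(v, *p1)) ** 2))
        print("2-component residual:", np.sum((flux - two_gauss(v, *p2)) ** 2))

    A markedly better two-component fit, with the components at different velocities, is the kind of signature that points to a second perturbing body.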

    Avi Loeb, who chairs the astronomy department at Harvard University, agreed with the team’s assessment that a “tight” supermassive black hole binary is the most likely explanation for the periodic signal they are seeing. “The evidence suggests that the emission originates from a very compact region around the black hole and that the speed of the emitting material in that region is at least a tenth of the speed of light,” says Loeb, who did not participate in the research. “A secondary black hole would be the simplest way to induce a periodic variation in the emission from that region, because a less dense object, such as a star cluster, would be disrupted by the strong gravity of the primary black hole.”

    In addition to providing an unprecedented glimpse into the final stages of a black hole merger, the discovery is also a testament to the power of “big data” science, where the challenge lies not only in collecting high-quality information but also in devising ways to mine it for useful insights. “We’re basically moving from having a few pictures of the whole sky or repeated observations of tiny patches of the sky to having a movie of the entire sky all the time,” says Sterl Phinney, a professor of theoretical physics at Caltech, who was also not involved in the study. “Many of the objects in the movie will not be doing anything very exciting, but there will also be a lot of interesting ones that we missed before.”

    It is still unclear what physical mechanism is responsible for the quasar’s repeating light signal. One possibility, Graham says, is that the quasar is funneling material from its accretion disk into luminous twin plasma jets that are rotating like beams from a lighthouse. “If the glowing jets are sweeping around in a regular fashion, then we would only see them when they’re pointed directly at us. The end result is a regularly repeating signal,” Graham says.

    Another possibility is that the accretion disk that encircles both black holes is distorted. “If one region is thicker than the rest, then as the warped section travels around the accretion disk, it could be blocking light from the quasar at regular intervals. This would explain the periodicity of the signal that we’re seeing,” Graham says. Yet another possibility is that something is happening to the accretion disk that is causing it to dump material onto the black holes in a regular fashion, resulting in periodic bursts of energy.

    “Even though there are a number of viable physical mechanisms behind the periodicity we’re seeing—either the precessing jet, warped accretion disk or periodic dumping—these are all still fundamentally caused by a close binary system,” Graham says.

    Along with Djorgovski, Graham, Stern, and Glikman, additional authors on the paper, “A possible close supermassive black hole binary in a quasar with optical periodicity,” include Andrew Drake, a computational scientist and co-principal investigator of the CRTS sky survey at Caltech; Ashish Mahabal, a staff scientist in computational astronomy at Caltech; Ciro Donalek, a computational staff scientist at Caltech; Steve Larson, a senior staff scientist at the University of Arizona; and Eric Christensen, an associate staff scientist at the University of Arizona. Funding for the study was provided by the National Science Foundation.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”
    Caltech buildings
