
  • richardmitnick 4:10 pm on February 5, 2016
    Tags: Physics

    From Quantum Diaries: “Spun out of proportion: The Proton Spin Crisis” 

    Ricky Nathvani

    We’ve known about the proton’s existence for nearly a hundred years, so you’d be forgiven for thinking that we knew all there was to know about it. For many of us, our last exposure to the word “proton” was in high school chemistry, where protons were described as little spheres of positive charge that clump with neutrons to make atomic nuclei, around which negatively charged electrons orbit to create all the atoms, which make up Life, the Universe and Everything (1).

    Like many ideas in science, this is a simplified model that serves as a good introduction to a topic, but skips over the gory details and the bizarre, underlying reality of nature. In this article, we’ll focus on one particular aspect, the quantum mechanical spin of the proton. The quest to trace its origin has sparked 30 years of discovery, controversy and speculation, and the answer is currently being sought at a unique particle collider in New York.

    The first thing to note is that protons, unlike electrons (2), are composite particles, made up of lots of other particles. The usual description is that the proton is made up of three smaller quarks which, as far as we know, can’t be broken down any further. This picture works remarkably well at low energies, but at very high energies, like those being reached at the LHC, it turns out to be inadequate.

    LHC at CERN

    At that point, we have to get into the nitty-gritty and consider things like quark-antiquark pairs that live inside the proton, interacting dynamically with other quarks without changing its overall charge. Furthermore, there are particles called gluons that are exchanged between quarks, making them “stick” together in the proton and playing a crucial role in providing an accurate description for particle physics experiments.

    So on closer inspection, our little sphere of positive charge turns out to be a buzzing hive of activity, with quarks and gluons all shuffling about, conspiring to create what we call the proton. It is by inferring the nature of these particles within the proton that a successful model of the strong nuclear force, known as Quantum Chromodynamics (QCD), was developed. The gluons were predicted and verified to be the carriers of this force between quarks. More on them later.

    That’s the proton, but what exactly is spin? It’s often compared to the angular momentum that objects in our everyday experience have. Everyone who’s ever messed around on an office chair knows that once you get spun around in one, it takes a bit of effort to stop, because the angular momentum you’ve built up keeps you going. If you did this a lot, you might have noticed that if you started spinning with your legs/arms outstretched and brought them inwards while you were spinning, you’d begin to spin faster! This is because angular momentum (L) is proportional to the radial (r) distribution of matter (i.e. how far out things are from the axis of rotation) multiplied by the speed of rotation (3) (v). To put it mathematically, L = m × v × r, where m is just your constant mass. Since L is constant, as you decrease r (by bringing your arms/legs inwards), v (the speed at which you’re spinning) increases to compensate. All fairly simple stuff.
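The office-chair arithmetic above can be checked directly: holding L fixed, halving r must double v. A minimal sketch (the function name and all numbers are purely illustrative):

```python
# Conservation of angular momentum: L = m * v * r stays fixed,
# so pulling mass inward (smaller r) forces a faster spin (larger v).

def speed_after_pulling_in(m, v_initial, r_initial, r_final):
    """Return the new tangential speed when the radius changes
    while angular momentum L = m*v*r is conserved."""
    L = m * v_initial * r_initial   # angular momentum before
    return L / (m * r_final)        # new v, from L = m*v*r

# Illustrative numbers: a 70 kg "spinner", arms out at 0.8 m, moving 1 m/s.
v_new = speed_after_pulling_in(m=70.0, v_initial=1.0,
                               r_initial=0.8, r_final=0.4)
print(round(v_new, 6))  # halving r doubles v: 2.0
```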

    So clearly, for something to have angular momentum it needs to be distributed radially. Surely r has to be greater than 0 for L to be greater than 0. This is true, but it turns out that’s not all there is to the story. A full description of angular momentum at the quantum (atomic) level is given by something we denote as “J”. I’ll skip the details, but it turns out J = L + S, where L is orbital angular momentum, in a fashion similar to what we’ve discussed, and S? S is a slightly different beast.

    Both L and S can only take on discrete values at the microscopic level, that is, they have quantised values. But whereas a point-like particle cannot have L>0 in its rest frame (since if it isn’t moving around and v = 0, then L = 0), S will have a non-zero value even when the particle isn’t moving. S is what we call Spin. For the electron and quarks, it takes on the value of ½ in natural units.

    Spin has a lot of very strange properties. You can think of it like a little arrow pointing in a direction in space but it’s not something we can truly visualise. One is tempted to think of the electron like the Earth, a sphere spinning about some kind of axis, but the electron is not a sphere, it’s a point-like particle with no “structure” in space. While an electron can have many different values of L depending on its energy (and atomic structure depends on these values), it only has one intrinsic magnitude of spin: ½. However, since spin can be thought of as an arrow, we have some flexibility. Loosely speaking, spin can point in many different directions but we’ll consider it as pointing “up” (+½) or “down” (- ½). If we try to measure it along a particular axis, we’re bound to find it in one of these states relative to our direction of measurement.

    One of the peculiar things about spin-½ is that it causes the wave-function of the electron to exhibit some mind-bending properties. For example, you’d think rotating any object by 360 degrees would put it back into exactly the same state as it was, but it turns out that doesn’t hold true for electrons. For electrons, rotating them by 360 degrees introduces a negative sign into their wave-function! You have to spin it another 360 degrees to get it back into the same state! There are ways to visualise systems with similar behaviour, but they are just a sort of “metaphor” for what really happens to the electron. This links into Pauli’s famous conclusion that no two identical particles with spin-½ (or any other half-integer spin) can share the same quantum mechanical state.


    Spin is an important property of matter that only really manifests on the quantum scale, and while we can’t visualise it, it ends up being important for the structure of atoms and how all solid objects obtain the properties they do. The other important property it has is that the spin of a free particle likes to align with magnetic fields (4) (and the bigger the spin, the greater the magnetic coupling to the field). By using this property, it was discovered that the proton also had angular momentum J = ½. Since the proton is a stable particle, it was modelled to be in a low energy state with L = 0 and hence J = S = ½ (that is to say, the orbital angular momentum is assumed to be zero and hence we may simply call J, the “spin”). The fact that the proton has spin, and that this spin aligns with magnetic fields, is a crucial element of what makes MRI machines work.

    Once we got a firm handle on quarks in the late 1960s, the spin structure of the proton was thought to be fairly simple. The proton has spin-½. Quarks, from scattering experiments and symmetry considerations, were also inferred to have spin-½. Therefore, if the three quarks that make up the proton were in an “up-down-up” configuration, the spin of the proton naturally comes out as ½ – ½ + ½ = ½. Not only does this add up to the measured spin, but it also gives a pleasant symmetry to the quantum description of the proton, consistent with the Pauli exclusion principle (it doesn’t matter which of the three quarks is the “down” quark). But hang on, didn’t I say that the three-quarks story was incomplete? At high energies, there should be a lot more quark-antiquark pairs (sea quarks) involved, messing everything up! Even so, theorists predicted that these quark-antiquark pairs would tend not to be polarised, that is, have a preferred direction, and hence would not contribute to the total spin of the proton.

    If you can get the entirety of the proton spinning in a particular direction (i.e. polarising it), it turns out the scattering of an electron against its constituent quarks should be sensitive to their spin! Thus, by scattering electrons at high energy, one could check the predictions of theorists about how the quarks’ spin contributes to the proton.

    In a series of perfectly conducted experiments, the theory was found to be absolutely spot on with no discrepancy whatsoever. Several Nobel prizes were handed out and the entire incident was considered resolved, now just a footnote in history. OK, not really.

    In truth, the total opposite happened. Although the experiments had a reasonable amount of uncertainty due to the inherent difficulty of polarising protons, a landmark paper by the European Muon Collaboration found results consistent with the quarks contributing absolutely no overall spin to the proton whatsoever! The measurements could be interpreted with the overall spin from the quarks being zero (5). This was a complete shock to most physicists, who were expecting verification from what was supposed to be a fairly straightforward measurement. Credit where it is due, some theorists had pointed out that the assumption about orbital angular momentum (L = 0) was rather ad hoc, and that L > 0 could account for some of the missing spin. Scarcely anyone would have expected, however, that the quarks would carry so little of the spin. Although the strong nuclear force, which governs how quarks and gluons combine to form the proton, has been tested to remarkable accuracy, the nature of its self-interaction makes it incredibly difficult to draw predictions from.

    Subsequent experiments (led by father-and-son rivals Vernon and Emlyn Hughes (6), of CERN and SLAC respectively) managed to bring this to a marginally less shocking proposal.

    SLAC Campus

    The more accurate measurements from these collaborations found that the quarks’ total spin contribution was actually closer to ~30%. An important discovery was that the sea quarks, thought not to be important, were actually found to have measurable polarisation. Although it cleared up some of the discrepancy, it still left 60-70% of the spin unaccounted for. Today, following much more experimental activity in Deep Inelastic Scattering and precision low-energy elastic scattering, the situation has not changed in terms of the raw numbers. The best estimates still peg the quarks’ spin as constituting only about 30% of the total.

    Remarkably, there are theoretical proposals to resolve the problem that were hinted at long before the experiments were even conducted. As mentioned previously, the quarks may carry orbital angular momentum (L) that could compensate for some of the missing spin, although this is currently impossible to test experimentally. Furthermore, we have not yet mentioned the contribution of gluons to the proton spin. Gluons are spin-1 particles, and were thought to arrange themselves such that their total contribution to the proton spin was nearly non-existent.

    The Relativistic Heavy Ion Collider (RHIC) in New York is currently the only spin-polarised proton collider in the world.

    RHIC at Brookhaven National Lab, New York, USA

    This gives it a unique sensitivity to the spin structure of the proton. In 2014, an analysis of the data collected at RHIC indicated that the gluons (whose spin contribution can be inferred from polarised proton-proton collisions) could potentially account for up to 30% of the missing 70% of proton spin, about the same as the quarks. This would bring the “missing” amount down to about 40%, which could be accounted for by the as-yet-unmeasured orbital angular momentum of both quarks and gluons.
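For bookkeeping purposes, this decomposition is often written as the proton spin sum rule (the Jaffe–Manohar decomposition). The percentages attached below are simply the rough fractions quoted in this article (quarks ~30%, gluons possibly up to ~30%, orbital motion the rest), not precise measured values:

```latex
\frac{1}{2} \;=\; \underbrace{\tfrac{1}{2}\,\Delta\Sigma}_{\text{quark spin},\ \sim 30\%}
\;+\; \underbrace{\Delta G}_{\text{gluon spin},\ \text{up to}\ \sim 30\%}
\;+\; \underbrace{L_q + L_g}_{\text{orbital},\ \text{the remaining}\ \sim 40\%}
```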

    As 2016 kicks into gear, RHIC will be collecting data at a much faster rate than ever, after a recent technical upgrade that should double its luminosity (loosely speaking, the rate at which proton collisions occur). With the increased statistics, we should be able to get an even greater handle on the exact origin of proton spin.

    The astute reader, provided they have not already wandered off, dizzy from all this talk of spinning protons, may be tempted to ask “Why on earth does it matter where the total spin comes from? Isn’t this just abstract accountancy?” This is a fair question and I think the answer is a good one. Protons, like all other hadrons (similar, composite particles made of quarks and gluons) are not very well understood at all. A peculiar feature of QCD called confinement binds individual quarks together so that they are never observed in isolation, only bound up in particles such as the proton. Understanding the spin structure of the proton can inform our theoretical models for understanding this phenomenon.

    This has important implications, one being that 98% of the mass of all visible matter does not come from the Higgs Boson. It comes from the binding energy of protons! And the exact nature of confinement and the precise properties of QCD have implications for the cosmology of the early universe. Finally, scattering experiments with protons have already revealed so much to fundamental physics, such as our understanding of one of the fundamental forces of nature. As one of our most reliable probes of nature, currently in use at the LHC, understanding protons better will almost certainly aid our attempts to unearth future discoveries.

    Kind regards to Sebastian Bending (UCL) for several suggestions (all mistakes are unreservedly my own).

    [1] …excluding dark matter and dark energy which constitute the dark ~95% of the universe.

    [2] To the best of our knowledge.

    [3] Strictly speaking the component of velocity perpendicular to the radial direction.

    [4] Sometimes, spins in a medium like water like to align against magnetic fields, causing an opposite magnetic moment (known as diamagnetism). Since frogs are mostly water, this effect can and has been used to levitate frogs.

    [5] A lot of the information here has been summarised from this excellent article by Robert Jaffe, whose collaboration with John Ellis on the Ellis-Jaffe rule led to many of the predictions discussed here.

    [6] Emlyn was actually the spokesperson for SLAC, though he is listed as one of the primary authors on the SLAC papers regarding the spin structure of the proton.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Participants in Quantum Diaries:



    US/LHC Blog


    Brookhaven Lab


  • richardmitnick 5:58 pm on February 4, 2016
    Tags: Do tetraneutrons exist?, Physics

    From Physics: “Viewpoint: Can Four Neutrons Tango?” 



    February 3, 2016
    Nigel Orr

    Evidence that the four-neutron system known as the tetraneutron exists as a resonance has been uncovered in an experiment at the RIKEN Radioactive Ion Beam Factory.

    Neutron Tango
    Schematic representation of the reaction used by Kisamori et al. to search for the tetraneutron. In such a “double-charge exchange” reaction, a beam of high-energy radioactive 8He nuclei impinges on a 4He target to produce an energetic 8Be and a low-energy four-neutron system, 4n. The 8Be decays quickly into two alpha (α) particles. Experimentally, only the 8Be is detected, through the observation, in coincidence, of the two α particles.

    The fundamental ingredient for constructing a nucleus from scratch is the force between two nucleons. The most attractive interaction occurs between the proton and neutron, as evidenced by the ground state of the deuteron, which is bound by 2.2 MeV. In contrast, bound states of two protons (2He) or the “dineutron” (2n) do not exist, although the latter falls short by only some 100 keV. Intriguingly, however, theoretical models have revealed in recent years the importance of three-body and other multinucleon forces in binding light nuclei. As such, the question of whether a four-neutron system, or tetraneutron, exists may be posed. Tantalizing evidence for the tetraneutron, in the form of a resonant, or unbound, state, has been uncovered in an experiment performed at the RIKEN Radioactive Ion Beam Factory (RIBF), in Saitama, Japan [1]. Confirming this finding has the potential to change our understanding of nuclear interactions and provide a new window into the physics of few-body systems. Further afield, it would also have ramifications for our understanding of neutron stars [2].

    Nuclear physicists first began searching for the tetraneutron (4n) more than half a century ago [3]. In the ensuing years, they employed a wide variety of techniques to detect it (indirectly), including fission, so-called pion-induced double-charge exchange on helium-4 (4He), and complex reactions involving the transfer of multiple nucleons. In more recent times, the development of energetic beams of radioactive nuclei (see, for example, 30 April, 2012 Viewpoint) has opened up new avenues for 4n searches. In 2002, following a long hiatus in these efforts, a French-led collaboration reported the observation of a handful of events in the breakup of beryllium-14—the most neutron-rich beryllium isotope—that were consistent with the detection of a bound [4] or resonant [5] 4n system. Although subsequent attempts to observe the tetraneutron were unsuccessful [6], theorists began to explore the question of its existence using new tools. These included methods built on recently developed ab initio approaches, in which the nucleus is constructed from the constituent nucleons. These efforts found no realistic means to generate a bound system of four neutrons [7]. Less attention was paid to the possibility of a resonant four-neutron system—that is, an unbound system existing for long enough (typically of order 10^−21 s) that a well-characterized state (spin, parity, and energy) can be defined. The most sophisticated calculations, however, have suggested that a narrow resonance, with an energy near the threshold energy to bind four neutrons, is also very unlikely.

    Yet the new work suggests the tetraneutron may in fact exist as a resonance, which is only unbound by around 1 MeV. Perhaps fittingly, the researchers identified four such events. The experiment, performed by a collaboration led by Keiichi Kisamori and Susumu Shimoura from the Center for Nuclear Study of the University of Tokyo, Japan, was a tour de force, involving old and new techniques. The old was employing a double-charge-exchange reaction on a 4He target; the new was the use of a beam of very high-energy radioactive helium-8 (8He)—the most neutron-rich helium isotope. Specifically, the experiment focused on the 4He(8He,8Be)4n reaction (Fig. 1). This involved directing the 8He beam onto a liquid 4He target and analyzing the reaction products using RIKEN’s high-resolution SHARAQ spectrometer to identify the 8Be.

    In the experiment, the researchers did not detect the four-neutron system directly, as this is essentially impossible for the very low-energy neutrons that result from the 4He(8He,8Be)4n reaction. Rather, they used the “missing mass” method, in which they deduced the momentum (and hence energy) of the four-neutron system from their measurement of the momenta of the 8He and of the two alpha particles (4He nuclei) produced by the decay of 8Be. The 4He(8He,8Be)4n reaction is of note in two respects. First, and most importantly for this study, it results in the transfer of almost no recoil momentum to the four-neutron system, thus avoiding disrupting the 4n. Second, the two alpha particles from the 8Be have a small, well-defined relative energy and angle. While complicating the experiment, the detection of two such alpha particles provides an excellent signal for isolating the 8Be against the experimental backgrounds.
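The “missing mass” bookkeeping is just four-momentum conservation: the undetected system’s invariant mass satisfies M² = (P_beam + P_target − P_detected)². A minimal sketch, with made-up four-vectors in GeV units (the numbers below are illustrative, not RIKEN data):

```python
import math

def minkowski_sq(p):
    """Minkowski square E^2 - |p|^2 of a four-vector (E, px, py, pz)."""
    E, px, py, pz = p
    return E * E - px * px - py * py - pz * pz

def missing_mass(p_beam, p_target, p_detected):
    """Mass of the undetected system, from four-momentum conservation:
    P_miss = P_beam + P_target - P_detected."""
    p_miss = [b + t - d for b, t, d in zip(p_beam, p_target, p_detected)]
    return math.sqrt(minkowski_sq(p_miss))

# Synthetic event: invent a "four-neutron" system of mass ~3.758 GeV
# (four neutron masses), then hide it and reconstruct it from the rest.
p_4n = (3.760, 0.05, 0.00, 0.10)     # the system we pretend not to see
p_beam = (8.50, 0.0, 0.0, 4.00)      # incoming 8He (illustrative values)
p_target = (3.728, 0.0, 0.0, 0.0)    # 4He at rest, mass ~3.728 GeV
p_detected = tuple(b + t - m for b, t, m in zip(p_beam, p_target, p_4n))

print(round(missing_mass(p_beam, p_target, p_detected), 3))  # recovers 3.758
```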

    Several factors make the RIKEN experiment difficult. In particular, the beam, while the most intense source of energetic 8He available (2 million particles per second), is still ten thousand times less intense than beams of stable nuclei. Moreover, the cross section, or probability, for the 4He(8He,8Be)4n reaction is extremely low (4 nanobarns), some 5 orders of magnitude below that of typical experiments with radioactive beams. Consequently, despite accumulating data for around a week—a relatively long run for the heavily overbooked RIBF—the authors observed only four events in the energy range of interest. At first glance this observation may seem to be of limited significance: the statistical uncertainty on four events is two events. However, the collaboration gave more weight to the result by performing a sophisticated statistical analysis, similar to that employed in the discovery of the Higgs boson. In simple terms, they calculated the chance of finding four counts within a few MeV of the threshold energy, given the estimated shape of the full spectrum (the experimental background plus the spectrum of events for four neutrons when no resonance occurs). This analysis indicated a statistical significance for the four events of close to 5σ, the typical criterion for claiming a discovery. However, because the researchers made certain assumptions, including the form of the continuum, they were careful to refer to the four events as corresponding to a “candidate” resonant state.
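The collaboration’s real analysis models the shape of the full spectrum, but the flavor of the statistics can be captured with a toy Poisson counting estimate: the chance that a background fluctuation alone produces 4 or more counts in the signal window. The background expectation below is a made-up number for illustration, not the collaboration’s value:

```python
import math
from statistics import NormalDist

def poisson_pvalue(n_obs, b):
    """One-sided p-value: probability of observing >= n_obs counts
    from a Poisson background with mean b."""
    return 1.0 - sum(math.exp(-b) * b**k / math.factorial(k)
                     for k in range(n_obs))

def significance(p):
    """Convert a one-sided p-value to a Gaussian z-score ('number of sigma')."""
    return NormalDist().inv_cdf(1.0 - p)

b_expected = 0.1   # hypothetical background counts in the signal window
p = poisson_pvalue(4, b_expected)
print(f"p = {p:.2e}, z = {significance(p):.1f} sigma")  # roughly 4-5 sigma here
```

Note this toy ignores the spectrum shape, which is exactly what pushes the published result toward the 5σ discovery threshold.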

    This result is certain to revive interest in the tetraneutron. On the theoretical side, efforts are already underway to understand if and how its existence can be explained. The role of multinucleon forces is the obvious lead to follow. Preliminary calculations, however, suggest that an unphysically strong three-neutron force is required to generate a 4n resonance within a few MeV of threshold [8]. On the experimental side, it is an understatement to suggest that further data are required. Motivated by the present results, the RIBF has just approved a proposal, by Shimoura and colleagues, for an improved experiment. Its goal will be to improve the statistics of the measurement by an order of magnitude and reduce the uncertainty in the energy of the 4n resonance (currently around 1.3 MeV) by a similar factor. Importantly, experimentalists will also attempt to observe the 4n using different techniques. These will include producing the 4n from the “breakup” of energetic beams of very neutron-rich light nuclei and then directly detecting the four neutrons from their decay. Many nuclear physicists may still be skeptical that the 4n exists, even as a resonance. However, the implications, including our understanding of some of the fundamental features of nuclear interactions, are such that these experiments at the limits of our capabilities must be pursued.

    This research is published in Physical Review Letters.


    [1] K. Kisamori et al., “Candidate Resonant Tetraneutron State Populated by the 4He(8He,8Be) Reaction,” Phys. Rev. Lett. 116, 052501 (2016).
    [2] K. Hebeler et al., “Constraints on Neutron Star Radii Based on Chiral Effective Field Theory Interactions,” Phys. Rev. Lett. 105, 161102 (2010).
    [3] J. P. Schiffer and R. Vandenbosch, “Search for a Particle-Stable Tetra Neutron,” Phys. Lett. 5, 292 (1963).
    [4] F. M. Marqués et al., “Detection of Neutron Clusters,” Phys. Rev. C 65, 044006 (2002).
    [5] F. M. Marqués et al., “On the Possible Detection of 4n Events in the Breakup of 14Be,” arXiv:nucl-ex/0504009.
    [6] See, for example, S. Fortier et al., “Search for Resonances in 4n, 7H and 9He via Transfer Reactions,” AIP Conf. Proc. 912, 3 (2007).
    [7] S. C. Pieper, “Can Modern Nuclear Hamiltonians Tolerate a Bound Tetraneutron?,” Phys. Rev. Lett. 90 (2003), and references therein.
    [8] E. Hiyama et al., “Can T=3/2 Isospin 3-Neutron Forces Generate a Narrow 4-Neutron Resonance?” (unpublished).

    See the full article here.


    Physicists are drowning in a flood of research papers in their own fields and coping with an even larger deluge in other areas of physics. How can an active researcher stay informed about the most important developments in physics? Physics highlights a selection of papers from the Physical Review journals. In consultation with expert scientists, the editors choose these papers for their importance and/or intrinsic interest. To highlight these papers, Physics features three kinds of articles: Viewpoints are commentaries written by active researchers, who are asked to explain the results to physicists in other subfields. Focus stories are written by professional science writers in a journalistic style and are intended to be accessible to students and non-experts. Synopses are brief editor-written summaries. Physics provides a much-needed guide to the best in physics, and we welcome your comments (physics@aps.org).

  • richardmitnick 5:29 pm on January 31, 2016
    Tags: Physics, The Arrow of Time

    From wired.com: “Following Time’s Arrow to the Universe’s Biggest Mystery” 



    Frank Wilczek

    DESTINY – The Arrow of Time, from BBC Wonders of the Universe with Brian Cox

    Few facts of experience are as obvious and pervasive as the distinction between past and future. We remember one, but anticipate the other. If you run a movie backwards, it doesn’t look realistic. We say there is an arrow of time, which points from past to future.

    One might expect that a fact as basic as the existence of time’s arrow would be embedded in the fundamental laws of physics. But the opposite is true. If you could take a movie of subatomic events, you’d find that the backward-in-time version looks perfectly reasonable. Or, put more precisely: The fundamental laws of physics—up to some tiny, esoteric exceptions, as we’ll soon discuss—will look to be obeyed, whether we follow the flow of time forward or backward. In the fundamental laws, time’s arrow is reversible.
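This reversibility can be seen in a toy mechanical simulation: integrate a frictionless oscillator forward with a time-symmetric scheme, flip the sign of the velocity (run the movie backward), integrate the same law again, and you land back at the start. A minimal sketch (all names and numbers are illustrative):

```python
def integrate(x, v, dt, steps, k=1.0):
    """Velocity-Verlet integration of x'' = -k*x (a frictionless oscillator),
    a time-symmetric scheme, so the dynamics it produces is reversible."""
    for _ in range(steps):
        v += -k * x * dt / 2   # half-kick
        x += v * dt            # drift
        v += -k * x * dt / 2   # half-kick
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = integrate(x0, v0, dt=0.01, steps=500)    # run the movie forward
x2, v2 = integrate(x1, -v1, dt=0.01, steps=500)   # reverse velocities, run again
print(x2, -v2)  # lands back at the starting state, up to rounding error
```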

    Logically speaking, the transformation that reverses the direction of time might have changed the fundamental laws. Common sense would suggest that it should. But it does not. Physicists use convenient shorthand—also called jargon—to describe that fact. They call the transformation that reverses the arrow of time “time reversal,” or simply T. And they refer to the (approximate) fact that T does not change the fundamental laws as T invariance, or T symmetry.

    Everyday experience violates T invariance, while the fundamental laws respect it. That blatant mismatch raises challenging questions. How does the actual world, whose fundamental laws respect T symmetry, manage to look so asymmetric? Is it possible that someday we’ll encounter beings with the opposite flow—beings who grow younger as we grow older? Might we, through some physical process, turn around our own body’s arrow of time?

    Those are great questions, and I hope to write about them in a future posting. Here, however, I want to consider a complementary question. It arises when we start from the other end, in the facts of common experience. From that perspective, the puzzle is this:

    Why should the fundamental laws have that bizarre and problem-posing property, T invariance?

    The answer we can offer today is incomparably deeper and more sophisticated than the one we could offer 50 years ago. Today’s understanding emerged from a brilliant interplay of experimental discovery and theoretical analysis, which yielded several Nobel prizes. Yet our answer still contains a serious loophole. As I’ll explain, closing that loophole may well lead us, as an unexpected bonus, to identify the cosmological dark matter.


    The modern history of T invariance begins in 1956. In that year, T. D. Lee and C. N. Yang questioned a different but related feature of physical law, which until then had been taken for granted. Lee and Yang were not concerned with T itself, but with its spatial analogue, the parity transformation, “P.” Whereas T involves looking at movies run backward in time, P involves looking at movies reflected in a mirror. Parity invariance is the hypothesis that the events you see in the reflected movies follow the same laws as the originals. Lee and Yang identified circumstantial evidence against that hypothesis and suggested critical experiments to test it. Within a few months, experiments proved that P invariance fails in many circumstances. (P invariance holds for gravitational, electromagnetic, and strong interactions, but generally fails in the so-called weak interactions.)

    Those dramatic developments around P (non)invariance stimulated physicists to question T invariance, a kindred assumption they had also once taken for granted. But the hypothesis of T invariance survived close scrutiny for several years. It was only in 1964 that a group led by James Cronin and Valentine Fitch discovered a peculiar, tiny effect in the decays of K mesons that violates T invariance.


    The wisdom of Joni Mitchell’s insight—that “you don’t know what you’ve got ‘til it’s gone”—was proven in the aftermath.

    If, like small children, we keep asking, “Why?” we may get deeper answers for a while, but eventually we will hit bottom, when we arrive at a truth that we can’t explain in terms of anything simpler. At that point we must call a halt, in effect declaring victory: “That’s just the way it is.” But if we later find exceptions to our supposed truth, that answer will no longer do. We will have to keep going.

    As long as T invariance appeared to be a universal truth, it wasn’t clear that our italicized question was a useful one. Why was the universe T invariant? It just was. But after Cronin and Fitch, the mystery of T invariance could not be avoided.

    Many theoretical physicists struggled with the vexing challenge of understanding how T invariance could be extremely accurate, yet not quite exact. Here the work of Makoto Kobayashi and Toshihide Maskawa proved decisive. In 1973, they proposed that approximate T invariance is an accidental consequence of other, more-profound principles.

    The time was ripe. Not long before, the outlines of the modern Standard Model of particle physics had emerged and with it a new level of clarity about fundamental interactions.

    Standard model with Higgs New
    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    By 1973 there was a powerful—and empirically successful!—theoretical framework, based on a few “sacred principles.” Those principles are relativity, quantum mechanics and a mathematical rule of uniformity called “gauge symmetry.”

    It turns out to be quite challenging to get all those ideas to cooperate. Together, they greatly constrain the possibilities for basic interactions.

    Kobayashi and Maskawa, in a few brief paragraphs, did two things. First they showed that if physics were restricted to the particles then known (for experts: if there were just two families of quarks and leptons), then all the interactions allowed by the sacred principles also respect T invariance. If Cronin and Fitch had never made their discovery, that result would have been an unalloyed triumph. But they had, so Kobayashi and Maskawa went a crucial step further. They showed that if one introduces a very specific set of new particles (a third family), then those particles bring in new interactions that lead to a tiny violation of T invariance. It looked, on the face of it, to be just what the doctor ordered.

    In subsequent years, their brilliant piece of theoretical detective work was fully vindicated. The new particles whose existence Kobayashi and Maskawa inferred have all been observed, and their interactions are just what Kobayashi and Maskawa proposed they should be.

    Before ending this section, I’d like to add a philosophical coda. Are the sacred principles really sacred? Of course not. If experiments force scientists to modify those principles, they will do so. But at the moment, the sacred principles look awfully good. And evidently it’s been fruitful to take them very seriously indeed.


    So far I’ve told a story of triumph. Our italicized question, one of the most striking puzzles about how the world works, has received an answer that is deep, beautiful and fruitful.

    But there’s a worm in the rose.

    A few years after Kobayashi and Maskawa’s work, Gerard ’t Hooft discovered a loophole in their explanation of T invariance. The sacred principles allow an additional kind of interaction. The possible new interaction is quite subtle, and ’t Hooft’s discovery was a big surprise to most theoretical physicists.

    The new interaction, were it present with substantial strength, would violate T invariance in ways that are much more obvious than the effect that Cronin, Fitch and their colleagues discovered. Specifically, it would allow the spin of a neutron to generate an electric field, in addition to the magnetic field it is observed to cause. (The magnetic field of a spinning neutron is broadly analogous to that of our rotating Earth, though of course on an entirely different scale.) Experimenters have looked hard for such electric fields, but so far they’ve come up empty.
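The "electric field generated by a spinning neutron" has a standard quantitative expression: 't Hooft's extra interaction is usually written as the so-called θ-term, and the neutron electric dipole moment it would induce scales with the coefficient θ̄. As a rough sketch (the numerical factors below are order-of-magnitude benchmark values from the literature, not from this article):

```latex
\mathcal{L}_{\theta} \;=\; \bar{\theta}\,\frac{g_s^{2}}{32\pi^{2}}\,
G^{a}_{\mu\nu}\widetilde{G}^{a\,\mu\nu},
\qquad
d_n \;\sim\; \bar{\theta}\times 10^{-16}\ e\cdot\mathrm{cm}.
```

Experiments bound |d_n| below roughly 10⁻²⁶ e·cm, which forces |θ̄| ≲ 10⁻¹⁰ — that is the "coming up empty" described above.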

    Nature does not choose to exploit ’t Hooft’s loophole. That is her prerogative, of course, but it raises our italicized question anew: Why does Nature enforce T invariance so accurately?

    Several explanations have been put forward, but only one has stood the test of time. The central idea is due to Roberto Peccei and Helen Quinn. Their proposal, like that of Kobayashi and Maskawa, involves expanding the standard model in a fairly specific way. One introduces a neutralizing field, whose behavior is especially sensitive to ’t Hooft’s new interaction. Indeed if that new interaction is present, then the neutralizing field will adjust its own value, so as to cancel that interaction’s influence. (This adjustment process is broadly similar to how negatively charged electrons in a solid will congregate around a positively charged impurity and thereby screen its influence.) The neutralizing field thereby closes our loophole.

    Peccei and Quinn overlooked an important, testable consequence of their idea. The particles produced by their neutralizing field—its quanta—are predicted to have remarkable properties. Since they didn’t take note of these particles, they also didn’t name them. That gave me an opportunity to fulfill a dream of my adolescence.

    A few years before, a supermarket display of brightly colored boxes of a laundry detergent named Axion had caught my eye. It occurred to me that “axion” sounded like the name of a particle and really ought to be one. So when I noticed a new particle that “cleaned up” a problem with an “axial” current, I saw my chance. (I soon learned that Steven Weinberg had also noticed this particle, independently. He had been calling it the “Higglet.” He graciously, and I think wisely, agreed to abandon that name.) Thus began a saga whose conclusion remains to be written.

    In the chronicles of the Particle Data Group you will find several pages, covering dozens of experiments, describing unsuccessful axion searches.

    Yet there are grounds for optimism.

    The theory of axions predicts, in a general way, that axions should be very light, very long-lived particles whose interactions with ordinary matter are very feeble. But to compare theory and experiment we need to be quantitative. And here we meet ambiguity, because existing theory does not fix the value of the axion’s mass. If we know the axion’s mass we can predict all its other properties. But the mass itself can vary over a wide range. (The same basic problem arose for the charmed quark, the Higgs particle, the top quark and several others. Before each of those particles was discovered, theory predicted all of its properties except for the value of its mass.) It turns out that the strength of the axion’s interactions is proportional to its mass. So as the assumed value for axion mass decreases, the axion becomes more elusive.
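The statement that the axion's couplings scale with its mass can be made concrete: both are controlled by a single unknown scale, the Peccei–Quinn scale f_a. A commonly quoted benchmark relation (the numbers are standard textbook values, not taken from this article):

```latex
m_a \;\simeq\; 5.7\,\mu\mathrm{eV}\times\frac{10^{12}\,\mathrm{GeV}}{f_a},
\qquad
g_{a\gamma\gamma}\;\propto\;\frac{1}{f_a}\;\propto\; m_a .
```

Pushing f_a up therefore makes the axion simultaneously lighter and more weakly coupled — hence more elusive, exactly as described.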

    In the early days physicists focused on models in which the axion is closely related to the Higgs particle. Those ideas suggested that the axion mass should be about 10 keV—that is, about one-fiftieth of an electron’s mass. Most of the experiments I alluded to earlier searched for axions of that character. By now we can be confident such axions don’t exist.

    Attention turned, therefore, toward much smaller values of the axion mass (and in consequence feebler couplings), which are not excluded by experiment. Axions of this sort arise very naturally in models that unify the interactions of the standard model. They also arise in string theory.

    Axions, we calculate, should have been abundantly produced during the earliest moments of the Big Bang. If axions exist at all, then an axion fluid will pervade the universe. The origin of the axion fluid is very roughly similar to the origin of the famous cosmic microwave background (CMB) radiation, but there are three major differences between those two entities.

    The cosmic microwave background as mapped by ESA’s Planck satellite.

    First: The microwave background has been observed, while the axion fluid is still hypothetical. Second: Because axions have mass, their fluid contributes significantly to the overall mass density of the universe. In fact, we calculate that they contribute roughly the amount of mass astronomers have identified as dark matter! Third: Because axions interact so feebly, they are much more difficult to observe than photons from the CMB.

    The experimental search for axions continues on several fronts. Two of the most promising experiments are aimed at detecting the axion fluid. One of them, ADMX (Axion Dark Matter eXperiment), uses specially crafted, ultrasensitive antennas to convert background axions into electromagnetic pulses.

    The ADMX (Axion Dark Matter eXperiment) apparatus at the University of Washington.

    The other, CASPEr (Cosmic Axion Spin Precession Experiment) looks for tiny wiggles in the motion of nuclear spins, which would be induced by the axion fluid. Between them, these difficult experiments promise to cover almost the entire range of possible axion masses.

    CASPEr Experiment

    Do axions exist? We still don’t know for sure. Their existence would bring the story of time’s reversible arrow to a dramatic, satisfying conclusion, and very possibly solve the riddle of the dark matter, to boot. The game is afoot.

    Frank Wilczek is a Nobel Prize-winning physicist at the Massachusetts Institute of Technology.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 8:31 pm on January 14, 2016 Permalink | Reply
    Tags: Physics

    From BNL: “New Theory of Secondary Inflation Expands Options for Avoiding an Excess of Dark Matter” 

    Brookhaven Lab

    January 14, 2016
    Chelsea Whyte, (631) 344-8671
    Peter Genzer, (631) 344-3174

    Physicists suggest a smaller secondary inflationary period in the moments after the Big Bang could account for the abundance of the mysterious matter.


    Standard cosmology—that is, the Big Bang theory with its early period of exponential growth known as inflation—is the prevailing scientific model for our universe, in which the entirety of space and time ballooned out from a very hot, very dense point into a homogeneous and ever-expanding vastness. This theory accounts for many of the physical phenomena we observe. But what if that’s not all there was to it?

    A new theory from physicists at the U.S. Department of Energy’s Brookhaven National Laboratory, Fermi National Accelerator Laboratory, and Stony Brook University, which will be published online on January 18 in Physical Review Letters, suggests a shorter secondary inflationary period that could account for the amount of dark matter estimated to exist throughout the cosmos.

    Brookhaven Lab physicist Hooman Davoudiasl published a theory that suggests a shorter secondary inflationary period that could account for the amount of dark matter estimated to exist throughout the cosmos.

    “In general, a fundamental theory of nature can explain certain phenomena, but it may not always end up giving you the right amount of dark matter,” said Hooman Davoudiasl, group leader in the High-Energy Theory Group at Brookhaven National Laboratory and an author on the paper. “If you come up with too little dark matter, you can suggest another source, but having too much is a problem.”

    Measuring the amount of dark matter in the universe is no easy task. It is dark after all, so it doesn’t interact in any significant way with ordinary matter. Nonetheless, gravitational effects of dark matter give scientists a good idea of how much of it is out there. The best estimates indicate that it makes up about a quarter of the mass-energy budget of the universe, while ordinary matter—which makes up the stars, our planet, and us—comprises just 5 percent. Dark matter is the dominant form of substance in the universe, which leads physicists to devise theories and experiments to explore its properties and understand how it originated.

    Some theories that elegantly explain perplexing oddities in physics—for example, the inordinate weakness of gravity compared to other fundamental interactions such as the electromagnetic, strong nuclear, and weak nuclear forces—cannot be fully accepted because they predict more dark matter than empirical observations can support.

    This new theory solves that problem. Davoudiasl and his colleagues add a step to the commonly accepted events at the inception of space and time.

    In standard cosmology, the exponential expansion of the universe called cosmic inflation began perhaps as early as 10⁻³⁵ seconds after the beginning of time—that’s a decimal point followed by 34 zeros before a 1. This explosive expansion of the entirety of space lasted mere fractions of a fraction of a second, eventually leading to a hot universe, followed by a cooling period that has continued until the present day. Then, when the universe was just seconds to minutes old – that is, cool enough – the formation of the lighter elements began. Between those milestones, there may have been other inflationary interludes, said Davoudiasl.

    “They wouldn’t have been as grand or as violent as the initial one, but they could account for a dilution of dark matter,” he said.

    In the beginning, when temperatures soared past billions of degrees in a relatively small volume of space, dark matter particles could run into each other and annihilate upon contact, transferring their energy into standard constituents of matter—particles like electrons and quarks. But as the universe continued to expand and cool, dark matter particles encountered one another far less often, and the annihilation rate couldn’t keep up with the expansion rate.

    “At this point, the abundance of dark matter is now baked in the cake,” said Davoudiasl. “Remember, dark matter interacts very weakly. So, a significant annihilation rate cannot persist at lower temperatures. Self-annihilation of dark matter becomes inefficient quite early, and the amount of dark matter particles is frozen.”

    However, the weaker the dark matter interactions, that is, the less efficient the annihilation, the higher the final abundance of dark matter particles would be. As experiments place ever more stringent constraints on the strength of dark matter interactions, there are some current theories that end up overestimating the quantity of dark matter in the universe. To bring theory into alignment with observations, Davoudiasl and his colleagues suggest that another inflationary period took place, powered by interactions in a “hidden sector” of physics. This second, milder, period of inflation, characterized by a rapid increase in volume, would dilute primordial particle abundances, potentially leaving the universe with the density of dark matter we observe today.
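The dilution mechanism is easy to quantify in outline: number densities fall as 1/a³, so an inflationary episode of N e-folds suppresses any already-frozen-out abundance by e^(3N). A minimal sketch, assuming only that scaling (the e-fold number below is purely illustrative and is not taken from the paper):

```python
import math

def dilution_factor(n_efolds):
    """Relic number density scales as 1/a^3; a burst of inflation
    that stretches the scale factor by e^N therefore dilutes a
    pre-existing abundance by e^(3N)."""
    return math.exp(3.0 * n_efolds)

# Even a modest secondary burst of ~5 e-folds dilutes a frozen-out
# dark matter abundance by e^15, about a factor of three million.
print(f"{dilution_factor(5.0):.3g}")
```

This is why even a "mild" second inflation can reconcile a theory that initially overproduces dark matter with the observed density.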

    “It’s definitely not the standard cosmology, but you have to accept that the universe may not be governed by things in the standard way that we thought,” he said. “But we didn’t need to construct something complicated. We show how a simple model can achieve this short amount of inflation in the early universe and account for the amount of dark matter we believe is out there.”

    Proving the theory is another thing entirely. Davoudiasl said there may be a way to look for at least the very feeblest of interactions between the hidden sector and ordinary matter.

    “If this secondary inflationary period happened, it could be characterized by energies within the reach of experiments at accelerators such as the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider [LHC],” he said.


    Only time will tell if signs of a hidden sector show up in collisions within these colliders, or in other experimental facilities.

    Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.


    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

  • richardmitnick 4:42 pm on January 12, 2016 Permalink | Reply
    Tags: , , , Physics,   

    From Science Friday: “10 Questions for Alan Guth, Pioneer of the Inflationary Model of the Universe” 

    Science Friday

    January 7, 2016
    Christina Couch

    The theoretical physicist discusses the expanding universe and the infinite possibilities it brings.

    Buried under a mountain of papers and empty Coke Zero bottles, Alan Guth ponders the origins of the cosmos. A world-renowned theoretical physicist and professor at the Massachusetts Institute of Technology, Guth is best known for pioneering the theory of cosmic inflation, a model that explains the exponential growth of the universe mere fractions of a second after the Big Bang, and its continued expansion today.

    Cosmic inflation not only describes the underlying physics of the Big Bang, however. Guth believes it also supports the idea that our universe is one of many, with even more universes yet to form.

    Science Friday headed to MIT (where this writer also works, but in a different department) to chat with Guth in his office about the infinite possibilities in an unending cosmos, and the fortune cookie that changed his life.

    Alan Guth in 2007. Photo by Betsy Devine/Wikipedia/CC BY-SA 3.0

    Science Friday: What made you realize that you wanted to be a scientist?
    Alan Guth: I remember an event in high school, which maybe is indicative of my desires to be a theoretical physicist in particular. I was taking high school physics, and a friend of mine was doing an experiment which consisted of taking a yard stick and punching holes in it in different places and pivoting it on these different holes and seeing how the period depended on where the hole was. At this point, I had just learned enough basic physics and calculus to be able to calculate what the answer to that question is supposed to be. I remember one afternoon, we got together and compared my formula with his data using a slide rule to do the calculations. It actually worked. I was very excited about the idea that we can really calculate things, and they actually do reflect the way the real world works.

    You did your dissertation on particle physics and have said that it didn’t turn out exactly how you wanted. Could you tell me about that?
    My dissertation was about the quark model and about how quarks and anti-quarks could bind to form mesons. But it was really just before the theory of quarks underwent a major revolution [when physicists went from believing that quarks are heavy particles with a large binding energy when they combine, to the quantum chromodynamics theory that quarks are actually very light and bound by gluons, whose pull grows stronger as the quarks are drawn farther apart]. I was on the wrong side of that revolution. My thesis, more or less, became totally obsolete about the time I wrote it. I certainly learned a lot by doing it.

    What got you into cosmology?
    It wasn’t really until the eighth year of my being a [particle physics] postdoc that I got into cosmology. A fellow postdoc at Cornell named Henry Tye got interested in what was then a newfangled class of particle theories called grand unified theories [particle physics models that describe how three of the four fundamental forces in the universe—electromagnetism, weak nuclear interactions, and strong nuclear interactions—act as one force at extremely high energies]. He came to me one day and asked me whether these grand unified theories would predict that there should be magnetic monopoles [particles that have a net magnetic north charge or a net magnetic south charge].

    I didn’t know about grand unified theories at the time, so he had to teach me, which he did, very successfully. Then I knew enough to put two and two together and conclude—as I’m sure many people did around the world—that yes, grand unified theories do predict that magnetic monopoles should exist, but that they would be outrageously heavy. They would weigh something like 10 to the 16th power times as much as a proton [which means that scientists should theoretically be able to observe them in the universe, although no one has yet].

    About six months later, there was a visit to Cornell by [Nobel laureate] Steve Weinberg, who’s a fabulous physicist and someone I had known from my graduate student days at MIT. He was working on how grand unified theories might explain the excess of matter over anti-matter [in the universe], but it involved the same basic physics that determining how many monopoles existed in the early universe would involve. I decided that if it was sensible enough for Steve Weinberg to work on, why not me, too?

    After a little while, Henry Tye and I came to the conclusion that far too many magnetic monopoles would be produced if one combined conventional cosmology with conventional grand unified theories. We were scooped in publishing that, but Henry and I decided that we would continue to try to figure out if there was anything that could be changed that maybe would make it possible for grand unified theories to be consistent with cosmology as we know it.

    How did you come up with the idea of cosmic inflation?
    A little bit before I started talking to Henry Tye about monopoles, there was a lecture at Cornell by Bob Dicke, a Princeton physicist and cosmologist, in which he presented something that was called the flatness problem, a problem about the expansion rate of the early universe and how precisely fine-tuned it had to be to produce a universe like the one we live in [that is, one that has little or no space-time curvature and is therefore almost perfectly “flat”]. In this talk, Bob Dicke told us that if you thought about the universe at one second after the beginning, the expansion rate really had to be just right to 15 decimal places, or else the universe would either fly apart too fast for any structure to form or re-collapse too fast for any structure to form.

    At the time, I thought that was kind of amazing but didn’t even understand it. But after working on this magnetic monopole question for six months, I came to the realization one night that the kind of mechanism that we were thinking about that would suppress the amount of magnetic monopoles produced after the Big Bang [the “mechanism” being a phase transition that occurs after a large amount of super-cooling] would have the surprising effect of driving the universe into a period of exponential expansion—which is what we now call inflation—and that exponential expansion would solve this flatness problem. It would also draw the universe to exactly the right expansion rate that the Big Bang required [to create a universe like ours].

    You’ve said in previous talks that a fortune cookie played a legitimately important part in your career. How so?
    During the spring of 1980, after having come up with this idea of inflation, I decided that the best way to publicize it would be to give a lot of talks about it. I visited MIT, but MIT had not advertised any positions that year. During the very last day of this six-week trip, I was at the University of Maryland, and they took me out for a Chinese dinner, and the fortune I got in my Chinese fortune cookie said, “An exciting opportunity awaits you if you’re not too timid.” I thought about that and decided that it might be trying to tell me something. When I got back to California, I called one of the faculty members at MIT and said in some stammering way that I hadn’t applied for any jobs because there weren’t any jobs at MIT, but I wanted to tell them that if they might be interested in me, I’d be interested in coming. Then they got back to me in one day and made me an offer. It was great. I came to MIT as a faculty member, and I’ve been here ever since.

    When and where do you do your best work?
    I firmly believe that I do my best thinking in the middle of the night. I very much like to be able to have reasonably long periods of time, a few hours, when I can concentrate on something and not be interrupted, and that only happens at night. What often happens is I fall asleep at like 9:30 and wake up at 1 or 2 and start working and then fall asleep again at 5.

    Who is a dream collaborator you’d love to work with?
    I bet it would have been a lot of fun to work with [Albert] Einstein. What I really respect about Einstein is his desire to throw aside all conventional modes and just concentrate on what seems to be the closest we can get to an accurate theory of nature.

    What are you currently working on?
    The most concrete project I’m working on is a project in collaboration with a fairly large group here at MIT in which we’re trying to calculate the production of primordial black holes that might have happened with a certain version of inflation. If this works out, these primordial black holes could perhaps be the seeds for the supermassive black holes in the centers of galaxies, which are very hard to explain. It would be incredibly exciting if that turns out to be the case.

    What else are you mulling over?
    A bigger question, which has been in the back of my mind for a decade, is the problem of understanding probabilities in eternally inflating universes. In an eternally inflating universe, these pocket universes [like the one we live in] go on being formed literally forever. An infinite number of pocket universes are formed, and that means that anything that’s physically allowed will ultimately happen an infinite number of times.

    Normally we interpret probabilities as relative occurrences. We think one-headed cows are more probable than two-headed cows because we think there are a lot more one-headed cows than two-headed cows. I don’t know if there are any two-headed cows on earth, but let’s pretend there are. In an eternally inflating universe, assuming that a two-headed cow is at least possible, there will be an infinite number of two-headed cows and an infinite number of one-headed cows. It’s hard to know what you mean if you try to say that one is more common than the other.

    If anything can happen in an eternally inflating universe, is there a situation in which I am the cosmologist and you are the journalist?
    [Laughs] Probably, yes. I think what we would know for sure is that anything that’s physically possible—and I don’t see why this is not physically possible—will happen an infinite number of times.

    See the full article here.


    Covering everything from the outer reaches of space to the tiniest microbes in our bodies, Science Friday is the source for entertaining and educational stories about science, technology, and other cool stuff.

    Science Friday is your trusted source for news and entertaining stories about science.

    For 25 years we’ve introduced top scientists to public radio listeners, and reminded them how much fun it is to learn something new. But we’re more than just a radio show. We produce award-winning digital videos, original web articles, and educational resources for teachers and informal educators. We like to say we’re brain fun, for curious people.

    All of our work is independently produced by the Science Friday Initiative, a non-profit organization dedicated to increasing the public’s access to science and scientific information. Public Radio International (PRI) distributes our radio show, which you can catch on public radio stations across the U.S.

  • richardmitnick 7:58 am on January 9, 2016 Permalink | Reply
    Tags: Physics

    From The Conversation: “The race to find even more new elements to add to the periodic table” 

    The Conversation

    January 5, 2016
    David Hinde
    Director, Heavy Ion Accelerator Facility, Australian National University


    In an event likely never to be repeated, four new superheavy elements were simultaneously added to the periodic table last week. To add four in one go is quite an achievement, but the race to find more is ongoing.

    Back in 2012, the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP) tasked five independent scientists with assessing the claims made for the discovery of elements 113, 115, 117 and 118. The measurements had been made at nuclear physics accelerator laboratories in Russia (Dubna) and Japan (RIKEN) between 2004 and 2012.

    Late last year, on December 30, 2015, IUPAC announced that claims for the discovery of all four new elements had been accepted.

    The completed seventh row in the periodic table. Wikimedia Commons

    This completes the seventh row of the periodic table, and means that all elements between hydrogen (having only one proton in its nucleus) and element 118 (having 118 protons) are now officially discovered.

    After the excitement of the discovery, the scientists now have the naming rights. The Japanese team will suggest the name for element 113. The joint Russian/US teams will make suggestions for elements 115, 117 and 118. These names will be assessed by IUPAC, and once approved, will become the new names that scientists and students will have to remember.

    Until their discovery and naming, all superheavy elements (up to 999!) have been assigned temporary names by the IUPAC. Element 113 is known as ununtrium (Uut), 115 is ununpentium (Uup), 117 is ununseptium (Uus) and 118 is ununoctium (Uuo). These names are not actually used by physicists, who instead refer to them as “element 118”, for example.
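These provisional names follow a mechanical recipe: spell out the digits of the atomic number with numerical roots (nil = 0, un = 1, bi = 2, tri = 3, quad = 4, pent = 5, hex = 6, sept = 7, oct = 8, enn = 9), append “-ium”, and elide a doubled letter at the joins. A small sketch of that rule:

```python
ROOTS = ["nil", "un", "bi", "tri", "quad",
         "pent", "hex", "sept", "oct", "enn"]

def systematic_name(z):
    """IUPAC provisional (systematic) name and symbol for element z."""
    roots = [ROOTS[int(d)] for d in str(z)]
    name = "".join(roots) + "ium"
    name = name.replace("iium", "ium")  # e.g. tri + ium -> trium
    name = name.replace("nnn", "nn")    # e.g. enn + nil -> ennil
    symbol = "".join(r[0] for r in roots).capitalize()
    return name, symbol

print(systematic_name(113))  # ('ununtrium', 'Uut')
print(systematic_name(118))  # ('ununoctium', 'Uuo')
```

The same recipe gives “unbinilium” (Ubn) for the as-yet-undiscovered element 120 mentioned later in this article.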

    The superheavy elements

    Elements heavier than rutherfordium (element 104) are referred to as superheavy. They are not found in nature because they undergo radioactive decay to lighter elements.

    Those superheavy nuclei that have been created artificially have decay lifetimes between nanoseconds and minutes. But longer-lived, more neutron-rich superheavy nuclei are expected to sit at the centre of the so-called island of stability, a region where nuclei with extremely long half-lives should exist.

    Measured (boxed) and predicted (shaded) half-lives of isotopes, sorted by number of protons and neutrons. The expected location of the island of stability is circled.

    Currently, the isotopes of new elements that have been discovered are on the “shore” of this island, since we cannot yet reach the centre.

    How were these new elements created on Earth?

    Atoms of superheavy elements are made by nuclear fusion. Imagine touching two droplets of water – they will “snap together” because of surface tension to form a combined larger droplet.

    The problem in the fusion of heavy nuclei is the large numbers of protons in both nuclei. This creates an intense repulsive electric field. A heavy-ion accelerator must be used to overcome this repulsion, by colliding the two nuclei and allowing the nuclear surfaces to touch.
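The size of that repulsion can be estimated with a textbook formula: the Coulomb barrier between two touching charged spheres of radius R = r₀A^(1/3). A back-of-the-envelope sketch (the constants are standard rounded values; Ca-48 on Bk-249 is the pairing used for element 117):

```python
def coulomb_barrier_mev(z1, a1, z2, a2):
    """Rough Coulomb barrier for two touching spherical nuclei:
    V = e^2/(4*pi*eps0) * Z1*Z2 / (R1 + R2), with R = r0 * A^(1/3).
    A back-of-the-envelope estimate, not a precise reaction model."""
    r0 = 1.2   # fm
    e2 = 1.44  # MeV * fm, the value of e^2/(4*pi*eps0)
    r = r0 * (a1 ** (1 / 3) + a2 ** (1 / 3))
    return e2 * z1 * z2 / r

# Ca-48 (Z=20) on Bk-249 (Z=97): a barrier of roughly 230-240 MeV
# that the accelerator must supply before the nuclear surfaces touch.
print(f"{coulomb_barrier_mev(20, 48, 97, 249):.0f} MeV")
```

Hundreds of MeV of kinetic energy are needed just to bring the surfaces into contact, which is why these experiments require large accelerator facilities.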

    This is not sufficient, as the two touching spheroidal nuclei must change their shape to form a compact single droplet of nuclear matter – the superheavy nucleus.

    It turns out that this only happens in a few “lucky” collisions, as few as one in a million.

    View or download the mp4 video here: Superheavy reaction fails to fuse (ANU).

    There is yet another hurdle: the superheavy nucleus is very likely to decay almost immediately by fission. Again, as few as one in a million survives to become a superheavy atom, identified by its unique radioactive decay.

    The process of superheavy element creation and identification thus requires large-scale accelerator facilities, sophisticated magnetic separators, efficient detectors and time.

    Finding the three atoms of element 113 in Japan took 10 years, and that was after the experimental equipment had been developed.

    The payback from the discovery of these new elements comes in improving models of the atomic nucleus (with applications in nuclear medicine and in element formation in the universe) and testing our understanding of atomic relativistic effects (of increasing importance in the chemical properties of the heavy elements). It also helps in improving our understanding of complex and irreversible interactions of quantum systems in general.

    The Australian connection in the race to make more elements

    The race is now on to produce elements 119 and 120. The projectile nucleus Calcium-48 (Ca-48) – successfully used to form the newly accepted elements – has too few protons, and no target nuclei with more protons are currently available. The question is, which heavier projectile nucleus is the best to use.

    To investigate this, the leader and team members of the German superheavy element research group, based in Darmstadt and Mainz, recently travelled to the Australian National University.

    They made use of unique ANU experimental capabilities, supported by the Australian Government’s NCRIS program, to measure fission characteristics for several nuclear reactions forming element 120. The results will guide future experiments in Germany to form the new superheavy elements.

It seems certain that, using similar nuclear fusion reactions, proceeding beyond element 118 will be more difficult than reaching it was. But that was also the feeling after the discovery of element 112, first observed in 1996. And yet a new approach using Ca-48 projectiles allowed another six elements to be discovered.

    Nuclear physicists are already exploring different types of nuclear reaction to produce superheavies, and some promising results have already been achieved. Nevertheless, it would need a huge breakthrough to see four new nuclei added to the periodic table at once, as we have just seen.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

  • richardmitnick 6:53 pm on January 8, 2016 Permalink | Reply
    Tags: , , Physics   

    From Physics: “Focus: New Crystal Type is Always in Motion” 



    January 8, 2016
    Philip Ball

Dance diagram for math geeks. In these two-dimensional choreographic crystals, the arrows show directions of particles, arrayed initially on a triangular lattice, that move in straight lines from blue to yellow to pink. The configuration of highest "choreography" χ has the most rotations and reflections (combined with time shifts) that leave it unchanged (left, χ = 12); the next-highest choreography configuration has χ = 6 (right).

    Crystals are usually defined as orderly arrays of static components, such as atoms or molecules. But researchers have now proposed a new kind of crystal in which the order comes instead from the orchestrated movements of the components, such as orbiting satellites. The team calls such systems “choreographic crystals” and developed a formal theory to describe and categorize them.

    Latham Boyle of the Perimeter Institute for Theoretical Physics in Waterloo, Canada, says he began thinking about this problem while considering plans for a space observatory for detecting gravitational waves. The proposed observatory would use three sun-orbiting satellites, so they would always be confined to a plane. Boyle realized that four satellites could determine even more about the gravitational wave signal because they need not all lie in a common plane.

    Although no one is planning to build such a system, Boyle wondered if the orbits of four satellites could be coordinated in a symmetrical way, so that movies recorded from all of them would look identical. He and his colleagues now describe this four-satellite motion mathematically. Each satellite circles the same central point and orbits parallel to one of the faces of a regular tetrahedron; the relative timing is such that they appear at the corners of a square six times per orbit. This set of orbits “is clearly a very special and beautiful mathematical object, interesting in its own right, like a dynamical analog of the regular tetrahedron,” Boyle says.


The researchers use the theory of symmetry operations to generalize the four-satellite result and examine the orderly configurations available to "swarms" of an arbitrary number of satellites. They define a quantity called the choreography χ as a measure of the amount of symmetry that can be captured by periodically moving particles. For example, imagine two skaters moving simultaneously north-south and east-west through the center of a square rink and repeatedly reversing course when they reach the edges. The skaters have a higher choreography if they move out of phase—one reaching the edge while the other passes through the center—than if they are in phase, passing through the center at the same instant. In the first case, the moves capture the full symmetry of a square because the same set of rotations and reflections, along with time shifts, will leave the system unchanged. The second case has fewer symmetries. In general, says Boyle, there is a very large number of choreographic crystals, but only a few have very high choreography.
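The skater example can be checked numerically. In the sketch below (an illustration of the idea, not code from the paper), each skater follows a triangle-wave path across the rink, and a symmetry is a 90-degree rotation combined with a time shift that maps the pair of trajectories onto itself:

```python
import math

T = 1.0  # skating period (arbitrary units)

def tri(t):
    """Triangle wave: 0 at t=0, +1 at T/4, back through 0 to -1 at 3T/4."""
    s = math.sin(2 * math.pi * t / T)
    return (2 / math.pi) * math.asin(max(-1.0, min(1.0, s)))

def skaters(t, out_of_phase):
    """Positions of the east-west and north-south skaters at time t."""
    shift = T / 4 if out_of_phase else 0.0
    return [(tri(t), 0.0), (0.0, tri(t + shift))]

def rot90(p):
    """Rotate a point 90 degrees about the center of the rink."""
    x, y = p
    return (-y, x)

def same_pair(ps, qs, tol=1e-6):
    """Compare two unordered pairs of points up to a small tolerance."""
    def close(p, q):
        return all(abs(a - b) < tol for a, b in zip(p, q))
    (p1, p2), (q1, q2) = ps, qs
    return (close(p1, q1) and close(p2, q2)) or (close(p1, q2) and close(p2, q1))

def rotation_symmetry(tau, out_of_phase):
    """Is 'rotate by 90 degrees, then shift time by tau' a symmetry?"""
    return all(
        same_pair([rot90(p) for p in skaters(i * T / 200, out_of_phase)],
                  skaters(i * T / 200 + tau, out_of_phase))
        for i in range(200)
    )

# Out of phase: a quarter turn plus a time shift leaves the motion unchanged.
assert rotation_symmetry(3 * T / 4, out_of_phase=True)
# In phase: no time shift makes the quarter turn a symmetry.
assert not any(rotation_symmetry(k * T / 16, out_of_phase=False) for k in range(16))
```

The out-of-phase pair admits the quarter-turn (with a time shift), capturing more of the square's symmetry, while the in-phase pair does not, matching the higher choreography Boyle assigns to the first case.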

    Boyle hopes that choreographic crystals might prove relevant to many mathematical problems, just as the static lattices of standard crystallographic theory have found applications ranging from pure number theory to error correction in computation. The researchers admit that they have no idea if these crystals will exist naturally, although they speculate that the motions of atomic nuclei or electrons in solids might be coordinated this way. If so, it might be possible to detect the choreography using diffraction methods similar to those used in crystallography—the choreography would impart a distinctive signature to the diffraction pattern. Choreographic crystals might alternatively be made artificially, the researchers say, for example by trapping atoms or other small particles in electromagnetic traps created by intense light fields.

    The work is “a nice meshing of group theory and periodic dynamics,” says James Crutchfield, a specialist in complex dynamics at the University of California at Davis. He would now like to see a generalization of the approach to less regular “crystals,” such as choreographic quasicrystals, which Boyle and colleagues are already considering.

    This research is published in Physical Review Letters.


    Physicists are drowning in a flood of research papers in their own fields and coping with an even larger deluge in other areas of physics. How can an active researcher stay informed about the most important developments in physics? Physics highlights a selection of papers from the Physical Review journals. In consultation with expert scientists, the editors choose these papers for their importance and/or intrinsic interest. To highlight these papers, Physics features three kinds of articles: Viewpoints are commentaries written by active researchers, who are asked to explain the results to physicists in other subfields. Focus stories are written by professional science writers in a journalistic style and are intended to be accessible to students and non-experts. Synopses are brief editor-written summaries. Physics provides a much-needed guide to the best in physics, and we welcome your comments (physics@aps.org).

  • richardmitnick 6:27 pm on January 7, 2016 Permalink | Reply
    Tags: , Physics, , ,   

    From Physics Today: “Three groups close the loopholes in tests of Bell’s theorem” 


    Physics Today

    January 2016, page 14
    Johanna L. Miller

    Until now, the quintessential demonstration of quantum entanglement has required extra assumptions.

    The predictions of quantum mechanics are often difficult to reconcile with intuitions about the classical world. Whereas classical particles have well-defined positions and momenta, quantum wavefunctions give only the probability distributions of those quantities. What’s more, quantum theory posits that when two systems are entangled, a measurement on one instantly changes the wavefunction of the other, no matter how distant.

    Might those counterintuitive effects be illusory? Perhaps quantum theory could be supplemented by a system of hidden variables that restore local realism, so every measurement’s outcome depends only on events in its past light cone. In a 1964 theorem John Bell showed that the question is not merely philosophical: By looking at the correlations in a series of measurements on widely separated systems, one can distinguish quantum mechanics from any local-realist theory. (See the article by Reinhold Bertlmann, Physics Today, July 2015, page 40.) Such Bell tests in the laboratory have come down on the side of quantum mechanics. But until recently, their experimental limitations have left open two important loopholes that require additional assumptions to definitively rule out local realism.

    Now three groups have reported experiments that close both loopholes simultaneously. First, Ronald Hanson, Bas Hensen (both pictured in figure 1), and their colleagues at Delft University of Technology performed a loophole-free Bell test using a novel entanglement-swapping scheme.1 More recently, two groups—one led by Sae Woo Nam and Krister Shalm of NIST,2 the other by Anton Zeilinger and Marissa Giustina of the University of Vienna3—used a more conventional setup with pairs of entangled photons generated at a central source.

    Figure 1. Bas Hensen (left) and Ronald Hanson in one of the three labs they used for their Bell test. FRANK AUPERLE

    The results fulfill a long-standing goal, not so much to squelch any remaining doubts that quantum mechanics is real and complete, but to develop new capabilities in quantum information and security. A loophole-free Bell test demonstrates not only that particles can be entangled at all but also that a particular source of entangled particles is working as intended and hasn’t been tampered with. Applications include perfectly secure quantum key distribution and unhackable sources of truly random numbers.

    In a typical Bell test trial, Alice and Bob each possess one of a pair of entangled particles, such as polarization-entangled photons or spin-entangled electrons. Each of them makes a random and independent choice of a basis—a direction in which to measure the particle’s polarization or spin—and performs the corresponding measurement. Under quantum mechanics, the results of Alice’s and Bob’s measurements over repeated trials can be highly correlated—even though their individual outcomes can’t be foreknown. In contrast, local-realist theories posit that only local variables, such as the state of the particle, can influence the outcome of a measurement. Under any such theory, the correlation between Alice’s and Bob’s measurements is much less.

    But what if some hidden signal informs Bob’s experiment about Alice’s choice of basis, or vice versa? If such a signal can change the state of Bob’s particle, it can create quantum-like correlations in a system without actual quantum entanglement. That possibility is at the heart of the so-called locality loophole. The loophole can be closed by arranging the experiment, as shown in figure 2, so that no light-speed signal with information about Alice’s choice of basis can reach Bob until after his measurement is complete.

    Figure 2. The locality loophole arises from the possibility that hidden signals between Alice and Bob can influence the results of their measurements. This space–time diagram represents an entangled-photon experiment for which the loophole is closed. The diagonal lines denote light-speed trajectories: The paths of the entangled photons are shown in red, and the forward light cones of the measurement-basis choices are shown in blue. Note that Bob cannot receive information about Alice’s chosen basis until after his measurement is complete, and vice versa.

    In practice, under that arrangement, for Alice and Bob to have enough time to choose their bases and make their measurements, they must be positioned at least tens of meters apart. That requirement typically means that the experiments are done with entangled photons, which can be transported over such distances without much damage to their quantum state. But the inefficiencies in handling and detecting single photons introduce another loophole, called the fair-sampling or detection loophole: If too many trials go undetected by Alice, Bob, or both, it’s possible for the detected trials to display quantum-like correlations even when the set of all trials does not.
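The timing constraint behind the "tens of meters" figure is simple arithmetic: the stations must be far enough apart that light cannot carry the basis choice across before the measurement finishes. A sketch with an assumed (not quoted) timing budget:

```python
# Back-of-envelope version of the locality requirement described above.
# The 100 ns budget for choosing a basis and completing a measurement is
# an assumed figure for illustration, not a number from the experiments.
C = 299_792_458.0  # speed of light in vacuum, m/s

def locality_closed(separation_m, choice_plus_measurement_s):
    """True if light cannot carry the basis choice across in time."""
    return separation_m / C > choice_plus_measurement_s

assert not locality_closed(10.0, 100e-9)  # 10 m: light arrives in ~33 ns, too soon
assert locality_closed(60.0, 100e-9)      # 60 m: light needs ~200 ns, loophole closed
```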

    In Bell tests that are implemented honestly, there’s little reason to think that the detected trials are anything other than a representative sample of all trials. But one can exploit the detection loophole to fool the test on purpose by causing trials to go undetected for reasons other than random chance. For example, manifestly classical states of light can mimic single photons in one basis but go entirely undetected in another (see Physics Today, December 2011, page 20). Furthermore, similar tricks can be used for hacking quantum cryptography systems. The only way to guarantee that a hacker is not present is to close the loopholes.

    Instead of the usual entangled photons, the Delft group based their experiment on entangled diamond nitrogen–vacancy (NV) centers, electron spins associated with point defects in the diamond’s crystal lattice and prized for their long quantum coherence times. The scheme is sketched in figure 3: Each NV center is first entangled with a photon, then the photons are sent to a central location and jointly measured. A successful joint measurement, which transfers the entanglement to the two NV centers, signals Alice and Bob that the Bell test trial is ready to proceed.

    Figure 3. Entanglement swapping between diamond nitrogen–vacancy (NV) centers. Alice and Bob entangle their NV spins with photons, then transmit the photons to a central location to be jointly measured. After a successful joint measurement, which signals that the NV spins are entangled with each other, each spin is measured in a basis chosen by a random-number generator (RNG). (Adapted from ref. 1.)

In 2013 the team carried out a version of that experiment4 with the NV spins separated by 3 m. "It was at that moment," says Hanson, "that I realized that we could do a loophole-free Bell test—and also that we could be the first." A 3-m separation is not enough to close the locality loophole, so the researchers set about relocating the NV-center equipment to two separate labs 1.3 km apart and fiber-optically linking them to the joint-measurement apparatus at a third lab in between.

    A crucial aspect of the entanglement-swapping scheme is that the Bell test trial doesn’t begin until the joint measurement is made. As far as the detection loophole is concerned, attempted trials without a successful joint measurement don’t count. That’s fortunate, because the joint measurement succeeds in just one out of every 156 million attempts—a little more than once per hour.

    That inefficiency stems from two main sources. First, the initial spin–photon entanglement succeeds just 3% of the time at each end. Second, photon loss in the optical fibers is substantial: The photons entangled with the NV centers have a wavelength of 637 nm, well outside the so-called telecom band, 1520–1610 nm, where optical fibers work best. In contrast, once the NV spins are entangled, they can be measured efficiently and accurately. So of the Bell test trials that the researchers are able to perform, none are lost to nondetection.

    Early in the summer of 2015, Hanson and colleagues ran their experiment for 220 hours over 18 days and obtained 245 useful trials. They saw clear evidence of quantum correlations—although with so few trials, the likelihood of a nonquantum system producing the same correlations by chance is as much as 4%.
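A quick way to see why 245 trials leaves a percent-level chance of a classical fluke: in the CHSH-game formulation of a Bell test, any local-realist strategy wins at most 75% of trials, while quantum mechanics allows about 85%. The 80% win rate used below is an assumed illustrative number, not the Delft group's actual statistic:

```python
import math

# Any local-realist (classical) strategy wins at most 75% of CHSH-game trials;
# quantum mechanics allows ~85%. The 80% "observed" rate is hypothetical.
n, p_classical = 245, 0.75
wins_observed = round(0.80 * n)  # 196 quantum-looking wins (assumed)

# Exact binomial tail: the chance that a classical strategy gets at least
# this many wins purely by luck.
p_fluke = sum(math.comb(n, k) * p_classical**k * (1 - p_classical)**(n - k)
              for k in range(wins_observed, n + 1))
assert 0.005 < p_fluke < 0.1  # a few percent: small, but not negligible
```

With thousands of trials instead of hundreds, the same tail probability collapses to essentially zero, which is exactly the advantage the photon experiments exploit.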

    The Delft researchers are working on improving their system by converting their photons into the telecom band. Hanson estimates that they could then extend the separation between the NV centers from 1.3 km up to 100 km. That distance makes feasible a number of quantum network applications, such as quantum key distribution.

    In quantum key distribution—as in a Bell test—Alice and Bob perform independently chosen measurements on a series of entangled particles. On trials for which Alice and Bob have fortuitously chosen to measure their particles in the same basis, their results are perfectly correlated. By conferring publicly to determine which trials those were, then looking privately at their measurement results for those trials, they can obtain a secret string of ones and zeros that only they know. (See article by Daniel Gottesman and Hoi-Kwong Lo, Physics Today, November 2000, page 22.)
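The sifting step just described is easy to simulate. The toy model below assumes perfect correlations, no noise, and no eavesdropper, purely to illustrate why roughly half the trials survive:

```python
import random

random.seed(0)  # reproducible toy run

def sift_key(n_trials):
    """Toy model of the sifting step: keep only same-basis trials.
    Assumes perfect correlations, no noise, no eavesdropper."""
    key_a, key_b = [], []
    for _ in range(n_trials):
        basis_a = random.randint(0, 1)  # Alice's random basis choice
        basis_b = random.randint(0, 1)  # Bob's independent choice
        if basis_a == basis_b:
            bit = random.randint(0, 1)  # same basis: outcomes perfectly correlated
            key_a.append(bit)
            key_b.append(bit)
        # different bases: outcomes are uncorrelated, so the trial is discarded
    return key_a, key_b

key_a, key_b = sift_key(10_000)
assert key_a == key_b            # Alice and Bob now share an identical secret string
assert 4500 < len(key_a) < 5500  # about half the trials survive sifting
```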

    The NIST and Vienna groups both performed their experiments with photons, and both used single-photon detectors developed by Nam and his NIST colleagues. The Vienna group used so-called transition-edge sensors that are more than 98% efficient;5 the NIST group used superconducting nanowire single-photon detectors (SNSPDs), which are not as efficient but have far better timing resolution. Previous SNSPDs had been limited to 70% efficiency at telecom wavelengths—in part because the polycrystalline superconductor of choice doesn’t couple well to other optical elements. By switching to an amorphous superconducting material, Nam and company increased the detection efficiency to more than 90%.6

    Shalm realized that the new SNSPDs might be good enough for a loophole-free Bell test. “We had the detectors that worked at telecom wavelengths, so we had to generate entangled photons at the same wavelengths,” he says. “That was a big engineering challenge.” Another challenge was to boost the efficiency of the mazes of optics that carry the entangled photons from the source to the detector. “Normally, every time photons enter or exit an optical fiber, the coupling is only about 80% efficient,” explains Shalm. “We needed to get that up to 95%. We were worrying about every quarter of a percent.”

    In September 2015 the NIST group conducted its experiment between two laboratory rooms separated by 185 m. The Vienna researchers positioned their detectors 60 m apart in the subbasement of the Vienna Hofburg Castle. Both groups had refined their overall system efficiencies so that each detector registered 75% or more of the photons created by the source—enough to close the detection loophole.

    In contrast to the Delft group’s rate of one trial per hour, the NIST and Vienna groups were able to conduct thousands of trials per second; they each collected enough data in less than one hour to eliminate any possibility that their correlations could have arisen from random chance.

    It’s not currently feasible to extend the entangled-photon experiments into large-scale quantum networks. Even at telecom wavelengths, photons traversing the optical fibers are lost at a nonnegligible rate, so lengthening the fibers would lower the fraction of detected trials and reopen the detection loophole. The NIST group is working on using its experiment for quantum random-number generation, which doesn’t require the photons to be conveyed over such vast distances.

    Random numbers are widely used in security applications. For example, one common system of public-key cryptography involves choosing at random two large prime numbers, keeping them private, but making their product public. Messages can be encrypted by anyone who knows the product, but they can be decrypted only by someone who knows the two prime factors.

    The scheme is secure because factorizing a large number is a computationally hard problem. But it loses that security if the process used to choose the prime numbers can be predicted or reproduced. Numbers chosen by computer are at best pseudorandom because computers can run only deterministic algorithms. But numbers derived from the measurement of quantum states—whose quantum nature is verified through a loophole-free Bell test—can be truly random and unpredictable.
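The prime-product scheme described above is textbook RSA. A toy version with tiny, utterly insecure primes (the numbers are the standard textbook example; real keys use random primes hundreds of digits long, and the modular inverse via `pow` needs Python 3.8+):

```python
# Textbook RSA with tiny, insecure primes; real systems choose random
# primes hundreds of digits long.
p, q = 61, 53            # the two secret primes
n = p * q                # public product: 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent (modular inverse; Python 3.8+)

message = 65
ciphertext = pow(message, e, n)   # anyone who knows (n, e) can encrypt
decrypted = pow(ciphertext, d, n) # only the holder of d can decrypt
assert decrypted == message
```

The security rests entirely on the secrecy and unpredictability of p and q, which is why a predictable random-number source breaks the whole scheme.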

    The NIST researchers plan to make their random-number stream publicly available to everyone, so it can’t be used for encryption keys that need to be kept private. But a verified public source of tamperproof random numbers has other uses, such as choosing unpredictable samples of voters for opinion polling, taxpayers for audits, or products for safety testing.


1. B. Hensen et al., Nature 526, 682 (2015). http://dx.doi.org/10.1038/nature15759
2. L. K. Shalm et al., Phys. Rev. Lett. (in press), http://arxiv.org/abs/1511.03189
3. M. Giustina et al., Phys. Rev. Lett. (in press), http://arxiv.org/abs/1511.03190
4. H. Bernien et al., Nature 497, 86 (2013). http://dx.doi.org/10.1038/nature12016
5. A. E. Lita, A. J. Miller, S. W. Nam, Opt. Express 16, 3032 (2008). http://dx.doi.org/10.1364/OE.16.003032
6. F. Marsili et al., Nat. Photonics 7, 210 (2013). http://dx.doi.org/10.1038/nphoton.2013.13

    © 2016 American Institute of Physics
    DOI: http://dx.doi.org/10.1063/PT.3.3039


    The American Physical Society strives to:

    Be the leading voice for physics and an authoritative source of physics information for the advancement of physics and the benefit of humanity;
    Provide effective programs in support of the physics community and the conduct of physics;
    Collaborate with national scientific societies for the advancement of science, science education and the science community;
    Cooperate with international physics societies to promote physics, to support physicists worldwide and to foster international collaboration;
    Promote an active, engaged and diverse membership, and support the activities of its units and members.

  • richardmitnick 8:54 pm on January 6, 2016 Permalink | Reply
    Tags: , , Physics, Shape dynamics   

    From NOVA: “A Radical Reinterpretation of Einstein’s Theory” 



    06 Jan 2016
    Dan Falk

"It is not easy to walk alone in the country without musing upon something," Charles Dickens once observed. For Julian Barbour, those musings most often involve the nature of space and time. Barbour, 78, is an independent physicist who contemplates the cosmos from College Farm, a rustic thatched-roof country house some twenty miles north of Oxford. He is perhaps best known for his 1999 book The End of Time: The Next Revolution in Physics, in which he argues that time is an illusion.

    While country walks may be best enjoyed on one’s own, musings about theoretical physics can benefit from good, smart company—and Barbour has made a point of inviting a handful of bright young physicists to join him for periodic brainstorming sessions at College Farm—think Plato’s Academy in the English countryside.

    Their latest offering is something called shape dynamics. (If you’ve never heard of shape dynamics, that’s OK—neither have most physicists.) It could, of course, be a dead end, as most bold new ideas in physics are. Or it could be the next great revolution in our conception of the cosmos. Its supporters describe it as a new way of looking at gravity, although it could end up being quite a bit more than that. It appears to give a radical new picture of space and time—and of black holes in particular. It could even alter our view of what’s “real” in the universe.

    The shape of an object is a real, objective quality according to the theory of shape dynamics. No image credit found.

    Last summer, Barbour and his colleagues gathered for a workshop at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, to hash out the ideas behind shape dynamics. During a break in the workshop, I sat down with a young physicist named Sean Gryb, one of Barbour’s protégés.

    “We’re trying to re-evaluate the basic assumptions of Einstein’s theory of relativity—in particular, what it has to say about gravity,” Gryb says. “It’s a shift in what we view as the fundamental elements of reality.”

    Gryb, 33, is a tall and athletic figure; he’s affable and good-humored. He’s now a postdoc at Radboud University in the Netherlands, but he grew up in London, Ontario, and did his PhD down the road from Perimeter, at the University of Waterloo. The fact that he travels so much—the Netherlands, England, Canada—may explain why Gryb’s accent is so hard to pin down. “If I’m in the UK, it turns more British,” he says.

    His PhD supervisor was Lee Smolin, one of Perimeter’s superstar scientists. (Perimeter isn’t a degree-granting institution, so students who work with the institute’s scientists earn their degrees from Waterloo.) Smolin, like Barbour, is known for his outside-the-box ideas; he’s the author of The Trouble With Physics and several other provocative books and has been a vocal critic of string theory, the leading contender for a theory of quantum gravity, a framework that unites Einstein’s theory of gravity, known as general relativity, with quantum mechanics. Gryb, too, seems most comfortable outside the box. Sure, he could work on problems where the questions are well defined and the strategies clearly mapped, slowly adding to what we know about the universe. There’s no shame in that; it’s what most physicists do. Instead, like Barbour and Smolin, he focuses on the very foundations of physics—space, time, gravity.

    Shape, Scale, and Gravity

    Let’s stick with gravity for a moment. It’s surely the most basic of nature’s forces. You drop a hammer, it falls down. Of course, there’s a bit more to it than that: Three and a half centuries ago, Isaac Newton showed that the force that pulls the hammer to the ground is the same force that keeps the moon in its orbit around the earth—a pretty impressive leap of logic, but one that Newton was able to prove with hard data and mathematical rigor.

    Then we come to [Albert] Einstein, who tackled gravity in his masterpiece, general relativity—a theory that’s just celebrated its 100th anniversary. Back in 1915, Einstein showed how gravity and geometry were linked, that what we imagine as the “force” of gravity can be thought of as a curvature in space and time. Ten years earlier, Einstein had shaken things up by showing that space and time are relative: What we measure with our clocks and yardsticks depends on the relative motion of us and the object being measured.

    But even though space and time are relative in Einstein’s theory, scale remains absolute. A mouse and elephant can roam the cosmos, but if the elephant is bigger somewhere, it’s bigger everywhere. The elephant is “really” bigger than the mouse. In shape dynamics, though, size is relative, but the shape of objects becomes a real, objective quality. From the shape dynamics perspective, we’d say that we can only be sure that the elephant is bigger than the mouse if they’re right next to each other, and we’re there too, with our yardstick. Should either beast stray from our location, we can no longer be certain of their true sizes. Whenever they reunite, we can once again measure their relative sizes; that ratio won’t change—but again, we can only perform the measurement if we’re all next to one another. Shape, unlike size, doesn’t suffer from such uncertainty.

    “Absolute size is something that seems to be built into Einstein’s theory of relativity,” says Gryb. “But it’s something that actually we don’t see. If I want to measure the length of something, I’m always comparing it against a meter stick. It’s the comparison that’s important.”

    Perhaps the best way to understand what Gryb is saying is to imagine that we double the size of everything in the universe. But wait: If we double the size of everything, then we’re also doubling the size of the yardsticks—which means the actual measurements we make don’t change.

    This suggests that “size” isn’t real in any absolute sense; it’s not an objective quantity. With shape dynamics, says Gryb, “we’re taking this very simple idea and trying to push it as far as we can. And what we realized—which was a surprise to me, actually—is that you can have relativity of scale and reproduce a theory of gravity which is equivalent to Einstein’s theory—but you have to abandon the notion of relative time.”
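The doubling argument is easy to make concrete: if every length in a toy "universe," yardstick included, is scaled by the same factor, no measurement changes. The lengths below are illustrative:

```python
# Every measurement is a comparison against the yardstick, so a global
# doubling (yardstick included) is invisible. Lengths are illustrative.
lengths = {"mouse": 0.07, "elephant": 3.2, "yardstick": 1.0}

def measure(world):
    """Express every length as a ratio to the yardstick."""
    return {k: v / world["yardstick"] for k, v in world.items() if k != "yardstick"}

doubled = {k: 2 * v for k, v in lengths.items()}
assert measure(lengths) == measure(doubled)  # the doubling is unobservable
```

Only the ratios survive, which is the sense in which shape dynamics treats scale as unphysical while keeping shape (the full set of ratios and angles) objective.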

Does this mean that Einstein was wrong about time being relative? Surely we’re not heading back to Isaac Newton’s notion of absolute space and time? Gryb assures me we’re not. “We’re not going all the way back to Newton,” he says.


    Even though Newton’s conception of space and time turned out to be flawed, his ideas have continued to serve as an inspiration—or at least a jumping-off point—for countless scientists following in his footsteps. In fact, Julian Barbour tells me that his own thinking on shape dynamics began with an analysis of exactly how and why the Newtonian picture fails. Some 50 years ago, Barbour picked up a book called The Science of Mechanics by Ernst Mach, the 19th-century Austrian physicist and philosopher. In the book, Barbour found Mach’s nuanced critique of Newton’s conception of space and time. (I interviewed Barbour at length for a 2008 radio documentary called Living on Oxford Time, which aired on the CBC.)

    Newton had imagined that space was laced with invisible grid-lines—something like the lines of latitude and longitude on a globe—that specify exactly where every object is located in the universe. Similarly, he imagined a “universal clock” that ticks away the hours, minutes, and seconds for all observers at a single, uniform rate. But Mach saw that this was wishful thinking. In real life, there are no grid lines and no universal clock.

    “What happens in the real universe is that everything is moving relative to everything else,” Barbour says. It is the set of relative positions that matters. Only that, Mach concluded, can serve as a foundation for physics. Einstein, as a youngster, was deeply influenced by Mach’s thinking. Now Barbour, too, was hooked—and he’s devoted his life to expanding on Mach’s ideas.

    Barbour isn’t alone. “Julian’s interpretation of Mach’s ideas are at the bedrock of what we’re doing,” Gryb says.

    About 16 years ago, Barbour started collaborating with an Irish physicist, Niall Ó Murchadha. Together they struggled to work out a theory in which only angles and ratios count. Size would have no absolute meaning. (To see why angles are important, think of a triangle: As it moves through space, we can misjudge its size, but can’t misjudge the angles of its three vertices; those angles, which determine the triangle’s shape, will not change.) Ideas like these—together with a good deal of advanced mathematics—would eventually evolve into shape dynamics.

    Intriguingly, shape dynamics reproduces all of the peculiar effects found in general relativity: Massive objects still warp the space around them, clocks still run more slowly in a strong gravitational field, just like in Einstein’s theory. Physicists call this a “duality”—a different mathematical description, but the same end results.

    “In many ways, it’s just Einstein’s theory in a radically different description,” says Barbour. “It’s a radical reinterpretation.”
    Identical, Almost

    In most situations, shape dynamics predicts what Einstein’s theory predicts. “For the vast majority of physical situations, the theories are equivalent,” Gryb says. In other words, the two frameworks are almost identical—but not quite.

    Imagine dividing space-time up into billions upon billions of little patches. Within each patch, shape dynamics and general relativity tell the same story, Gryb says. But glue them all together, and a new kind of structure can emerge. For a concrete example of how this can happen, think of pulling together the two ends of a long, narrow strip of paper: Do it the usual way, and you get a loop; do it with a twist and you get a Möbius strip. “If you glue all the regions together to form a kind of global picture of space and time, then that global picture might actually be different.” So while shape dynamics may recreate Einstein’s theory on a small scale, the big-picture view of space and time may be novel.

    There is one kind of object where the shape dynamics picture differs starkly from the traditional view—the black hole. In the standard picture, a black hole forms when a massive star exhausts its nuclear fuel supply and collapses. If the star is large enough, nothing can stop that collapse, and the star shrinks until it’s smaller than its own event horizon—the point of no return for matter falling toward it. A black hole’s gravitational field is so intense that nothing—not even light—can escape from within the event horizon. At the black hole’s core, a singularity forms—a point where the gravitational field is infinitely strong, where space and time are infinitely curved. The unlucky astronaut who reaches this point will be spaghettified, as Stephen Hawking has put it, or burned to a crisp. Singularities don’t sit well with physicists. They’re usually seen as a sign that something is not quite right with the underlying theory.
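For a sense of scale, the standard general-relativistic formula for the event-horizon radius of a non-rotating black hole is r_s = 2GM/c². A quick calculation (ordinary general relativity, nothing specific to shape dynamics) shows how compact the horizon is:

```python
G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
C     = 2.998e8      # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating black hole: r_s = 2GM/c^2."""
    return 2 * G * mass_kg / C**2

# A 10-solar-mass stellar black hole has a horizon radius of roughly 30 km.
r = schwarzschild_radius(10 * M_SUN)
print(f"{r / 1000:.0f} km")   # ~30 km
```

An object twenty million times the mass of the Earth, compressed inside a sphere that would fit within a mid-sized city.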

    According to shape dynamics’ proponents, the theory does away with singularities—a definite selling point. But the picture of black holes in shape dynamics is more radical than that. “It looks like black holes—in shape dynamics—are qualitatively different from what happens in general relativity,” Gryb says.

    At first, the astronaut approaching the black hole sees nothing that’s different from the Einsteinian description; outside of the event horizon, general relativity and shape dynamics give the same picture. But beyond the horizon, the story changes dramatically.

    Not only is there no singularity in a shape dynamics universe, there’s no headlong rush toward the place where you’d expect it to be. In fact, an astronaut who sails past the event horizon finds herself not in a shrinking world but in an expanding one. The astronaut “comes into this new region of space—which was formed effectively by the collapse of a star—and is now free to wander around in that space.” You can think of the black hole as a wormhole into that new space, Gryb says.

    True, the astronaut can never exit back to the region outside the event horizon—but in this new space “he or she is free to wander around wherever they would like. And that’s a very different picture,” Gryb says. “But it’s still very early, and we’re trying to understand better what that means.”

    Is it a parallel world? “I wouldn’t necessarily call it that—it’s just a pocket of space that was created by the collapse of the star,” he says. It’s “basically the region between the horizon and the surface of the collapsed star. And that region gets larger and larger as the star starts to collapse more and more.”

    In other words, space—dare we say it—has been turned inside out. The region inside the event horizon, which had seemed tiny, now appears huge. What had been the surface of the collapsing star is now the “sky,” and rather than shrinking, it’s getting larger. The space inside the event horizon “is the mirror image” of the space that our traveller left behind, outside the horizon, Gryb says.

    In shape dynamics, falling into a black hole seems an awful lot like falling into a rabbit hole and discovering a strange new world on the other side, just like Alice did in Wonderland. The only problem is that we can’t see down the rabbit hole. Whatever may happen within the event horizon, we have no hope of observing it from the outside. Of course, you could jump into a black hole, and see what’s there—but you could never communicate your findings to those outside.

    Putting It to the Test

    But Gryb is hopeful. We’ve known since the 1970s that black holes don’t stick around forever—Stephen Hawking showed that, given enough time, they evaporate by a mechanism known as Hawking radiation. “It’s possible that the story about what happens on the other side of the horizon might change the story of what happens when the black hole evaporates,” he says. “If we can make definite predictions for this, then it might provide a way to test our scenario against general relativity.”
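The evaporation Gryb is counting on is extraordinarily slow for astrophysical black holes. Hawking’s semiclassical estimate gives an evaporation time of t ≈ 5120πG²M³/(ħc⁴), which for a solar-mass black hole works out to around 10^67 years. A back-of-the-envelope check, entirely independent of shape dynamics:

```python
import math

G     = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C     = 2.998e8       # speed of light, m/s
HBAR  = 1.0546e-34    # reduced Planck constant, J s
M_SUN = 1.989e30      # solar mass, kg
YEAR  = 3.156e7       # seconds per year

def evaporation_time_years(mass_kg):
    """Hawking evaporation time, t = 5120*pi*G^2*M^3 / (hbar*c^4), in years."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4) / YEAR

print(f"{evaporation_time_years(M_SUN):.1e} years")   # ~2e67 years
```

Some 10^57 times the current age of the universe, which is why any observational test built on evaporation would have to target far smaller black holes.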

    Such tests are “just wild fantasies” at the moment, Gryb admits—but then, he notes, so are some of the predictions of other novel approaches, such as the recently popular firewall hypothesis.

    The physicists that I spoke with—the few who have been following what the shape dynamics crew have been up to—are understandably cautious. This new picture of black holes is interesting, of course, but the critical question is whether it can be tested.

    “What do black holes look like in their picture?” says Astrid Eichhorn, a physicist at Imperial College London and also a visiting fellow at Perimeter. “Is it just mathematical differences? Or is there something we can really observe—for instance with the Event Horizon Telescope—where we can see a physical difference and make an observation or experiment to see which of the two [shape dynamics or general relativity] is correct?”

    Eichhorn has other concerns, too. “I’m skeptical of how this will work out, both on the conceptual side and also on the technical side,” she says. “It seems that, by giving up the space-time picture, they have a lot of technical complications in formulating the theory.” Figuring out how to handle quantum effects, for example, “seems to become much more challenging in their framework than it already is in the standard approach to quantum gravity.”

    Indeed, the word “quantum” rarely came up at the Perimeter workshop—although the hope is that the new framework will provide some insight into reconciling gravity and quantum theory.

    Gryb, for his part, admits that the problem of unifying these two pillars of modern physics is a daunting one—perhaps as daunting in shape dynamics as it has been in earlier approaches. “We’ve made progress on trying to understand what shape dynamics might have to say about quantum gravity—but we’ve also run into a bunch of dead ends.”
    Looking for Clarity

    Also attending the workshop was physicist Paul Steinhardt of Princeton University, known for his work on the inflationary model of the Big Bang and on alternative cosmological models. Several times during the workshop, Steinhardt would call on a speaker to be clearer, more explicit. Like Eichhorn, Steinhardt is concerned about the seeming lack of anything quantum-mechanical in the shape dynamics picture. And of course there’s the issue of falsifiability—that is, putting the theory to the test.

    “My question was, what is scientifically meaningful that you expect to come out of this?” he says. “What’s different about this approach to gravity—as opposed to others—that you could test and experiment with and verify that would change our view about anything?”

    The answers he got during the workshop didn’t satisfy him. “Some people said, ‘The discipline is too young, so we don’t know yet. It might bring us something new.’ And my brain is thinking, ‘OK, good—come back when you’ve got that something.’ ”

    Others, meanwhile, spoke of the new ontology that shape dynamics offers. Ontology is a word that crops up frequently in the philosophy of science. It refers to the labeling of what’s “real” in a scientific theory, but it doesn’t necessarily change what you actually see when you observe nature. To Steinhardt, a change in ontology isn’t very exciting on its own. It’s just a way of describing something in a different way—a change of narrative, as it were, rather than a change in what we’d expect to see or measure. “Sometimes that’s useful,” Steinhardt says, “but it doesn’t obviously give you anything really new.”

    And yet, in the history of physics—and of cosmology in particular—changes in narrative sometimes seem rather profound. Think of the change from the Earth-centered cosmos of the ancient Greeks to the sun-centered cosmos of Copernicus. The observations were the same, but the “story” was radically different.

    Still, Steinhardt sticks to his guns. Switching from an Earth-centered to a sun-centered description of the cosmos didn’t immediately bring any “new science.” Yes, it gave us a new story, but the new model wasn’t much better than the old one in terms of explaining the observed motion of the planets. That didn’t come until a half century later, when Johannes Kepler worked out the true shape of planetary orbits (they’re ellipses, it turns out, not circles). “I would have been skeptical of Copernicus—but I would have been really blown away by Kepler,” Steinhardt says.

    A Risky Pursuit

    The resistance to shape dynamics—like the skepticism that surrounds any new idea in physics—is par for the course. Science is, by its nature, a skeptical pursuit. The onus is on those who believe they’ve found something new to convince the community that they’ve really done so. In theoretical particle physics and cosmology, in particular, new ideas are always bubbling up like a tea kettle on the boil. There’s no way to read everything that gets published, so one reads only what seems genuinely promising.

    (For those with the necessary physics background, Mercati has published a 67-page shape dynamics tutorial online; Gryb, meanwhile, has a short introductory essay on his Web page. There’s also a brief description of the theory in Smolin’s recent book, Time Reborn.)

    Even for those who find shape dynamics compelling, it may be risky to pursue. Most of those working on it are young, and the theory, at least for now, lies somewhat toward the fringes of mainstream physics—which means that junior researchers are taking a career risk by pursuing it.

    Flavio Mercati is currently a post-doc at Perimeter; he did his PhD at the Sapienza University of Rome. But when he first expressed an interest in working with Barbour on fundamental physics, his professors tried to talk him out of it. “They said, ‘Look, I suggest you don’t,’” he recalls. “Try something more down to earth.” Because of the vagaries of the job market for academic physicists, there’s pressure to steer clear of deep, foundational issues, Mercati says. Pursue matters that are too esoteric and “you pay a price, career-wise.” Most of these researchers have yet to secure tenured academic positions—and it’s not clear if working on shape dynamics helps or hinders that quest. (At least Mercati will soon have a book to show for his efforts—the first textbook on shape dynamics, to be published by Oxford University Press.)

    All of this leaves these young shape dynamics researchers poised uncomfortably on the knife-edge between excitement (a new paradigm!) and humility (we’re probably wrong).

    In the end, Barbour, Gryb, Mercati, and their colleagues are taking the only route possible—they’re going where their equations lead them.

    “We’re saying something totally different from what everyone else is saying,” Gryb says toward the end of our interview. “Can it possibly be right?”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 3:30 pm on January 1, 2016
    Tags: Physics

    From space.com: “Time Warps and Black Holes: The Past, Present & Future of Space-Time” 



    December 31, 2015
    Nola Taylor Redd

    A massive object like the Earth will bend space-time, and cause objects to fall toward it. Credit: Science@NASA

    When giving the coordinates for a location, most people provide the latitude, longitude and perhaps altitude. But there is a fourth dimension often neglected: time. The combination of the physical coordinates with the temporal element creates a concept known as space-time, a background for all events in the universe.

    “In physics, space-time is the mathematical model that combines space and time into a single interwoven continuum throughout the universe,” Eric Davis, a physicist who works at the Institute for Advanced Studies at Austin and with the Tau Zero Foundation, told Space.com by email. Davis specializes in faster-than-light space-time and anti-gravity physics, both of which use Albert Einstein’s general relativity theory field equations and quantum field theory, as well as quantum optics, to conduct lab experiments.

    “Einstein’s special theory of relativity, published in 1905, adapted [German mathematician] Hermann Minkowski’s unified space-and-time model of the universe to show that time should be treated as a physical dimension on par with the three physical dimensions of space — height, width and length — that we experience in our lives,” Davis said.

    “Space-time is the landscape over which phenomena take place,” added Luca Amendola, a member of the Euclid Theory Working Group (a team of theoretical scientists working with the European Space Agency’s Euclid satellite) and a professor at Heidelberg University in Germany.


    “Just as any landscape is not set in stone, fixed forever, it changes just because things happen — planets move, particles interact, cells reproduce,” he told Space.com via email.

    The history of space-time

    The idea that time and space are united is a fairly recent development in the history of science.

    “The concepts of space remained practically the same from the early Greek philosophers until the beginning of the 20th century — an immutable stage over which matter moves,” Amendola said. “Time was supposed to be even more immutable because, while you can move in space the way you like, you cannot travel in time freely, since it runs the same for everybody.”

    In 1908, Minkowski built upon the earlier works of Dutch physicist Hendrik Lorentz and French mathematician and theoretical physicist Henri Poincaré to create a unified geometric model of space-time. Einstein, a former student of Minkowski’s, had published his special theory of relativity in 1905; Minkowski’s space-time geometry gave that theory its four-dimensional form.

    “Einstein had brought together Poincare’s, Lorentz’s and Minkowski’s separate theoretical works into his overarching special relativity theory, which was much more comprehensive and thorough in its treatment of electromagnetic forces and motion, except that it left out the force of gravity, which Einstein later tackled in his magnum opus general theory of relativity,” Davis said.

    Space-time breakthroughs

    In special relativity, the geometry of space-time is fixed, but observers measure different distances or time intervals according to their own relative velocity. In general relativity, the geometry of space-time itself changes depending on how matter moves and is distributed.

    “Einstein’s general theory of relativity is the first major theoretical breakthrough that resulted from the unified space-time model,” Davis said.

    General relativity led to the science of cosmology, the next major breakthrough that came thanks to the concept of unified space-time.

    “It is because of the unified space-time model that we can have a theory for the creation and existence of our universe, and be able to study all the consequences that result thereof,” Davis said.

    He explained that general relativity predicted phenomena such as black holes and white holes. It also predicts that they have an event horizon—the boundary that marks where nothing can escape—and a singularity at their center, a point where gravity becomes infinite. General relativity could also explain rotating astronomical bodies that drag space-time along with them, the Big Bang and the inflationary expansion of the universe, gravitational waves, the time and space dilation associated with curved space-time, gravitational lensing caused by massive galaxies, and the shifting orbit of Mercury and other planetary bodies—all of which observation has confirmed. The theory also permits, at least on paper, exotica such as warp-drive propulsion, traversable wormholes and time machines.

    “All of these phenomena rely on the unified space-time model,” he said, “and most of them have been observed.”

    An improved understanding of space-time also led to quantum field theory. When quantum mechanics, the branch of physics concerned with the behavior of atoms and photons, was first formulated in 1925, it was based on the idea that space and time were separate and independent. After World War II, theoretical physicists found a way to mathematically incorporate Einstein’s special theory of relativity into quantum mechanics, giving birth to quantum field theory.

    “The breakthroughs that resulted from quantum field theory are tremendous,” Davis said.

    The theory gave rise to a quantum theory of electromagnetic radiation and electrically charged elementary particles — called quantum electrodynamics theory (QED theory) — in about 1950. In the 1970s, QED theory was unified with the weak nuclear force theory to produce the electroweak theory, which describes them both as different aspects of the same force. In 1973, scientists derived the quantum chromodynamics theory (QCD theory), the nuclear strong force theory of quarks and gluons, which are elementary particles.

    In the 1980s and the 1990s, physicists united the QED theory, the QCD theory and the electroweak theory to formulate the Standard Model of Particle Physics, the megatheory that describes all of the known elementary particles of nature and the fundamental forces of their interactions.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    Later on, Peter Higgs‘ 1960s prediction of a particle now known as the Higgs boson, which was discovered in 2012 by the Large Hadron Collider at CERN, was added to the mix.

    Higgs event in CMS at the CERN/LHC

    Experimental breakthroughs include the discovery of many of the elementary particles and their interaction forces known today, Davis said. They also include the advancement of condensed matter theory to predict two new states of matter beyond those taught in most textbooks. More states of matter are being discovered using condensed matter theory, which uses the quantum field theory as its mathematical machinery.

    “Condensed matter has to do with the exotic states of matter, such as those found in metallic glass, photonic crystals, metamaterials, nanomaterials, semiconductors, crystals, liquid crystals, insulators, conductors, superconductors, superconducting fluids, etc.,” Davis said. “All of this is based on the unified space-time model.”

    The future of space-time

    Scientists are continuing to improve their understanding of space-time by using missions and experiments that observe many of the phenomena that interact with it. The Hubble Space Telescope, which measured the accelerating expansion of the universe, is one instrument doing so.


    NASA’s Gravity Probe B mission, which launched in 2004, studied the twisting of space-time by a rotating body — the Earth.


    NASA’s NuSTAR mission, launched in 2012, studies black holes. Many other telescopes and missions have also helped to study these phenomena.


    On the ground, particle accelerators have studied fast-moving particles for decades.

    “One of the best confirmations of special relativity is the observations that particles, which should decay after a given time, take in fact much longer when traveling very fast, as, for instance, in particle accelerators,” Amendola said. “This is because time intervals are longer when the relative velocity is very large.”
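Amendola’s example is easy to quantify with special relativity’s time-dilation formula, τ′ = γτ, where γ = 1/√(1 − v²/c²). For a muon, whose mean lifetime at rest is about 2.2 microseconds, the effect is dramatic (a standard textbook calculation, not drawn from the article):

```python
import math

MUON_LIFETIME = 2.2e-6      # mean lifetime at rest, seconds

def dilated_lifetime(v_fraction_of_c, tau=MUON_LIFETIME):
    """Mean lifetime measured in the lab frame: tau' = gamma * tau,
    where gamma = 1 / sqrt(1 - v^2/c^2)."""
    gamma = 1.0 / math.sqrt(1.0 - v_fraction_of_c**2)
    return gamma * tau

# A muon moving at 99.9% of the speed of light lives ~22x longer in the lab frame...
tau_lab = dilated_lifetime(0.999)
print(f"{tau_lab / MUON_LIFETIME:.1f}x")   # ~22.4x
# ...which is why cosmic-ray muons created high in the atmosphere
# survive long enough to reach detectors on the ground.
```

The same factor governs the accelerator measurements Amendola describes: the faster the particle, the longer its observed lifetime.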

    Future missions and experiments will continue to probe space-time as well. The European Space Agency-NASA satellite Euclid, set to launch in 2020, will continue to test the ideas at astronomical scales as it maps the geometry of dark energy and dark matter, the mysterious substances that make up the bulk of the universe. On the ground, the LIGO and VIRGO observatories continue to study gravitational waves, ripples in the curvature of space-time.


    “If we could handle black holes the same way we handle particles in accelerators, we would learn much more about space-time,” Amendola said.


    Merging black holes create ripples in space-time in this artist’s concept. Experiments are searching for these ripples, known as gravitational waves, but none have been detected. Credit: Swinburne Astronomy Productions

    Understanding space-time

    Will scientists ever get a handle on the complex issue of space-time? That depends on precisely what you mean.

    “Physicists have an excellent grasp of the concept of space-time at the classical levels provided by Einstein’s two theories of relativity, with his general relativity theory being the magnum opus of space-time theory,” Davis said. “However, physicists do not yet have a grasp on the quantum nature of space-time and gravity.”

    Amendola agreed, noting that although scientists understand space-time across larger distances, the microscopic world of elementary particles remains less clear.

    “It might be that space-time at very short distances takes yet another form and perhaps is not continuous,” Amendola said. “However, we are still far from that frontier.”

    Today’s physicists cannot experiment with black holes or reach the high energies at which new phenomena are expected to occur. Even astronomical observations of black holes remain unsatisfactory due to the difficulty of studying something that absorbs all light, Amendola said. Scientists must instead use indirect probes.

    “To understand the quantum nature of space-time is the holy grail of 21st century physics,” Davis said. “We are stuck in a quagmire of multiple proposed new theories that don’t seem to work to solve this problem.”

    Amendola remained optimistic. “Nothing is holding us back,” he said. “It’s just that it takes time to understand space-time.”

    See the full article here.

