Tagged: Quantum Mechanics

  • richardmitnick 11:04 am on January 31, 2016
    Tags: Bose-Einstein condensates, Quantum Mechanics, Quantum randomness, Technische Universität Wien (Vienna)

    From TUW: “Solving Hard Quantum Problems: Everything is Connected” 

    Technische Universität Wien (Vienna)

    2016-01-26
    Florian Aigner

    Further Information:
    Dr. Kaspar Sakmann
    Institute for Atomic and Subatomic Physics
    TU Wien
    Stadionallee 2, 1020 Vienna, Austria
    T: +43-1-58801-141889
    kaspar.sakmann@ati.ac.at

    Bose-Einstein condensates making waves: a many-particle phenomenon

    Quantum systems are extremely hard to analyse if they consist of more than just a few parts. It is not difficult to calculate a single hydrogen atom, but in order to describe an atom cloud of several thousand atoms, it is usually necessary to use rough approximations. The reason for this is that quantum particles are connected to each other and cannot be described separately. Kaspar Sakmann (TU Wien, Vienna) and Mark Kasevich (Stanford, USA) have now shown in an article published in Nature Physics that this problem can be overcome. They succeeded in calculating effects in ultra-cold atom clouds which can only be explained in terms of the quantum correlations between many atoms. Such atom clouds are known as Bose-Einstein condensates and are an active field of research.

    Quantum Correlations

    Quantum physics is a game of luck and randomness. Initially, the atoms in a cold atom cloud do not have a predetermined position. Much like a die whirling through the air, where the number is yet to be determined, the atoms are located at all possible positions at the same time. Only when they are measured are their positions fixed. “We shine light on the atom cloud, which is then absorbed by the atoms”, says Kaspar Sakmann. “The atoms are photographed, and this is what determines their position. The result is completely random.”

    There is, however, an important difference between quantum randomness and a game of dice: if different dice are thrown at the same time, they can be seen as independent from each other. Whether or not we roll a six with die number one does not influence the result of die number seven. The atoms in the atom cloud, on the other hand, are quantum physically connected. It does not make sense to analyse them individually; they are one big quantum object. Therefore, the result of every position measurement of any atom depends on the positions of all the other atoms in a mathematically complicated way.

    “It is not hard to determine the probability that a particle will be found at a specific position”, says Kaspar Sakmann. “The probability is highest in the centre of the cloud and gradually diminishes towards the outer fringes.” In a classically random system, this would be all the information that is needed. If we know that in a dice roll any number has a probability of one in six, then we can also determine the probability of rolling three ones with three dice. Even if we roll five ones consecutively, the probability remains the same the next time. With quantum particles, it is more complicated than that.

    “We solve this problem step by step”, says Sakmann. “First we calculate the probability of the first particle being measured at a certain position. The probability distribution of the second particle depends on where the first particle has been found. The position of the third particle depends on the first two, and so on.” In order to be able to describe the position of the very last particle, all the other positions have to be known. This kind of quantum entanglement makes the problem mathematically extremely challenging.
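
    To make the contrast concrete, here is a minimal toy sketch in Python of the difference between independent sampling (the dice picture) and the sequential, conditional sampling Sakmann describes. The conditional density used here is invented purely for illustration; it is not the actual many-body wavefunction used in the Nature Physics paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_independent(n):
    # every particle drawn from the same single-particle density (the "dice" picture)
    return rng.normal(0.0, 1.0, size=n)

def sample_conditional(n, coupling=0.5):
    # toy correlated sampling: each new position is drawn from a distribution
    # that depends on all positions already measured (here, via their mean)
    positions = []
    for _ in range(n):
        mean = coupling * np.mean(positions) if positions else 0.0
        positions.append(rng.normal(mean, 1.0))
    return np.array(positions)

print(sample_independent(5))
print(sample_conditional(5))
```

    The point of the sketch is only the bookkeeping: in the correlated case the last draw cannot be made until every earlier draw is known, which is exactly what makes the full quantum calculation so demanding.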

    Only Correlations Can Explain the Experimental Data

    But these correlations between many particles are extremely important – for example for calculating the behaviour of colliding Bose-Einstein condensates. “The experiment shows that such collisions can lead to a special kind of quantum waves. At certain positions we find many particles, at adjacent positions we find none”, says Kaspar Sakmann. “If we consider the atoms separately, this cannot be explained. Only if we take the full quantum distribution into account, with all its higher correlations, can these waves be reproduced by our calculations.”

    Other phenomena have also been calculated with the same method, for instance Bose-Einstein condensates that are stirred with a laser beam so that small vortices emerge – another typical quantum many-particle effect. “Our results show how important these correlations are and that it is possible to include them in quantum calculations, in spite of all the mathematical difficulties”, says Sakmann. With certain modifications, the approach can be expected to be useful for many other quantum systems as well.

    Original paper: http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys3631.html

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Technische Universität Wien (Vienna) campus

    Our mission is “technology for people”. Through our research we “develop scientific excellence”,
    through our teaching we “enhance comprehensive competence”.

    TU Wien (TUW) is located in the heart of Europe, in a cosmopolitan city of great cultural diversity. For nearly 200 years, TU Wien has been a place of research, teaching and learning in the service of progress. TU Wien is among the most successful technical universities in Europe and is Austria’s largest scientific-technical research and educational institution.

     
  • richardmitnick 6:27 pm on January 7, 2016
    Tags: Quantum Mechanics

    From Physics Today: “Three groups close the loopholes in tests of Bell’s theorem” 

    Physics Today

    January 2016, page 14
    Johanna L. Miller

    Until now, the quintessential demonstration of quantum entanglement has required extra assumptions.

    The predictions of quantum mechanics are often difficult to reconcile with intuitions about the classical world. Whereas classical particles have well-defined positions and momenta, quantum wavefunctions give only the probability distributions of those quantities. What’s more, quantum theory posits that when two systems are entangled, a measurement on one instantly changes the wavefunction of the other, no matter how distant.

    Might those counterintuitive effects be illusory? Perhaps quantum theory could be supplemented by a system of hidden variables that restore local realism, so every measurement’s outcome depends only on events in its past light cone. In a 1964 theorem John Bell showed that the question is not merely philosophical: By looking at the correlations in a series of measurements on widely separated systems, one can distinguish quantum mechanics from any local-realist theory. (See the article by Reinhold Bertlmann, Physics Today, July 2015, page 40.) Such Bell tests in the laboratory have come down on the side of quantum mechanics. But until recently, their experimental limitations have left open two important loopholes that require additional assumptions to definitively rule out local realism.

    Now three groups have reported experiments that close both loopholes simultaneously. First, Ronald Hanson, Bas Hensen (both pictured in figure 1), and their colleagues at Delft University of Technology performed a loophole-free Bell test using a novel entanglement-swapping scheme [1]. More recently, two groups—one led by Sae Woo Nam and Krister Shalm of NIST [2], the other by Anton Zeilinger and Marissa Giustina of the University of Vienna [3]—used a more conventional setup with pairs of entangled photons generated at a central source.

    Figure 1. Bas Hensen (left) and Ronald Hanson in one of the three labs they used for their Bell test. FRANK AUPERLE

    The results fulfill a long-standing goal, not so much to squelch any remaining doubts that quantum mechanics is real and complete, but to develop new capabilities in quantum information and security. A loophole-free Bell test demonstrates not only that particles can be entangled at all but also that a particular source of entangled particles is working as intended and hasn’t been tampered with. Applications include perfectly secure quantum key distribution and unhackable sources of truly random numbers.

    In a typical Bell test trial, Alice and Bob each possess one of a pair of entangled particles, such as polarization-entangled photons or spin-entangled electrons. Each of them makes a random and independent choice of a basis—a direction in which to measure the particle’s polarization or spin—and performs the corresponding measurement. Under quantum mechanics, the results of Alice’s and Bob’s measurements over repeated trials can be highly correlated—even though their individual outcomes can’t be foreknown. In contrast, local-realist theories posit that only local variables, such as the state of the particle, can influence the outcome of a measurement. Under any such theory, the correlation between Alice’s and Bob’s measurements is much less.
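
    To make “much less” correlated concrete, the quantum prediction can be plugged into the standard CHSH combination and compared with the local-realist bound of 2. The sketch below only evaluates the textbook formula for polarization-entangled photons at the canonical angles; these are not necessarily the exact settings used in the three experiments.

```python
import numpy as np

# Quantum prediction for photons in the polarization-entangled state
# (|HH> + |VV>)/sqrt(2): the correlation of Alice's and Bob's +/-1 outcomes
# for polarizer angles a and b is E(a, b) = cos(2(a - b)).
def E(a, b):
    return np.cos(2 * (a - b))

# CHSH combination; any local-realist theory obeys |S| <= 2
a, a_prime = np.radians(0), np.radians(45)
b, b_prime = np.radians(22.5), np.radians(67.5)

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(S, 2 * np.sqrt(2))   # ~2.828, the quantum (Tsirelson) maximum, above 2
```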

    But what if some hidden signal informs Bob’s experiment about Alice’s choice of basis, or vice versa? If such a signal can change the state of Bob’s particle, it can create quantum-like correlations in a system without actual quantum entanglement. That possibility is at the heart of the so-called locality loophole. The loophole can be closed by arranging the experiment, as shown in figure 2, so that no light-speed signal with information about Alice’s choice of basis can reach Bob until after his measurement is complete.

    Figure 2. The locality loophole arises from the possibility that hidden signals between Alice and Bob can influence the results of their measurements. This space–time diagram represents an entangled-photon experiment for which the loophole is closed. The diagonal lines denote light-speed trajectories: The paths of the entangled photons are shown in red, and the forward light cones of the measurement-basis choices are shown in blue. Note that Bob cannot receive information about Alice’s chosen basis until after his measurement is complete, and vice versa.

    In practice, under that arrangement, for Alice and Bob to have enough time to choose their bases and make their measurements, they must be positioned at least tens of meters apart. That requirement typically means that the experiments are done with entangled photons, which can be transported over such distances without much damage to their quantum state. But the inefficiencies in handling and detecting single photons introduce another loophole, called the fair-sampling or detection loophole: If too many trials go undetected by Alice, Bob, or both, it’s possible for the detected trials to display quantum-like correlations even when the set of all trials does not.
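
    The timing requirement behind the locality loophole is easy to sanity-check. The numbers below are illustrative assumptions, not parameters of any of the three experiments: the point is simply that the basis choice plus the measurement must finish before light could carry the other side's setting across the separation.

```python
# Rough locality-timing check with assumed, illustrative numbers.
c = 3.0e8                      # speed of light in vacuum, m/s
separation = 100.0             # assumed station separation, metres
basis_choice_ns = 50.0         # assumed time to pick a random basis, ns
measurement_ns = 200.0         # assumed time to complete the measurement, ns

light_travel_ns = separation / c * 1e9
print(light_travel_ns)                                      # ~333 ns for 100 m
print(basis_choice_ns + measurement_ns < light_travel_ns)   # True: loophole closable
```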

    In Bell tests that are implemented honestly, there’s little reason to think that the detected trials are anything other than a representative sample of all trials. But one can exploit the detection loophole to fool the test on purpose by causing trials to go undetected for reasons other than random chance. For example, manifestly classical states of light can mimic single photons in one basis but go entirely undetected in another (see Physics Today, December 2011, page 20). Furthermore, similar tricks can be used for hacking quantum cryptography systems. The only way to guarantee that a hacker is not present is to close the loopholes.

    Instead of the usual entangled photons, the Delft group based their experiment on entangled diamond nitrogen–vacancy (NV) centers, electron spins associated with point defects in the diamond’s crystal lattice and prized for their long quantum coherence times. The scheme is sketched in figure 3: Each NV center is first entangled with a photon, then the photons are sent to a central location and jointly measured. A successful joint measurement, which transfers the entanglement to the two NV centers, signals Alice and Bob that the Bell test trial is ready to proceed.

    Figure 3. Entanglement swapping between diamond nitrogen–vacancy (NV) centers. Alice and Bob entangle their NV spins with photons, then transmit the photons to a central location to be jointly measured. After a successful joint measurement, which signals that the NV spins are entangled with each other, each spin is measured in a basis chosen by a random-number generator (RNG). (Adapted from ref. 1.)

    In 2013 the team carried out a version of that experiment [4] with the NV spins separated by 3 m. “It was at that moment,” says Hanson, “that I realized that we could do a loophole-free Bell test—and also that we could be the first.” A 3-m separation is not enough to close the locality loophole, so the researchers set about relocating the NV-center equipment to two separate labs 1.3 km apart and fiber-optically linking them to the joint-measurement apparatus at a third lab in between.

    A crucial aspect of the entanglement-swapping scheme is that the Bell test trial doesn’t begin until the joint measurement is made. As far as the detection loophole is concerned, attempted trials without a successful joint measurement don’t count. That’s fortunate, because the joint measurement succeeds in just one out of every 156 million attempts—a little more than once per hour.

    That inefficiency stems from two main sources. First, the initial spin–photon entanglement succeeds just 3% of the time at each end. Second, photon loss in the optical fibers is substantial: The photons entangled with the NV centers have a wavelength of 637 nm, well outside the so-called telecom band, 1520–1610 nm, where optical fibers work best. In contrast, once the NV spins are entangled, they can be measured efficiently and accurately. So of the Bell test trials that the researchers are able to perform, none are lost to nondetection.
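
    A quick back-of-the-envelope check of the numbers quoted above. The split between the 3% spin-photon efficiency and the remaining losses is not given explicitly in the article, so the last line only shows how large that remaining factor must be.

```python
p_joint = 1 / 156e6            # quoted joint-measurement success probability per attempt
successes_per_hour = 1.2       # "a little more than once per hour" (approximate)

attempts_per_second = successes_per_hour / p_joint / 3600
print(attempts_per_second)     # ~52,000 attempts per second implied by the quoted figures

# The 3% spin-photon entanglement efficiency on each side accounts for a factor
# of 0.03**2 ~ 9e-4; fibre loss and the joint measurement itself must supply
# the remaining factor of roughly 7e-6 (not broken down in the article).
print(p_joint / 0.03**2)
```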

    Early in the summer of 2015, Hanson and colleagues ran their experiment for 220 hours over 18 days and obtained 245 useful trials. They saw clear evidence of quantum correlations—although with so few trials, the likelihood of a nonquantum system producing the same correlations by chance is as much as 4%.

    The Delft researchers are working on improving their system by converting their photons into the telecom band. Hanson estimates that they could then extend the separation between the NV centers from 1.3 km up to 100 km. That distance makes feasible a number of quantum network applications, such as quantum key distribution.

    In quantum key distribution—as in a Bell test—Alice and Bob perform independently chosen measurements on a series of entangled particles. On trials for which Alice and Bob have fortuitously chosen to measure their particles in the same basis, their results are perfectly correlated. By conferring publicly to determine which trials those were, then looking privately at their measurement results for those trials, they can obtain a secret string of ones and zeros that only they know. (See article by Daniel Gottesman and Hoi-Kwong Lo, Physics Today, November 2000, page 22.)
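
    The sifting step described here is easy to sketch. The snippet below fakes the physics entirely: when the bases happen to match, the outcomes are simply assumed to be perfectly correlated (as they would be for ideal entangled pairs), so it illustrates only the bookkeeping of entanglement-based key distribution, not a real implementation.

```python
import secrets

def run_trials(n):
    key_alice, key_bob = [], []
    for _ in range(n):
        basis_a = secrets.randbelow(2)   # Alice's independent basis choice
        basis_b = secrets.randbelow(2)   # Bob's independent basis choice
        outcome = secrets.randbelow(2)   # shared random outcome (idealised physics)
        if basis_a == basis_b:           # bases compared publicly later; outcomes kept private
            key_alice.append(outcome)
            key_bob.append(outcome)
    return key_alice, key_bob

ka, kb = run_trials(1000)
assert ka == kb                          # identical secret bit strings
print(len(ka))                           # roughly half the trials survive sifting
```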

    The NIST and Vienna groups both performed their experiments with photons, and both used single-photon detectors developed by Nam and his NIST colleagues. The Vienna group used so-called transition-edge sensors that are more than 98% efficient [5]; the NIST group used superconducting nanowire single-photon detectors (SNSPDs), which are not as efficient but have far better timing resolution. Previous SNSPDs had been limited to 70% efficiency at telecom wavelengths—in part because the polycrystalline superconductor of choice doesn’t couple well to other optical elements. By switching to an amorphous superconducting material, Nam and company increased the detection efficiency to more than 90% [6].

    Shalm realized that the new SNSPDs might be good enough for a loophole-free Bell test. “We had the detectors that worked at telecom wavelengths, so we had to generate entangled photons at the same wavelengths,” he says. “That was a big engineering challenge.” Another challenge was to boost the efficiency of the mazes of optics that carry the entangled photons from the source to the detector. “Normally, every time photons enter or exit an optical fiber, the coupling is only about 80% efficient,” explains Shalm. “We needed to get that up to 95%. We were worrying about every quarter of a percent.”

    In September 2015 the NIST group conducted its experiment between two laboratory rooms separated by 185 m. The Vienna researchers positioned their detectors 60 m apart in the subbasement of the Vienna Hofburg Castle. Both groups had refined their overall system efficiencies so that each detector registered 75% or more of the photons created by the source—enough to close the detection loophole.

    In contrast to the Delft group’s rate of one trial per hour, the NIST and Vienna groups were able to conduct thousands of trials per second; they each collected enough data in less than one hour to eliminate any possibility that their correlations could have arisen from random chance.

    It’s not currently feasible to extend the entangled-photon experiments into large-scale quantum networks. Even at telecom wavelengths, photons traversing the optical fibers are lost at a nonnegligible rate, so lengthening the fibers would lower the fraction of detected trials and reopen the detection loophole. The NIST group is working on using its experiment for quantum random-number generation, which doesn’t require the photons to be conveyed over such vast distances.

    Random numbers are widely used in security applications. For example, one common system of public-key cryptography involves choosing at random two large prime numbers, keeping them private, but making their product public. Messages can be encrypted by anyone who knows the product, but they can be decrypted only by someone who knows the two prime factors.

    The scheme is secure because factorizing a large number is a computationally hard problem. But it loses that security if the process used to choose the prime numbers can be predicted or reproduced. Numbers chosen by computer are at best pseudorandom because computers can run only deterministic algorithms. But numbers derived from the measurement of quantum states—whose quantum nature is verified through a loophole-free Bell test—can be truly random and unpredictable.
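
    For readers who want to see the scheme sketched above in miniature: this is textbook RSA with deliberately tiny primes (real deployments use randomly chosen primes hundreds of digits long, plus padding schemes). It is meant only to show why the secrecy of the two primes, and hence the quality of the randomness used to pick them, matters.

```python
# Toy RSA illustration (requires Python 3.8+ for modular inverse via pow).
p, q = 61, 53                     # the two secret primes
n = p * q                         # public modulus: 3233
e = 17                            # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)               # private exponent; computing it needs p and q

message = 123
ciphertext = pow(message, e, n)   # anyone who knows (n, e) can encrypt
recovered = pow(ciphertext, d, n) # only the holder of d (hence p, q) can decrypt
print(ciphertext, recovered)      # recovered == 123
```

    If an attacker can predict or reproduce the random choice of p and q, the whole construction collapses, which is the motivation for certified quantum randomness described next.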

    The NIST researchers plan to make their random-number stream publicly available to everyone, so it can’t be used for encryption keys that need to be kept private. But a verified public source of tamperproof random numbers has other uses, such as choosing unpredictable samples of voters for opinion polling, taxpayers for audits, or products for safety testing.

    REFERENCES

    1. B. Hensen et al., Nature 526, 682 (2015). http://dx.doi.org/10.1038/nature15759
    2. L. K. Shalm et al., Phys. Rev. Lett. (in press). http://arxiv.org/abs/1511.03189
    3. M. Giustina et al., Phys. Rev. Lett. (in press). http://arxiv.org/abs/1511.03190
    4. H. Bernien et al., Nature 497, 86 (2013). http://dx.doi.org/10.1038/nature12016
    5. A. E. Lita, A. J. Miller, S. W. Nam, Opt. Express 16, 3032 (2008). http://dx.doi.org/10.1364/OE.16.003032
    6. F. Marsili et al., Nat. Photonics 7, 210 (2013). http://dx.doi.org/10.1038/nphoton.2013.13

    © 2016 American Institute of Physics
    DOI: http://dx.doi.org/10.1063/PT.3.3039

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The American Physical Society strives to:

    Be the leading voice for physics and an authoritative source of physics information for the advancement of physics and the benefit of humanity;
    Provide effective programs in support of the physics community and the conduct of physics;
    Collaborate with national scientific societies for the advancement of science, science education and the science community;
    Cooperate with international physics societies to promote physics, to support physicists worldwide and to foster international collaboration;
    Promote an active, engaged and diverse membership, and support the activities of its units and members.

     
  • richardmitnick 5:32 pm on January 2, 2016
    Tags: Quantum Mechanics

    From ETH Zürich: “Faster entanglement of distant quantum dots” 

    ETH Zürich

    21.12.2015
    Oliver Morsch

    Entanglement between distant quantum objects is an important ingredient for future information technologies. Researchers at the ETH have now developed a method with which such states can be created a thousand times faster than before.

    In two entangled quantum objects the spins are in a superposition of the states “up/down” and “down/up”. Researchers at the ETH have created such states in quantum dots that are five meters apart. (Visualisations: ETH Zürich / Aymeric Delteil)

    In many future information and telecommunication technologies, a remarkable quantum effect called entanglement will likely play an important role. The entanglement of two quantum objects means that measurements on one of the objects instantaneously determine the properties of the other one – without any exchange of information between them.

    Disapprovingly, Albert Einstein called this strange non-locality “spooky action at a distance”. In the meantime, physicists have warmed to it and are now trying to put it to good use, for instance in order to make data transmission immune to eavesdropping. To that end, the creation of entanglement between spatially distant quantum particles is indispensable. That, however, is not easy and typically works rather slowly. A group of physicists led by Atac Imamoglu, a professor at the Institute for Quantum Electronics at the ETH in Zurich, have now demonstrated a method that allows the creation of a thousand times more entangled states per second than was possible before.

    Distant quantum dots

    In their experiments, the young researchers Aymeric Delteil, Zhe Sun and Wei-bo Gao used two so-called quantum dots that were placed five metres apart in the laboratory. Quantum dots are tiny structures, measuring only a few nanometres, inside a semiconductor material, in which electrons are trapped in a sort of cage. The quantum mechanical energy states of those electrons can be represented by spins, i.e., little arrows pointing up or down. When the spin states are entangled, it is possible to deduce from a measurement performed on one of the quantum dots which state the other one will be found in. If the spin of the first quantum dot points up, the other one points down, and vice versa. Before the measurement, however, the directions of the two spins are both unknown: they are in a quantum mechanical superposition of both spin combinations.

    Entanglement by scattershot

    In order to entangle the two quantum dots with each other the researchers at ETH used the principle of heralding. “Unfortunately, at the moment it is practically impossible to entangle quantum objects that are far apart with certainty and on demand”, explains Imamoglu. Instead, it is necessary to create the entangled states using a scattershot approach in which the quantum dots are constantly bombarded with light particles, which are then scattered back. Every so often this will result in a fluke: one of the scattered light particles makes a detector click, and the resulting spin states are actually entangled.

    Imamoglu and his colleagues make use of this trick. They send laser pulses simultaneously to the two quantum dots and measure the light particles subsequently emitted by them. Before doing so, they carefully eliminate any possibility of finding out which quantum dot the light particles originated from. The click in the light detector then “heralds” the actual entanglement of the quantum dots and signals that they can now be used further, e.g., for transmitting quantum information.

    Possible improvements

    The researchers tested their method by continuously shooting around ten million laser pulses per second at the quantum dots. This high repetition rate was possible because the spin states of quantum dots can be controlled within just a few nanoseconds. The measurements showed that in this way 2300 entangled states were produced per second.

    “That’s already a good start”, says Imamoglu, adding that the method certainly has room for improvement. Entangling quantum dots that are more than five metres apart, for instance, would require an enhancement of their coherence time. This time indicates how long a quantum state survives before it is destroyed by the influence of its environment (such as electric or magnetic fields). If the heralding light particle takes longer than one coherence time to fly to the detector, then a click no longer heralds entanglement. In future experiments the physicists therefore want to replace the quantum dots with so-called quantum dot molecules, whose coherence times are a hundred times longer. Furthermore, improvements in the detection probability of the light particles could lead to an even higher entanglement yield.
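
    Two small pieces of arithmetic make these constraints concrete. The per-pulse success probability follows directly from the numbers in the article; the fibre speed and the coherence time below are illustrative assumptions, not measured values from the experiment.

```python
# Repetition-rate arithmetic from the article: ~1e7 laser pulses per second
# yielding ~2300 heralded entangled states per second.
print(2300 / 1e7)               # per-pulse success probability ~2.3e-4

# Why distance demands coherence: the heralding photon must reach the detector
# before the quantum-dot spins decohere.
c_fibre = 2.0e8                 # assumed speed of light in optical fibre, m/s
distance_m = 5.0                # separation used in the experiment
coherence_ns = 10.0             # assumed spin coherence time, illustrative only

flight_ns = distance_m / c_fibre * 1e9
print(flight_ns, flight_ns < coherence_ns)   # 0.025 ns, comfortably shorter
```
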
    Reference

    Delteil A, Sun Z, Gao W, Togan E, Faelt S, Imamoglu A: Generation of heralded entanglement between distant hole spins, Nature Physics, 21 December 2015, doi: 10.1038/nphys3605

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ETH Zurich campus
    ETH Zurich is one of the leading international universities for technology and the natural sciences. It is well known for its excellent education, ground-breaking fundamental research and for implementing its results directly into practice.

    Founded in 1855, ETH Zurich today has more than 18,500 students from over 110 countries, including 4,000 doctoral students. To researchers it offers an inspiring working environment; to students, a comprehensive education.

    Twenty-one Nobel Laureates have studied, taught or conducted research at ETH Zurich, underlining the excellent reputation of the university.

     
  • richardmitnick 4:24 pm on December 24, 2015
    Tags: Quantum Mechanics

    From Ethan Siegel: “What Are Quantum Gravity’s Alternatives To String Theory?” 

    Starts with a Bang

    12.24.15
    Ethan Siegel

    Image credit: CPEP (Contemporary Physics Education Project), NSF/DOE/LBNL.

    If there is a quantum theory of gravity, is String Theory the only game in town?

    “I just think too many nice things have happened in string theory for it to be all wrong. Humans do not understand it very well, but I just don’t believe there is a big cosmic conspiracy that created this incredible thing that has nothing to do with the real world.” –Edward Witten

    The Universe we know and love — with [Albert] Einstein’s General Relativity as our theory of gravity and quantum field theories of the other three forces — has a problem that we don’t often talk about: it’s incomplete, and we know it. Einstein’s theory on its own is just fine, describing how matter-and-energy relate to the curvature of space-and-time. Quantum field theories on their own are fine as well, describing how particles interact and experience forces. Normally, the quantum field theory calculations are done in flat space, where spacetime isn’t curved. We can do them in the curved space described by Einstein’s theory of gravity as well (although they’re harder — but not impossible — to do), which is known as semi-classical gravity. This is how we calculate things like Hawking radiation and black hole decay.

    Image credit: NASA, via http://www.nasa.gov/topics/universe/features/smallest_blackhole.html

    But even that semi-classical treatment is only valid near and outside the black hole’s event horizon, not at the location where gravity is truly at its strongest: at the singularities (or the mathematically nonsensical predictions) theorized to be at the center. There are multiple physical instances where we need a quantum theory of gravity, all having to do with strong gravitational physics on the smallest of scales: at tiny, quantum distances. Important questions, such as:

    What happens to the gravitational field of an electron when it passes through a double slit?
    What happens to the information of the particles that form a black hole, if the black hole’s eventual state is thermal radiation?
    And what is the behavior of a gravitational field/force at and around a singularity?

    Image credit: Nature 496, 20–23 (04 April 2013) doi:10.1038/496020a, via http://www.nature.com/news/astrophysics-fire-in-the-hole-1.12726.

    In order to explain what happens at short distances in the presence of gravitational sources — or masses — we need a quantum, discrete, and hence particle-based theory of gravity. The known quantum forces are mediated by particles known as bosons, or particles with integer spin. The photon mediates the electromagnetic force, the W-and-Z bosons mediate the weak force, and the gluons mediate the strong force. All these types of particles have a spin of 1, which for the massive (W and Z) particles means they can take on spin values of -1, 0, or +1, while the massless ones (like gluons and photons) can take on values of -1 or +1 only.

    The Higgs boson is also a boson, although it doesn’t mediate any forces, and has a spin of 0. Because of what we know about gravitation — General Relativity is a tensor theory of gravity — it must be mediated by a massless particle with a spin of 2, meaning it can take on a spin value of -2 or +2 only.

    This is fantastic! It means that we already know a few things about a quantum theory of gravity before we even try to formulate one! We know this because whatever the true quantum theory of gravity turns out to be, it must be consistent with General Relativity when we’re not at very small distances from a massive particle or object, just as — 100 years ago — we knew that General Relativity needed to reduce to Newtonian gravity in the weak-field regime.

    Image credit: NASA, of an artist’s concept of Gravity Probe B orbiting the Earth to measure space-time curvature.


    The big question, of course, is how? How do you quantize gravity in a way that’s correct (at describing reality), consistent (with both GR and QFT), and hopefully leads to calculable predictions for new phenomena that might be observed, measured, or somehow tested? The leading contender, of course, is something you’ve long heard of: String Theory.

    String Theory is an interesting framework — it can include all of the standard model fields and particles, both the fermions and the bosons.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    It also includes a 10-dimensional tensor-scalar theory of gravity: one with 9 space and 1 time dimensions and a scalar field parameter. If we erase six of those spatial dimensions (through an incompletely defined process that people just call compactification) and let the parameter (ω) that defines the scalar interaction go to infinity, we can recover General Relativity.
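
    For orientation, the simplest four-dimensional prototype of such a tensor-scalar theory is the Brans-Dicke action sketched below; the ten-dimensional string version differs in detail, but the role of the parameter ω is analogous: as ω goes to infinity the scalar freezes out and General Relativity is recovered.

```latex
S_{\rm BD} \;=\; \frac{1}{16\pi}\int d^4x\,\sqrt{-g}\,
  \left(\phi\,R \;-\; \frac{\omega}{\phi}\,
        \partial_\mu\phi\,\partial^\mu\phi\right)
  \;+\; S_{\rm matter}\,,
\qquad
\omega \to \infty \;\;\Longrightarrow\;\; \phi \to \text{const} = \frac{1}{G},
\;\; S_{\rm BD} \to S_{\rm Einstein\text{-}Hilbert}.
```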

    Image credit: NASA/Goddard/Wade Sisler, of Brian Greene presenting on String Theory.

    But there are a whole host of phenomenological problems with String Theory. One is that it predicts a large number of new particles, including all the supersymmetric ones, none of which have been found.

    Standard Model of Supersymmetry

    It claims not to need “free parameters” the way the standard model does (for the masses of the particles), but it replaces that problem with an even worse one. String theory refers to “10⁵⁰⁰ possible solutions,” where these solutions refer to the vacuum expectation values of the string fields, and there’s no mechanism to recover them; if you want String Theory to work, you need to give up on dynamics and simply say, “well, it must’ve been anthropically selected.” There are frustrations, drawbacks, and problems with the very idea of String Theory. But the biggest problem with it may not be these mathematical ones. Instead, it may be that there are four other alternatives that could lead us to quantum gravity: approaches that are completely independent of String Theory.

    Image credit: Wikimedia Commons user Linfoxman, of an illustration of a quantized “fabric of space.”

    1.) Loop Quantum Gravity [reader, please take the time to visit this link and read the article]. LQG is an interesting take on the problem: rather than trying to quantize particles, LQG has as one of its central features that space itself is discrete. Imagine a common analogy for gravity: a bedsheet pulled taut, with a bowling ball in the center. Rather than a continuous fabric, though, we know that the bedsheet itself is really quantized, in that it’s made up of molecules, which in turn are made of atoms, which in turn are made of nuclei (quarks and gluons) and electrons.

    Space might be the same way! Perhaps it acts like a fabric, but perhaps it’s made up of finite, quantized entities. And perhaps it’s woven out of “loops,” which is where the theory gets its name. Weave these loops together and you get a spin network, which represents a quantum state of the gravitational field. In this picture, not just matter but space itself is quantized. The way to go from this idea of a spin network to a perhaps realistic way of doing gravitational computations is an active area of research, one that saw a tremendous leap forward in just 2007/8, so this is still actively advancing.

    Image credit: Wikimedia Commons user & reasNink, generated with Wolfram Mathematica 8.0.

    2.) Asymptotically Safe Gravity. This is my personal favorite of the attempts at a quantum theory of gravity. Asymptotic freedom was developed in the 1970s to explain the unusual nature of the strong interaction: it was a very weak force at extremely short distances, then got stronger as (color) charged particles got farther and farther apart. Unlike electromagnetism, which had a very small coupling constant, the strong force has a large one. Due to some interesting properties of QCD, if you wound up with a (color) neutral system, the strength of the interaction fell off rapidly. This was able to account for properties like the physical sizes of baryons (protons and neutrons, for example) and mesons (pions, for example).

    Asymptotic safety, on the other hand, looks to solve a fundamental problem that’s related to this: you don’t need small couplings (or couplings that tend to zero), but rather for the couplings to simply be finite in the high-energy limit. All coupling constants change with energy, so what asymptotic safety does is pick a high-energy fixed point for the constant (technically, for the renormalization group, from which the coupling constant is derived), and then everything else can be calculated at lower energies.
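
    Stated schematically: write the dimensionless Newton coupling as a function of the energy scale k and follow its renormalization-group flow. Asymptotic safety is the statement that this flow ends at a finite, non-zero ultraviolet fixed point rather than running off to infinity. This is a sketch of the standard definition, not a result specific to any one paper.

```latex
g(k) \;\equiv\; G(k)\,k^{2}, \qquad
k\,\frac{dg}{dk} \;=\; \beta\!\left(g(k)\right), \qquad
\beta(g_{*}) \;=\; 0 \;\;\text{with}\;\; 0 < g_{*} < \infty
\quad\text{as } k \to \infty .
```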

    At least, that’s the idea! We’ve figured out how to do this in 1+1 dimensions (one space and one time), but not yet in 3+1 dimensions. Still, progress has been made, most notably by Christof Wetterich, who wrote two groundbreaking papers in the 1990s. More recently, Wetterich used asymptotic safety — just six years ago — to calculate a prediction for the mass of the Higgs boson before the LHC found it. The result?

    Image credit: Mikhail Shaposhnikov & Christof Wetterich.

    Amazingly, what it indicated was perfectly in line with what the LHC wound up finding.

    LHC at CERN

    It’s such an amazing prediction that if asymptotic safety is correct and — when the error bars are beaten down further — the masses of the top quark, the W boson, and the Higgs boson are finalized, there may not even be a need for any other fundamental particles (like SUSY particles) for physics to be stable all the way up to the Planck scale. It’s not only very promising, it also has many of the same appealing properties as string theory: it quantizes gravity successfully, reduces to GR in the low-energy limit, and is UV-finite. In addition, it beats string theory on at least one count: it doesn’t need the addition of new particles or parameters that we have no evidence for! Of all the string theory alternatives, this one is my favorite.

    3.) Causal Dynamical Triangulations. This idea, CDT, is one of the new kids in town, first developed only in 2000 by Renate Loll and expanded on by others since. It’s similar to LQG in that space itself is discrete, but is primarily concerned with how that space itself evolves. One interesting property of this idea is that time must be discrete as well! As an interesting feature, it gives us a 4-dimensional spacetime (not even something put in a priori, but something that the theory gives us) at the present time, but at very, very high energies and small distances (like the Planck scale), it displays a 2-dimensional structure. It’s based on a mathematical structure called a simplex, which is a multi-dimensional analogue of a triangle.

    Image credit: screenshot from the Wikipedia page for Simplex, via https://en.wikipedia.org/wiki/Simplex.

    A 2-simplex is a triangle, a 3-simplex is a tetrahedron, and so on. One of the “nice” features of this option is that causality — a notion held sacred by most human beings — is explicitly preserved in CDT. (Sabine has some words on CDT here, and its possible relation to asymptotically safe gravity.) It might be able to explain gravity, but it isn’t 100% certain that the standard model of elementary particles can fit suitably into this framework. It’s only major advances in computation that have enabled this to become a fairly well-studied alternative of late, and so work in this is both ongoing and relatively young.

    4.) Emergent gravity. And finally, we come to what’s probably the most speculative and most recent of the quantum gravity possibilities. Emergent gravity only gained prominence in 2009, when Erik Verlinde proposed entropic gravity, a model in which gravity was not a fundamental force but rather emerged as a phenomenon linked to entropy. In fact, the seeds of emergent gravity go back to Andrei Sakharov, the discoverer of the conditions for generating a matter-antimatter asymmetry, who proposed the concept back in 1967. This research is still in its infancy, but as far as developments in the last 5–10 years go, it’s hard to ask for more than this.

    Image credit: flickr gallery of J. Gabas Esteban.

    We’re sure we need a quantum theory of gravity to make the Universe work at a fundamental level, but we’re not sure what that theory looks like or whether any of these five avenues (string theory included) are going to prove fruitful or not. String Theory is the best studied of all the options, but Loop Quantum Gravity is a rising second, with the others being given serious consideration at long last. They say the answer’s always in the last place you look, and perhaps that’s motivation enough to start looking, seriously, in newer places.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 7:00 pm on December 23, 2015
    Tags: Quantum Mechanics

    From AAAS: “Physicists figure out how to retrieve information from a black hole” 

    AAAS

    23 December 2015
    Adrian Cho

    It would take technologies beyond our wildest dreams to extract the tiniest amount of quantum information from a black hole like this one. NASA; M. Weiss/Chandra X-Ray Center

    Black holes earn their name because their gravity is so strong not even light can escape from them. Oddly, though, physicists have come up with a bit of theoretical sleight of hand to retrieve a speck of information that’s been dropped into a black hole. The calculation touches on one of the biggest mysteries in physics: how all of the information trapped in a black hole leaks out as the black hole “evaporates.” Many theorists think that must happen, but they don’t know how.

    Unfortunately for them, the new scheme may do more to underscore the difficulty of the larger “black hole information problem” than to solve it. “Maybe others will be able to go further with this, but it’s not obvious to me that it will help,” says Don Page, a theorist at the University of Alberta in Edmonton, Canada, who was not involved in the work.

    You can shred your tax returns, but you shouldn’t be able to destroy information by tossing it into a black hole. That’s because, even though quantum mechanics deals in probabilities—such as the likelihood of an electron being in one location or another—the quantum waves that give those probabilities must still evolve predictably, so that if you know a wave’s shape at one moment you can predict it exactly at any future time. Without such “unitarity,” quantum theory would produce nonsensical results such as probabilities that don’t add up to 100%.
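
    In symbols, “unitarity” is the statement that the time-evolution operator preserves total probability:

```latex
|\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad U^{\dagger}U = \mathbb{1}
\;\;\Longrightarrow\;\;
\langle\psi(t)|\psi(t)\rangle
  = \langle\psi(0)|U^{\dagger}U|\psi(0)\rangle
  = \langle\psi(0)|\psi(0)\rangle = 1 .
```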

    But suppose you toss some quantum particles into a black hole. At first blush, the particles and the information they encode are lost. That’s a problem, as part of the quantum state describing the combined black hole-particle system has now been obliterated, making it impossible to predict its exact evolution and violating unitarity.

    Physicists think they have a way out. In 1974, British theorist Stephen Hawking argued that black holes can radiate particles and energy. Thanks to quantum uncertainty, empty space roils with pairs of particles flitting in and out of existence. Hawking realized that if a pair of particles from the vacuum popped into existence straddling the black hole’s boundary then one particle could fly into space, while the other would fall into the black hole. Carrying away energy from the black hole, the exiting Hawking radiation should cause a black hole to slowly evaporate. Some theorists suspect information reemerges from the black hole encoded in the radiation—although how remains unclear as the radiation is supposedly random.

    Now, Aidan Chatwin-Davies, Adam Jermyn, and Sean Carroll of the California Institute of Technology in Pasadena have found an explicit way to retrieve information from one quantum particle lost in a black hole, using Hawking radiation and the weird concept of quantum teleportation.

    Quantum teleportation enables two partners, Alice and Bob, to transfer the delicate quantum state of one particle, such as an electron, to another. In quantum theory, an electron can spin one way (up), the other way (down), or literally both ways at once. In fact, its state can be described by a point on a globe in which the north pole signifies up and the south pole signifies down. Lines of latitude denote different mixtures of up and down, and lines of longitude denote the “phase,” or how the up and down parts mesh. However, if Alice tries to measure that state, it will “collapse” one way or the other, up or down, squashing information such as the phase. So she can’t measure the state and send the information to Bob, but must transfer it intact.
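
    The “globe” in this description is what physicists call the Bloch sphere; a general spin state is fixed by a latitude-like angle θ and a longitude-like phase φ:

```latex
|\psi\rangle \;=\; \cos\!\frac{\theta}{2}\,|{\uparrow}\rangle
              \;+\; e^{i\varphi}\,\sin\!\frac{\theta}{2}\,|{\downarrow}\rangle ,
\qquad 0 \le \theta \le \pi , \quad 0 \le \varphi < 2\pi .
```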

    To do that Alice and Bob can share an additional pair of electrons connected by a special quantum link called entanglement. The state of either particle in the entangled pair is uncertain—it simultaneously points everywhere on the globe—but the states are correlated so that if Alice measures her particle from the pair and finds it spinning, say, up, she’ll know instantly that Bob’s electron is spinning down. So Alice has two electrons—the one whose state she wants to teleport and her half of the entangled pair. Bob has just the one from the entangled pair.

    To perform the teleportation, Alice takes advantage of one more strange property of quantum mechanics: that measurement not only reveals something about a system, it also changes its state. So Alice takes her two unentangled electrons and performs a measurement that “projects” them into an entangled state. That measurement breaks the entanglement between the pair of electrons that she and Bob share. But at the same time, it forces Bob’s electron into the state that her to-be-teleported electron was in. It’s as if, with the right measurement, Alice squeezes the quantum information from one side of the system to the other.
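
    The “squeezing” step can be reproduced exactly in a few lines of linear algebra. The sketch below simulates standard one-qubit teleportation with NumPy: whichever of the four Bell outcomes Alice obtains, a fixed corrective operation leaves Bob holding the original state (up to a global phase). The amplitudes and qubit labels are illustrative choices, not parameters from the paper.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# State to teleport (hypothetical amplitudes, already normalised)
psi = 0.6 * ket0 + 0.8j * ket1

# Shared Bell pair |Phi+> between Alice's second qubit and Bob's qubit
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Three-qubit state, reshaped so rows index Alice's two qubits, columns index Bob's
state = np.kron(psi, phi_plus).reshape(4, 2)

# Bell basis on Alice's qubits, with the correction Bob applies for each outcome
bell_basis = {
    "Phi+": ((np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2), I2),
    "Phi-": ((np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2), Z),
    "Psi+": ((np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2), X),
    "Psi-": ((np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2), Z @ X),
}

for name, (bell, correction) in bell_basis.items():
    bob = correction @ (bell.conj() @ state)   # project Alice's qubits, then correct
    bob = bob / np.linalg.norm(bob)            # renormalise after the measurement
    overlap = abs(np.vdot(psi, bob))           # fidelity with the original state
    print(f"{name}: |<psi|bob>| = {overlap:.6f}")   # 1.000000 for every outcome
```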

    Chatwin-Davies and colleagues realized that they could teleport the information about the state of an electron out of a black hole, too. Suppose that Alice is floating outside the black hole with her electron. She captures one photon from a pair born from Hawking radiation. Much like an electron, the photon can spin in either of two directions, and it will be entangled with its partner photon that has fallen into the black hole. Next, Alice measures the total angular momentum, or spin, of the black hole—both its magnitude and, roughly speaking, how much it lines up with a particular axis. With those two bits of information in hand, she then tosses in her electron, losing it forever.

    But Alice can still recover the information about the state of that electron, the team reports in a paper in press at Physical Review Letters. All she has to do is once again measure the spin and orientation of the black hole. Those measurements then entangle the black hole and the in-falling photon. They also teleport the state of the electron to the photon that Alice captured. Thus, the information from the lost electron is dragged back into the observable universe.

    Chatwin-Davies stresses that the scheme is not a plan for a practical experiment. After all, it would require Alice to almost instantly measure the spin of a black hole as massive as the sun to within a single atom’s spin. “We like to joke around that Alice is the most advanced scientist in the universe,” he says.

    The scheme also has major limitations. In particular, as the authors note, it works for one quantum particle but not for two or more. That’s because the recipe exploits the fact that the black hole conserves angular momentum, so that its final spin is equal to its initial spin plus that of the electron. That trick enables Alice to get out exactly two bits of information—the total spin and its projection along one axis—and that’s just enough information to specify the latitude and longitude of the quantum state of one particle. But it’s not nearly enough to recapture all the information trapped in a black hole, which typically forms when a star collapses upon itself.

    To really tackle the black hole information problem, theorists would also have to account for the complex states of the black hole’s interior, says Stefan Leichenauer, a theorist at the University of California, Berkeley. “Unfortunately, all of the big questions we have about black holes are precisely about these internal workings,” he says. “So, this protocol, though interesting in its own right, will probably not teach us much about the black hole information problem in general.”

    However, delving into the interior of black holes would require a quantum mechanical theory of gravity. Of course, developing such a theory is perhaps the grandest goal in all of theoretical physics, one that has eluded physicists for decades.

    See the full article here .

    The American Association for the Advancement of Science is an international non-profit organization dedicated to advancing science for the benefit of all people.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

     
  • richardmitnick 10:46 am on December 18, 2015
    Tags: Einstein and Bohr’s Quantum Debate, Quantum Mechanics

    From Physics: “Viewpoint: Closing the Door on Einstein and Bohr’s Quantum Debate” 

    Physics

    December 16, 2015
    Alain Aspect, Laboratoire Charles Fabry, Institut d’Optique Graduate School, CNRS, Université Paris-Saclay, Palaiseau, France

    By closing two loopholes at once, three experimental tests of Bell’s inequalities remove the last doubts that we should renounce local realism. They also open the door to new quantum information technologies.

    APS/Alan Stonebraker
    Figure 1: An apparatus for performing a Bell test. A source emits a pair of entangled photons ν1 and ν2. Their polarizations are analyzed by polarizers A and B (grey blocks), which are aligned, respectively, along directions a and b. (a and b can be along x, y or any direction in the x-y plane; here, they are along x.) Each polarizer has two output channels, labeled +1 and -1. A photon ν1 polarized parallel (perpendicular) to a will emerge in +1 (-1) at A. Similarly, a photon ν2 polarized parallel (perpendicular) to b will emerge in +1 (-1) at B. But in general, the photons are not in a state of polarization corresponding to a specific output channel, and then the formalism of quantum mechanics yields the probabilities of getting results +1 or -1, for specified orientations of the polarizers. For the entangled state of two polarized photons (Ψ) shown here, the quantum formalism predicts random results on each side (a 50% probability of being +1 or -1.) But it also predicts strong correlations between these random results. For instance, if both polarizers are aligned along the same direction (a=b), then the results at A and B will be either (+1,+1) or (-1,-1), but never (+1,-1) or (-1,+1): this is a total correlation, as can be determined by measuring the four rates with the fourfold detection circuit (green). Local realism explains these correlations by assuming a common property of the two photons, whose value changes randomly from one photon pair to the next. Bell’s inequality, however, shows the correlations predicted by local realism are limited; but quantum predictions violate this inequality. A Bell test consists of measuring the correlations and comparing the results with Bell’s inequalities. To perform an “ideal” Bell test, the polarizer settings must be changed randomly while the photons are in flight between the source and the polarizers, and the detector efficiencies should exceed 2/3 (see text for details).

    In 1935, Albert Einstein, Boris Podolsky, and Nathan Rosen wrote a now famous paper questioning the completeness of the formalism of quantum mechanics. Rejecting the idea that a measurement on one particle in an entangled pair could affect the state of the other—distant—particle, they concluded that one must complete the quantum formalism in order to get a reasonable, “local realist,” description of the world. This view says a particle carries with it, locally, all the properties determining the results of any measurement performed on it. (The ensemble of these properties constitutes the particle’s physical reality.) It wasn’t, however, until 1964 that John Stewart Bell, a theorist at CERN, discovered inequalities that allow an experimental test of the predictions of local realism against those of standard quantum physics. In the ensuing decades, experimentalists performed increasingly sophisticated tests of Bell’s inequalities. But these tests have always had at least one “loophole,” allowing a local realist interpretation of the experimental results unless one made a supplementary (albeit reasonable) hypothesis. Now, by closing the two main loopholes at the same time, three teams have independently confirmed that we must definitely renounce local realism [1–3]. Although their findings are, in some sense, no surprise, they crown decades of experimental effort. The results also place several fundamental quantum information schemes, such as device-independent quantum cryptography and quantum networks, on firmer ground.

    It is sometimes forgotten that Einstein played a major role in the early development of quantum physics [4]. He was the first to fully understand the consequences of the energy quantization of mechanical oscillators, and, after introducing “Lichtquanten” in his famous 1905 paper, he enunciated as early as 1909 the dual wave-particle nature of light [5]. Despite his visionary understanding, he grew dissatisfied with the “Copenhagen interpretation” of the quantum theory, developed by Niels Bohr, and tried to find an inconsistency in the Heisenberg uncertainty relations. At the Solvay conference of 1927, however, Bohr successfully refuted all of Einstein’s attacks, making use of ingenious gedankenexperiments bearing on a single quantum particle.

    But in 1935, Einstein raised a new objection about the Copenhagen interpretation, this time with a gedankenexperiment involving two particles. He had discovered that the quantum formalism allows two particles to be entangled in a state such that strong correlations are predicted between measurements on these two particles. These correlations would persist at particle separations large enough that the measurements could not be directly connected by any influence, unless it were to travel faster than light. Einstein therefore argued for what he felt was the only reasonable description: that each particle in the pair carries a property, decided at the moment of separation, which determines the measurement results. But since entangled particles are not described separately in the quantum formalism, Einstein concluded the formalism was incomplete [6]. Bohr, however, strongly opposed this conclusion, convinced that it was impossible to complete the quantum formalism without destroying its self-consistency [7].

    With the exception of Erwin Schrödinger [8], most physicists did not pay attention to the debate between Bohr and Einstein, as the conflicting views only affected one’s interpretation of the quantum formalism and not its ability to correctly predict the results of measurements, which Einstein did not question. The situation changed when Bell made the groundbreaking discovery that some predictions of quantum physics conflict with Einstein’s local realist world view [9, 10]. To explain Bell’s finding, it helps to refer to an actual experiment, consisting of a pair of photons whose polarizations are measured at two separate stations (Fig. 1). For the entangled state (Ψ) of two polarized photons shown in the inset, quantum mechanics predicts that the polarization measurements performed at the two distant stations will be strongly correlated. To account for these correlations, Bell developed a general local realist formalism, in which a common property, attributed to each photon of a pair, determines the outcomes of the measurements. In what are now known as Bell’s inequalities, he showed that, for any local realist formalism, there exist limits on the predicted correlations. And he showed that, according to quantum mechanics, these limits are passed for some polarizer settings. That is, quantum-mechanical predictions conflict with local realism, in contradiction with the belief that the conflict was only about interpretation, not about quantitative predictions.

    Bell’s discovery thus shifted Einstein and Bohr’s debate from epistemology to the realm of experimental physics. Within a few years, Bell’s inequalities were adapted to a practical scheme [11]. The first experiments were carried out in 1972 at the University of California, Berkeley [12], and at Harvard [13], then in 1976 at Texas A&M [14]. After some initial discrepancies, the results converged towards an agreement with quantum mechanics and a violation of Bell’s inequalities by as much as 6 standard deviations [shown as σ]. But although these experiments represented genuine tours de force for the time, they were far from ideal. Some loopholes remained open, allowing a determined advocate of Einstein’s point of view to interpret these experiments in a local realist formalism [15].

    The first—and according to Bell [16], the most fundamental—of these loopholes is the “locality loophole.” In demonstrating his inequalities, Bell had to assume that the result of a measurement at one polarizer does not depend on the orientation of the other. This locality condition is a reasonable hypothesis. But in a debate where one envisages new phenomena, it would be better to base such a condition on a fundamental law of nature. In fact, Bell proposed a way to do this. He remarked that if the orientation of each polarizer was chosen while the photons were in flight, then relativistic causality—stating that no influence can travel faster than light—would prevent one polarizer from “knowing” the orientation of the other at the time of a measurement, thus closing the locality loophole [9].

    This is precisely what my colleagues and I did in 1982 at Institut d’Optique, in an experiment in which the polarizer orientations were changed rapidly while the photons were in flight [17] (see note in Ref. [18]). Even with this drastically new experimental scheme, we found results still agreeing with quantum predictions, violating Bell’s inequality by 6 standard deviations. Because of technical limitations, however, the choice of the polarizer orientations in our experiment was not fully random. In 1998, researchers at the University of Innsbruck, using much-improved sources of entangled photons [19], were able to perform an experiment with genuine random number generators, and they observed a violation of Bell’s inequality by several tens of standard deviations [20].

    There was, however, a second loophole. This one relates to the fact that the detected pairs in all these experiments were only a small fraction of the emitted pairs. This fraction could depend on the polarizer settings, precluding a derivation of Bell’s inequalities unless one made a reasonable “fair sampling” hypothesis [21]. To close this “detection loophole,” and drop the necessity of the fair-sampling hypothesis, the probability of detecting one photon when its partner has been detected (the global quantum efficiency, or “heralding” efficiency) must exceed 2/3—a value not attainable for single-photon counting technology until recently. In 2013, taking advantage of new types of photodetectors with intrinsic quantum efficiencies over 90%, two experiments closed the detection loophole and found a clear violation of Bell’s inequalities [22, 23]. The detection loophole was also addressed with other systems, in particular using ions instead of photons [24, 25], but none of them simultaneously tackled the locality loophole.

    So as of two years ago, both the locality and detection loopholes had been closed, but separately. Closing the two loopholes together in one experiment is the amazing achievement of the research teams led by Ronald Hanson at Delft University of Technology in the Netherlands [1], Anton Zeilinger at the University of Vienna, Austria [2], and Lynden Shalm at NIST in Boulder, Colorado [3].

    The experiments by the Vienna [2] and NIST [3] groups are based on the scheme in Fig. 1. The teams use rapidly switchable polarizers that are located far enough from the source to close the locality loophole: The distance is 30 meters in the Vienna experiment and more than 100 meters in the Boulder experiment. Both groups also use high-efficiency photon detectors, as required to close the detection loophole. They prepare pairs of photons using a nonlinear crystal to convert a pump photon into two “daughter” entangled photons. Each photon is sent to a detection station with a polarizer whose alignment is set using a new type of random number generator developed by scientists in Spain [26] (see 16 December 2015 Synopsis; the same device was used by the Delft group). Moreover, the two teams achieved an unprecedentedly high probability that, when a photon enters one analyzer, its partner enters the opposite analyzer. This, combined with the high intrinsic efficiency of the detectors, gives both experiments a heralding efficiency of about 75%—a value larger than the critical value of 2/3.

    The authors evaluate the confidence level of their measured violation of Bell’s inequality by calculating the probability p that a statistical fluctuation in a local realist model would yield the observed violation. The Vienna team reports a p of 3.7×10⁻³¹—a spectacular value corresponding to a violation by more than 11 standard deviations. (Such a small probability is not really significant, and the probability that some unknown error exists is certainly larger, as the authors rightly emphasize.) The NIST team reports an equally convincing p of 2.3×10⁻⁷, corresponding to a violation by 7 standard deviations.
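
    As a rough guide to how such p values map onto the “standard deviations” language, one can invert a one-sided Gaussian tail. This is only an approximate, convention-dependent translation—the teams’ own test statistics differ—so the sketch below (mine, not the authors’) is applied to the Vienna figure only.

        from scipy.stats import norm

        # One-sided Gaussian significance equivalent to a given p value.
        p = 3.7e-31
        sigma = norm.isf(p)   # inverse survival function of the standard normal
        print(f"p = {p:.1e} corresponds to roughly {sigma:.1f} standard deviations")
        # prints about 11.5, consistent with "more than 11 standard deviations"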

    The Delft group uses a different scheme [1]. Inspired by the experiment of Ref. [25], their entanglement scheme consists of two nitrogen vacancy (NV) centers, each located in a different lab. (An NV center is a kind of artificial atom embedded in a diamond crystal.) In each NV center, an electron spin is associated with an emitted photon, which is sent to a common detection station located between the labs housing the NV centers. Mixing the two photons on a beam splitter and detecting them in coincidence entangles the electron spins on the remote NV centers. When the coincidence signal is detected, the researchers keep the spin-component measurements and compare the resulting correlations to Bell’s inequalities. This is Bell’s “event-ready” scheme [16], which permits the detection loophole to be closed because for each entangling signal there is a result for the two spin-component measurements. The impressive distance between the two labs (1.3 kilometers) allows the measurement directions of the spin components to be chosen independently of the entangling event, thus closing the locality loophole. The events are extremely rare: The Delft team reports a total of 245 events, which allows them to obtain a violation of Bell’s inequality with a p of 4×10⁻², corresponding to a violation by 2 standard deviations.

    The schemes demonstrated by the Vienna, NIST, and Delft groups have important consequences for quantum information. For instance, a loophole-free Bell’s inequality test is needed to guarantee the security of some device-independent quantum cryptography schemes [27]. Moreover, the experiment by the Delft group, in particular, shows it is possible to entangle static quantum bits, offering a basis for long distance quantum networks [28, 29].

    Of course we must remember that these experiments were primarily meant to settle the conflict between Einstein’s and Bohr’s points of view. Can we say that the debate over local realism is resolved? There is no doubt that these are the most ideal experimental tests of Bell’s inequalities to date. Yet no experiment, however ideal, can be said to be totally loophole-free. In the experiments with entangled photons, for example, one could imagine that the photons’ properties are determined in the crystal before their emission, in contradiction with the reasonable hypothesis explained in the note in Ref. [18]. The random number generators could then be influenced by the properties of the photons, without violating relativistic causality. Far-fetched as it is, this residual loophole cannot be ignored, but there are proposals for how to address it [30].

    Yet more foreign to the usual way of reasoning in physics is the “free-will loophole.” This is based on the idea that the choices of orientations we consider independent (because of relativistic causality) could in fact be correlated by an event in their common past. Since all events have a common past if we go back far enough in time—possibly to the big bang—any observed correlation could be justified by invoking such an explanation. Taken to its logical extreme, however, this argument implies that humans do not have free will, since two experimentalists, even separated by a great distance, could not be said to have independently chosen the settings of their measuring apparatuses. Upon being accused of metaphysics for his fundamental assumption that experimentalists have the liberty to freely choose their polarizer settings, Bell replied [31]: “Disgrace indeed, to be caught in a metaphysical position! But it seems to me that in this matter I am just pursuing my profession of theoretical physics.” I would like to humbly join Bell and claim that, in rejecting such an ad hoc explanation that might be invoked for any observed correlation, “I am just pursuing my profession of experimental physics.”

    Please see original article for References

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Physicists are drowning in a flood of research papers in their own fields and coping with an even larger deluge in other areas of physics. How can an active researcher stay informed about the most important developments in physics? Physics highlights a selection of papers from the Physical Review journals. In consultation with expert scientists, the editors choose these papers for their importance and/or intrinsic interest. To highlight these papers, Physics features three kinds of articles: Viewpoints are commentaries written by active researchers, who are asked to explain the results to physicists in other subfields. Focus stories are written by professional science writers in a journalistic style and are intended to be accessible to students and non-experts. Synopses are brief editor-written summaries. Physics provides a much-needed guide to the best in physics, and we welcome your comments (physics@aps.org).

     
  • richardmitnick 8:20 am on December 12, 2015 Permalink | Reply
    Tags: , , , Quantum Mechanics   

    From Daily Galaxy: “Gravity Alters the Quantum Nature of Particles on Earth – What Does It Imply at Cosmological Scales?”

    Daily Galaxy
    The Daily Galaxy

    December 11, 2015
    University of Vienna


    “It is quite surprising that gravity can play any role in quantum mechanics”, says Igor Pikovski, a theoretical physicist working at the Harvard-Smithsonian Center for Astrophysics: “Gravity is usually studied on astronomical scales, but it seems that it also alters the quantum nature of the smallest particles on Earth.” “It remains to be seen what the results imply on cosmological scales, where gravity can be much stronger”, adds Časlav Brukner, University Professor at the University of Vienna and Director of the Institute for Quantum Optics and Quantum Information.

    In 1915 Albert Einstein formulated the theory of general relativity, which fundamentally changed our understanding of gravity. He explained gravity as the manifestation of the curvature of space and time. Einstein’s theory predicts that the flow of time is altered by mass. This effect, known as “gravitational time dilation”, causes time to be slowed down near a massive object. It affects everything and everybody; in fact, people working on the ground floor will age more slowly than their colleagues a floor above, by about 10 nanoseconds in one year. This tiny effect has actually been confirmed in many experiments with very precise clocks.
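
    The “10 nanoseconds in one year” figure can be checked with the weak-field time-dilation formula Δt/t ≈ gΔh/c². The sketch below is a back-of-the-envelope check, not a calculation from the article; the 3-meter floor height is my own assumption.

        g = 9.81                      # m/s^2, Earth's surface gravity
        c = 2.998e8                   # m/s, speed of light
        dh = 3.0                      # m, assumed height of one floor
        year = 365.25 * 24 * 3600     # seconds in one year

        # Fractional rate difference between two clocks separated in height by dh
        fractional_shift = g * dh / c**2
        print(f"Accumulated difference over one year: {fractional_shift * year * 1e9:.0f} ns")
        # prints about 10 ns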

    This past June, a team of researchers from the University of Vienna, Harvard University and the University of Queensland discovered that the slowing down of time can explain another perplexing phenomenon: the transition from quantum behavior to our classical, everyday world.

    The original article includes an illustration of a molecule in the presence of gravitational time dilation: the molecule is in a quantum superposition of being in several places at the same time.


    Quantum theory, the other major discovery in physics in the early 20th century, predicts that the fundamental building blocks of nature show fascinating and mind-boggling behavior. Extrapolated to the scales of our everyday life, quantum theory leads to situations such as the famous example of Schrödinger’s cat: the cat is neither dead nor alive, but in a so-called quantum superposition of both.


    Yet such behavior has only been confirmed experimentally with small particles and has never been observed with real-world cats. Therefore, scientists conclude that something must cause the suppression of quantum phenomena on larger, everyday scales. Typically this happens because of interaction with other surrounding particles.

    The research team, headed by Časlav Brukner from the University of Vienna and the Institute of Quantum Optics and Quantum Information, found that time dilation also plays a major role in the demise of quantum effects. They calculated that once the small building blocks form larger, composite objects – such as molecules and eventually larger structures like microbes or dust particles – the time dilation on Earth can cause a suppression of their quantum behavior.

    The tiny building blocks jitter ever so slightly, even as they form larger objects. And this jitter is affected by time dilation: it is slowed down on the ground and speeds up at higher altitudes. The researchers have shown that this effect destroys the quantum superposition and, thus, forces larger objects to behave as we expect in everyday life.
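
    A toy numerical sketch of this mechanism, under my own simplifying assumptions (it is not the calculation in the Nature Physics paper): each branch of a superposition separated in height by Δx picks up a phase proportional to the object’s internal energy, E·gΔx·t/(ħc²), and averaging over the thermal spread of internal energies washes out the interference visibility. The mode number and separation below are invented purely so the effect is visible on second timescales.

        import numpy as np

        hbar = 1.055e-34   # J s
        kB   = 1.381e-23   # J/K
        g    = 9.81        # m/s^2
        c    = 2.998e8     # m/s

        # Assumed toy parameters: a composite object with N internal thermal modes,
        # held in a superposition of two heights a distance dx apart.
        N, T, dx = 1e17, 300.0, 1e-6

        rng = np.random.default_rng(1)
        # Thermal internal energy: mean N*kB*T with fluctuations ~ sqrt(N)*kB*T
        # (Gaussian approximation to the sum of many modes).
        E_int = N * kB * T + np.sqrt(N) * kB * T * rng.standard_normal(200_000)

        for t in [0.1, 1.0, 10.0]:   # seconds
            # Relative phase between the two height branches, set by time dilation
            phase = E_int * g * dx * t / (hbar * c**2)
            visibility = abs(np.mean(np.exp(1j * phase)))
            print(f"t = {t:5.1f} s   interference visibility ~ {visibility:.3f}")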

    The results of Pikovski and his co-workers reveal how larger particles lose their quantum behavior due to their own composition, if one takes time dilation into account. This prediction should be observable in experiments in the near future, which could shed some light on the fascinating interplay between the two great theories of the 20th century, quantum theory and general relativity.

    Publication in Nature Physics: “Universal decoherence due to gravitational time dilation”. I. Pikovski, M. Zych, F. Costa, C. Brukner. Nature Physics (2015) doi:10.1038/nphys3366

    See the full article here .

    Please help promote STEM in your local schools

    STEM Icon

    STEM Education Coalition

     
  • richardmitnick 11:05 pm on November 24, 2015 Permalink | Reply
    Tags: , , Leonard Susskind, , Quantum Mechanics   

    From Nature: “Theoretical physics: Complexity on the horizon” 2014 

    Nature Mag
    Nature

    28 May 2014
    Amanda Gefter


    When physicist Leonard Susskind gives talks these days, he often wears a black T-shirt proclaiming “I ♥ Complexity”. In place of the heart is a Mandelbrot set, a fractal pattern widely recognized as a symbol for complexity at its most beautiful.

    Initial image of a Mandelbrot set zoom sequence with a continuously colored environment

    That pretty much sums up his message. The 74-year-old Susskind, a theorist at Stanford University in California, has long been a leader in efforts to unify quantum mechanics with the general theory of relativity, Albert Einstein’s framework for gravity. The quest for the elusive unified theory has led him to advocate counter-intuitive ideas, such as superstring theory or the concept that our three-dimensional Universe is actually a two-dimensional hologram. But now he is part of a small group of researchers arguing for a new and equally odd idea: that the key to this mysterious theory of everything is to be found in the branch of computer science known as computational complexity.

    This is not a subfield to which physicists have tended to look for fundamental insight. Computational complexity is grounded in practical matters, such as how many logical steps are required to execute an algorithm. But if the approach works, says Susskind, it could resolve one of the most baffling theoretical conundrums to hit his field in recent years: the black-hole firewall paradox, which seems to imply that either quantum mechanics or general relativity must be wrong. And more than that, he says, computational complexity could give theorists a whole new way to unify the two branches of their science — using ideas based fundamentally on information.

    Behind a firewall

    It all began 40 years ago, when physicist Stephen Hawking at the University of Cambridge, UK, realized that quantum effects would cause a black hole to radiate photons and other particles until it completely evaporates away.

    As other researchers were quick to point out, this revelation brings a troubling contradiction. According to the rules of quantum mechanics, the outgoing stream of radiation has to retain information about everything that ever fell into the black hole, even as the matter falling in carries exactly the same information through the black hole’s event horizon, the boundary inside which the black hole’s gravity gets so strong that not even light can escape. Yet this two-way flow could violate a key law of quantum mechanics known as the no-cloning theorem, which dictates that making a perfect copy of quantum information is impossible.
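
    The no-cloning theorem invoked here follows from the linearity of quantum mechanics alone. A compact version of the standard textbook argument (added for context; it is not specific to black holes):

        % Suppose a single unitary U could copy two different states onto a blank register:
        %   U|psi>|0> = |psi>|psi>   and   U|phi>|0> = |phi>|phi>.
        % Taking the inner product of these two equations and using unitarity gives
        \begin{align}
          \langle\phi|\psi\rangle \;=\; \langle\phi|\psi\rangle^{2}
          \quad\Longrightarrow\quad \langle\phi|\psi\rangle \in \{0,\,1\},
        \end{align}
        % so only identical or orthogonal states could be copied by the same device --
        % perfect cloning of arbitrary quantum information is impossible.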

    Happily, as Susskind and his colleagues observed (1) in 1995, nature seemed to sidestep any such violation by making it impossible to see both copies at once: an observer who remains outside the horizon cannot communicate with one who has fallen in. But in 2012, four physicists at the University of California, Santa Barbara — Ahmed Almheiri, Donald Marolf, Joseph Polchinski and James Sully, known collectively as AMPS — spotted a dangerous exception to this rule (2). They found a scenario in which an observer could decode the information in the radiation, jump into the black hole and then compare that information with its forbidden duplicate on the way down.

    AMPS concluded that nature prevents this abomination by creating a blazing firewall just inside the horizon that will incinerate any observer — or indeed, any particle — trying to pass through. In effect, space would abruptly end at the horizon, even though Einstein’s gravitational theory says that space must be perfectly continuous there. If AMPS’s theory is true, says Raphael Bousso, a theoretical physicist at the University of California, Berkeley, “this is a terrible blow to general relativity”.

    Does not compute

    Fundamental physics has been in an uproar ever since, as practitioners have struggled to find a resolution to this paradox. The first people to bring computational complexity into the debate were Stanford’s Patrick Hayden, a physicist who also happens to be a computer scientist, and Daniel Harlow, a physicist at Princeton University in New Jersey. If the firewall argument hinges on an observer’s ability to decode the outgoing radiation, they wondered, just how hard is that to do?

    Impossibly hard, they discovered. A computational-complexity analysis showed that the number of steps required to decode the outgoing information would rise exponentially with the number of radiation particles that carry it. No conceivable computer could finish the calculations until long after the black hole had radiated all of its energy and vanished, along with the forbidden information clones. So the firewall has no reason to exist: the decoding scenario that demands it cannot happen, and the paradox disappears.

    “The black hole’s interior is protected by an armour of computational complexity.”

    Hayden was sceptical of the result at first. But then he and Harlow found much the same answer for many types of black hole (3). “It did seem to be a robust principle,” says Hayden: “a conspiracy of nature preventing you from performing this decoding before the black hole had disappeared on you.”

    The Harlow–Hayden argument made a big impression on Scott Aaronson, who works on computational complexity and the limits of quantum computation at the Massachusetts Institute of Technology in Cambridge. “I regard what they did as one of the more remarkable syntheses of physics and computer science that I’ve seen in my career,” he says.

    It also resonated strongly among theoretical physicists. But not everyone is convinced. Even if the calculation is correct, says Polchinski, “it is hard to see how one would build a fundamental theory on this framework”. Nevertheless, some physicists are trying to do just that. There is a widespread belief in the field that the laws of nature must somehow be based on information. And the idea that the laws might actually be upheld by computational complexity — which is defined entirely in terms of information — offers a fresh perspective.

    It certainly inspired Susskind to dig deeper into the role of complexity. For mathematical clarity, he chose to make his calculations in a theoretical realm known as anti-de Sitter space (AdS). This describes a cosmos that is like our own Universe in the sense that everything in it, including black holes, is governed by gravity. Unlike our Universe, however, it has a boundary — a domain where there is no gravity, just elementary particles and fields governed by quantum physics. Despite this difference, studying physics in AdS has led to many insights, because every object and physical process inside the space can be mathematically mapped to an equivalent object or process on its boundary. A black hole in AdS, for example, is equivalent to a hot gas of ordinary quantum particles on the boundary. Better still, calculations that are complicated in one domain often turn out to be simple in the other. And after the calculations are complete, the insights gained in AdS can generally be translated back into our own Universe.

    Increasing complexity

    Susskind decided to look at a black hole sitting at the centre of an AdS universe, and to use the boundary description to explore what happens inside a black hole’s event horizon. Others had attempted this and failed, and Susskind could see why after he viewed the problem through the lens of computational complexity. Translating from the boundary of the AdS universe to the interior of a black hole requires an enormous number of computational steps, and that number increases exponentially as one moves closer to the event horizon (4). As Aaronson puts it, “the black hole’s interior is protected by an armour of computational complexity”.

    Furthermore, Susskind noticed, the computational complexity tends to grow with time. This is not the increase of disorder, or entropy, that is familiar from everyday physics. Rather, it is a pure quantum effect arising from the way that interactions between the boundary particles cause an explosive growth in the complexity of their collective quantum state.

    If nothing else, Susskind argued, this growth means that complexity behaves much like a gravitational field. Imagine an object floating somewhere outside the black hole. Because this is AdS, he said, the object can be described by some configuration of particles and fields on the boundary. And because the complexity of that boundary description tends to increase over time, the effect is to make the object move towards regions of higher complexity in the interior of the space. But that, said Susskind, is just another way of saying that the object will be pulled down towards the black hole. He captured that idea in a slogan (4): “Things fall because there is a tendency toward complexity.”

    Another implication of increasing complexity turns out to be closely related to an argument (5) that Susskind made last year in collaboration with Juan Maldacena, a physicist at the Institute for Advanced Study in Princeton, New Jersey, and the first researcher to recognize the unique features of AdS. According to general relativity, Susskind and Maldacena noted, two black holes can be many light years apart yet still have their interiors connected by a space-time tunnel known as a wormhole. But according to quantum theory, these widely separated black holes can also be connected by having their states entangled, meaning that information about their quantum states is shared between them in a way that is independent of distance.

    After exploring the many similarities between these connections, Susskind and Maldacena concluded that they were two aspects of the same thing — that the black hole’s degree of entanglement, a purely quantum phenomenon, will determine the wormhole’s width, a matter of pure geometry.

    With his latest work, Susskind says, it turns out that the growth of complexity on the boundary of AdS shows up as an increase in the wormhole’s length. So putting it all together, it seems that entanglement is somehow related to space, and that computational complexity is somehow related to time.

    Susskind is the first to admit that such ideas by themselves are only provocative suggestions; they do not make up a fully fledged theory. But he and his allies are confident that the ideas transcend the firewall paradox.

    “I don’t know where all of this will lead,” says Susskind. “But I believe these complexity–geometry connections are the tip of an iceberg.”

    See the full article for References

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

     
  • richardmitnick 11:44 pm on October 29, 2015 Permalink | Reply
    Tags: , , Quantum Mechanics,   

    From Nautilus: “Will Quantum Mechanics Swallow Relativity?” 

    Nautilus

    Nautilus

    October 29, 2015
    By Corey S. Powell
    Illustration by Nicholas Garber


    The contest between gravity and quantum physics takes a new turn.

    It is the biggest of problems, it is the smallest of problems.

    At present physicists have two separate rulebooks explaining how nature works. There is general relativity, which beautifully accounts for gravity and all of the things it dominates: orbiting planets, colliding galaxies, the dynamics of the expanding universe as a whole. That’s big. Then there is quantum mechanics, which handles the other three forces—electromagnetism and the two nuclear forces [weak interaction and strong interaction]. Quantum theory is extremely adept at describing what happens when a uranium atom decays, or when individual particles of light hit a solar cell. That’s small.

    Now for the problem: Relativity and quantum mechanics are fundamentally different theories that have different formulations. It is not just a matter of scientific terminology; it is a clash of genuinely incompatible descriptions of reality.

    The conflict between the two halves of physics has been brewing for more than a century—sparked by a pair of 1905 papers by [Albert] Einstein, one outlining relativity and the other introducing the quantum—but recently it has entered an intriguing, unpredictable new phase. Two notable physicists have staked out extreme positions in their camps, conducting experiments that could finally settle which approach is paramount.

    Just as a pixel is the smallest unit of an image on your screen, so there might be an unbreakable smallest unit of distance: a quantum of space.

    Basically you can think of the division between the relativity and quantum systems as “smooth” versus “chunky.” In general relativity, events are continuous and deterministic, meaning that every cause matches up to a specific, local effect. In quantum mechanics, events produced by the interaction of subatomic particles happen in jumps (yes, quantum leaps), with probabilistic rather than definite outcomes. Quantum rules allow connections forbidden by classical physics. This was demonstrated in a much-discussed recent experiment, in which Dutch researchers defied the local effect. They showed two particles—in this case, electrons—could influence each other instantly, even though they were a mile apart. When you try to interpret smooth relativistic laws in a chunky quantum style, or vice versa, things go dreadfully wrong.

    Relativity gives nonsensical answers when you try to scale it down to quantum size, eventually descending to infinite values in its description of gravity. Likewise, quantum mechanics runs into serious trouble when you blow it up to cosmic dimensions. Quantum fields carry a certain amount of energy, even in seemingly empty space, and the amount of energy gets bigger as the fields get bigger. According to Einstein, energy and mass are equivalent (that’s the message of E = mc²), so piling up energy is exactly like piling up mass. Go big enough, and the amount of energy in the quantum fields becomes so great that it creates a black hole that causes the universe to fold in on itself. Oops.

    Craig Hogan, a theoretical astrophysicist at the University of Chicago and the director of the Center for Particle Astrophysics at Fermilab, is reinterpreting the quantum side with a novel theory in which the quantum units of space itself might be large enough to be studied directly. Meanwhile, Lee Smolin, a founding member of the Perimeter Institute for Theoretical Physics in Waterloo, Canada, is seeking to push physics forward by returning back to Einstein’s philosophical roots and extending them in an exciting direction.

    To understand what is at stake, look back at the precedents. When Einstein unveiled general relativity, he not only superseded Isaac Newton’s theory of gravity; he also unleashed a new way of looking at physics that led to the modern conception of the Big Bang and black holes, not to mention atomic bombs and the time adjustments essential to your phone’s GPS. Likewise, quantum mechanics did much more than reformulate James Clerk Maxwell’s textbook equations of electricity, magnetism, and light. It provided the conceptual tools for the Large Hadron Collider, solar cells, and all of modern microelectronics.

    What emerges from the dustup could be nothing less than a third revolution in modern physics, with staggering implications. It could tell us where the laws of nature came from, and whether the cosmos is built on uncertainty or whether it is fundamentally deterministic, with every event linked definitively to a cause.

    THE MAN WITH THE HOLOMETER: Craig Hogan, a theoretical astrophysicist at Fermilab, has built a device to measure what he sees as the exceedingly fine graininess of space. “I’m hoping for an experimental result that forces people to focus the theoretical thinking in a different direction,” Hogan says. The Department of Astronomy and Astrophysics, the University of Chicago.

    A Chunky Cosmos

    Hogan, champion of the quantum view, is what you might call a lamp-post physicist: Rather than groping about in the dark, he prefers to focus his efforts where the light is bright, because that’s where you are most likely to be able to see something interesting. That’s the guiding principle behind his current research. The clash between relativity and quantum mechanics happens when you try to analyze what gravity is doing over extremely short distances, he notes, so he has decided to get a really good look at what is happening right there. “I’m betting there’s an experiment we can do that might be able to see something about what’s going on, about that interface that we still don’t understand,” he says.

    A basic assumption in Einstein’s physics—an assumption going all the way back to Aristotle, really—is that space is continuous and infinitely divisible, so that any distance could be chopped up into even smaller distances. But Hogan questions whether that is really true. Just as a pixel is the smallest unit of an image on your screen and a photon is the smallest unit of light, he argues, so there might be an unbreakable smallest unit of distance: a quantum of space.

    In Hogan’s scenario, it would be meaningless to ask how gravity behaves at distances smaller than a single chunk of space. There would be no way for gravity to function at the smallest scales because no such scale would exist. Or put another way, general relativity would be forced to make peace with quantum physics, because the space in which physicists measure the effects of relativity would itself be divided into unbreakable quantum units. The theater of reality in which gravity acts would take place on a quantum stage.

    The holometer will show the right way (or rule out the wrong way) to understand the underlying quantum structure of space.

    Hogan acknowledges that his concept sounds a bit odd, even to a lot of his colleagues on the quantum side of things. Since the late 1960s, a group of physicists and mathematicians have been developing a framework called string theory to help reconcile general relativity with quantum mechanics; over the years, it has evolved into the default mainstream theory, even as it has failed to deliver on much of its early promise. Like the chunky-space solution, string theory assumes a fundamental structure to space, but from there the two diverge. String theory posits that every object in the universe consists of vibrating strings of energy. Like chunky space, string theory averts gravitational catastrophe by introducing a finite, smallest scale to the universe, although the unit strings are drastically smaller even than the spatial structures Hogan is trying to find.

    Chunky space does not neatly align with the ideas in string theory—or in any other proposed physics model, for that matter. “It’s a new idea. It’s not in the textbooks; it’s not a prediction of any standard theory,” Hogan says, sounding not the least bit concerned. “But there isn’t any standard theory, right?”

    If he is right about the chunkiness of space, that would knock out a lot of the current formulations of string theory and inspire a fresh approach to reformulating general relativity in quantum terms. It would suggest new ways to understand the inherent nature of space and time. And weirdest of all, perhaps, it would bolster an au courant notion that our seemingly three-dimensional reality is composed of more basic, two-dimensional units. Hogan takes the “pixel” metaphor seriously: Just as a TV picture can create the impression of depth from a bunch of flat pixels, he suggests, so space itself might emerge from a collection of elements that act as if they inhabit only two dimensions.

    Like many ideas from the far edge of today’s theoretical physics, Hogan’s speculations can sound suspiciously like late-night philosophizing in the freshman dorm. What makes them drastically different is that he plans to put them to a hard experimental test. As in, right now.

    Starting in 2007, Hogan began thinking about how to build a device that could measure the exceedingly fine graininess of space. As it turns out, his colleagues had plenty of ideas about how to do that, drawing on technology developed to search for gravitational waves. Within two years Hogan had put together a proposal and was working with collaborators at Fermilab, the University of Chicago, and other institutions to build a chunk-detecting machine, which he more elegantly calls a “holometer.” (The name is an esoteric pun, referencing both a 17th-century surveying instrument and the theory that 2-D space could appear three-dimensional, analogous to a hologram.)

    Beneath its layers of conceptual complexity, the holometer is technologically little more than a laser beam, a half-reflective mirror to split the laser into two perpendicular beams, and two other mirrors to bounce those beams back along a pair of 40-meter-long tunnels. The beams are calibrated to register the precise locations of the mirrors. If space is chunky, the locations of the mirrors would constantly wander about (strictly speaking, space itself is doing the wandering), creating a constant, random variation in their separation. When the two beams are recombined, they’d be slightly out of sync, and the amount of the discrepancy would reveal the scale of the chunks of space.
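
    A minimal sketch of why a tiny differential path-length change shows up in the recombined light—generic Michelson-interferometer optics with made-up numbers, not a model of the holometer’s actual readout or of Hogan’s predicted signal:

        import numpy as np

        wavelength = 1064e-9          # m, assumed laser wavelength
        I0 = 1.0                      # normalized input intensity

        def output_intensity(delta_L):
            """Idealized Michelson output for a differential arm-length change delta_L."""
            return I0 * np.cos(2 * np.pi * delta_L / wavelength) ** 2

        # Operate near quadrature (steepest slope), then add a tiny length shift
        bias = wavelength / 8
        shift = 1e-18                 # m, the order of sensitivity quoted just below
        dI = output_intensity(bias + shift) - output_intensity(bias)
        print(f"Fractional intensity change from a {shift:.0e} m shift: {dI / I0:.2e}")
        # magnitude ~6e-12 per shot, which is why the experiment needs the very
        # high sampling rates described in the next paragraph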

    For the scale of chunkiness that Hogan hopes to find, he needs to measure distances to an accuracy of 10⁻¹⁸ meters, about 100 million times smaller than a hydrogen atom, and collect data at a rate of about 100 million readings per second. Amazingly, such an experiment is not only possible, but practical. “We were able to do it pretty cheaply because of advances in photonics, a lot of off-the-shelf parts, fast electronics, and things like that,” Hogan says. “It’s a pretty speculative experiment, so you wouldn’t have done it unless it was cheap.” The holometer is currently humming away, collecting data at the target accuracy; he expects to have preliminary readings by the end of the year.

    Hogan has his share of fierce skeptics, including many within the theoretical physics community. The reason for the disagreement is easy to appreciate: A success for the holometer would mean failure for a lot of the work being done in string theory. Despite this superficial sparring, though, Hogan and most of his theorist colleagues share a deep core conviction: They broadly agree that general relativity will ultimately prove subordinate to quantum mechanics. The other three laws of physics follow quantum rules, so it makes sense that gravity must as well.

    For most of today’s theorists, though, belief in the primacy of quantum mechanics runs deeper still. At a philosophical—epistemological—level, they regard the large-scale reality of classical physics as a kind of illusion, an approximation that emerges from the more “true” aspects of the quantum world operating at an extremely small scale. Chunky space certainly aligns with that worldview.

    Hogan likens his project to the landmark Michelson-Morley experiment of the 19th century, which searched for the aether—the hypothetical substance of space that, according to the leading theory of the time, transmitted light waves through a vacuum. The experiment found nothing; that perplexing null result helped inspire Einstein’s special theory of relativity, which in turn spawned the general theory of relativity and eventually turned the entire world of physics upside down. Adding to the historical connection, the Michelson-Morley experiment also measured the structure of space using mirrors and a split beam of light, following a setup remarkably similar to Hogan’s.

    “We’re doing the holometer in that kind of spirit. If we don’t see something or we do see something, either way it’s interesting. The reason to do the experiment is just to see whether we can find something to guide the theory,” Hogan says. “You find out what your theorist colleagues are made of by how they react to this idea. There’s a world of very mathematical thinking out there. I’m hoping for an experimental result that forces people to focus the theoretical thinking in a different direction.”

    Whether or not he finds his quantum structure of space, Hogan is confident the holometer will help physics address its big-small problem. It will show the right way (or rule out the wrong way) to understand the underlying quantum structure of space and how that affects the relativistic laws of gravity flowing through it.

    Sidebar: The Black Hole Resolution


    Here on Earth, the clash between the top-down and bottom-up views of physics is playing out in academic journals and in a handful of complicated experimental apparatuses. Theorists on both sides concede that neither pure thought nor technologically feasible tests may be enough to break the deadlock, however. Fortunately, there are other places to look for a more definitive resolution. One of the most improbable of these is also one of the most promising—an idea embraced by physicists almost regardless of where they stand ideologically.

    “Black hole physics gives us a clean experimental target to look for,” says Craig Hogan, a theoretical astrophysicist at the University of Chicago and the director of the Center for Particle Astrophysics at Fermilab. “The issues around quantum black holes are important,” agrees Lee Smolin, a founding member of the Perimeter Institute for Theoretical Physics in Waterloo, Canada.

    Black holes? Really? Granted, these objects are more commonly associated with questions than with answers. They are not things you can create in the laboratory, or poke and prod with instruments, or even study up close with a space probe. Nevertheless, they are the only places in the universe where Hogan’s ideas unavoidably smash into Smolin’s and, more importantly, where the whole of quantum physics collides with general relativity in a way that is impossible to ignore.

    At the outer boundary of the black hole—the event horizon—gravity is so extreme that even light cannot escape, making it an extreme test of how general relativity behaves. At the event horizon, atomic-scale events become enormously stretched out and slowed down; the horizon also divides the physical world into two distinct zones, inside and outside. And there is a very interesting meeting place in terms of the size of a black hole. A stellar-mass black hole is about the size of Los Angeles; a black hole with the mass of the Earth would be roughly the size of a marble. Black holes literally bring the big-small problem in physics home to the human scale.

    The importance of black holes for resolving that problem is the reason why Stephen Hawking and his cohorts debate about them so often and so vigorously. It turns out that we don’t actually need to cozy up close to black holes in order to run experiments with them. Quantum theory implies that a single particle could potentially exist both inside and outside the event horizon, which makes no sense. There is also the question of what happens to information about things that fall into a black hole; the information seems to vanish, even though theory says that information cannot be destroyed. Addressing these contradictions is forcing theoretical physicists to grapple more vigorously than ever before with the interplay of quantum mechanics and general relativity.

    Best of all, the answers will not be confined to the world of theory. Astrophysicists have increasingly sophisticated ways to study the region just outside the event horizon by monitoring the hot, brilliant clouds of particles that swirl around some black holes. An even greater breakthrough is just around the corner: the Event Horizon Telescope. This project is in the process of linking together about a dozen radio dishes from around the world, creating an enormous networked telescope so powerful that it will be able to get a clear look at Sagittarius A*, the massive black hole that resides in the center of our galaxy. Soon, possibly by 2020, the Event Horizon Telescope should deliver its first good portraits. What they show will help constrain the theories of black holes, and so offer telling clues about how to solve the big-small problem.

    Human researchers using football stadium-size radio telescopes, linked together into a planet-size instrument, to study a star-size black hole, to reconcile the subatomic-and-cosmic-level enigma at the heart of physics … if it works, the scale of the achievement will be truly unprecedented.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 1:38 pm on October 22, 2015 Permalink | Reply
    Tags: , Genetic Engineering, , Quantum Mechanics   

    From MIT: “Quantum physics meets genetic engineering” 


    MIT News

    October 14, 2015
    David L. Chandler | MIT News Office

    Rendering of a virus used in the MIT experiments. The light-collecting centers, called chromophores, are in red, and chromophores that just absorbed a photon of light are glowing white. After the virus is modified to adjust the spacing between the chromophores, energy can jump from one set of chromophores to the next faster and more efficiently. Courtesy of the researchers and Lauren Aleza Kaye

    Researchers use engineered viruses to provide quantum-based enhancement of energy transport.


    download the mp4 video here.

    Nature has had billions of years to perfect photosynthesis, which directly or indirectly supports virtually all life on Earth. In that time, the process has achieved almost 100 percent efficiency in transporting the energy of sunlight from receptors to reaction centers where it can be harnessed — a performance vastly better than even the best solar cells.

    One way plants achieve this efficiency is by making use of the exotic effects of quantum mechanics — effects sometimes known as quantum weirdness. These effects, which include the ability of a particle to exist in more than one place at a time, have now been used by engineers at MIT to achieve a significant efficiency boost in a light-harvesting system.

    Surprisingly, the researchers at MIT and Eni, the Italian energy company, achieved this new approach to solar energy not with high-tech materials or microchips — but by using genetically engineered viruses.

    This achievement in coupling quantum research and genetic manipulation, described this week in the journal Nature Materials, was the work of MIT professors Angela Belcher, an expert on engineering viruses to carry out energy-related tasks, and Seth Lloyd, an expert on quantum theory and its potential applications; research associate Heechul Park; and 14 collaborators at MIT, Eni, and Italian universities.

    Lloyd, a professor of mechanical engineering, explains that in photosynthesis, a photon hits a receptor called a chromophore, which in turn produces an exciton — a quantum particle of energy. This exciton jumps from one chromophore to another until it reaches a reaction center, where that energy is harnessed to build the molecules that support life.

    But the hopping pathway is random and inefficient unless it takes advantage of quantum effects that allow it, in effect, to take multiple pathways at once and select the best ones, behaving more like a wave than a particle.
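
    A toy illustration of why coherent, wave-like hopping beats incoherent hopping (my own sketch, not the MIT group’s model): on a simple chain of chromophore-like sites, a classical random walk spreads only as the square root of time, while a coherent quantum walk spreads linearly in time, so the excitation reaches distant sites much sooner. The time units of the two walks are not matched, so treat the comparison qualitatively.

        import numpy as np
        from scipy.linalg import expm

        n_sites, t = 101, 20
        start = n_sites // 2
        x = np.arange(n_sites) - start   # site positions relative to the start

        # --- incoherent hopping: a classical random walk, spread ~ sqrt(t) ---
        p = np.zeros(n_sites)
        p[start] = 1.0
        for _ in range(t):
            p = 0.5 * (np.roll(p, 1) + np.roll(p, -1))
        classical_spread = np.sqrt(np.sum(p * x**2))

        # --- coherent hopping: a continuous-time quantum walk, spread ~ t ---
        H = np.diag(np.ones(n_sites - 1), 1) + np.diag(np.ones(n_sites - 1), -1)
        psi0 = np.zeros(n_sites, dtype=complex)
        psi0[start] = 1.0
        psi = expm(-1j * H * t) @ psi0          # Schrodinger evolution under hopping H
        quantum_spread = np.sqrt(np.sum(np.abs(psi)**2 * x**2))

        print(f"classical spread after {t} steps: {classical_spread:4.1f} sites")
        print(f"quantum spread after time {t}:    {quantum_spread:4.1f} sites")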

    This efficient movement of excitons has one key requirement: The chromophores have to be arranged just right, with exactly the right amount of space between them. This, Lloyd explains, is known as the Quantum Goldilocks Effect.

    That’s where the virus comes in. By engineering a virus that Belcher has worked with for years, the team was able to get it to bond with multiple synthetic chromophores — or, in this case, organic dyes. The researchers were then able to produce many varieties of the virus, with slightly different spacings between those synthetic chromophores, and select the ones that performed best.

    In the end, they were able to more than double excitons’ speed, increasing the distance they traveled before dissipating — a significant improvement in the efficiency of the process.

    The project started at a workshop held at Eni’s laboratories in Novara, Italy. Lloyd and Belcher, a professor of biological engineering, were reporting on different projects they had worked on, and began discussing, along with Eni researchers, the possibility of a project encompassing their very different expertise. Lloyd, whose work is mostly theoretical, pointed out that the viruses Belcher works with have the right length scales to potentially support quantum effects.

    In 2008, Lloyd had published a paper demonstrating that photosynthetic organisms transmit light energy efficiently because of these quantum effects. When he saw Belcher’s report on her work with engineered viruses, he wondered if that might provide a way to artificially induce a similar effect, in an effort to approach nature’s efficiency.

    “I had been talking about potential systems you could use to demonstrate this effect, and Angela said, ‘We’re already making those,’” Lloyd recalls. Eventually, after much analysis, “We came up with design principles to redesign how the virus is capturing light, and get it to this quantum regime.”

    Within two weeks, Belcher’s team had created their first test version of the engineered virus. Many months of work then went into perfecting the receptors and the spacings.

    Once the team engineered the viruses, they were able to use laser spectroscopy and dynamical modeling to watch the light-harvesting process in action, and to demonstrate that the new viruses were indeed making use of quantum coherence to enhance the transport of excitons.

    “It was really fun,” Belcher says. “A group of us who spoke different [scientific] languages worked closely together, to both make this class of organisms, and analyze the data. That’s why I’m so excited by this.”

    While this initial result is essentially a proof of concept rather than a practical system, it points the way toward an approach that could lead to inexpensive and efficient solar cells or light-driven catalysis, the team says. So far, the engineered viruses collect and transport energy from incoming light, but do not yet harness it to produce power (as in solar cells) or molecules (as in photosynthesis). But this could be done by adding a reaction center, where such processing takes place, to the end of the virus where the excitons end up.

    “This is exciting and high-quality research,” says Alán Aspuru-Guzik, a professor of chemistry and chemical biology at Harvard University who was not involved in this work. The research, he says, “combines the work of a leader in theory (Lloyd) and a leader in experiment (Belcher) in a truly multidisciplinary and exciting combination that spans biology to physics to potentially, future technology.”

    “​Access to controllable excitonic systems is a goal shared by many researchers in the field,” Aspuru-Guzik adds. “This work provides fundamental understanding that can allow for the development of devices with an increased control of exciton flow.”

    The research was supported by Eni through the MIT Energy Initiative. In addition to MIT postdocs Nimrod Heldman and Patrick Rebentrost, the team included researchers at the University of Florence, the University of Perugia, and Eni.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     