Tagged: Quantum Mechanics

  • richardmitnick 3:38 pm on April 25, 2016 Permalink | Reply
    Tags: Quantum Mechanics

    From phys.org: “Scientists take next step towards observing quantum physics in real life” 


    April 25, 2016

    An artist’s impression of the membrane coupled to a laser beam. The periodic pattern makes the device highly reflective, while the thin tethers allow for ultra-low mechanical dissipation. Credit: Felix Fricke

    Small objects like electrons and atoms behave according to quantum mechanics, with quantum effects like superposition, entanglement and teleportation. One of the most intriguing questions in modern science is whether large objects – like a coffee cup – could also show this behavior. Scientists at the TU Delft have taken the next step towards observing quantum effects at everyday temperatures in large objects. They created a highly reflective membrane, visible to the naked eye, that can vibrate with hardly any energy loss at room temperature. The membrane is a promising candidate for researching quantum mechanics in large objects.

    The team has reported its results* in Physical Review Letters.


    “Imagine you’re given a single push on a playground swing. Now imagine this single push allows you to gleefully swing non-stop for nearly a decade. We have created a millimeter-sized version of such a swing on a silicon chip”, says prof. Simon Gröblacher of the Kavli Institute of Nanoscience at the TU Delft.

    Tensile stress

    “In order to do this, we deposit ultra-thin films of ceramic onto silicon chips. This allows us to engineer a million psi of tensile stress, which is the equivalent of 10,000 times the pressure in a car tire, into millimeter-sized suspended membranes that are only eight times thicker than the width of DNA”, explains dr. Richard Norte, lead author of the publication. “Their immense stored energies and ultra-thin geometry mean that these membranes can oscillate for tremendously long times by dissipating only small amounts of energy.”


    To efficiently monitor the motion of the membranes with a laser they need to be extremely reflective. In such a thin structure, this can only be achieved by creating a meta-material through etching a microscopic pattern into the membrane. “We actually made the thinnest super-mirrors ever created, with a reflectivity exceeding 99%. In fact, these membranes are also the world’s best force sensors at room temperature, as they are sensitive enough to measure the gravitational pull between two people 100 km apart from each other”, Richard Norte says.

    Room temperature

    “The high-reflectivity, in combination with the extreme isolation, allows us to overcome a major hurdle towards observing quantum physics with massive objects, for the first time, at room temperature”, says Gröblacher. Because even a single quantum of vibration is enough to heat up and destroy the fragile quantum nature of large objects (in a process called decoherence), researchers have relied on large cryogenic systems to cool and isolate their quantum devices from the heat present in our everyday environments. Creating massive quantum oscillators which are robust to decoherence at room temperature has remained an elusive feat for physicists.

    This is extremely interesting from a fundamental theoretical point of view. One of the strangest predictions of quantum mechanics is that things can be in two places at the same time. Such quantum ‘superpositions’ have now been clearly demonstrated for tiny objects such as electrons or atoms, where we now know that quantum theory works very well.

    Coffee cup

    But quantum mechanics also tells us that the same rules should apply to macroscopic objects: a coffee cup can be on the table and in the dishwasher at the same time, and Schrödinger’s cat can be in a quantum superposition of being dead and alive. This is, however, not something we see in our daily lives: the coffee cup is either clean or dirty, and the cat is either dead or alive. Experimentally demonstrating a proverbial cat that is simultaneously dead and alive at ambient temperatures remains an open challenge in quantum mechanics. The steps taken in this research might eventually allow us to observe ‘quantum cats’ at everyday scales and temperatures.

    *Science paper:
    Mechanical Resonators for Quantum Optomechanics Experiments at Room Temperature

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About Phys.org in 100 Words

    Phys.org™ (formerly Physorg.com) is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004, Phys.org’s readership has grown steadily to include 1.75 million scientists, researchers, and engineers every month. Phys.org publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quantcast 2009 includes Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

  • richardmitnick 4:58 pm on April 17, 2016 Permalink | Reply
    Tags: Quantum Mechanics

    From NOVA: “Can Quantum Computing Reveal the True Meaning of Quantum Mechanics?” 



    24 Jun 2015 [NOVA just put this up on social media.]
    Scott Aaronson

    Quantum mechanics says not merely that the world is probabilistic, but that it uses rules of probability that no science fiction writer would have had the imagination to invent. These rules involve complex numbers, called “amplitudes,” rather than just probabilities (which are real numbers between 0 and 1). As long as a physical object isn’t interacting with anything else, its state is a huge wave of these amplitudes, one for every configuration that the system could be found in upon measuring it. Left to itself, the wave of amplitudes evolves in a linear, deterministic way. But when you measure the object, you see some definite configuration, with a probability equal to the squared absolute value of its amplitude. The interaction with the measuring device “collapses” the object to whichever configuration you saw.
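    The rule just described can be sketched in a few lines of code (this illustration is mine, not the article's): a state is a list of complex amplitudes, and each configuration is measured with probability equal to the squared absolute value of its amplitude.

```python
import random

def measure(amplitudes):
    """Sample a configuration index according to the Born rule."""
    probs = [abs(a) ** 2 for a in amplitudes]
    r = random.uniform(0, sum(probs))  # sum should be 1 for a normalized state
    for i, p in enumerate(probs):
        r -= p
        if r <= 0:
            return i
    return len(probs) - 1

# An equal superposition of two configurations; the complex phase on the
# second amplitude changes nothing about the measurement probabilities.
state = [1 / 2 ** 0.5, 1j / 2 ** 0.5]
probs = [abs(a) ** 2 for a in state]
print(probs)  # each configuration appears with probability 0.5
```

    Note that the amplitudes themselves are complex, but every probability that comes out is an ordinary real number between 0 and 1.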

    Those, more or less, are the alien laws that explain everything from hydrogen atoms to lasers and transistors, and from which no hint of an experimental deviation has ever been found, from the 1920s until today. But could this really be how the universe operates? Is the “bedrock layer of reality” a giant wave of complex numbers encoding potentialities—until someone looks? And what do we mean by “looking,” anyway?

    Could quantum computing help reveal what the laws of quantum mechanics really mean? Adapted from an image by Flickr user Politropix under a Creative Commons license.

    There are different interpretive camps within quantum mechanics, which have squabbled with each other for generations, even though, by design, they all lead to the same predictions for any experiment that anyone can imagine doing. One interpretation is Many Worlds, which says that the different possible configurations of a system (when far enough apart) are literally parallel universes, with the “weight” of each universe given by its amplitude.

    Multiverse. Image credit: public domain, retrieved from https://pixabay.com/

    In this view, the whole concept of measurement—and of the amplitude waves collapsing on measurement—is a sort of illusion, playing no fundamental role in physics. All that ever happens is linear evolution of the entire universe’s amplitude wave—including a part that describes the atoms of your body, which (the math then demands) “splits” into parallel copies whenever you think you’re making a measurement. Each copy would perceive only itself and not the others. While this might surprise people, Many Worlds is seen by many (certainly by its proponents, who are growing in number) as the conservative option: the one that adds the least to the bare math.

    A second interpretation is Bohmian mechanics, which agrees with Many Worlds about the reality of the giant amplitude wave, but supplements it with a “true” configuration that a physical system is “really” in, regardless of whether or not anyone measures it. The amplitude wave pushes around the “true” configuration in a way that precisely matches the predictions of quantum mechanics. A third option is Niels Bohr’s original “Copenhagen Interpretation,” which says—but in many more words!—that the amplitude wave is just something in your head, a tool you use to make predictions. In this view, “reality” doesn’t even exist prior to your making a measurement of it—and if you don’t understand that, well, that just proves how mired you are in outdated classical ways of thinking, and how stubbornly you insist on asking illegitimate questions.

    But wait: if these interpretations (and others that I omitted) all lead to the same predictions, then how could we ever decide which one is right? More pointedly, does it even mean anything for one to be right and the others wrong, or are these just different flavors of optional verbal seasoning on the same mathematical meat? In his recent quantum mechanics textbook, the great physicist Steven Weinberg reviews the interpretive options, ultimately finding all of them wanting. He ends with the hope that new developments in physics will give us better options. But what could those new developments be?

    In the last few decades, the biggest new thing in quantum mechanics has been the field of quantum computing and information. The goal here, you might say, is to “put the giant amplitude wave to work”: rather than obsessing over its true nature, simply exploit it to do calculations faster than is possible classically, or to help with other information-processing tasks (like communication and encryption). The key insight behind quantum computing was articulated by Richard Feynman in 1982: to write down the state of n interacting particles, each of which could be in either of two states, quantum mechanics says you need 2^n amplitudes, one for every possible configuration of all n of the particles. Chemists and physicists have known for decades that this can make quantum systems prohibitively difficult to simulate on a classical computer, since 2^n grows so rapidly as a function of n.
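    Feynman's point can be made concrete with a back-of-the-envelope calculation (my own sketch; the 16-bytes-per-amplitude figure assumes double-precision complex numbers):

```python
for n in (10, 30, 50):
    amplitudes = 2 ** n                 # one amplitude per configuration
    gigabytes = amplitudes * 16 / 1e9   # 16 bytes per double-precision complex
    print(f"n={n}: 2^{n} = {amplitudes} amplitudes, ~{gigabytes:.3g} GB")
```

    Fifty two-state particles already demand roughly 18 petabytes of memory just to write the state down, which is why brute-force classical simulation breaks down so quickly.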

    But if so, then why not build computers that would themselves take advantage of giant amplitude waves? If nothing else, such computers could be useful for simulating quantum physics! What’s more, in 1994, Peter Shor discovered that such a machine would be useful for more than physical simulations: it could also be used to factor large numbers efficiently, and thereby break most of the cryptography currently used on the Internet. Genuinely useful quantum computers are still a ways away, but experimentalists have made dramatic progress, and have already demonstrated many of the basic building blocks.

    I should add that, for my money, the biggest application of quantum computers will be neither simulation nor codebreaking, but simply proving that this is possible at all! If you like, a useful quantum computer would be the most dramatic demonstration imaginable that our world really does need to be described by a gigantic amplitude wave, that there’s no way around that, no simpler classical reality behind the scenes. It would be the final nail in the coffin of the idea—which many of my colleagues still defend—that quantum mechanics, as currently understood, must be merely an approximation that works for a few particles at a time; and when systems get larger, some new principle must take over to stop the exponential explosion.

    But if quantum computers provide a new regime in which to probe quantum mechanics, that raises an even broader question: could the field of quantum computing somehow clear up the generations-old debate about the interpretation of quantum mechanics? Indeed, could it do that even before useful quantum computers are built?

    At one level, the answer seems like an obvious “no.” Quantum computing could be seen as “merely” a proposed application of quantum mechanics as that theory has existed in physics books for generations. So, to whatever extent all the interpretations make the same predictions, they also agree with each other about what a quantum computer would do. In particular, if quantum computers are built, you shouldn’t expect any of the interpretive camps I listed before to concede that its ideas were wrong. (More likely that each camp will claim its ideas were vindicated!)

    At another level, however, quantum computing makes certain aspects of quantum mechanics more salient—for example, the fact that it takes 2^n amplitudes to describe n particles—and so might make some interpretations seem more natural than others. Indeed that prospect, more than any application, is why quantum computing was invented in the first place. David Deutsch, who’s considered one of the two founders of quantum computing (along with Feynman), is a diehard proponent of the Many Worlds interpretation, and saw quantum computing as a way to convince the world (at least, this world!) of the truth of Many Worlds. Here’s how Deutsch put it in his 1997 book “The Fabric of Reality”:

    “Logically, the possibility of complex quantum computations adds nothing to a case [for the Many Worlds Interpretation] that is already unanswerable. But it does add psychological impact. With Shor’s algorithm, the argument has been writ very large. To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?”

    As you might imagine, not all researchers agree that a quantum computer would be “psychological evidence” for Many Worlds, or even that the two things have much to do with each other. Yes, some researchers reply, a quantum computer would take exponential resources to simulate classically (using any known algorithm), but all the interpretations agree about that. And more pointedly: thinking of the branches of a quantum computation as parallel universes might lead you to imagine that a quantum computer could solve hard problems in an instant, by simply “trying each possible solution in a different universe.” That is, indeed, how most popular articles explain quantum computing, but it’s also wrong!

    The issue is this: suppose you’re facing some arbitrary problem—like, say, the Traveling Salesman problem, of finding the shortest path that visits a collection of cities—that’s hard because of a combinatorial explosion of possible solutions. It’s easy to program your quantum computer to assign every possible solution an equal amplitude. At some point, however, you need to make a measurement, which returns a single answer. And if you haven’t done anything to boost the amplitude of the answer you want, then you’ll see merely a random answer—which, of course, you could’ve picked for yourself, with no quantum computer needed!

    For this reason, the only hope for a quantum-computing advantage comes from interference: the key aspect of amplitudes that has no classical counterpart, and indeed, that taught physicists that the world has to be described with amplitudes in the first place. Interference is customarily illustrated by the double-slit experiment, in which we shoot a photon at a screen with two slits in it, and then observe where the photon lands on a second screen behind it. What we find is that there are certain “dark patches” on the second screen where the photon never appears—and yet, if we close one of the slits, then the photon can appear in those patches. In other words, decreasing the number of ways for the photon to get somewhere can increase the probability that it gets there! According to quantum mechanics, the reason is that the amplitude for the photon to land somewhere can receive a positive contribution from the first slit, and a negative contribution from the second. In that case, if both slits are open, then the two contributions cancel each other out, and the photon never appears there at all. (Because the probability is the amplitude squared, both negative and positive amplitudes correspond to positive probabilities.)
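    The dark-patch arithmetic just described can be written out directly (the amplitude values here are illustrative, not taken from any real experiment):

```python
amp_slit1 = 0.5    # contribution to the amplitude via the first slit
amp_slit2 = -0.5   # contribution via the second slit (opposite sign)

both_open = abs(amp_slit1 + amp_slit2) ** 2   # amplitudes cancel: a dark patch
one_open  = abs(amp_slit1) ** 2               # closing a slit removes the cancellation

print(both_open)  # 0.0  -- with both slits open, the photon never lands here
print(one_open)   # 0.25 -- but it can if one slit is closed
```

    This is the sense in which removing a path can make an outcome more likely: probabilities are squares of sums of amplitudes, not sums of probabilities.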

    Likewise, when designing algorithms for quantum computers, the goal is always to choreograph things so that, for each wrong answer, some of the contributions to its amplitude are positive and others are negative, so on average they cancel out, leaving an amplitude close to zero. Meanwhile, the contributions to the right answer’s amplitude should reinforce each other (being, say, all positive, or all negative). If you can arrange this, then when you measure, you’ll see the right answer with high probability.

    It was precisely by orchestrating such a clever interference pattern that Peter Shor managed to devise his quantum algorithm for factoring large numbers. To do so, Shor had to exploit extremely specific properties of the factoring problem: it was not just a matter of “trying each possible divisor in a different parallel universe.” In fact, an important 1994 theorem of Bennett, Bernstein, Brassard, and Vazirani shows that what you might call the “naïve parallel-universe approach” never yields an exponential speed improvement. The naïve approach can reveal solutions in only the square root of the number of steps that a classical computer would need, an important phenomenon called the Grover speedup. But that square-root advantage turns out to be the limit: if you want to do better, then like Shor, you need to find something special about your problem that lets interference reveal its answer.
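    The square-root (Grover) speedup mentioned above is small enough to simulate classically. Below is a minimal sketch; the 8-item search space and the choice of marked index are my own illustrative assumptions:

```python
import math

N = 8                             # search space of size N = 2**3
marked = 5                        # the single "right answer"
state = [1 / math.sqrt(N)] * N    # start in an equal superposition

iterations = math.floor(math.pi / 4 * math.sqrt(N))  # ~sqrt(N) steps suffice
for _ in range(iterations):
    # Oracle: flip the sign of the marked item's amplitude.
    state[marked] = -state[marked]
    # Diffusion: reflect every amplitude about the mean. This is the
    # interference step; contributions to the wrong answers partially cancel.
    mean = sum(state) / N
    state = [2 * mean - a for a in state]

prob_marked = state[marked] ** 2
print(iterations, round(prob_marked, 3))  # after ~sqrt(N) steps, close to 1
```

    After only ~sqrt(N) iterations the marked item's probability is boosted close to 1, whereas random classical guessing would need ~N tries on average; the theorem cited above says no quantum algorithm can do better than this square root without exploiting problem structure.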

    What are the implications of these facts for Deutsch’s argument that only Many Worlds can explain how a quantum computer works? At the least, we should say that the “exponential cornucopia of parallel universes” almost always hides from us, revealing itself only in very special interference experiments where all the “universes” collaborate, rather than any one of them shouting above the rest. But one could go even further. One could say: To whatever extent the parallel universes do collaborate in a huge interference pattern to reveal (say) the factors of a number, to that extent they never had separate identities as “parallel universes” at all—even according to the Many Worlds interpretation! Rather, they were just one interfering, quantum-mechanical mush. And from a certain perspective, all the quantum computer did was to linearly transform the way in which we measured that mush, as if we were rotating it to see it from a more revealing angle. Conversely, whenever the branches do act like parallel universes, Many Worlds itself tells us that we only observe one of them—so from a strict empirical standpoint, we could treat the others (if we liked) as unrealized hypotheticals. That, at least, is the sort of reply a modern Copenhagenist might give, if she wanted to answer Deutsch’s argument on its own terms.

    There are other aspects of quantum information that seem more “Copenhagen-like” than “Many-Worlds-like”—or at least, for which thinking about “parallel universes” too naïvely could lead us astray. So for example, suppose Alice sends n quantum-mechanical bits (or qubits) to Bob, then Bob measures qubits in any way he likes. How many classical bits can Alice transmit to Bob that way? If you remember that n qubits require 2^n amplitudes to describe, you might conjecture that Alice could achieve an incredible information compression—“storing one bit in each parallel universe.” But alas, an important result called Holevo’s Theorem says that, because of the severe limitations on what Bob learns when he measures the qubits, such compression is impossible. In fact, by sending n qubits to Bob, Alice can reliably communicate only n bits (or 2n bits, if Alice and Bob shared quantum correlations in advance), essentially no better than if she’d sent the bits classically. So for this task, you might say, the amplitude wave acts more like “something in our heads” (as the Copenhagenists always said) than like “something out there in reality” (as the Many-Worlders say).

    But the Many-Worlders don’t need to take this lying down. They could respond, for example, by pointing to other, more specialized communication problems that Alice and Bob can provably solve using exponentially fewer qubits than classical bits. Here’s one example of such a problem, drawing on a 1999 theorem of Ran Raz and a 2010 theorem of Boaz Klartag and Oded Regev: Alice knows a vector in a high-dimensional space, while Bob knows two orthogonal subspaces. Promised that the vector lies in one of the two subspaces, Bob must figure out which one holds the vector. Quantumly, Alice can encode the components of her vector as amplitudes—in effect, squeezing n numbers into exponentially fewer qubits. And crucially, after receiving those qubits, Bob can measure them in a way that doesn’t reveal everything about Alice’s vector, but does reveal which subspace it lies in, which is the one thing Bob wanted to know.

    So, do the Many Worlds become “real” for these special problems, but retreat back to being artifacts of the math for ordinary information transmission?

    To my mind, one of the wisest replies came from the mathematician and quantum information theorist Boris Tsirelson, who said: “a quantum possibility is more real than a classical possibility, but less real than a classical reality.” In other words, this is a new ontological category, one that our pre-quantum intuitions simply don’t have a good slot for. From this perspective, the contribution of quantum computing is to delineate for which tasks the giant amplitude wave acts “real and Many-Worldish,” and for which other tasks it acts “formal and Copenhagenish.” Quantum computing can give both sides plenty of fresh ammunition, without handing an obvious victory to either.

    So then, is there any interpretation that flat-out doesn’t fare well under the lens of quantum computing? While some of my colleagues will strongly disagree, I’d put forward Bohmian mechanics as a candidate. Recall that David Bohm’s vision was of real particles, occupying definite positions in ordinary three-dimensional space, but which are jostled around by a giant amplitude wave in a way that perfectly reproduces the predictions of quantum mechanics. A key selling point of Bohm’s interpretation is that it restores the determinism of classical physics: all the uncertainty of measurement, we can say in his picture, arises from lack of knowledge of the initial conditions. I’d describe Bohm’s picture as striking and elegant—as long as we’re only talking about one or two particles at a time.

    But what happens if we try to apply Bohmian mechanics to a quantum computer—say, one that’s running Shor’s algorithm to factor a 10,000-digit number, using hundreds of thousands of particles? We can do that, but if we do, talking about the particles’ “real locations” will add spectacularly little insight. The amplitude wave, you might say, will be “doing all the real work,” with the “true” particle positions bouncing around like comically-irrelevant fluff. Nor, for that matter, will the bouncing be completely deterministic. The reason for this is technical: it has to do with the fact that, while particles’ positions in space are continuous, the 0’s and 1’s in a computer memory (which we might encode, for example, by the spins of the particles) are discrete. And one can prove that, if we want to reproduce the predictions of quantum mechanics for discrete systems, then we need to inject randomness at many times, rather than only at the beginning of the universe.

    But it gets worse. In 2005, I proved a theorem that says that, in any theory like Bohmian mechanics, if you wanted to calculate the entire trajectory of the “real” particles, you’d need to solve problems that are thought to be intractable even for quantum computers. One such problem is the so-called collision problem, where you’re given a cryptographic hash function (a function that maps a long message to a short “hash value”) and asked to find any two messages with the same hash. In 2002, I proved that, at least if you use the “naïve parallel-universe” approach, any quantum algorithm for the collision problem requires at least ~H^(1/5) steps, where H is the number of possible hash values. (This lower bound was subsequently improved to ~H^(1/3) by Yaoyun Shi, exactly matching an upper bound of Brassard, Høyer, and Tapp.) By contrast, if (with godlike superpower) you could somehow see the whole histories of Bohmian particles, you could solve the collision problem almost instantly.
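    For a feel of the collision problem, here is a purely classical birthday-style search (the toy hash and the parameters are my own illustration, not from the text): with H possible hash values, random guessing finds two colliding messages after roughly sqrt(H) tries, which is the classical baseline the quantum bounds above improve on.

```python
import random

H = 10_000            # number of possible hash values

def toy_hash(msg):
    return msg % H    # stand-in for a real cryptographic hash's short output

random.seed(1)        # fixed seed so the run is repeatable
seen = {}
tries = 0
while True:
    tries += 1
    msg = random.getrandbits(64)      # a random "message"
    h = toy_hash(msg)
    if h in seen and seen[h] != msg:
        break                         # two distinct messages, same hash value
    seen[h] = msg

print(tries)  # typically on the order of sqrt(H), i.e. about a hundred
```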

    What makes this interesting is that, if you ask to see the locations of Bohmian particles at any one time, you won’t find anything that you couldn’t have easily calculated with a standard, garden-variety quantum computer. It’s only when you ask for the particles’ locations at multiple times—a question that Bohmian mechanics answers, but that ordinary quantum mechanics rejects as meaningless—that you’re able to see multiple messages with the same hash, and thereby solve the collision problem.

    My conclusion is that, if you believe in the reality of Bohmian trajectories, you believe that Nature does even more computational work than a quantum computer could efficiently simulate—but then it hides the fruits of its labor where no one can ever observe it. Now, this sits uneasily with a principle that we might call “Occam’s Razor with Computational Aftershave.” Namely: In choosing a picture of physical reality, we should be loath to posit computational effort on Nature’s part that vastly exceeds what could ever in principle be observed. (Admittedly, some people would probably argue that the Many Worlds interpretation violates my “aftershave principle” even more flagrantly than Bohmian mechanics does! But that depends, in part, on what we count as “observation”: just our observations, or also the observations of any parallel-universe doppelgängers?)

    Could future discoveries in quantum computing theory settle once and for all, to every competent physicist’s satisfaction, “which interpretation is the true one”? To me, it seems much more likely that future insights will continue to do what the previous ones did: broaden our language, strip away irrelevancies, clarify the central issues, while still leaving plenty to argue about for people who like arguing. In the end, asking how quantum computing affects the interpretation of quantum mechanics is sort of like asking how classical computing affects the debate about whether the mind is a machine. In both cases, there was a range of philosophical positions that people defended before a technology came along, and most of those positions still have articulate defenders after the technology. So, by that standard, the technology can’t be said to have “resolved” much! Yet the technology is so striking that even the idea of it—let alone the thing itself—can shift the terms of the debate, which analogies people use in thinking about it, which possibilities they find natural and which contrived. This might, more generally, be the main way technology affects philosophy.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 9:11 am on April 17, 2016 Permalink | Reply
    Tags: Quantum Mechanics

    From Science Alert: “Physicists find a way to probe the quantum realm without wrecking everything” 


    Science Alert

    15 APR 2016


    In 1930, German theoretical physicist Werner Heisenberg came up with a thought experiment, now known as Heisenberg’s microscope, to try to show why it’s impossible to measure an atom’s location with unlimited precision. He imagined trying to measure the position of something like an atom by shooting light at it.

    Light travels as a wave, and Heisenberg knew that different wavelengths could give you different degrees of confidence when used to measure where something is in space. Short wavelengths can give a more precise measurement than long ones, so you’d want to use light with a tiny wavelength to measure where an atom is, since atoms are really small. But there’s a problem: light also carries momentum, and short wavelengths carry more momentum than long ones.

    That means if you use light with a short wavelength to find the atom, you’ll hit the atom with all of that momentum, and that kicks it around and risks completely changing its location (and other properties) in the process. Use longer wavelengths, and you’ll move the atom less, but you’ll also be more uncertain about your measurement.
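    The trade-off follows from de Broglie's relation p = h/wavelength: halving the wavelength doubles the momentum the photon delivers. A quick numerical check (my numbers, chosen for illustration):

```python
h = 6.626e-34  # Planck's constant in J*s

for wavelength_nm in (700, 400, 0.1):   # red light, violet light, an X-ray
    wavelength = wavelength_nm * 1e-9   # convert nanometers to meters
    momentum = h / wavelength           # shorter wavelength, bigger kick
    print(f"{wavelength_nm} nm -> {momentum:.2e} kg*m/s")
```

    An X-ray photon, precise enough to locate an atom, carries thousands of times the momentum of a visible-light photon, which is exactly the kick that disturbs the atom being measured.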

    You’re in a bind: any measurement changes what you’re measuring, and better measurements lead to bigger changes.

    It’s also possible to prepare atoms in what’s called an entangled state, which means they act cooperatively like a single atom, no matter how far away they are from each other. If you push one, the rest move like you pushed them all individually. And if you mess up one atom by shooting some light at it, you generally mess up the whole collection.

    In the past, these two effects made it impossible to measure how entangled atoms are arranged without destroying the arrangement and the entanglement – which were presumably prepared for some specific purpose, like to make a quantum computer.

    But now, physicists led by T. J. Elliott from the University of Oxford in the UK have proposed a way to measure large-scale properties of a group of entangled atoms without messing up the entanglement. It isn’t measuring individual atoms – that’s permanently off-limits – but it’s more than physicists have managed to do before.

    Usually, when physicists entangle atoms, they have to be careful that the atoms are all more or less the same when they start out. If there are lots of different kinds of atoms in there, they become a lot harder to match up, so the entanglement becomes more fragile.

    But it’s still possible to make stable groups of entangled atoms that have some outliers among them that are unlike the main group, and the paper’s authors have shown that these outliers can be used to measure things about the main group without messing up their entanglement.

    This includes really basic information like the density of atoms – how close they are to one another – while they’re entangled, which historically has been out of physicists’ reach in individual experiments.

    Before, physicists had to measure a whole bunch of the entangled atoms really quickly, and they’d have to accept that they were changing things around as soon as they measured that first atom. More measurements might check more atoms, but they’d be increasingly uncertain as time went on.

    Now, all they have to do is measure what the outliers are doing and they can figure out how the atoms are distributed without the chaos. Within some limits, knowledge about the atoms’ density gets better – not worse – as more measurements are made.

    Admittedly, measurements still change things a little bit (light is still being used and Heisenberg’s microscope still applies) but the measurements won’t wreck the whole system like they would have before.

    This method of measuring the outliers is a window into a new realm for physicists, who could previously only see what entangled atoms did, not what they’re doing.

    The researchers simulated a simple system as a proof-of-concept, but they showed mathematically that this should work with a wide range of quantum systems where entanglement plays a key role. And small changes to the method could make it possible to measure properties like the magnetisation of entangled atoms, instead of just their density.

    All with atoms that shouldn’t be in the group in the first place. Not bad, physicists. Not bad.

    The research has been published in the journal Physical Review A.

    Science paper:
    Nondestructive probing of means, variances, and correlations of ultracold-atomic-system densities via qubit impurities

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 8:40 pm on March 22, 2016 Permalink | Reply
    Tags: , , Quantum Mechanics,   

    From Ethan Siegel: “10 Quantum Truths About Our Universe” 

    Starts With A Bang

    Sabine Hossenfelder

    Hydrogen wave function. Image credit: Wikimedia Commons user PoorLeno.

    Even most of the pros don’t know all 10.

    This post was contributed to Starts With A Bang by Sabine Hossenfelder. Sabine is a theoretical physicist specializing in quantum gravity and high-energy physics. She also writes about science as a freelancer.

    “In fact, the mere act of opening the box will determine the state of the cat, although in this case there were three determinate states the cat could be in: these being Alive, Dead, and Bloody Furious.” -Terry Pratchett

    From the moment that it was discovered that the macroscopic, classical rules that governed electricity, magnetism and light didn’t necessarily apply to the smallest, subatomic scales, a whole new view of the Universe became accessible to humanity. This quantum picture is much larger and all-encompassing than most people realize, including many professionals. Here are ten essentials of quantum mechanics that may cause you to re-examine how you picture our Universe, on the smallest scales and beyond.

    1.) Everything is quantum.

    It’s not like some things are quantum mechanical and others are not. Everything obeys the same laws of quantum mechanics — it’s just that quantum effects of large objects are very hard to notice. This is why quantum mechanics was a latecomer to the development of theoretical physics: it wasn’t until physicists had to explain why electrons sit on shells around the atomic nucleus that quantum mechanics became necessary to make accurate predictions.

    2.) Quantization doesn’t necessarily imply discreteness.

    “Quanta” are discrete chunks, by definition, but not everything becomes chunky or indivisible on short scales. Electromagnetic waves are made of quanta called “photons,” so the waves can be thought of as being discretized. And electron shells around the atomic nucleus can only have certain discrete radii. But other particle properties do not become discrete even in a quantum theory. The position of electrons in the conduction band of a metal, for example, is not discrete — the electron can occupy any continuous location within the band. And the energy values of the photons that make up electromagnetic waves are not discrete either. For this reason, quantizing gravity — should we finally succeed at it — does not necessarily mean that space and time have to be made discrete. (But, on the other hand, they might be.)
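The discrete shell radii can be illustrated with the textbook Bohr-model formula r_n = n²·a₀ (a standard sketch, not something the article derives):

```python
# Illustrative only: in the Bohr model the allowed electron-shell radii are
# discrete, r_n = n^2 * a0, while e.g. a conduction electron's position
# remains a continuous variable.
A0 = 5.29177e-11  # Bohr radius, metres

def shell_radius(n):
    """Radius of the n-th Bohr shell; only integer n >= 1 is allowed."""
    if n < 1 or int(n) != n:
        raise ValueError("shells exist only for integer n >= 1")
    return n**2 * A0

print([shell_radius(n) for n in (1, 2, 3)])  # discrete: a0, 4*a0, 9*a0
```

There is no shell at n = 1.5; the discreteness is in the allowed values, not in space itself.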

    3.) Entanglement not the same as superposition.

    A quantum superposition is the ability of a system to be in two different states at the same time, and yet, when measured, one always finds a particular state, never a superposition. Entanglement, on the other hand, is a correlation between two or more parts of a system — something entirely different. Superpositions are not fundamental: whether a state is or isn’t a superposition depends on what you want to measure. A state can, for example, be in a superposition of positions and not in a superposition of momenta — so the whole concept is ambiguous. Entanglement, on the other hand, is unambiguous: it is an intrinsic property of each system and the best measure of a system’s quantum-ness known so far. (For more details, read What is the difference between entanglement and superposition?)
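The distinction can be checked in a toy calculation: the purity of one qubit's reduced state equals 1 for any product state (even a heavily superposed one) and drops below 1 only when the pair is entangled. A minimal sketch with real amplitudes (the states chosen are our examples):

```python
# Illustrative only: purity Tr(rho_A^2) of one qubit's reduced state
# distinguishes a (superposed but unentangled) product state from a Bell state.
import math

def reduced_purity(state4):
    """Purity of qubit A for a two-qubit pure state given as 4 real amplitudes."""
    psi = [state4[0:2], state4[2:4]]  # amplitudes as a 2x2 array: psi[a][b]
    # Reduced density matrix of qubit A: trace out qubit B.
    rho = [[sum(psi[i][k] * psi[j][k] for k in range(2)) for j in range(2)]
           for i in range(2)]
    return sum(rho[i][j] * rho[j][i] for i in range(2) for j in range(2))

s = 1 / math.sqrt(2)
product = [0.5, 0.5, 0.5, 0.5]   # |+>|+>: each qubit superposed, NOT entangled
bell = [s, 0.0, 0.0, s]          # (|00> + |11>)/sqrt(2): maximally entangled

print(reduced_purity(product))   # 1.0 -> pure reduced state, no entanglement
print(reduced_purity(bell))      # 0.5 -> maximally mixed, maximal entanglement
```

Changing the measurement basis rotates the amplitudes but leaves the purity unchanged, which is the basis-independence described above.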

    A beam splitter, one mechanism for creating entangled photons. Image credit: Wikimedia Commons user Zaereth.

    4.) There is no spooky action at a distance.

    Nowhere in quantum mechanics is information ever transmitted non-locally, so that it jumps over a stretch of space without having to go through all places in between. Entanglement is itself non-local, but it doesn’t do any action — it is a correlation that is not connected to non-local transfer of information or any other observable. When you see a study where two entangled photons are separated by a great distance and then the spin of each one is measured, there is no information being transferred faster than the speed of light. In fact, if you attempt to bring the results of two observations together (which is information transmission), that information can only travel at the speed of light, no faster! What constitutes “information” was a great source of confusion in the early days of quantum mechanics, but we know today that the theory can be made perfectly compatible with Einstein’s theory of Special Relativity, in which information cannot be transferred faster than the speed of light.

    A quantum optics setup. Image credit: Matthew Broome.

    5.) Quantum physics an active research area.

    It’s not like quantum mechanics is yesterday’s news. True, the theory originated more than a century ago. But many aspects of it became testable only with modern technology. Quantum optics, quantum information, quantum computing, quantum cryptography, quantum thermodynamics, and quantum metrology are all recently formed and presently very active research areas. With the new capabilities brought about by these technologies, interest in the foundations of quantum mechanics has been reignited.

    6.) Einstein didn’t deny it.

    Contrary to popular opinion, Einstein was not a quantum mechanics denier. He couldn’t possibly be — the theory was so successful early on that no serious scientist could dismiss it. (In fact, it was his Nobel-winning discovery of the photoelectric effect, proving that photons acted as particles as well as waves, that was one of the foundational discoveries of quantum mechanics.) Einstein instead argued that the theory was incomplete, and believed the inherent randomness of quantum processes must have a deeper explanation. It was not that he thought the randomness was wrong, he just thought that this wasn’t the end of the story. For an excellent clarification of Einstein’s views on quantum mechanics, I recommend George Musser’s article What Einstein Really Thought about Quantum Mechanics (paywalled, sorry).

    7.) It’s all about uncertainty.

    The central postulate of quantum mechanics is that there are pairs of observables that cannot be simultaneously measured, like for example the position and momentum of a particle. These pairs are called “conjugate variables,” and the impossibility of measuring both their values precisely is what makes all the difference between a quantized and a non-quantized theory. In quantum mechanics, this uncertainty is fundamental, not due to experimental shortcomings. One of the most bizarre manifestations of this is the uncertainty between energy and time, which means that unstable particles (with a short lifetime) have inherently uncertain masses, thanks to Einstein’s E = mc². Particles like the Higgs boson, the W and Z bosons, and the top quark all have masses that are intrinsically uncertain by 1–10% because of their short lifetimes.
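The lifetime–width connection Γ = ħ/τ can be checked against the Z resonance mentioned in the figure caption below; the rounded constants are standard values, and the arithmetic is illustrative:

```python
# Illustrative only: energy-time uncertainty ties a particle's lifetime to its
# mass (energy) width via Gamma = hbar / tau. Rounded numbers for the Z boson.
HBAR_GEV_S = 6.582e-25  # hbar in GeV*s

z_mass_gev = 91.19      # Z boson mass
z_width_gev = 2.495     # measured width of the Z resonance

lifetime_s = HBAR_GEV_S / z_width_gev          # tau = hbar / Gamma
relative_uncertainty = z_width_gev / z_mass_gev

print(f"Z lifetime: {lifetime_s:.1e} s")               # ~2.6e-25 s
print(f"mass uncertainty: {relative_uncertainty:.1%}") # ~2.7%
```

A ~10⁻²⁵ s lifetime translates into a mass that is intrinsically uncertain at the few-percent level, exactly the "width" visible in the resonance plot.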

    CERN ATLAS Higgs Event

    Image credit: the LEP collaboration and various sub-collaborations, 2005, via http://arxiv.org/abs/hep-ex/0509008. Precision Electroweak Measurements on the Z Resonance. Note that the Z-particle appears with a “width” in energy.

    8.) Quantum effects are not necessarily small…

    We do not normally observe quantum effects over long distances because the necessary correlations are very fragile. Treat them carefully enough, however, and quantum effects can persist over long distances. Photons, for example, have been entangled over separations of several hundred kilometers. In Bose-Einstein condensates, a degenerate state of matter found at very cold temperatures, up to several million atoms have been brought into one coherent quantum state. And finally, some researchers even believe that dark matter may have quantum effects which span entire galaxies.

    9.) …but they dominate the small scales.

    In quantum mechanics, every particle is also a wave and every wave is also a particle. The effects of quantum mechanics become very pronounced once one observes a particle on distances that are comparable to the associated wavelength. This is why atomic and subatomic physics cannot be understood without quantum mechanics, whereas planetary orbits are effectively unchanged by quantum behavior.

    10.) Schrödinger’s cat is dead. Or alive. But not both.

    It was not well understood in the early days of quantum mechanics, but the quantum behavior of macroscopic objects decays very rapidly. This “decoherence” is due to constant interactions with the environment which are, in relatively warm and dense places like those necessary for life, impossible to avoid. This explains why what we think of as a measurement doesn’t require a human; simply interacting with the environment counts. It also explains why bringing large objects into superpositions of two different states is extremely difficult, and why the superposition fades rapidly. The heaviest object that has so far been brought into a superposition of locations is a carbon-60 molecule, while the more ambitious have proposed doing this experiment for viruses or even heavier creatures like bacteria. Thus, the paradox that Schrödinger’s cat once raised — the transfer of a quantum superposition (the decaying atom) to a large object (the cat) — has been resolved. We now understand that while small things like atoms can exist in superpositions for extended amounts of time, a large object would settle extremely rapidly into one particular state. That’s why we never see cats that are both dead and alive.


    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 9:59 am on February 22, 2016 Permalink | Reply
    Tags: , , Quantum Mechanics   

    From COSMOS: “A different picture of quantum surrealism” 



    22 Feb 2016
    Cathal O’Connell


    New research supports an old, more intuitive theory of how sub-atomic particles behave. Cathal O’Connell explains.

    With its ideas of particles zipping in and out of existence, quantum mechanics is probably the kookiest-sounding theory in science. And our understanding of it is little helped by the mysterious “probability fields” most physicists say dictate the zipping.

    But a more intuitive picture may lie beneath. As new research demonstrates, beneath the shroud of probability, particles can in fact be viewed as behaving like billiard balls rolling along a table – although in surreal fashion.

    The result helps resurrect an 80-year-old picture of quantum mechanics, and provides one of the most stirring demonstrations yet of an effect [Albert] Einstein called “spooky action at a distance”.

    The work, reported in Science Advances, is a new version of the most famous experiment in quantum mechanics, in which particles of light, called photons, are fired at two slits before being detected on a screen.

    Hog-tied by Heisenberg’s uncertainty principle, for decades physicists thought they could never know which slit a particular photon went through – any attempted measurement stops it in its tracks.

    But in 2011, physicist Aephraim Steinberg at the University of Toronto achieved the seemingly impossible by tracking the trajectories of photons using a series of “weak” measurements, gentle enough not to disturb their position.

    This method showed trajectories that looked similar to classical ones – like those of balls flying through the air.

    Although it was a seemingly outstanding result, some physicists were not convinced, highlighting the experiment’s inability to deal with entanglement (where two particles, in this case photons, are intimately connected so that measurement on one instantly affects the other, no matter how far away it is).

    The critics pointed out that doing the same experiment with two entangled photons would lead to a contradiction – such as the photon’s trajectory being measured as going through the top slit, but the photon itself hitting the bottom of the detector (as if it came from the bottom slit). They coined the term “surreal trajectories” to describe this result.

    Now Steinberg’s team has achieved the experiment for entangled photons, and shown how the surreal behaviour is caused by the “spooky” influence of the other particle.

    The team first entangled two photons, then sent one of the pair through the regular two-slit apparatus, and the other through an apparatus that monitored polarisation – the plane the light waves are travelling in.

    Weirdly, the choice made by the experimenters in how to measure the polarisation determined which slit the first photon went through – as if interfering with one particle caused the other to change direction instantaneously.

    This kind of bizarre phenomenon is exactly what Einstein had in mind when he dubbed it “spooky action”. Physicists have seen evidence of it before, but never in such a direct fashion.

    The results bolster a non-standard interpretation of quantum mechanics, which throws out the notion of abstract probability fields altogether.

    First put forward by Louis de Broglie in 1927, the interpretation treats quantum objects just like classical particles, but imagines them riding like a surfer on top of a so-called pilot wave.

    The wave is still probabilistic, but the particle does take a real trajectory from source to target.

    The new work does not disprove the standard “probabilistic” view of quantum mechanics, but it does highlight that the pilot-wave interpretation is perfectly valid too. That is “something that’s not recognised by a large part of the physics community”, says Howard Wiseman, a physicist at Griffith University who proposed the experiment.

    It may be easier to visualise real trajectories, rather than abstract wave function collapses.

    “I would phrase it in terms of having different pictures,” says Steinberg. “Different pictures can be useful. They can help shape better intuitions.”


  • richardmitnick 11:04 am on January 31, 2016 Permalink | Reply
    Tags: , Bose-Einstein-condensates, Quantum Mechanics, Quantum randomness, Techniche Universitat Wein (Vienna)   

    From TUW: “Solving Hard Quantum Problems: Everything is Connected” 

    Technische Universität Wien (Vienna)

    Florian Aigner

    Further Information:
    Dr. Kaspar Sakmann
    Institute for Atomic and Subatomic Physics
    TU Wien
    Stadionallee 2, 1020 Vienna, Austria
    T: +43-1-58801-141889

    Bose-Einstein condensates making waves: a many-particle phenomenon

    Quantum systems are extremely hard to analyse if they consist of more than just a few parts. It is not difficult to calculate a single hydrogen atom, but in order to describe an atom cloud of several thousand atoms, it is usually necessary to use rough approximations. The reason for this is that quantum particles are connected to each other and cannot be described separately. Kaspar Sakmann (TU Wien, Vienna) and Mark Kasevich (Stanford, USA) have now shown in an article published in Nature Physics that this problem can be overcome. They succeeded in calculating effects in ultra-cold atom clouds which can only be explained in terms of the quantum correlations between many atoms. Such atom clouds are known as Bose-Einstein condensates and are an active field of research.

    Quantum Correlations

    Quantum physics is a game of luck and randomness. Initially, the atoms in a cold atom cloud do not have a predetermined position. Much like a die whirling through the air, where the number is yet to be determined, the atoms are located at all possible positions at the same time. Only when they are measured are their positions fixed. “We shine light on the atom cloud, which is then absorbed by the atoms”, says Kaspar Sakmann. “The atoms are photographed, and this is what determines their position. The result is completely random.”

    There is, however, an important difference between quantum randomness and a game of dice: if different dice are thrown at the same time, they can be seen as independent from each other. Whether or not we roll a six with die number one does not influence the result of die number seven. The atoms in the atom cloud on the other hand are quantum physically connected. It does not make sense to analyse them individually, they are one big quantum object. Therefore, the result of every position measurement of any atom depends on the positions of all the other atoms in a mathematically complicated way.

    “It is not hard to determine the probability that a particle will be found at a specific position”, says Kaspar Sakmann. “The probability is highest in the centre of the cloud and gradually diminishes towards the outer fringes.” In a classically random system, this would be all the information that is needed. If we know that in a dice roll, any number has the probability of one sixth, then we can also determine the probability of rolling three ones with three dice. Even if we roll five ones consecutively, the probability remains the same the next time. With quantum particles, it is more complicated than that.

    “We solve this problem step by step”, says Sakmann. “First we calculate the probability of the first particle being measured on a certain position. The probability distribution of the second particle depends on where the first particle has been found. The position of the third particle depends on the first two, and so on.” In order to be able to describe the position of the very last particle, all the other positions have to be known. This kind of quantum entanglement makes the problem mathematically extremely challenging.
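The step-by-step conditioning Sakmann describes can be caricatured with classical correlated random numbers. This toy sampler (the Gaussian model and the correlation value are our assumptions, not the paper's actual method) only illustrates how each draw depends on the draws before it:

```python
# Illustrative only: draw particle 1 from its marginal distribution, then each
# later particle from a distribution conditioned on the previous one -- a
# classical caricature of the sequential sampling described in the article.
import random

def sample_positions(n_particles, correlation=0.8, seed=1):
    random.seed(seed)
    positions = []
    for _ in range(n_particles):
        if not positions:
            x = random.gauss(0.0, 1.0)  # marginal for the first particle
        else:
            # Conditional mean pulled toward the previous particle's position;
            # the conditional spread shrinks as the correlation grows.
            mean = correlation * positions[-1]
            x = random.gauss(mean, (1 - correlation**2) ** 0.5)
        positions.append(x)
    return positions

print(sample_positions(5))
```

In the real quantum problem the conditional distributions come from the full many-body wavefunction, which is what makes the calculation so demanding.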

    Only Correlations Can Explain the Experimental Data

    But these correlations between many particles are extremely important – for example for calculating the behaviour of colliding Bose-Einstein-condensates. “The experiment shows that such collisions can lead to a special kind of quantum waves. On certain positions we find many particles, on an adjacent position we do not find any”, says Kaspar Sakmann. “If we consider the atoms separately, this cannot be explained. Only if we take the full quantum distribution into account, with all its higher correlations, these waves can be reproduced by our calculations.”

    Other phenomena have also been calculated with the same method, for instance Bose-Einstein condensates which are stirred with a laser beam so that little vortices emerge – another typical quantum many-particle effect. “Our results show how important these correlations are and that it is possible to include them in quantum calculations, in spite of all mathematical difficulties”, says Sakmann. With certain modifications, the approach can be expected to be useful for many other quantum systems as well.

    Original paper: http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys3631.html


    Technische Universität Wien (Vienna) campus

    Our mission is “technology for people”. Through our research we “develop scientific excellence”; through our teaching we “enhance comprehensive competence”.

    TU Wien (TUW) is located in the heart of Europe, in a cosmopolitan city of great cultural diversity. For nearly 200 years, TU Wien has been a place of research, teaching and learning in the service of progress. TU Wien is among the most successful technical universities in Europe and is Austria’s largest scientific-technical research and educational institution.

  • richardmitnick 6:27 pm on January 7, 2016 Permalink | Reply
    Tags: , , Quantum Mechanics, ,   

    From Physics Today: “Three groups close the loopholes in tests of Bell’s theorem” 

    Physics Today

    January 2016, page 14
    Johanna L. Miller

    Until now, the quintessential demonstration of quantum entanglement has required extra assumptions.

    The predictions of quantum mechanics are often difficult to reconcile with intuitions about the classical world. Whereas classical particles have well-defined positions and momenta, quantum wavefunctions give only the probability distributions of those quantities. What’s more, quantum theory posits that when two systems are entangled, a measurement on one instantly changes the wavefunction of the other, no matter how distant.

    Might those counterintuitive effects be illusory? Perhaps quantum theory could be supplemented by a system of hidden variables that restore local realism, so every measurement’s outcome depends only on events in its past light cone. In a 1964 theorem John Bell showed that the question is not merely philosophical: By looking at the correlations in a series of measurements on widely separated systems, one can distinguish quantum mechanics from any local-realist theory. (See the article by Reinhold Bertlmann, Physics Today, July 2015, page 40.) Such Bell tests in the laboratory have come down on the side of quantum mechanics. But until recently, their experimental limitations have left open two important loopholes that require additional assumptions to definitively rule out local realism.

    Now three groups have reported experiments that close both loopholes simultaneously. First, Ronald Hanson, Bas Hensen (both pictured in figure 1), and their colleagues at Delft University of Technology performed a loophole-free Bell test using a novel entanglement-swapping scheme [1]. More recently, two groups—one led by Sae Woo Nam and Krister Shalm of NIST [2], the other by Anton Zeilinger and Marissa Giustina of the University of Vienna [3]—used a more conventional setup with pairs of entangled photons generated at a central source.

    Figure 1. Bas Hensen (left) and Ronald Hanson in one of the three labs they used for their Bell test. FRANK AUPERLE

    The results fulfill a long-standing goal, not so much to squelch any remaining doubts that quantum mechanics is real and complete, but to develop new capabilities in quantum information and security. A loophole-free Bell test demonstrates not only that particles can be entangled at all but also that a particular source of entangled particles is working as intended and hasn’t been tampered with. Applications include perfectly secure quantum key distribution and unhackable sources of truly random numbers.

    In a typical Bell test trial, Alice and Bob each possess one of a pair of entangled particles, such as polarization-entangled photons or spin-entangled electrons. Each of them makes a random and independent choice of a basis—a direction in which to measure the particle’s polarization or spin—and performs the corresponding measurement. Under quantum mechanics, the results of Alice’s and Bob’s measurements over repeated trials can be highly correlated—even though their individual outcomes can’t be foreknown. In contrast, local-realist theories posit that only local variables, such as the state of the particle, can influence the outcome of a measurement. Under any such theory, the correlation between Alice’s and Bob’s measurements is much less.
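The gap between the two predictions is usually quantified with the CHSH combination of four correlations. For the quantum prediction E(a, b) = −cos(a − b) of a spin-entangled pair, the optimal angle choices give S = 2√2 ≈ 2.83, beyond the local-realist bound of 2. A sketch:

```python
# Illustrative only: the CHSH combination of correlations at the optimal
# measurement angles. Local-realist theories satisfy S <= 2; quantum mechanics
# predicts S = 2*sqrt(2).
import math

def E(a, b):
    """Quantum correlation for entangled spins measured at angles a, b (radians)."""
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2              # Alice's two basis choices
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two basis choices

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # ~2.828 > 2: no local-realist theory can reproduce these correlations
```

An experiment that measures S significantly above 2, with both loopholes closed, is exactly what the three groups achieved.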

    But what if some hidden signal informs Bob’s experiment about Alice’s choice of basis, or vice versa? If such a signal can change the state of Bob’s particle, it can create quantum-like correlations in a system without actual quantum entanglement. That possibility is at the heart of the so-called locality loophole. The loophole can be closed by arranging the experiment, as shown in figure 2, so that no light-speed signal with information about Alice’s choice of basis can reach Bob until after his measurement is complete.

    Figure 2. The locality loophole arises from the possibility that hidden signals between Alice and Bob can influence the results of their measurements. This space–time diagram represents an entangled-photon experiment for which the loophole is closed. The diagonal lines denote light-speed trajectories: The paths of the entangled photons are shown in red, and the forward light cones of the measurement-basis choices are shown in blue. Note that Bob cannot receive information about Alice’s chosen basis until after his measurement is complete, and vice versa.

    In practice, under that arrangement, for Alice and Bob to have enough time to choose their bases and make their measurements, they must be positioned at least tens of meters apart. That requirement typically means that the experiments are done with entangled photons, which can be transported over such distances without much damage to their quantum state. But the inefficiencies in handling and detecting single photons introduce another loophole, called the fair-sampling or detection loophole: If too many trials go undetected by Alice, Bob, or both, it’s possible for the detected trials to display quantum-like correlations even when the set of all trials does not.

    In Bell tests that are implemented honestly, there’s little reason to think that the detected trials are anything other than a representative sample of all trials. But one can exploit the detection loophole to fool the test on purpose by causing trials to go undetected for reasons other than random chance. For example, manifestly classical states of light can mimic single photons in one basis but go entirely undetected in another (see Physics Today, December 2011, page 20). Furthermore, similar tricks can be used for hacking quantum cryptography systems. The only way to guarantee that a hacker is not present is to close the loopholes.

    Instead of the usual entangled photons, the Delft group based their experiment on entangled diamond nitrogen–vacancy (NV) centers, electron spins associated with point defects in the diamond’s crystal lattice and prized for their long quantum coherence times. The scheme is sketched in figure 3: Each NV center is first entangled with a photon, then the photons are sent to a central location and jointly measured. A successful joint measurement, which transfers the entanglement to the two NV centers, signals Alice and Bob that the Bell test trial is ready to proceed.

    Figure 3. Entanglement swapping between diamond nitrogen–vacancy (NV) centers. Alice and Bob entangle their NV spins with photons, then transmit the photons to a central location to be jointly measured. After a successful joint measurement, which signals that the NV spins are entangled with each other, each spin is measured in a basis chosen by a random-number generator (RNG). (Adapted from ref. 1.)

    In 2013 the team carried out a version of that experiment [4] with the NV spins separated by 3 m. “It was at that moment,” says Hanson, “that I realized that we could do a loophole-free Bell test—and also that we could be the first.” A 3-m separation is not enough to close the locality loophole, so the researchers set about relocating the NV-center equipment to two separate labs 1.3 km apart and fiber-optically linking them to the joint-measurement apparatus at a third lab in between.

    A crucial aspect of the entanglement-swapping scheme is that the Bell test trial doesn’t begin until the joint measurement is made. As far as the detection loophole is concerned, attempted trials without a successful joint measurement don’t count. That’s fortunate, because the joint measurement succeeds in just one out of every 156 million attempts—a little more than once per hour.

    That inefficiency stems from two main sources. First, the initial spin–photon entanglement succeeds just 3% of the time at each end. Second, photon loss in the optical fibers is substantial: The photons entangled with the NV centers have a wavelength of 637 nm, well outside the so-called telecom band, 1520–1610 nm, where optical fibers work best. In contrast, once the NV spins are entangled, they can be measured efficiently and accurately. So of the Bell test trials that the researchers are able to perform, none are lost to nondetection.
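The quoted numbers can be roughly reconciled with some back-of-the-envelope bookkeeping. Only the 3% figure, the 1-in-156-million success probability, and the once-per-hour rate come from the article; the breakdown of the residual losses is our inference:

```python
# Illustrative only: rough bookkeeping for the Delft success rate. The 3%
# spin-photon entanglement on each side already costs a factor of ~1000; the
# remaining ~1e-5 is attributed here to fiber loss and the joint measurement
# (that split is our assumption, not stated in the article).
per_side_entanglement = 0.03
both_sides = per_side_entanglement ** 2       # 9e-4: both sides must succeed

overall = 1 / 156e6                           # quoted success probability
residual_losses = overall / both_sides        # what optics and detection cost

attempts_per_second = 156e6 / 3600            # "once per hour" implies this rate
print(f"residual losses: {residual_losses:.1e}")   # ~7e-6
print(f"{attempts_per_second:.0f} attempts/s")     # ~43000 attempts per second
```

The implied tens of thousands of attempts per second shows why the scheme remains practical despite its tiny per-attempt success probability.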

    Early in the summer of 2015, Hanson and colleagues ran their experiment for 220 hours over 18 days and obtained 245 useful trials. They saw clear evidence of quantum correlations—although with so few trials, the likelihood of a nonquantum system producing the same correlations by chance is as much as 4%.

    The Delft researchers are working on improving their system by converting their photons into the telecom band. Hanson estimates that they could then extend the separation between the NV centers from 1.3 km up to 100 km. That distance makes feasible a number of quantum network applications, such as quantum key distribution.

    In quantum key distribution—as in a Bell test—Alice and Bob perform independently chosen measurements on a series of entangled particles. On trials for which Alice and Bob have fortuitously chosen to measure their particles in the same basis, their results are perfectly correlated. By conferring publicly to determine which trials those were, then looking privately at their measurement results for those trials, they can obtain a secret string of ones and zeros that only they know. (See article by Daniel Gottesman and Hoi-Kwong Lo, Physics Today, November 2000, page 22.)
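The sifting step can be sketched with a toy simulation. The basis labels and the perfect-correlation assumption below are illustrative simplifications of the protocol described above:

```python
# Illustrative only: sifting in entanglement-based key distribution. On trials
# where Alice and Bob happened to pick the same basis, their (here assumed
# perfectly correlated) outcomes become shared secret key bits.
import random

def sift_key(n_trials, seed=7):
    random.seed(seed)
    key = []
    for _ in range(n_trials):
        basis_a = random.choice("XZ")    # Alice's independent basis choice
        basis_b = random.choice("XZ")    # Bob's independent basis choice
        outcome = random.randint(0, 1)   # shared outcome on matching-basis trials
        if basis_a == basis_b:           # bases are announced publicly...
            key.append(outcome)          # ...but the outcomes stay secret
    return key

key = sift_key(1000)
print(len(key))  # about half the trials survive sifting
```

On average half the trials use mismatched bases and are discarded; the survivors form the shared key.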

    The NIST and Vienna groups both performed their experiments with photons, and both used single-photon detectors developed by Nam and his NIST colleagues. The Vienna group used so-called transition-edge sensors that are more than 98% efficient;⁵ the NIST group used superconducting nanowire single-photon detectors (SNSPDs), which are not as efficient but have far better timing resolution. Previous SNSPDs had been limited to 70% efficiency at telecom wavelengths—in part because the polycrystalline superconductor of choice doesn’t couple well to other optical elements. By switching to an amorphous superconducting material, Nam and company increased the detection efficiency to more than 90%.⁶

    Shalm realized that the new SNSPDs might be good enough for a loophole-free Bell test. “We had the detectors that worked at telecom wavelengths, so we had to generate entangled photons at the same wavelengths,” he says. “That was a big engineering challenge.” Another challenge was to boost the efficiency of the mazes of optics that carry the entangled photons from the source to the detector. “Normally, every time photons enter or exit an optical fiber, the coupling is only about 80% efficient,” explains Shalm. “We needed to get that up to 95%. We were worrying about every quarter of a percent.”
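    Those per-coupling losses compound quickly, which is why every quarter of a percent mattered. Assuming, purely for illustration, ten fiber entries and exits along one optical path:

```python
n_couplings = 10                    # assumed number of fiber interfaces (illustrative)
eff_typical = 0.80 ** n_couplings   # ~0.107: only ~11% of photons survive at 80% each
eff_improved = 0.95 ** n_couplings  # ~0.599: ~60% survive at 95% per coupling
print(eff_typical, eff_improved)
```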

    In September 2015 the NIST group conducted its experiment between two laboratory rooms separated by 185 m. The Vienna researchers positioned their detectors 60 m apart in the subbasement of the Vienna Hofburg Castle. Both groups had refined their overall system efficiencies so that each detector registered 75% or more of the photons created by the source—enough to close the detection loophole.

    In contrast to the Delft group’s rate of one trial per hour, the NIST and Vienna groups were able to conduct thousands of trials per second; they each collected enough data in less than one hour to eliminate any possibility that their correlations could have arisen from random chance.

    It’s not currently feasible to extend the entangled-photon experiments into large-scale quantum networks. Even at telecom wavelengths, photons traversing the optical fibers are lost at a nonnegligible rate, so lengthening the fibers would lower the fraction of detected trials and reopen the detection loophole. The NIST group is working on using its experiment for quantum random-number generation, which doesn’t require the photons to be conveyed over such vast distances.

    Random numbers are widely used in security applications. For example, one common system of public-key cryptography involves choosing at random two large prime numbers, keeping them private, but making their product public. Messages can be encrypted by anyone who knows the product, but they can be decrypted only by someone who knows the two prime factors.

    The scheme is secure because factorizing a large number is a computationally hard problem. But it loses that security if the process used to choose the prime numbers can be predicted or reproduced. Numbers chosen by computer are at best pseudorandom because computers can run only deterministic algorithms. But numbers derived from the measurement of quantum states—whose quantum nature is verified through a loophole-free Bell test—can be truly random and unpredictable.
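    The prime-product scheme sketched above is textbook RSA. A toy version with deliberately tiny primes (real keys use primes hundreds of digits long; the modular inverse via `pow` needs Python 3.8+):

```python
p, q = 61, 53               # the two secret primes
n = p * q                   # 3233: the public product
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, chosen coprime to phi
d = pow(e, -1, phi)         # private exponent; computing it requires knowing p and q

message = 65
ciphertext = pow(message, e, n)    # anyone who knows (n, e) can encrypt
decrypted = pow(ciphertext, d, n)  # only the holder of d can decrypt
print(decrypted)  # 65
```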

    The NIST researchers plan to make their random-number stream publicly available to everyone, so it can’t be used for encryption keys that need to be kept private. But a verified public source of tamperproof random numbers has other uses, such as choosing unpredictable samples of voters for opinion polling, taxpayers for audits, or products for safety testing.


    1. B. Hensen et al., Nature 526, 682 (2015). http://dx.doi.org/10.1038/nature15759
    2. L. K. Shalm et al., Phys. Rev. Lett. (in press), http://arxiv.org/abs/1511.03189
    3. M. Giustina et al., Phys. Rev. Lett. (in press), http://arxiv.org/abs/1511.03190
    4. H. Bernien et al., Nature 497, 86 (2013). http://dx.doi.org/10.1038/nature12016
    5. A. E. Lita, A. J. Miller, S. W. Nam, Opt. Express 16, 3032 (2008). http://dx.doi.org/10.1364/OE.16.003032
    6. F. Marsili et al., Nat. Photonics 7, 210 (2013). http://dx.doi.org/10.1038/nphoton.2013.13

    © 2016 American Institute of Physics
    DOI: http://dx.doi.org/10.1063/PT.3.3039

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The American Physical Society strives to:

    Be the leading voice for physics and an authoritative source of physics information for the advancement of physics and the benefit of humanity;
    Provide effective programs in support of the physics community and the conduct of physics;
    Collaborate with national scientific societies for the advancement of science, science education and the science community;
    Cooperate with international physics societies to promote physics, to support physicists worldwide and to foster international collaboration;
    Promote an active, engaged and diverse membership, and support the activities of its units and members.

  • richardmitnick 5:32 pm on January 2, 2016 Permalink | Reply
    Tags: , , , Quantum Mechanics   

    From ETH Zürich: “Faster entanglement of distant quantum dots” 

    Oliver Morsch

    Entanglement between distant quantum objects is an important ingredient for future information technologies. Researchers at the ETH have now developed a method with which such states can be created a thousand times faster than before.

    In two entangled quantum objects the spins are in a superposition of the states “up/down” and “down/up”. Researchers at the ETH have created such states in quantum dots that are five meters apart. (Visualisations: ETH Zürich / Aymeric Delteil)

    In many future information and telecommunication technologies, a remarkable quantum effect called entanglement will likely play an important role. The entanglement of two quantum objects means that measurements on one of the objects instantaneously determine the properties of the other one – without any exchange of information between them.

    Disapprovingly, Albert Einstein called this strange non-locality “spooky action at a distance”. In the meantime, physicists have warmed to it and are now trying to put it to good use, for instance in order to make data transmission immune to eavesdropping. To that end, the creation of entanglement between spatially distant quantum particles is indispensable. That, however, is not easy and typically works rather slowly. A group of physicists led by Atac Imamoglu, a professor at the Institute for Quantum Electronics at the ETH in Zurich, have now demonstrated a method that allows the creation of a thousand times more entangled states per second than was possible before.

    Distant quantum dots

    In their experiments, the young researchers Aymeric Delteil, Zhe Sun and Wei-bo Gao used two so-called quantum dots that were placed five metres apart in the laboratory. Quantum dots are tiny structures, measuring only a few nanometres, inside a semiconductor material, in which electrons are trapped in a sort of cage. The quantum mechanical energy states of those electrons can be represented by spins, i.e., little arrows pointing up or down. When the spin states are entangled, it is possible to deduce from a measurement performed on one of the quantum dots which state the other one will be found in. If the spin of the first quantum dot points up, the other one points down, and vice versa. Before the measurement, however, the directions of the two spins are both unknown: they are in a quantum mechanical superposition of both spin combinations.

    Entanglement by scattershot

    In order to entangle the two quantum dots with each other, the researchers at ETH used the principle of heralding. “Unfortunately, at the moment it is practically impossible to entangle quantum objects that are far apart with certainty and on demand”, explains Imamoglu. Instead, it is necessary to create the entangled states using a scattershot approach in which the quantum dots are constantly bombarded with light particles, which are then scattered back. Every so often this will result in a fluke: one of the scattered light particles makes a detector click, and the resulting spin states are actually entangled.

    Imamoglu and his colleagues make use of this trick. They send laser pulses simultaneously to the two quantum dots and measure the light particles subsequently emitted by them. Before doing so, they carefully eliminated any possibility to find out which quantum dot the light particles originated from. The click in the light detector then “heralds” the actual entanglement of the quantum dots and signals that they can now be used further, e.g., for transmitting quantum information.

    Possible improvements

    The researchers tested their method by continuously shooting around ten million laser pulses per second at the quantum dots. This high repetition rate was possible because the spin states of quantum dots can be controlled within just a few nanoseconds. The measurements showed that in this way 2300 entangled states were produced per second.
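    Those two figures pin down the per-pulse odds (simple arithmetic on the numbers quoted above):

```python
pulses_per_second = 10_000_000   # repetition rate of the laser pulses
heralds_per_second = 2300        # measured rate of heralded entangled states

p_herald = heralds_per_second / pulses_per_second
print(p_herald)             # 0.00023: the per-pulse heralding probability
print(round(1 / p_herald))  # 4348: about one pulse in four thousand succeeds
```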

    “That’s already a good start”, says Imamoglu, adding that the method certainly has room for improvement. Entangling quantum dots that are more than five metres apart, for instance, would require an enhancement of their coherence time. This time indicates how long a quantum state survives before it is destroyed through the influence of its environment (such as electric or magnetic fields). If the heralding light particle takes longer than one coherence time to fly to the detector, then a click no longer heralds entanglement. In future experiments, the physicists therefore want to replace the quantum dots with so-called quantum dot molecules, whose coherence times are a hundred times longer. Furthermore, improvements in the detection probability of the light particles could lead to an even higher entanglement yield.

    Delteil A, Sun Z, Gao W, Togan E, Faelt S, Imamoglu A: Generation of heralded entanglement between distant hole spins, Nature Physics, 21 December 2015, doi: 10.1038/nphys3605

    ETH Zurich is one of the leading international universities for technology and the natural sciences. It is well known for its excellent education, ground-breaking fundamental research and for implementing its results directly into practice.

    Founded in 1855, ETH Zurich today has more than 18,500 students from over 110 countries, including 4,000 doctoral students. To researchers, it offers an inspiring working environment, to students, a comprehensive education.

    Twenty-one Nobel Laureates have studied, taught or conducted research at ETH Zurich, underlining the excellent reputation of the university.

  • richardmitnick 4:24 pm on December 24, 2015 Permalink | Reply
    Tags: , , , Quantum Mechanics, ,   

    From Ethan Siegel: “What Are Quantum Gravity’s Alternatives To String Theory?” 

    Starts with a bang

    Ethan Siegel

    Image credit: CPEP (Contemporary Physics Education Project), NSF/DOE/LBNL.

    If there is a quantum theory of gravity, is String Theory the only game in town?

    “I just think too many nice things have happened in string theory for it to be all wrong. Humans do not understand it very well, but I just don’t believe there is a big cosmic conspiracy that created this incredible thing that has nothing to do with the real world.” –Edward Witten

    The Universe we know and love — with [Albert] Einstein’s General Relativity as our theory of gravity and quantum field theories of the other three forces — has a problem that we don’t often talk about: it’s incomplete, and we know it. Einstein’s theory on its own is just fine, describing how matter-and-energy relate to the curvature of space-and-time. Quantum field theories on their own are fine as well, describing how particles interact and experience forces. Normally, the quantum field theory calculations are done in flat space, where spacetime isn’t curved. We can do them in the curved space described by Einstein’s theory of gravity as well (although they’re harder — but not impossible — to do), which is known as semi-classical gravity. This is how we calculate things like Hawking radiation and black hole decay.

    Image credit: NASA, via http://www.nasa.gov/topics/universe/features/smallest_blackhole.html

    But even that semi-classical treatment is only valid near and outside the black hole’s event horizon, not at the location where gravity is truly at its strongest: at the singularities (or the mathematically nonsensical predictions) theorized to be at the center. There are multiple physical instances where we need a quantum theory of gravity, all having to do with strong gravitational physics on the smallest of scales: at tiny, quantum distances. Important questions, such as:

    What happens to the gravitational field of an electron when it passes through a double slit?
    What happens to the information of the particles that form a black hole, if the black hole’s eventual state is thermal radiation?
    And what is the behavior of a gravitational field/force at and around a singularity?

    Image credit: Nature 496, 20–23 (04 April 2013) doi:10.1038/496020a, via http://www.nature.com/news/astrophysics-fire-in-the-hole-1.12726.

    In order to explain what happens at short distances in the presence of gravitational sources — or masses — we need a quantum, discrete, and hence particle-based theory of gravity. The known quantum forces are mediated by particles known as bosons, or particles with integer spin. The photon mediates the electromagnetic force, the W-and-Z bosons mediate the weak force, while the gluons mediate the strong force. All these types of particles have a spin of 1, which for massive (W-and-Z) particles means they can take on spin values of -1, 0, or +1, while for massless ones (like gluons and photons), they can take on values of -1 or +1 only.

    The Higgs boson is also a boson, although it doesn’t mediate any forces, and has a spin of 0. Because of what we know about gravitation — General Relativity is a tensor theory of gravity — it must be mediated by a massless particle with a spin of 2, meaning it can take on a spin value of -2 or +2 only.

    This is fantastic! It means that we already know a few things about a quantum theory of gravity before we even try to formulate one! We know this because whatever the true quantum theory of gravity turns out to be, it must be consistent with General Relativity when we’re not at very small distances from a massive particle or object, just as — 100 years ago — we knew that General Relativity needed to reduce to Newtonian gravity in the weak-field regime.

    Image credit: NASA, of an artist’s concept of Gravity Probe B orbiting the Earth to measure space-time curvature.

    The big question, of course, is how. How do you quantize gravity in a way that's correct (at describing reality), consistent (with both GR and QFT), and that hopefully leads to calculable predictions for new phenomena that might be observed, measured, or somehow tested? The leading contender, of course, is something you've long heard of: String Theory.

    String Theory is an interesting framework — it can include all of the standard model fields and particles, both the fermions and the bosons.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    It also includes a 10-dimensional Tensor-Scalar theory of gravity: with 9 space and 1 time dimensions, and a scalar field parameter. If we erase six of those spatial dimensions (through an incompletely defined process that people just call compactification) and let the parameter (ω) that defines the scalar interaction go to infinity, we can recover General Relativity.

    Image credit: NASA/Goddard/Wade Sisler, of Brian Greene presenting on String Theory.

    But there are a whole host of phenomenological problems with String Theory. One is that it predicts a large number of new particles, including all the supersymmetric ones, none of which have been found.

    It claims not to need the “free parameters” that the standard model has (for the masses of the particles), but it replaces that problem with an even worse one. String theory refers to “10⁵⁰⁰ possible solutions,” where these solutions refer to the vacuum expectation values of the string fields, and there’s no mechanism to recover them; if you want String Theory to work, you need to give up on dynamics, and simply say, “well, it must’ve been anthropically selected.” There are frustrations, drawbacks, and problems with the very idea of String Theory. But the biggest problem with it may not be these mathematical ones. Instead, it may be that there are four other alternatives that could lead us to quantum gravity; approaches that are completely independent of String Theory.

    Image credit: Wikimedia Commons user Linfoxman, of an illustration of a quantized “fabric of space.”

    1.) Loop Quantum Gravity [reader, please take the time to visit this link and read the article]. LQG is an interesting take on the problem: rather than trying to quantize particles, LQG has as one of its central features that space itself is discrete. Imagine a common analogy for gravity: a bedsheet pulled taut, with a bowling ball in the center. Rather than a continuous fabric, though, we know that the bedsheet itself is really quantized, in that it’s made up of molecules, which in turn are made of atoms, which in turn are made of nuclei (quarks and gluons) and electrons.

    Space might be the same way! Perhaps it acts like a fabric, but perhaps it’s made up of finite, quantized entities. And perhaps it’s woven out of “loops,” which is where the theory gets its name. Weave these loops together and you get a spin network, which represents a quantum state of the gravitational field. In this picture, not just the matter itself but space itself is quantized. The way to go from this idea of a spin network to a perhaps realistic way of doing gravitational computations is an active area of research, one that saw a tremendous leap forward in 2007 and 2008, so this is still actively advancing.

    Image credit: Wikimedia Commons user & reasNink, generated with Wolfram Mathematica 8.0.

    2.) Asymptotically Safe Gravity. This is my personal favorite of the attempts at a quantum theory of gravity. Asymptotic freedom was developed in the 1970s to explain the unusual nature of the strong interaction: it was a very weak force at extremely short distances, then got stronger as (color) charged particles got farther and farther apart. Unlike electromagnetism, which had a very small coupling constant, the strong force has a large one. Due to some interesting properties of QCD, if you wound up with a (color) neutral system, the strength of the interaction fell off rapidly. This was able to account for properties like the physical sizes of baryons (protons and neutrons, for example) and mesons (pions, for example).

    Asymptotic safety, on the other hand, looks to solve a fundamental problem that’s related to this: you don’t need small couplings (or couplings that tend to zero), but rather for the couplings to simply be finite in the high-energy limit. All coupling constants change with energy, so what asymptotic safety does is pick a high-energy fixed point for the constant (technically, for the renormalization group, from which the coupling constant is derived), and then everything else can be calculated at lower energies.
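    The fixed-point idea can be made concrete with a toy beta function (purely illustrative; the real gravitational flow involves many couplings): take dg/dt = g(2 - g), with t the logarithm of the energy scale. Whatever reasonable value the coupling g starts from, flowing to high energies drives it to the finite fixed point g* = 2 instead of blowing up:

```python
def flow_to_uv(g0, t_max=20.0, dt=0.01):
    """Euler-integrate the toy beta function dg/dt = g*(2 - g) toward the UV."""
    g = g0
    for _ in range(int(t_max / dt)):
        g += g * (2.0 - g) * dt
    return g

print(flow_to_uv(0.5))  # approaches 2.0 from below
print(flow_to_uv(3.0))  # approaches 2.0 from above: finite ("safe") either way
```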

    At least, that’s the idea! We’ve figured out how to do this in 1+1 dimensions (one space and one time), but not yet in 3+1 dimensions. Still, progress has been made, most notably by Christof Wetterich, who wrote two groundbreaking papers in the 1990s. More recently, Wetterich used asymptotic safety — just six years ago — to calculate a prediction for the mass of the Higgs boson before the LHC found it. The result?

    Image credit: Mikhail Shaposhnikov & Christof Wetterich.

    Amazingly, what it indicated was perfectly in line with what the LHC wound up finding.

    It’s such an amazing prediction that if asymptotic safety is correct, and — when the error bars are beaten down further — the masses of the top quark, the W-boson and the Higgs boson are finalized, there may not even be a need for any other fundamental particles (like SUSY particles) for physics to be stable all the way up to the Planck scale. It’s not only very promising, it has many of the same appealing properties of string theory: quantizes gravity successfully, reduces to GR in the low energy limit, and is UV-finite. In addition, it beats string theory on at least one account: it doesn’t need the addition of new particles or parameters that we have no evidence for! Of all the string theory alternatives, this one is my favorite.

    3.) Causal Dynamical Triangulations. This idea, CDT, is one of the new kids in town, first developed only in 2000 by Renate Loll and expanded on by others since. It’s similar to LQG in that space itself is discrete, but is primarily concerned with how that space itself evolves. One interesting property of this idea is that time must be discrete as well! As an interesting feature, it gives us a 4-dimensional spacetime (not even something put in a priori, but something that the theory gives us) at the present time, but at very, very high energies and small distances (like the Planck scale), it displays a 2-dimensional structure. It’s based on a mathematical structure called a simplex, which is a multi-dimensional analogue of a triangle.

    Image credit: screenshot from the Wikipedia page for Simplex, via https://en.wikipedia.org/wiki/Simplex.

    A 2-simplex is a triangle, a 3-simplex is a tetrahedron, and so on. One of the “nice” features of this option is that causality — a notion held sacred by most human beings — is explicitly preserved in CDT. (Sabine has some words on CDT here, and its possible relation to asymptotically safe gravity.) It might be able to explain gravity, but it isn’t 100% certain that the standard model of elementary particles can fit suitably into this framework. It’s only major advances in computation that have enabled this to become a fairly well-studied alternative of late, and so work in this is both ongoing and relatively young.
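    The counting behind those building blocks is easy to check directly: an n-simplex has n+1 vertices, and each k-dimensional face is a choice of k+1 of them, giving C(n+1, k+1) faces (the helper below is hypothetical, just to illustrate the structure CDT assembles spacetime from):

```python
from math import comb

def num_faces(n, k):
    """Number of k-dimensional faces of an n-simplex: choose k+1 of its n+1 vertices."""
    return comb(n + 1, k + 1)

# 2-simplex (triangle): 3 vertices, 3 edges
print(num_faces(2, 0), num_faces(2, 1))                   # 3 3
# 3-simplex (tetrahedron): 4 vertices, 6 edges, 4 triangular faces
print(num_faces(3, 0), num_faces(3, 1), num_faces(3, 2))  # 4 6 4
```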

    4.) Emergent gravity. And finally, we come to what’s probably the most speculative and most recent of the quantum gravity possibilities. Emergent gravity only gained prominence in 2009, when Erik Verlinde proposed entropic gravity, a model where gravity was not a fundamental force, but rather emerged as a phenomenon linked to entropy. In fact, the seeds of emergent gravity go back to the discoverer of the conditions for generating a matter-antimatter asymmetry, Andrei Sakharov, who proposed the concept back in 1967. This research is still in its infancy, but as far as developments in the last 5–10 years go, it’s hard to ask for more than this.

    Image credit: flickr gallery of J. Gabas Esteban.

    We’re sure we need a quantum theory of gravity to make the Universe work at a fundamental level, but we’re not sure what that theory looks like or whether any of these five avenues (string theory included) are going to prove fruitful or not. String Theory is the best studied of all the options, but Loop Quantum Gravity is a rising second, with the others being given serious consideration at long last. They say the answer’s always in the last place you look, and perhaps that’s motivation enough to start looking, seriously, in newer places.


    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 7:00 pm on December 23, 2015 Permalink | Reply
    Tags: , , , , , , Quantum Mechanics   

    From AAAS: “Physicists figure out how to retrieve information from a black hole” 



    23 December 2015
    Adrian Cho

    It would take technologies beyond our wildest dreams to extract the tiniest amount of quantum information from a black hole like this one. NASA; M. Weiss/Chandra X-Ray Center

    Black holes earn their name because their gravity is so strong not even light can escape from them. Oddly, though, physicists have come up with a bit of theoretical sleight of hand to retrieve a speck of information that’s been dropped into a black hole. The calculation touches on one of the biggest mysteries in physics: how all of the information trapped in a black hole leaks out as the black hole “evaporates.” Many theorists think that must happen, but they don’t know how.

    Unfortunately for them, the new scheme may do more to underscore the difficulty of the larger “black hole information problem” than to solve it. “Maybe others will be able to go further with this, but it’s not obvious to me that it will help,” says Don Page, a theorist at the University of Alberta in Edmonton, Canada, who was not involved in the work.

    You can shred your tax returns, but you shouldn’t be able to destroy information by tossing it into a black hole. That’s because, even though quantum mechanics deals in probabilities—such as the likelihood of an electron being in one location or another—the quantum waves that give those probabilities must still evolve predictably, so that if you know a wave’s shape at one moment you can predict it exactly at any future time. Without such “unitarity” quantum theory would produce nonsensical results such as probabilities that don’t add up to 100%.

    But suppose you toss some quantum particles into a black hole. At first blush, the particles and the information they encode are lost. That’s a problem, as now part of the quantum state describing the combined black hole-particles system has been obliterated, making it impossible to predict its exact evolution and violating unitarity.

    Physicists think they have a way out. In 1974, British theorist Stephen Hawking argued that black holes can radiate particles and energy. Thanks to quantum uncertainty, empty space roils with pairs of particles flitting in and out of existence. Hawking realized that if a pair of particles from the vacuum popped into existence straddling the black hole’s boundary, then one particle could fly into space, while the other would fall into the black hole. Carrying away energy from the black hole, the exiting Hawking radiation should cause the black hole to slowly evaporate. Some theorists suspect information reemerges from the black hole encoded in the radiation—although how remains unclear as the radiation is supposedly random.

    Now, Aidan Chatwin-Davies, Adam Jermyn, and Sean Carroll of the California Institute of Technology in Pasadena have found an explicit way to retrieve information from one quantum particle lost in a black hole, using Hawking radiation and the weird concept of quantum teleportation.

    Quantum teleportation enables two partners, Alice and Bob, to transfer the delicate quantum state of one particle such as an electron to another. In quantum theory, an electron can spin one way (up), the other way (down), or literally both ways at once. In fact, its state can be described by a point on a globe in which the north pole signifies up and the south pole signifies down. Lines of latitude denote different mixtures of up and down, and lines of longitude denote the “phase,” or how the up and down parts mesh. However, if Alice tries to measure that state, it will “collapse” one way or the other, up or down, squashing information such as the phase. So she can’t measure the state and send the information to Bob, but must transfer it intact.

    To do that, Alice and Bob can share an additional pair of electrons connected by a special quantum link called entanglement. The state of either particle in the entangled pair is uncertain—it simultaneously points everywhere on the globe—but the states are correlated so that if Alice measures her particle from the pair and finds it spinning, say, up, she’ll know instantly that Bob’s electron is spinning down. So Alice has two electrons—the one whose state she wants to teleport and her half of the entangled pair. Bob has just the one from the entangled pair.

    To perform the teleportation, Alice takes advantage of one more strange property of quantum mechanics: that measurement not only reveals something about a system, it also changes its state. So Alice takes her two unentangled electrons and performs a measurement that “projects” them into an entangled state. That measurement breaks the entanglement between the pair of electrons that she and Bob share. But at the same time, it forces Bob’s electron into the state that her to-be-teleported electron was in. It’s as if, with the right measurement, Alice squeezes the quantum information from one side of the system to the other.
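    The whole protocol fits in a small statevector simulation. The sketch below (plain Python, three qubits: Alice's particle to teleport, her half of the entangled pair, and Bob's half) runs the standard teleportation circuit and checks that every one of the four measurement outcomes, after the appropriate correction, leaves Bob holding the original state:

```python
import math

def apply_1q(state, U, t, n=3):
    """Apply a 2x2 gate U to qubit t of an n-qubit statevector (qubit 0 = leftmost bit)."""
    out = state[:]
    bit = 1 << (n - 1 - t)
    for i in range(len(state)):
        if not i & bit:
            a0, a1 = state[i], state[i | bit]
            out[i] = U[0][0] * a0 + U[0][1] * a1
            out[i | bit] = U[1][0] * a0 + U[1][1] * a1
    return out

def apply_cnot(state, c, t, n=3):
    """Flip qubit t wherever qubit c is 1."""
    out = state[:]
    cb, tb = 1 << (n - 1 - c), 1 << (n - 1 - t)
    for i in range(len(state)):
        if i & cb and not i & tb:
            out[i], out[i | tb] = state[i | tb], state[i]
    return out

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]           # Hadamard
X = [[0.0, 1.0], [1.0, 0.0]]    # bit flip
Z = [[1.0, 0.0], [0.0, -1.0]]   # phase flip

a, b = 0.6, 0.8                 # the state Alice wants to teleport: a|0> + b|1>
state = [0.0] * 8
state[0b000], state[0b100] = a, b

state = apply_1q(state, H, 1)   # entangle qubit 1 (Alice) with qubit 2 (Bob)
state = apply_cnot(state, 1, 2)

state = apply_cnot(state, 0, 1) # Alice's Bell-basis measurement circuit
state = apply_1q(state, H, 0)

results = []
for m0 in (0, 1):               # loop over all four possible outcomes
    for m1 in (0, 1):
        proj = [amp if ((i >> 2) & 1, (i >> 1) & 1) == (m0, m1) else 0.0
                for i, amp in enumerate(state)]
        if m1:
            proj = apply_1q(proj, X, 2)   # Bob's corrections, dictated by
        if m0:
            proj = apply_1q(proj, Z, 2)   # Alice's two classical bits
        norm = math.sqrt(sum(abs(x) ** 2 for x in proj))
        idx = (m0 << 2) | (m1 << 1)
        results.append((proj[idx] / norm, proj[idx | 1] / norm))

print(results)  # every outcome recovers (0.6, 0.8): the state has been teleported
```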

    Chatwin-Davies and colleagues realized that they could teleport the information about the state of an electron out of a black hole, too. Suppose that Alice is floating outside the black hole with her electron. She captures one photon from a pair born from Hawking radiation. Much like an electron, the photon can spin in either of two directions, and it will be entangled with its partner photon that has fallen into the black hole. Next, Alice measures the total angular momentum, or spin, of the black hole—both its magnitude and, roughly speaking, how much it lines up with a particular axis. With those two bits of information in hand, she then tosses in her electron, losing it forever.

    But Alice can still recover the information about the state of that electron, the team reports in a paper in press at Physical Review Letters. All she has to do is once again measure the spin and orientation of the black hole. Those measurements then entangle the black hole and the in-falling photon. They also teleport the state of the electron to the photon that Alice captured. Thus, the information from the lost electron is dragged back into the observable universe.

    Chatwin-Davies stresses that the scheme is not a plan for a practical experiment. After all, it would require Alice to almost instantly measure the spin of a black hole as massive as the sun to within a single atom’s spin. “We like to joke around that Alice is the most advanced scientist in the universe,” he says.

    The scheme also has major limitations. In particular, as the authors note, it works for one quantum particle, but not for two or more. That’s because the recipe exploits the fact that the black hole conserves angular momentum, so that its final spin is equal to its initial spin plus that of the electron. That trick enables Alice to get out exactly two bits of information—the total spin and its projection along one axis—and that’s just enough information to specify the latitude and longitude of the quantum state of one particle. But it’s not nearly enough to recapture all the information trapped in a black hole, which typically forms when a star collapses upon itself.

    To really tackle the black hole information problem, theorists would also have to account for the complex states of the black hole’s interior, says Stefan Leichenauer, a theorist at the University of California, Berkeley. “Unfortunately, all of the big questions we have about black holes are precisely about these internal workings,” he says. “So, this protocol, though interesting in its own right, will probably not teach us much about the black hole information problem in general.”

    However, delving into the interior of black holes would require a quantum mechanical theory of gravity. Of course, developing such a theory is perhaps the grandest goal in all of theoretical physics, one that has eluded physicists for decades.

    The American Association for the Advancement of Science is an international non-profit organization dedicated to advancing science for the benefit of all people.

