Tagged: Entanglement

  • richardmitnick 9:57 am on March 8, 2020 Permalink | Reply
    Tags: "A Computer Science Proof Holds Answers for Math and Physics", , , Entanglement, Game Show Physics, , The commuting operator model of entanglement, The computer researchers: Henry Yuen the University of Toronto a; Zhengfeng Ji the University of Technology Sydney; Anand Natarajan and Thomas Vidick Caltech; John Wright the University of Texas, The Connes embedding conjecture, The correspondence between entanglement and computing came as a jolt to many researchers., The problems that can be verified through interactions with entangled quantum provers called MIP* equals the class of problems no harder than the halting problem a class called RE. “MIP*=RE.”, The tensor product model,   

    From WIRED: “A Computer Science Proof Holds Answers for Math and Physics” 


    From WIRED

    03.08.2020
    Kevin Hartnett

    An advance in our understanding of quantum computing offers stunning solutions to problems that have long puzzled mathematicians and physicists.

    In 1935, Albert Einstein, working with Boris Podolsky and Nathan Rosen, grappled with a possibility revealed by the new laws of quantum physics: that two particles could be entangled, or correlated, even across vast distances.

    The very next year, Alan Turing formulated the first general theory of computing and proved that there exists a problem that computers will never be able to solve.

    These two ideas revolutionized their respective disciplines. They also seemed to have nothing to do with each other. But now a landmark proof has combined them while solving a raft of open problems in computer science, physics, and mathematics.

    The new proof establishes that quantum computers that calculate with entangled quantum bits, or qubits, rather than classical 1s and 0s, can theoretically be used to verify answers to an incredibly vast set of problems. The correspondence between entanglement and computing came as a jolt to many researchers.

    “It was a complete surprise,” said Miguel Navascués, who studies quantum physics at the Institute for Quantum Optics and Quantum Information in Vienna.

    The proof’s co-authors set out to determine the limits of an approach to verifying answers to computational problems. That approach involves entanglement. By finding that limit, the researchers ended up settling two other questions almost as a byproduct: Tsirelson’s problem in physics, about how to mathematically model entanglement, and a related problem in pure mathematics called the Connes embedding conjecture.

    In the end, the results cascaded like dominoes.

    “The ideas all came from the same time. It’s neat that they come back together again in this dramatic way,” said Henry Yuen of the University of Toronto and an author of the proof, along with Zhengfeng Ji of the University of Technology Sydney, Anand Natarajan and Thomas Vidick of the California Institute of Technology, and John Wright of the University of Texas, Austin. The five researchers are all computer scientists.

    Undecidable Problems

    Turing defined a basic framework for thinking about computation before computers really existed. In nearly the same breath, he showed that there was a certain problem computers were provably incapable of addressing. It has to do with whether a program ever stops.

    Typically, computer programs receive inputs and produce outputs. But sometimes they get stuck in infinite loops and spin their wheels forever. When that happens at home, there’s only one thing left to do.

    “You have to manually kill the program. Just cut it off,” Yuen said.

    Turing proved that there’s no all-purpose algorithm that can determine whether a computer program will halt or run forever. You have to run the program to find out.

    The computer scientists Henry Yuen, Thomas Vidick, Zhengfeng Ji, Anand Natarajan and John Wright co-authored a proof about verifying answers to computational problems and ended up solving major problems in math and quantum physics. Courtesy of (Yuen) Andrea Lao; (Vidick) Courtesy of Caltech; (Ji) Anna Zhu; (Natarajan) David Sella; (Wright) Soya Park.

    “You’ve waited a million years and a program hasn’t halted. Do you just need to wait 2 million years? There’s no way of telling,” said William Slofstra, a mathematician at the University of Waterloo.

    In technical terms, Turing proved that this halting problem is undecidable — even the most powerful computer imaginable couldn’t solve it.
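
    Turing’s argument can even be sketched in a few lines of code. The snippet below is purely illustrative and assumes a hypothetical function, halts(program, argument), which is exactly the thing the argument shows cannot exist.

```python
# Sketch of Turing's diagonal argument. `halts` is a hypothetical, assumed
# oracle that would return True if `program` halts on `argument`; the point
# of the argument is that no such total function can be written.
def halts(program, argument):
    raise NotImplementedError("no general halting test exists")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about a program
    # examining its own source.
    if halts(program, program):
        while True:      # loop forever if the oracle says "halts"
            pass
    return               # halt immediately if the oracle says "loops"

# Running paradox on itself produces a contradiction: if halts(paradox, paradox)
# returned True, paradox(paradox) would loop forever; if it returned False,
# paradox(paradox) would halt. Either way the oracle is wrong.
```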

    After Turing, computer scientists began to classify other problems by their difficulty. Harder problems require more computational resources to solve — more running time, more memory. This is the study of computational complexity.

    Ultimately, every problem presents two big questions: “How hard is it to solve?” and “How hard is it to verify that an answer is correct?”

    Interrogate to Verify

    When problems are relatively simple, you can check the answer yourself. But when they get more complicated, even checking an answer can be an overwhelming task. However, in 1985 computer scientists realized it’s possible to develop confidence that an answer is correct even when you can’t confirm it yourself.

    The method follows the logic of a police interrogation.

    If a suspect tells an elaborate story, maybe you can’t go out into the world to confirm every detail. But by asking the right questions, you can catch your suspect in a lie or develop confidence that the story checks out.

    In computer science terms, the two parties in an interrogation are a powerful computer that proposes a solution to a problem—known as the prover—and a less powerful computer that wants to ask the prover questions to determine whether the answer is correct. This second computer is called the verifier.

    To take a simple example, imagine you’re colorblind and someone else—the prover—claims two marbles are different colors. You can’t check this claim by yourself, but through clever interrogation you can still determine whether it’s true.

    Put the two marbles behind your back and mix them up. Then ask the prover to tell you which is which. If they really are different colors, the prover should answer the question correctly every time. If the marbles are actually the same color—meaning they look identical—the prover will guess wrong half the time.

    “If I see you succeed a lot more than half the time, I’m pretty sure they’re not” the same color, Vidick said.
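
    A quick simulation makes those statistics concrete. The sketch below is a toy of my own, not anything from the researchers’ work: a prover who can genuinely see two different colors always names the hidden marble correctly, while a prover facing identical marbles can only guess.

```python
import random

def play_marble_game(marbles_differ, rounds=10_000):
    """Fraction of rounds in which the prover names the shuffled marble correctly."""
    wins = 0
    for _ in range(rounds):
        hidden = random.choice(["left", "right"])     # verifier shuffles behind their back
        if marbles_differ:
            guess = hidden                            # prover sees the colors, so is always right
        else:
            guess = random.choice(["left", "right"])  # identical marbles force a blind guess
        wins += guess == hidden
    return wins / rounds

print(play_marble_game(marbles_differ=True))   # ~1.0
print(play_marble_game(marbles_differ=False))  # ~0.5
```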

    By asking a prover questions, you can verify solutions to a wider class of problems than you can on your own.

    In 1988, computer scientists considered what happens when two provers propose solutions to the same problem. After all, if you have two suspects to interrogate, it’s even easier to solve a crime, or verify a solution, since you can play them against each other.

    “It gives more leverage to the verifier. You interrogate, ask related questions, cross-check the answers,” Vidick said. If the suspects are telling the truth, their responses should align most of the time. If they’re lying, the answers will conflict more often.

    Similarly, researchers showed that by interrogating two provers separately about their answers, you can quickly verify solutions to an even larger class of problems than you can when you only have one prover to interrogate.

    Computational complexity may seem entirely theoretical, but it’s also closely connected to the real world. The resources that computers need to solve and verify problems—time and memory—are fundamentally physical. For this reason, new discoveries in physics can change computational complexity.

    “If you choose a different set of physics, like quantum rather than classical, you get a different complexity theory out of it,” Natarajan said.

    The new proof is the end result of 21st-century computer scientists confronting one of the strangest ideas of 20th-century physics: entanglement.

    The Connes Embedding Conjecture

    When two particles are entangled, they don’t actually affect each other—they have no causal relationship. Einstein and his co-authors elaborated on this idea in their 1935 paper. Afterward, physicists and mathematicians tried to come up with a mathematical way of describing what entanglement really meant.

    Yet the effort came out a little muddled. Scientists came up with two different mathematical models for entanglement—and it wasn’t clear that they were equivalent to each other.

    In a roundabout way, this potential dissonance ended up producing an important problem in pure mathematics called the Connes embedding conjecture. Eventually, it also served as a fissure that the five computer scientists took advantage of in their new proof.

    The first way of modeling entanglement was to think of the particles as spatially isolated from each other. One is on Earth, say, and the other is on Mars; the distance between them is what prevents causality. This is called the tensor product model.

    But in some situations, it’s not entirely obvious when two things are causally separate from each other. So mathematicians came up with a second, more general way of describing causal independence.

    When the order in which you perform two operations doesn’t affect the outcome, the operations “commute”: 3 x 2 is the same as 2 x 3. In this second model, particles are entangled when their properties are correlated but the order in which you perform your measurements doesn’t matter: Measure particle A to predict the momentum of particle B or vice versa. Either way, you get the same answer. This is called the commuting operator model of entanglement.
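
    For readers who want to see the algebra, here is a minimal numpy sketch (my own illustration, not notation from the proof) of that idea in the finite-dimensional, tensor product setting: an observable that touches only particle A commutes with one that touches only particle B, so the order of the two measurements is irrelevant.

```python
import numpy as np

# Pauli matrices as stand-in observables for measurements on particles A and B.
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I = np.eye(2)

A = np.kron(X, I)   # measure particle A, leave B alone
B = np.kron(I, Z)   # measure particle B, leave A alone

# The two measurements commute: either ordering gives the same combined operator.
print(np.allclose(A @ B, B @ A))  # True
```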

    Both descriptions of entanglement use arrays of numbers organized into rows and columns called matrices. The tensor product model uses matrices with a finite number of rows and columns. The commuting operator model uses a more general object that functions like a matrix with an infinite number of rows and columns.

    Over time, mathematicians began to study these matrices as objects of interest in their own right, completely apart from any connection to the physical world. As part of this work, a mathematician named Alain Connes conjectured in 1976 that it should be possible to approximate many infinite-dimensional matrices with finite-dimensional ones. This is one implication of the Connes embedding conjecture.

    The following decade a physicist named Boris Tsirelson posed a version of the problem that grounded it in physics once more. Tsirelson conjectured that the tensor product and commuting operator models of entanglement were roughly equivalent. This makes sense, since they’re theoretically two different ways of describing the same physical phenomenon. Subsequent work showed that because of the connection between matrices and the physical models that use them, the Connes embedding conjecture and Tsirelson’s problem imply each other: Solve one, and you solve the other.

    Yet the solution to both problems ended up coming from a third place altogether.

    Game Show Physics

    In the 1960s, a physicist named John Bell came up with a test for determining whether entanglement was a real physical phenomenon, rather than just a theoretical notion. The test involved a kind of game whose outcome reveals whether something more than ordinary, non-quantum physics is at work.

    Computer scientists would later realize that this test about entanglement could also be used as a tool for verifying answers to very complicated problems.

    But first, to see how the games work, let’s imagine two players, Alice and Bob, and a 3-by-3 grid. A referee assigns Alice a row and tells her to enter a 0 or a 1 in each box so that the digits sum to an odd number. Bob gets a column and has to fill it out so that it sums to an even number. They win if they put the same number in the one place her row and his column overlap. They’re not allowed to communicate.

    Under normal circumstances, the best they can do is win 89% of the time. But under quantum circumstances, they can do better.
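
    That 89 percent figure (exactly 8/9) can be checked by brute force over every deterministic classical strategy, as in the sketch below, which is my own illustration of the game just described: Alice commits to an odd-sum filling for each row, Bob to an even-sum filling for each column, and the referee’s nine possible question pairs are scored.

```python
from itertools import product

# All ways to fill three cells with 0/1 so the digits sum odd (Alice) or even (Bob).
odd  = [t for t in product((0, 1), repeat=3) if sum(t) % 2 == 1]
even = [t for t in product((0, 1), repeat=3) if sum(t) % 2 == 0]

best = 0.0
# Without entanglement, a strategy is just a fixed answer for each possible question.
for alice in product(odd, repeat=3):        # alice[r] = Alice's filling for row r
    for bob in product(even, repeat=3):     # bob[c]  = Bob's filling for column c
        wins = sum(alice[r][c] == bob[c][r]               # agree on the shared cell (r, c)
                   for r in range(3) for c in range(3))
        best = max(best, wins / 9)

print(best)  # 0.888... -- no classical strategy wins more than 8 of the 9 question pairs
```

    No single filling of the whole grid can make every row sum odd and every column sum even at once, which is why, without entanglement, at least one of the nine question pairs must always be lost.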

    Imagine Alice and Bob split a pair of entangled particles. They perform measurements on their respective particles and use the results to dictate whether to write 1 or 0 in each box. Because the particles are entangled, the results of their measurements are going to be correlated, which means their answers will correlate as well — meaning they can win the game 100% of the time.

    Illustration: Lucy Reading-Ikkanda/Quanta Magazine

    So if you see two players winning the game at unexpectedly high rates, you can conclude that they are using something other than classical physics to their advantage. Such Bell-type experiments are now called “nonlocal” games, in reference to the separation between the players. Physicists actually perform them in laboratories.

    “People have run experiments over the years that really show this spooky thing is real,” said Yuen.

    As when analyzing any game, you might want to know how often players can win a nonlocal game, provided they play the best they can. For example, with solitaire, you can calculate how often someone playing perfectly is likely to win.

    But in 2016, William Slofstra proved that there’s no general algorithm for calculating the exact maximum winning probability for all nonlocal games. So researchers wondered: Could you at least approximate the maximum-winning percentage?

    Computer scientists have homed in on an answer using the two models describing entanglement. An algorithm that uses the tensor product model establishes a floor, or minimum value, on the approximate maximum-winning probability for all nonlocal games. Another algorithm, which uses the commuting operator model, establishes a ceiling.

    These algorithms produce more precise answers the longer they run. If Tsirelson’s prediction is true, and the two models really are equivalent, the floor and the ceiling should keep pinching closer together, narrowing in on a single value for the approximate maximum-winning percentage.

    But if Tsirelson’s prediction is false, and the two models are not equivalent, “the ceiling and the floor will forever stay separated,” Yuen said. There will be no way to calculate even an approximate winning percentage for nonlocal games.

    In their new work, the five researchers used this question — about whether the ceiling and floor converge and Tsirelson’s problem is true or false — to solve a separate question about when it’s possible to verify the answer to a computational problem.

    Entangled Assistance

    In the early 2000s, computer scientists began to wonder: How does it change the range of problems you can verify if you interrogate two provers that share entangled particles?

    Most assumed that entanglement worked against verification. After all, two suspects would have an easier time telling a consistent lie if they had some means of coordinating their answers.

    But over the last few years, computer scientists have realized that the opposite is true: By interrogating provers that share entangled particles, you can verify a much larger class of problems than you can without entanglement.

    “Entanglement is a way to generate correlations that you think might help them lie or cheat,” Vidick said. “But in fact you can use that to your advantage.”

    To understand how, you first need to grasp the almost otherworldly scale of the problems whose solutions you could verify through this interactive procedure.

    Imagine a graph—a collection of dots (vertices) connected by lines (edges). You might want to know whether it’s possible to color the vertices using three colors, so that no vertices connected by an edge have the same color. If you can, the graph is “three-colorable.”

    If you hand a pair of entangled provers a very large graph, and they report back that it can be three-colored, you’ll wonder: Is there a way to verify their answer?

    For very big graphs, it would be impossible to check the work directly. So instead, you could ask each prover to tell you the color of one of two connected vertices. If they each report a different color, and they keep doing so every time you ask, you’ll gain confidence that the three-coloring really works.
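
    As a small illustration (a toy of my own, not the construction in the paper), here is the accept/reject logic of one such query when the verifier can still afford to name vertices: pick a random edge, ask each prover for the color of one endpoint, and accept only if the reported colors differ.

```python
import random

def verify_round(edges, prover_a_coloring, prover_b_coloring):
    """One interrogation round: honest provers holding a valid 3-coloring
    always report different colors for the two ends of any edge."""
    u, v = random.choice(edges)               # verifier picks a random edge
    return prover_a_coloring[u] != prover_b_coloring[v]

# Toy graph: a triangle, which is three-colorable.
edges = [(0, 1), (1, 2), (2, 0)]
coloring = {0: "red", 1: "green", 2: "blue"}

# Honest provers sharing the same valid coloring pass every round.
print(all(verify_round(edges, coloring, coloring) for _ in range(1000)))  # True
```

    The real protocol layers much more on top of this, including consistency checks and, in the entangled version described next, making the provers generate the questions themselves; the sketch only shows why matching colors on an edge would give the lie away.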

    But even this interrogation strategy fails as graphs get really big—with more edges and vertices than there are atoms in the universe. Even the task of stating a specific question (“Tell me the color of XYZ vertex”) is more than you, the verifier, can manage: The amount of data required to name a specific vertex is more than you can hold in your working memory.

    But entanglement makes it possible for the provers to come up with the questions themselves.

    “The verifier doesn’t have to compute the questions. The verifier forces the provers to compute the questions for them,” Wright said.

    The verifier wants the provers to report the colors of connected vertices. If the vertices aren’t connected, then the answers to the questions won’t say anything about whether the graph is three-colored. In other words, the verifier wants the provers to ask correlated questions: One prover asks about vertex ABC and the other asks about vertex XYZ. The hope is that the two vertices are connected to each other, even though neither prover knows which vertex the other is thinking about. (Just as Alice and Bob hope to fill in the same number in the same square even though neither knows which row or column the other has been asked about.)

    If two provers were coming up with these questions completely on their own, there’d be no way to force them to select connected, or correlated, vertices in a way that would allow the verifier to validate their answers. But such correlation is exactly what entanglement enables.

    “We’re going to use entanglement to offload almost everything onto the provers. We make them select questions by themselves,” Vidick said.

    At the end of this procedure, the provers each report a color. The verifier checks whether they’re the same or not. If the graph really is three-colorable, the provers should never report the same color.

    “If there is a three-coloring, the provers will be able to convince you there is one,” Yuen said.

    As it turns out, this verification procedure is another example of a nonlocal game. The provers “win” if they convince you their solution is correct.

    In 2012, Vidick and Tsuyoshi Ito proved that it’s possible to play a wide variety of nonlocal games with entangled provers to verify answers to at least the same number of problems you can verify by interrogating two classical computers. That is, using entangled provers doesn’t work against verification. And last year, Natarajan and Wright proved that interacting with entangled provers actually expands the class of problems that can be verified.

    But computer scientists didn’t know the full range of problems that can be verified in this way. Until now.

    A Cascade of Consequences

    In their new paper, the five computer scientists prove that interrogating entangled provers makes it possible to verify answers to unsolvable problems, including the halting problem.

    “The verification capability of this type of model is really mind-boggling,” Yuen said.

    But the halting problem can’t be solved. And that fact is the spark that sets the final proof in motion.

    Imagine you hand a program to a pair of entangled provers. You ask them to tell you whether it will halt. You’re prepared to verify their answer through a kind of nonlocal game: The provers generate questions and “win” based on the coordination between their answers.

    If the program does in fact halt, the provers should be able to win this game 100 percent of the time—similar to how if a graph is actually three-colorable, entangled provers should never report the same color for two connected vertices. If it doesn’t halt, the provers should only win by chance—50 percent of the time.

    That means if someone asks you to determine the approximate maximum-winning probability for a specific instance of this nonlocal game, you will first need to solve the halting problem. And solving the halting problem is impossible. Which means that calculating the approximate maximum-winning probability for nonlocal games is undecidable, just like the halting problem.

    This in turn means that the answer to Tsirelson’s problem is no—the two models of entanglement are not equivalent. Because if they were, you could pinch the floor and the ceiling together to calculate an approximate maximum-winning probability.

    “There cannot be such an algorithm, so the two [models] must be different,” said David Pérez-García of the Complutense University of Madrid.

    The new paper proves that the class of problems that can be verified through interactions with entangled quantum provers, a class called MIP*, is exactly equal to the class of problems that are no harder than the halting problem, a class called RE. The title of the paper states it succinctly: “MIP* = RE.”

    In the course of proving that the two complexity classes are equal, the computer scientists showed that the answer to Tsirelson’s problem is no, which, due to previous work, meant that the Connes embedding conjecture is also false.

    For researchers in these fields, it was stunning that answers to such big problems would fall out from a seemingly unrelated proof in computer science.

    “If I see a paper that says MIP* = RE, I don’t think it has anything to do with my work,” said Navascués, who co-authored previous work tying Tsirelson’s problem and the Connes embedding conjecture together. “For me it was a complete surprise.”

    Quantum physicists and mathematicians are just beginning to digest the proof. Prior to the new work, mathematicians had wondered whether they could get away with approximating infinite-dimensional matrices by using large finite-dimensional ones instead. Now, because the Connes embedding conjecture is false, they know they can’t.

    “Their result implies that’s impossible,” said Slofstra.

    The computer scientists themselves did not aim to answer the Connes embedding conjecture, and as a result, they’re not in the best position to explain the implications of one of the problems they ended up solving.

    “Personally, I’m not a mathematician. I don’t understand the original formulation of the Connes embedding conjecture well,” said Natarajan.

    He and his co-authors anticipate that mathematicians will translate this new result into the language of their own field. In a blog post announcing the proof, Vidick wrote, “I don’t doubt that eventually complexity theory will not be needed to obtain the purely mathematical consequences.”

    Yet as other researchers run with the proof, the line of inquiry that prompted it is coming to a halt. For more than three decades, computer scientists have been trying to figure out just how far interactive verification will take them. They are now confronted with the answer, in the form of a long paper with a simple title and echoes of Turing.

    “There’s this long sequence of works just wondering how powerful” a verification procedure with two entangled quantum provers can be, Natarajan said. “Now we know how powerful it is. That story is at an end.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 7:04 pm on February 7, 2017 Permalink | Reply
    Tags: Entanglement, Reality at the atomic scale

    From The New Yorker: “Quantum Theory by Starlight” Gee, Actual Physical Science from The New Yorker 

    Rea Irvin

    The New Yorker

    [Shock of shocks, The New Yorker remembers the physical sciences. Anyone remember Jeremy Bernstein?]

    2.7.17
    David Kaiser

    In parsing the strange dance of subatomic particles, it can be helpful to think of them as twins. IMAGE BY CHRONICLE / ALAMY

    The headquarters of the National Bank of Austria, in central Vienna, are exceptionally secure. During the week, in the basement of the building, employees perform quality-control tests on huge stacks of euros. One night last spring, however, part of the bank was given over to a different sort of testing. A group of young physicists, with temporary I.D. badges and sensitive electronics in tow, were allowed up to the top floor, where they assembled a pair of telescopes. One they aimed skyward, at a distant star in the Milky Way. The other they pointed toward the city, searching for a laser beam shot from a rooftop several blocks away. For all the astronomical equipment, though, their real quarry was a good deal smaller. They were there to conduct a new test of quantum theory, the branch of physics that seeks to explain reality at the atomic scale.

    It is difficult to overstate the weirdness of quantum physics. Even Albert Einstein and Erwin Schrödinger, both major architects of the theory, ultimately found it too outlandish to be wholly true. Throughout the summer of 1935, they aired their frustrations in a series of letters. For one thing, unlike Newtonian physics and Einstein’s relativity, which elegantly explained the behavior of everything from the fall of apples to the motion of galaxies, quantum theory offered only probabilities for various outcomes, not rock-solid predictions. It was an “epistemology-soaked orgy,” Einstein wrote, treating objects in the real world as mere puffs of possibility—both there and not there, or, in the case of Schrödinger’s famous imaginary cat, both alive and dead. Strangest of all was what Schrödinger dubbed “entanglement.” In certain situations, the equations of quantum theory implied that one subatomic particle’s behavior was bound up with another’s, even if the second particle was across the room, or on the other side of the planet, or in the Andromeda galaxy. They couldn’t be communicating, exactly, since the effect seemed to be instantaneous, and Einstein had already demonstrated that nothing could travel faster than light. In a letter to a friend, he dismissed entanglement as “spooky actions at a distance”—more ghost story than respectable science. But how to account for the equations?

    Physicists often invoke twins when trying to articulate the more fantastical elements of their theories. Einstein’s relativity, for instance, introduced the so-called twin paradox, which illustrates how a rapid journey through space and time can make one woman age more slowly than her twin. (Schrödinger’s interest in twins was rather less academic. His exploits with the Junger sisters, who were half his age, compelled his biographer to save a spot in the index for “Lolita complex.”) I am a physicist, and my wife and I actually have twins, so I find it particularly helpful to think about them when trying to parse the strange dance of entanglement.

    Let us call our quantum twins Ellie and Toby. Imagine that, at the same instant, Ellie walks into a restaurant in Cambridge, Massachusetts, and Toby walks into a restaurant in Cambridge, England. They ponder the menus, make their selections, and enjoy their meals. Afterward, their waiters come by to offer dessert. Ellie is given the choice between a brownie and a cookie. She has no real preference, being a fan of both, so she chooses one seemingly at random. Toby, who shares his sister’s catholic attitude toward sweets, does the same. Both siblings like their restaurants so much that they return the following week. This time, when their meals are over, the waiters offer ice cream or frozen yogurt. Again the twins are delighted—so many great options!—and again they choose at random.

    In the ensuing months, Ellie and Toby return to the restaurants often, alternating aimlessly between cookies or brownies and ice cream or frozen yogurt. But when they get together for Thanksgiving, looking rather plumper than last year, they compare notes and find a striking pattern in their selections. It turns out that when both the American and British waiters offered baked goods, the twins usually ordered the same thing—a brownie or a cookie for each. When the offers were different, Toby tended to order ice cream when Ellie ordered brownies, and vice versa. For some reason, though, when they were both offered frozen desserts, they tended to make opposite selections—ice cream for one, frozen yogurt for the other. Toby’s chances of ordering ice cream seemed to depend on what Ellie ordered, an ocean away. Spooky, indeed.

    Einstein believed that particles have definite properties of their own, independent of what we choose to measure, and that local actions produce only local effects—that what Toby orders has no bearing on what Ellie orders. In 1964, the Irish physicist John Bell identified the statistical threshold between Einstein’s world and the quantum world. If Einstein was right, then the outcomes of measurements on pairs of particles should line up only so often; there should be a strict limit on how frequently Toby’s and Ellie’s dessert orders are correlated. But if he was wrong, then the correlations should occur significantly more often. For the past four decades, scientists have tested the boundaries of Bell’s theorem. In place of Ellie and Toby, they have used specially prepared pairs of particles, such as photons of light. In place of friendly waiters recording dessert orders, they have used instruments that can measure some physical property, such as polarization—whether a photon’s electric field oscillates along or at right angles to some direction in space. To date, every single published test has been consistent with quantum theory.
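
    A compact way to see Bell’s threshold is to compute the CHSH combination of correlations directly from quantum theory. The numpy sketch below is my own illustration (the CHSH form of Bell’s inequality, not the exact statistic used in any particular experiment): it prepares a singlet pair and evaluates four correlations at the standard angles. Local, Einstein-style models can never push the combination past 2, while the quantum prediction reaches 2√2, roughly 2.83.

```python
import numpy as np

# Singlet state of two qubits: (|01> - |10>) / sqrt(2).
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def obs(theta):
    """Measurement along angle theta in the x-z plane (eigenvalues +1 and -1)."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def E(a, b):
    """Quantum correlation <psi| obs(a) (x) obs(b) |psi> for measurement angles a and b."""
    return psi @ np.kron(obs(a), obs(b)) @ psi

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ~2.828, beyond the classical (local realist) bound of 2
```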

    From the start, however, physicists have recognized that their experiments are subject to various loopholes, circumstances that could, in principle, account for the observed results even if quantum theory were wrong and entanglement merely a chimera. One loophole, known as locality, concerns information flow: could a particle on one side of the experiment, or the instrument measuring it, have sent some kind of message to the other side before the second measurement was completed? Another loophole concerns statistics: what if the particles that were measured somehow represented a biased sample, a few spooky dessert orders amid thousands of unseen boring ones? Physicists have found clever ways of closing one or the other of these loopholes over the years, and in 2015, in a beautiful experiment out of the Netherlands, one group managed to close both at once. But there is a third major loophole, one that Bell overlooked in his original analysis. Known as the freedom-of-choice loophole, it concerns whether some event in the past could have nudged both the choice of measurements to be performed and the behavior of the entangled particles—in our analogy, the desserts being offered and the selections that Ellie and Toby made. Where the locality loophole imagines Ellie and Toby, or their waiters, communicating with each other, the freedom-of-choice loophole supposes that some third party could have rigged things without any of them noticing. It was this loophole that my colleagues and I recently set out to address.

    We performed our experiment last April, spread out in three locations across Schrödinger’s native Vienna. A laser in Anton Zeilinger’s laboratory at the Institute for Quantum Optics and Quantum Information supplied our entangled photons. About three-quarters of a mile to the north, Thomas Scheidl and his colleagues set up two telescopes in a different university building. One was aimed at the institute, ready to receive the entangled photons, and one was pointed in the opposite direction, fixed on a star in the night sky. Several blocks south of the institute, at the National Bank of Austria, a second team, led by Johannes Handsteiner, had a comparable setup. Their second telescope, the one that wasn’t looking at the institute, was turned to the south.

    Our group’s goal was to measure pairs of entangled particles while insuring that the type of measurement we performed on one had nothing to do with how we assessed the other. In short, we wanted to turn the universe into a pair of random-number generators. Handsteiner’s target star was six hundred light-years from Earth, which meant that the light received by his telescope had been travelling for six hundred years. We selected the star carefully, such that the light it emitted at a particular moment all those centuries ago would reach Handsteiner’s telescope first, before it could cover the extra distance to either Zeilinger’s lab or the university. Scheidl’s target star, meanwhile, was nearly two thousand light-years away. Both teams’ telescopes were equipped with special filters, which could distinguish extremely rapidly between photons that were more red or more blue than a particular reference wavelength. If Handsteiner’s starlight in a given instant happened to be more red, then the instruments at his station would perform one type of measurement on the entangled photon, which was just then zipping through the night sky, en route from Zeilinger’s laboratory. If Handsteiner’s starlight happened instead to be blue, then the other type of measurement would be performed. The same went for Scheidl’s station. The detector settings on each side changed every few millionths of a second, based on new observations of the stars.

    With this arrangement, it was as if each time Ellie walked into the restaurant, her waiter offered her a dessert based on an event that had occurred several centuries earlier, trillions of miles from the Earth—which neither Ellie, nor Toby, nor Toby’s waiter could have foreseen. Meanwhile, by placing Handsteiner’s and Scheidl’s stations relatively far apart, we were able to close the locality loophole even as we addressed the freedom-of-choice loophole. (Since we only detected a small fraction of all the entangled particles that were emitted from Zeilinger’s lab, though, we had to assume that the photons we did measure represented a fair sample of the whole collection.) We conducted two experiments that night, aiming the stellar telescopes at one pair of stars for three minutes, then another pair for three more. In each case, we detected about a hundred thousand pairs of entangled photons. The results from each experiment showed beautiful agreement with the predictions from quantum theory, with correlations far exceeding what Bell’s inequality would allow. Our results were published on Tuesday in the journal Physical Review Letters.

    How might a devotee of Einstein’s ideas respond? Perhaps our assumption of fair sampling was wrong, or perhaps some strange, unknown mechanism really did exploit the freedom-of-choice loophole, in effect alerting one receiving station of what was about to occur at the other. We can’t rule out such a bizarre scenario, but we can strongly constrain it. In fact, our experiment represents an improvement by sixteen orders of magnitude—a factor of ten million billion—over previous efforts to address the freedom-of-choice loophole. In order to account for the results of our new experiment, the unknown mechanism would need to have been set in place before the emission of the starlight that Handsteiner’s group observed, back when Joan of Arc’s friends still called her Joanie.

    Experiments like ours—and follow-up versions we plan to conduct, using larger telescopes to spy even fainter, more distant astronomical objects—harness some of the largest scales in nature to test its tiniest, and most fundamental, phenomena. Beyond that, our explorations could help shore up the security of next-generation devices, such as quantum-encryption schemes, which depend on entanglement to protect against hackers and eavesdroppers. But, for me, the biggest motivation remains exploring the strange mysteries of quantum theory. The world described by quantum mechanics is fundamentally, stubbornly different from the worlds of Newtonian physics or Einsteinian relativity. If Ellie’s and Toby’s dessert orders are going to keep lining up so spookily, I want to know why.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 10:01 am on January 26, 2017 Permalink | Reply
    Tags: Accelerating mirror, Black hole paradox, Entanglement, Shooting electron waves through plasma could reveal if black holes permanently destroy information

    From Science Alert: “Shooting electron waves through plasma could reveal if black holes permanently destroy information” 

    ScienceAlert

    Science Alert

    25 JAN 2017
    MIKE MCRAE

    Interstellar/Paramount Pictures

    Without having to enter a black hole ourselves…

    One of the greatest dilemmas in astrophysics is the black hole paradox – whether black holes really do destroy every scrap of information that enters them.

    Now, physicists might have finally come up with a way to test the paradox once and for all, by accelerating a wave of negatively charged electrons through a cloud of plasma.

    As far as objects in space go, black holes need little introduction. Get too close, and their concentrated mass will swallow you, never to return.

    But in the 1970s, physicists including Stephen Hawking proposed that black holes weren’t necessarily forever.

    Thanks to the peculiarities of quantum mechanics, particles did indeed radiate away from black holes, Hawking hypothesised, which means, theoretically, black holes could slowly evaporate away over time.

    This poses the paradox. Information – the fundamental coding of stuff in the Universe – can’t just disappear. That’s a big rule. But when a black hole evaporates away, where does its bellyful of information go?

    A clue might be found in the nature of the radiation Hawking described. This form of radiation arises when a pair of virtual particles pops into existence right up against a black hole’s line of no return – the ‘event horizon’.

    Usually, such paired particles cancel each other out, and the Universe is none the wiser. But in the case of Hawking radiation, one of these particles falls across the horizon into the gravitational grip of the black hole. The other barely escapes off into the Universe as a bona fide particle.

    Physicists have theorised that this escaped particle preserves the information of its twin thanks to the quirks of quantum dynamics. In this case, the phenomenon of entanglement would allow the particles to continue to share a connection, even when separated by time and space, leaving a lasting legacy of whatever was devoured by the black hole.

    To demonstrate this, physicists could catch a particle that has escaped a black hole’s event horizon, and then wait for the black hole to spill its guts in many, many years, to test if there’s indeed a correlation between one of the photons and its entangled twin. Which, let’s face it, isn’t exactly practical.

    Now, Pisin Chen from the National Taiwan University and Gerard Mourou from École Polytechnique in France have described a slightly easier method.

    They suggest that a high-tech ‘accelerating mirror’ should provide the same opportunity to separate entangled particles.

    That sounds strange, but as a pair of particles zips into existence in this hypothetical experiment, one would reflect from the accelerating mirror as the other became trapped at the boundary. Just as it might happen in a black hole.

    Once the mirror stopped moving, the ‘trapped’ photon would be freed, just as the energy would be released from a dying black hole.

    Chen’s and Mourou’s mirror would be made by pulsing an X-ray laser through a cloud of ionised gas in a plasma wakefield accelerator. The pulse would leave a trail of negatively charged electrons, which would serve nicely as a mirror.

    By altering the density of the plasma on a small enough scale, the ‘mirror’ would accelerate away from the laser pulse.

    As clever as the concept is, the experiment is still in its ‘thought bubble’ stage. Even with established methods and trusted equipment, entanglement is tricky business to measure.

    And Hawking radiation itself has yet to be observed as an actual thing.

    Yet Chen’s and Mourou’s model could feasibly be built using existing technology, and as the researchers point out in their paper, could also serve to test other hypotheses on the physics of black holes.

    It sounds far more appealing than waiting until the end of time in front of a black hole, at least.

    This research was published in Physical Review Letters.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

     