Tagged: Quantum entanglement

  • richardmitnick 5:33 pm on November 7, 2017
    Tags: A way to link a group of atoms’ quantum mechanical properties among themselves far more quickly than is currently possible, potentially providing a tool for highly precise sensing and quantum computer applications, Dipolar interaction, Getting the atoms into an entangled state more quickly would be a potential advantage in any practical application not least because entanglement can be fleeting, Need Entangled Atoms? Get 'Em FAST! With NIST’s New Patent-Pending Method, Quantum entanglement, Uncertainty is the key here

    From NIST: “Need Entangled Atoms? Get ‘Em FAST! With NIST’s New Patent-Pending Method” 


    NIST

    November 07, 2017

    Chad Boutin
    boutin@nist.gov
    (301) 975-4261

    While quantum entanglement usually spreads through the atoms in an optical lattice via short-range interactions with the atoms’ immediate neighbors (left), new theoretical research shows that taking advantage of long-range dipolar interactions among the atoms could enable it to spread more quickly (right), a potential advantage for quantum computing and sensing applications.
    Credit: Gorshkov and Hanacek/NIST

    Physicists at the National Institute of Standards and Technology (NIST) have come up with a way to link a group of atoms’ quantum mechanical properties among themselves far more quickly than is currently possible, potentially providing a tool for highly precise sensing and quantum computer applications. NIST has applied for a patent on the method, which is detailed in a new paper in Physical Review Letters.

    The method, which has not yet been demonstrated experimentally, essentially would speed up the process of quantum entanglement in which the properties of multiple particles become interconnected with one another. Entanglement would propagate through a group of atoms in dramatically less time, allowing scientists to build an entangled system exponentially faster than is common today.

    Arrays of entangled atoms suspended in laser light beams, known as optical lattices, are one approach to creating the logic centers of prototype quantum computers, but an entangled state is difficult to maintain more than briefly. Applying the method to these arrays could give scientists precious time to do more with the atoms before entanglement is lost in a process known as decoherence.

    The method takes advantage of a physical relationship among the atoms called dipolar interaction, which allows atoms to influence each other over greater distances than previously possible. The research team’s Alexey Gorshkov compares it to sharing tennis balls among a group of people. While previous methods essentially allowed people to pass tennis balls only to a person standing next to them, the new approach would allow an individual to toss them to people across the room.

    “It is these long-range dipolar interactions in 3-D that enable you to create entanglement much faster than in systems with short-range interactions,” said Gorshkov, a theoretical physicist at NIST and at both the Joint Center for Quantum Information and Computer Science and the Joint Quantum Institute, which are collaborations between NIST and the University of Maryland. “Obviously, if you can throw stuff directly at people who are far away, you can spread the objects faster.”
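    Gorshkov’s tennis-ball picture can be made concrete with a toy counting model. The sketch below (an illustration of the scaling argument only, not the NIST protocol itself) compares two growth rules: if the entangled cluster can only recruit one neighboring atom per step, entangling N atoms takes on the order of N steps, whereas if every already-entangled atom can recruit a distant partner in each step, the cluster doubles and covers N atoms in roughly log2(N) steps.

    ```python
    # Toy scaling model (illustrative only, not the NIST scheme):
    # short-range interactions grow the entangled cluster by one atom per
    # step; long-range interactions let the whole cluster double each step.
    import math

    def steps_short_range(n):
        return n - 1                    # one new atom entangled per step

    def steps_long_range(n):
        return math.ceil(math.log2(n))  # entangled cluster doubles each step

    for n in (8, 64, 1024):
        print(f"N={n:5d}: short-range {steps_short_range(n):4d} steps, "
              f"long-range {steps_long_range(n):2d} steps")
    ```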

    Applying the technique would center on adjusting the timing of laser light pulses, turning the lasers on and off in particular patterns and rhythms to quickly switch the suspended atoms into a coherent entangled system.

    The approach also could find application in sensors, which might exploit entanglement to achieve far greater sensitivity than classical systems can. While entanglement-enhanced quantum sensing is a young field, it might allow for high-resolution scanning of tiny objects, such as distinguishing slight temperature differences among parts of an individual living cell or performing magnetic imaging of its interior.

    Gorshkov said the method builds on two studies from the 1990s in which different NIST researchers considered the possibility of using a large number of tiny objects—such as a group of atoms—as sensors. Atoms could measure the properties of a nearby magnetic field, for example, because the field would change their electrons’ energy levels. These earlier efforts showed that the uncertainty in these measurements would be advantageously lower if the atoms were all entangled, rather than merely a bunch of independent objects that happened to be near one another.

    “Uncertainty is the key here,” said Gorshkov. “You want that uncertainty as low as possible. If the atoms are entangled, you have less uncertainty about that magnetic field’s magnitude.”
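    The advantage Gorshkov describes has a standard quantitative form in the metrology literature: with N independent atoms the measurement uncertainty typically falls as 1/sqrt(N) (the standard quantum limit), while an ideally entangled ensemble can in principle reach 1/N (the Heisenberg limit). A minimal numeric comparison, assuming those textbook scalings:

    ```python
    # Uncertainty versus atom number N, assuming the textbook scalings:
    # 1/sqrt(N) for independent atoms (standard quantum limit) and
    # 1/N for an ideal maximally entangled state (Heisenberg limit).
    import math

    for n in (10, 100, 1000):
        sql = 1 / math.sqrt(n)
        heisenberg = 1 / n
        print(f"N={n:4d}: SQL {sql:.4f}, Heisenberg {heisenberg:.4f}, "
              f"entanglement gain {sql / heisenberg:.1f}x")
    ```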

    Getting the atoms into an entangled state more quickly would be a potential advantage in any practical application, not least because entanglement can be fleeting.

    When a group of atoms is entangled, the quantum state of each one is bound up with the others so that the entire system possesses a single quantum state. This connection can exist even if the atoms are separated and completely isolated from one another (giving rise to Einstein’s famous description of it as “spooky action at a distance”), but entanglement is also quite a fragile condition. The difficulty of maintaining it among large numbers of atoms has slowed the development of entanglement-based technologies such as quantum computers.

    “Entangled states tend to decohere and go back to being a bunch of ordinary independent atoms,” Gorshkov said. “People knew how to create entanglement, but we looked for a way to do it faster.”

    If the method can be experimentally demonstrated, it could give a quantum computer’s processor additional time so it can outpace decoherence, which threatens to make a computation fall apart before the qubits can finish their work. It would also reduce the uncertainty if used in sensing applications.

    “We think this is a practical way to increase the speed of entanglement,” Gorshkov said. “It was cool enough to patent, so we hope it proves commercially useful to someone.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    NIST Campus, Gaithersburg, MD, USA

    NIST Mission, Vision, Core Competencies, and Core Values

    NIST’s mission

    To promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.
    NIST’s vision

    NIST will be the world’s leader in creating critical measurement solutions and promoting equitable standards. Our efforts stimulate innovation, foster industrial competitiveness, and improve the quality of life.
    NIST’s core competencies

    Measurement science
    Rigorous traceability
    Development and use of standards

    NIST’s core values

    NIST is an organization with strong values, reflected both in our history and our current work. NIST leadership and staff will uphold these values to ensure a high performing environment that is safe and respectful of all.

    Perseverance: We take the long view, planning the future with scientific knowledge and imagination to ensure continued impact and relevance for our stakeholders.
    Integrity: We are ethical, honest, independent, and provide an objective perspective.
    Inclusivity: We work collaboratively to harness the diversity of people and ideas, both inside and outside of NIST, to attain the best solutions to multidisciplinary challenges.
    Excellence: We apply rigor and critical thinking to achieve world-class results and continuous improvement in everything we do.

     
  • richardmitnick 7:29 am on October 9, 2017
    Tags: Back in 1905 Einstein had helped pioneer quantum theory with his revolutionary discovery that light has the characteristics of both a wave and a particle, Bohr found a flaw in Einstein’s logic, But does this also mean “spooky action at a distance” is real?, Einstein and Bohr continued to debate the issue for the rest of their lives, Einstein was the first to publicly support de Broglie’s bold hypothesis, Einstein-Podolsky-Rosen paradox, Einstein: “God does not play dice with the universe”, Einstein’s hopes of finding hidden variables that would take the uncertainty out of quantum theory were dashed, Erwin Schrödinger, Fifth Solvay Congress in Brussels, Instant action violated Einstein’s theory of relativity: nothing can travel faster than the speed of light, Louis de Broglie, Quantum entanglement, The Copenhagen theory held that subatomic particles were ruled by chance

    From COSMOS: “Einstein, Bohr and the origins of entanglement” (a wonderful article on this debate between Einstein and Bohr)

    COSMOS Magazine

    06 October 2017
    Robyn Arianrhod

    Two of history’s greatest physicists argued for decades over one of the deepest mysteries of quantum mechanics. Today, their successors are opening new fronts in the battle to understand ‘spooky action at a distance’.

    Niels Bohr and Albert Einstein at the Fifth Solvay Congress. American Institute Of Physics / Getty Images

    It all began in October 1927, at the Fifth Solvay Congress in Brussels. It was Louis de Broglie’s first congress, and he had been “full of pleasure and curiosity” at the prospect of meeting Einstein, his teenage idol. Now 35, de Broglie happily reported: “I was particularly struck by his mild and thoughtful expression, by his general kindness, by his simplicity, and by his friendliness.”

    Back in 1905, Einstein had helped pioneer quantum theory with his revolutionary discovery that light has the characteristics of both a wave and a particle. Niels Bohr later explained this as “complementarity”: depending on how you observe light, you will see either wave or particle behaviour. As for de Broglie, he had taken Einstein’s idea into even stranger territory in his 1924 PhD thesis: if light waves could behave like particles, then perhaps particles of matter could also behave like waves! After all, Einstein had shown that energy and matter were interchangeable, via E = mc².

    Einstein was the first to publicly support de Broglie’s bold hypothesis. By 1926, Erwin Schrödinger had developed a mathematical formula to describe such “matter waves”, which he pictured as some kind of rippling sea of smeared-out particles. But Max Born showed that Schrödinger’s waves are, in effect, “waves of probability”. They encode the statistical likelihood that a particle will show up at a given place and time based on the behaviour of many such particles in repeated experiments. When the particle is observed, something strange appears to happen. The wave-function “collapses” to a single point, allowing us to see the particle at a particular position.

    Born’s probability wave also fitted neatly with Werner Heisenberg’s recently proposed “uncertainty principle”. Heisenberg had concluded that in the quantum world it is not possible to obtain exact information about both the position and the momentum of a particle at the same time. He imagined the very act of measuring a quantum particle’s position, say by shining a light on it, gave it a jolt that changed its momentum, so the two could never be precisely measured at once.
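    In modern notation, the principle bounds the product of the spreads in position and momentum:

    ```latex
    \Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
    ```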

    When the world’s leading physicists gathered in Brussels in 1927, this was the strange state of quantum physics.

    The official photograph of the participants shows 28 besuited, sober-looking men, and one equally serious woman, Marie Curie. But fellow physicist Paul Ehrenfest’s private photo of intellectual adversaries Bohr and Einstein captures the spirit of the conference: Bohr looks intensely thoughtful, hand on his chin, while Einstein is leaning back looking relaxed and dreamy. This gentle, contemplative picture belies the depth of the famous clash between these two intellectual titans – a clash that hinged on the extraordinary concept of quantum entanglement.

    At the congress, Bohr presented his view of quantum mechanics for the first time. Dubbed the Copenhagen interpretation, in honour of Bohr’s home city, it combined his own idea of particle-wave complementarity with Born’s probability waves and Heisenberg’s uncertainty principle.

    Most of the attendees readily accepted this view, but Einstein was perturbed. It was one thing for groups of particles to be ruled by chance; indeed Einstein had explained the jittery motion of pollen in apparently still water (dubbed Brownian motion) by invoking the random group behaviour of water molecules. Individual molecules, though, would still be ruled by Newton’s laws of motion; their exact movements could in principle be calculated.

    By contrast, the Copenhagen theory held that subatomic particles were ruled by chance.

    Einstein began his attack in the time-honoured tradition of reductio ad absurdum – arguing that the logical extension of quantum theory would lead to an absurd outcome.

    After several sleepless nights, Bohr found a flaw in Einstein’s logic. Einstein did not retreat: he was sure he could convince Bohr of the absurdity of this strange new theory. Their debate flowed over into the Sixth Solvay Congress in 1930, and on until Einstein felt he finally had the pieces in place to checkmate Bohr at the seventh congress in 1933. Two weeks before that, however, Nazi persecution forced Einstein to flee to the United States. The planned checkmate would have to wait.

    When it came, it was deceptively simple. In 1935 at Princeton, Einstein and two collaborators, Boris Podolsky and Nathan Rosen, published what became known as the Einstein-Podolsky-Rosen paradox [Physical Review Journals Archive], or EPR for short. Podolsky wrote up the thought experiment in a mathematical form, but let me illustrate it with jellybeans.

    Suppose you have a red and a green jellybean in a box. The box seals off the jellybeans from all others: technically speaking, the pair form an “isolated system”, and they are “entangled” in the sense that the colour of one jellybean gives information about the other. You can see this by asking a friend to close her eyes and pick a jellybean at random. If she picks red, you know the remaining sweet is green.

    This is key to EPR: by knowing the colour of your friend’s jellybean, you can know the colour of your own without “disturbing” it by looking at it. But in trying to bypass the supposed observer-effect in this way, EPR had also inadvertently uncovered the strange idea of “entanglement”. The term was coined by Schrödinger after he read the EPR paper.

    So now apply this technique to two electrons. Instead of a colour, each one has an intrinsic property called “spin”. Imagine something like the spin axis of a gyroscope. If two electrons are prepared together in the lab so that they have zero total spin, then the principle of conservation of angular momentum means that if one of the electrons has its spin axis up, the other electron’s axis must be down. The electrons are entangled, just as the jellybeans were.

    With jellybeans, the colour of your friend’s chosen sweet is fixed, whether or not she actually observes it. With electrons, by contrast, until your friend makes her observation, quantum theory simply says there is a 50% chance its spin is up, and 50% it is down.
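    The same-axis statistics just described are easy to simulate (a sketch of this one case only): each observer alone sees a fair coin flip, yet the two results are always opposite. Note that this correlation on its own is exactly what a hidden-variable story could also reproduce; the genuine conflict, taken up below and in Bell’s later work, appears only when the two sides measure along different axes.

    ```python
    # Singlet statistics along a shared measurement axis: each side alone
    # looks like a fair coin, but the pair is perfectly anti-correlated.
    import random

    def measure_pair():
        a = random.choice(("up", "down"))   # friend's outcome: 50/50
        b = "down" if a == "up" else "up"   # yours is always opposite
        return a, b

    trials = [measure_pair() for _ in range(10_000)]
    frac_up = sum(a == "up" for a, _ in trials) / len(trials)
    print(f"friend saw 'up' {frac_up:.1%} of the time; "
          f"always opposite: {all(a != b for a, b in trials)}")
    ```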

    The EPR attempt to strike at the heart of quantum theory now goes like this. Perhaps the spin of your friend’s electron was in fact determined before she picked it out. However, like a watermark that can’t be detected until a special light is shone on it, the spin state is only revealed when she looks at it. Quantum spin, then, involves a “hidden variable”, yet to be described by quantum theory. Alternatively, if quantum mechanics is correct and complete, then the theory defies common sense – because as soon as your friend checks the spin of her electron, your electron appears to respond instantly, because if hers is “up” then yours will be “down”.

    This is because the correlation between the two spins was built into the experiment when the electrons were first entangled, just as putting the two jellybeans in a box ensures the colour of your jellybean will be “opposite” that of your friend’s. The implications are profound. Even if your friend moved to the other side of the galaxy, your electron would “know” that it must manifest the opposite spin in the instant she makes her observation.

    Of course, instant action violated Einstein’s theory of relativity: nothing can travel faster than the speed of light. Hence Einstein dubbed this absurd proposition “spooky action at a distance”.

    But there was more. Spin is not the only property your friend could have chosen to observe. What EPR showed, then, is that the physical nature of your electron seems to have no identity of its own. Rather, it depends on how your friend chooses to observe her electron. As Einstein put it: “Do you really believe the Moon is there only when you look at it?” The EPR paper concluded: “No reasonable definition of reality could be expected to permit this.” Ergo, the authors believed, quantum theory had some serious problems.

    Bohr was stumped by EPR. He ditched the idea that the act of measurement jolted the state of the particle. (Indeed, later experiments would show that uncertainty is not solely the result of an interfering observer; it is an inherent characteristic of particles.)

    But he did not abandon the uncertainty at the heart of quantum mechanics. Instead of trying to wrestle with the real-world implications, he concluded [Physical Review Journals Archive] that we can only speak of what we observe – at the beginning of the experiment, and at the end, when your friend’s electron is definitely “up”, say. We cannot speak about what happens in between.

    Einstein and Bohr continued to debate the issue for the rest of their lives. What they really disagreed about was the nature of reality. Bohr believed that nature was fundamentally random. Einstein did not. “God does not play dice with the universe,” he declared.

    Nevertheless, Einstein knew that quantum theory accurately described the results of real as opposed to thought experiments. So most physicists considered that Bohr had won. They focused on applying quantum theory, and questions about the EPR paradox and entanglement became a niche interest.

    In 1950, Chien-Shiung Wu and Irving Shaknov [Physical Review Journals Archive] found oddly linked behaviour in pairs of photons. They didn’t know it at the time but it was the first real-world observation of quantum entanglement.


    Later, David Bohm realised [Physical Review Journals Archive] Wu and Shaknov’s discovery was an opportunity to take entanglement out of the realm of thought experiments and into the lab. Following Bohm, in 1964 John Bell translated the two EPR alternatives into a mathematical relationship that could be tested. But it was left to other experimenters – most famously Alain Aspect in 1981 [Physical Review Letters] – to carry out the tests.

    Einstein’s hopes of finding hidden variables that would take the uncertainty out of quantum theory were dashed. There seemed no escaping the bizarre consequences of EPR and the reality of entanglement.

    But does this also mean “spooky action at a distance” is real? Entanglement in electrons has been demonstrated at distances of a kilometre or two. But so far that’s too short a distance to know if faster-than-light interactions between them were involved. Things may soon become clearer: at the time of writing, Chinese scientists have just announced the successful transmission of entangled photons [Science] from an orbiting satellite over distances of more than 1,200 km.

    On the other hand, some physicists have recently taken up Einstein’s side of the argument. For instance, in 2016 Bengt Nordén, of Chalmers University in Sweden, published a paper [Cambridge Quarterly Reviews of Biophysics] entitled, Quantum entanglement: facts and fiction – how wrong was Einstein after all? Against Bohr’s better judgement, such physicists are once again asking about the meaning of reality, and wondering what is causing the weird phenomenon of entanglement.

    Some even suggest that something like a “wormhole” – a tunnel in spacetime between two widely separated black holes, a consequence of general relativity theory first deduced by Einstein and Rosen – may be the mechanism underlying entanglement. The mythical faster-than-light tachyon is another possible contender.

    But nearly everyone agrees that whatever is going on between entangled particles, experimenters can only communicate their observations of entangled particles at light speed or less.

    Entanglement is no longer a philosophical curio: not only are physicists using it to encrypt information and relying on it to underpin the design of tomorrow’s quantum computers, they are once again grappling with the hard questions about the nature of reality that entanglement raises.

    Ninety years after the Fifth Solvay Congress, Einstein’s thought experiments continue to drive science onwards.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 8:27 am on September 14, 2017
    Tags: Brian Greene, Cosmology: origins of the universe, Quantum entanglement, Superstring theory, Unified theory of physics

    From Harvard Gazette: “A master of explaining the universe” 

    Harvard University

    September 13, 2017
    Colleen Walsh

    Brian Greene ’84, a Columbia University theoretical physicist and mathematician, has made it his mission to illuminate the wonders of the universe for non-scientists. Photo by Greg Kessler/World Science Festival

    Harvard Overseer and Columbia physicist Brian Greene seeks wider audience for the wonders of science.

    He is the founder of the World Science Festival, the author of numerous best-selling books, including the Pulitzer Prize finalist “The Elegant Universe,” and an expert at explaining knotty concepts. Now he’s back at Harvard. On Sept. 19, Brian Greene ’84, Harvard Overseer and Columbia University theoretical physicist and mathematician, will explore shifting ideas of space, time, and reality in a talk at the Radcliffe Institute for Advanced Study. The Gazette caught up with Greene to ask him about his years at Harvard, his passion for science, and how he defines superstring theory in a tweet.

    GAZETTE: Where did your initial interest in math and physics come from?

    GREENE: When I was a kid growing up in Manhattan I was deeply fascinated with mathematics, and at a young age my dad taught me the basics of arithmetic. I was captivated from then on by the ability to use a few simple rules to undertake calculations that no one had ever done before. Now, most of these calculations weren’t ever done because they weren’t interesting, but for a kid to be able to do something new is deeply thrilling. Later on, when I learned in high school and most forcefully when I got to college at Harvard that math isn’t merely a game but it’s something that can help you understand what happens out there in the real universe, then I was kind of hooked for life.

    GAZETTE: Were there any classes or professors that had a big impact on you at Harvard?

    GREENE: Oh, huge. Howard Georgi was my freshman physics professor, and he had a deep impact on my love of the subject. There’s now a mathematician who wasn’t at Harvard when I was an undergrad but whom I worked with extensively as a graduate student and then he moved to Harvard, Shing-Tung Yau in the Mathematics Department. He had a deep impact on me. The Harvard faculty had quite a formative impact on me across the years.

    GAZETTE: I know you are famous for being able to explain awesome scientific concepts. In the age of social media, can you define superstring theory in a tweet?

    GREENE: Superstring theory is our best attempt to realize Einstein’s dream of the unified theory. #unification.

    GAZETTE: So break that down for me, and this doesn’t have to be in a tweet format. What is the unified theory of physics and why is it so important?

    GREENE: Einstein envisioned that there might be a master law of physics, perhaps captured by a single mathematical equation that would be so powerful that in principle it could describe every physical process in the universe — the big stuff, the small stuff, and everything in between. And he believed it so deeply that he pursued it relentlessly for the last 30 years of his life. On various occasions Einstein announced that he had the unified theory, always, however, having to retract the claim sometime later when he realized that his latest proposal didn’t quite work. In the end it was a very frustrating experience for him. And when he died, that dream of unification died with him. But about 10 or 15 years later some scientists stumbled upon a new approach — this approach called superstring theory — and over the course of decades realized that this may in fact be the unified theory that Einstein was looking for. And that’s what we have been developing ever since.

    GAZETTE: What has been the main focus of your work for the last several years?

    GREENE: I have been working on issues of cosmology, origins of the universe. I’ve been working on the possibility of a multiverse — that we might live in a reality that comprises more than one universe. I’ve been working on some strange features of quantum mechanics called quantum entanglement, where distant objects can somehow act as though they are sitting right next to each other. Again this is a discovery that sort of goes back to Einstein himself, so things in that domain have been my main focus of late.

    GAZETTE: Tell me more about multiple universes.

    GREENE: Well, it’s a curious idea because for most people the word universe means everything: all that there is. But developments over the past couple of decades have convinced many of us that there is at least a possibility that what we have long thought to be everything is actually perhaps just a small part of a much bigger reality. And that bigger reality might have other realms that would rightly be called universes of their own, and if that’s the case then the grand picture of reality involves a whole collection of universes, and that’s why we no longer use the word universe to describe all there is … we speak of “multi” — there are multiverses because of this multiplicity of universes.

    GAZETTE: Is there current or future research that you could see really changing the nature of how we see the universe?

    GREENE: My own feeling, and it’s shared by colleagues, is that the next breakthrough will come when we deeply understand the fundamental ingredients of space and time themselves. And this is an open question. Just like matter is made up of atoms and molecules, could it be that space and time are themselves made up of more fundamental constituents? In fact, this is what I will be talking about at Radcliffe, recent work that at least hints at an answer to what the ingredients of space and time might actually be.

    GAZETTE: What has inspired you to work to make science understandable?

    GREENE: My view of science is not that it’s merely an effort to unearth the basic laws of physics, but I view it more as a very human undertaking to see how we fit into the grand scheme of things and to answer the questions that have been asked since the time we could ask questions: Where did we come from? What are we made of? How did the universe come to be? What is time? What will happen in the distant future? All these questions I think speak deeply to who we are as a species, and for the vast majority of people to be cut off from the most up-to-date thinking on these deep questions because they don’t speak mathematics, they don’t have a graduate degree in physics, I think that’s tragic. So for decades now I’ve felt that part of my charge is to bring these ideas to a wider audience, to make them available to anyone who has a curiosity and a little bit of stick-to-itiveness to push through some deep, difficult, but ultimately gratifying ideas.

    GAZETTE: If you weren’t a physicist what would you be?

    GREENE: Well, if I was starting out today I think I would probably go into neuroscience. I like to think of the big questions. Where did the universe come from? Where did life come from? And where does mind come from? And for those I think the time is really ripe to understand the nature of intelligence and thought. I think there are going to be great, great breakthroughs in that area in the next couple of decades.

    GAZETTE: Favorite physicist?

    GREENE: There’s nobody who compares with Isaac Newton in terms of the leap that he pushed humanity through from the way we understood the world before he began to think about it until after he existed.

    GAZETTE: What is your take on Voyager?

    GREENE: The “Star Trek” version or the real version?

    GAZETTE: The real version.

    GREENE: I think it’s a great symbol of who we are as a species. We are explorers. We are deeply committed to understanding the universe, and to envision these little spacecraft that have left the solar system and they are floating out there in the great unknown as harbingers, if you will, of human life back on the planet is a deeply moving picture and one that really captures who we are.

    See the full article here .

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Harvard University campus
    Harvard is the oldest institution of higher education in the United States, established in 1636 by vote of the Great and General Court of the Massachusetts Bay Colony. It was named after the College’s first benefactor, the young minister John Harvard of Charlestown, who upon his death in 1638 left his library and half his estate to the institution. A statue of John Harvard stands today in front of University Hall in Harvard Yard, and is perhaps the University’s best known landmark.

    Harvard University has 12 degree-granting Schools in addition to the Radcliffe Institute for Advanced Study. The University has grown from nine students with a single master to an enrollment of more than 20,000 degree candidates including undergraduate, graduate, and professional students. There are more than 360,000 living alumni in the U.S. and over 190 other countries.

     
  • richardmitnick 9:21 am on August 28, 2017
    Tags: Chinese Physicists Just Achieved Quantum Teleportation Underwater For The First Time, Quantum entanglement

    From Science Alert: “Physicists Just Achieved Quantum Teleportation Underwater For The First Time” 

    Science Alert

    28 AUG 2017
    FIONA MACDONALD

    sakkmesterke/Shutterstock.com

    Chinese scientists have successfully sent information between entangled particles through sea water, the first time this type of quantum communication has been achieved underwater.

    In this proof-of-concept experiment, information was sent across a 3.3-metre (10.8-foot) long tank of seawater, but the researchers predict they should be able to use the same technique to send unhackable communications close to 900 metres (0.55 miles) through open water.

    “People have talked about the idea of underwater quantum communication before, but I’m not aware of anyone who has done an experiment like this,” Thomas Jennewein from the University of Waterloo in Canada told Devin Powell over at New Scientist.

    “An obvious application would be a submarine which wants to remain submerged but communicate in a secure fashion.”

    This is a big deal, because quantum communication – also known as quantum teleportation – promises to allow people to send messages that are protected from prying eyes by the laws of physics. It’s the ultimate encryption.

    It’s based on the idea of quantum entanglement – that kooky phenomenon Einstein referred to as “spooky action at a distance”. Basically, quantum entanglement means that two particles become inextricably linked so that whatever happens to one will automatically affect the other, no matter how far apart they are.

    Through that mechanism, scientists have already ‘teleported’ information across vast distances through optical fibre and even open space.

    Earlier this year, a separate team of Chinese researchers were able to use quantum entanglement to teleport information to a satellite in Earth’s orbit across more than 500 km (311 miles).

    But up until now, no one had done the same thing in water, which is notorious for scattering anything we try to beam through it. Just think of shining a laser pointer into the air and into water.

    For this experiment, researchers from Shanghai Jiao Tong University took seawater from the Yellow Sea and set it up in a 3 metre tank in the lab.

    They then created a pair of entangled photons by shooting a beam of light through a crystal. Whatever the polarisation of one of the photons, its pair would automatically have the opposite polarisation.

    These particles were placed at opposite ends of the tank, and the team showed that despite being separated by metres of seawater, they could accurately communicate information between them more than 98 percent of the time.

    “Our results confirm the feasibility of a seawater quantum channel, representing the first step towards underwater quantum communication,” the researchers write in a journal of The Optical Society.

    It’s still early days: other teams need to replicate the result, and it remains to be seen whether the same thing can be done across greater distances, and in open seawater rather than in a tank.

    Based on the team’s calculations, they predict that it would be possible to achieve quantum communication through open water across a distance of 885 metres (0.55 miles) using photons in the blue-green window.

    But New Scientist reports that other groups have calculated a limit of underwater quantum communication of just 120 metres (0.07 miles).

    “Because ocean water absorbs light, extending this is going to be difficult,” Jeffrey Uhlmann, a physicist from the University of Missouri in Columbia, told Powell.
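    The pessimism comes from simple exponential attenuation (the Beer-Lambert law): the transmitted fraction is T = exp(-c*d) for attenuation coefficient c and distance d, so range estimates are extremely sensitive to the assumed water clarity. The coefficient below is an illustrative value for clear seawater in the blue-green window, not a figure from the paper:

    ```python
    # Beer-Lambert attenuation sketch: T = exp(-c * d). The coefficient is
    # an assumed illustrative value for clear seawater, not from the paper.
    import math

    c = 0.03  # attenuation coefficient in 1/m (illustrative)
    for d in (3.3, 120, 885):
        print(f"{d:7.1f} m: transmitted fraction ~ {math.exp(-c * d):.2e}")
    ```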

    How far we can stretch this underwater quantum communication remains to be seen, but now that researchers have shown it’s possible, it’s only a matter of time before the limits begin to be pushed.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:27 am on July 12, 2017
    Tags: Micius satellite, Quantum entanglement, Teleportation achieved

    From MIT Tech Review: “First Object Teleported from Earth to Orbit” 

    MIT Technology Review

    July 10, 2017
    No writer credit found

    Researchers in China have teleported a photon from the ground to a satellite orbiting more than 500 kilometers above.

    Last year, a Long March 2D rocket took off from the Jiuquan Satellite Launch Centre in the Gobi Desert carrying a satellite called Micius, named after an ancient Chinese philosopher who died in 391 B.C. The rocket placed Micius in a Sun-synchronous orbit so that it passes over the same point on Earth at the same time each day.

    Micius is a highly sensitive photon receiver that can detect the quantum states of single photons fired from the ground. That’s important because it should allow scientists to test the technological building blocks for various quantum feats such as entanglement, cryptography, and teleportation.

    Micius satellite. https://www.fusecrunch.com/chinas-first-quantum-satellite.html

    Today, the Micius team announced the results of its first experiments. The team created the first satellite-to-ground quantum network, in the process smashing the record for the longest distance over which entanglement has been measured. And they’ve used this quantum network to teleport the first object from the ground to orbit.

    Teleportation has become a standard operation in quantum optics labs around the world. The technique relies on the strange phenomenon of entanglement. This occurs when two quantum objects, such as photons, form at the same instant and point in space and so share the same existence. In technical terms, they are described by the same wave function.

    The curious thing about entanglement is that this shared existence continues even when the photons are separated by vast distances. So a measurement on one immediately influences the state of the other, regardless of the distance between them.

    Back in the 1990s, scientists realized they could use this link to transmit quantum information from one point in the universe to another. The idea is to “download” all the information associated with one photon in one place and transmit it over an entangled link to another photon in another place.

    This second photon then takes on the identity of the first. To all intents and purposes, it becomes the first photon. That’s the nature of teleportation and it has been performed many times in labs on Earth.
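    For readers who want to see the mechanics, here is a minimal statevector simulation of the textbook teleportation protocol (an illustration of the general idea, not the Micius experiment): the unknown state of qubit 0 is transferred to qubit 2 using a pre-shared entangled pair plus two classical bits.

    ```python
    # Minimal statevector sketch of textbook quantum teleportation:
    # the state of qubit 0 ends up on qubit 2 via a shared Bell pair
    # plus two classically communicated measurement bits.
    import numpy as np

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    def op(gate, target, n=3):
        """Lift a single-qubit gate onto qubit `target` of an n-qubit register."""
        out = np.array([[1.0 + 0j]])
        for q in range(n):
            out = np.kron(out, gate if q == target else I)
        return out

    def cnot(control, target, n=3):
        """Permutation matrix for a CNOT; qubit 0 is the most significant bit."""
        dim = 2 ** n
        out = np.zeros((dim, dim), dtype=complex)
        for i in range(dim):
            bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
            if bits[control]:
                bits[target] ^= 1
            j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
            out[j, i] = 1
        return out

    # State to teleport on qubit 0; qubits 1 and 2 start in |00>.
    alpha, beta = 0.6, 0.8j
    psi = np.kron(np.array([alpha, beta]), np.array([1, 0, 0, 0], dtype=complex))

    psi = cnot(1, 2) @ op(H, 1) @ psi   # entangle qubits 1 and 2 (Bell pair)
    psi = op(H, 0) @ cnot(0, 1) @ psi   # sender's Bell-basis rotation

    # Measure qubits 0 and 1 (sample one outcome branch)...
    rng = np.random.default_rng(0)
    probs = np.abs(psi) ** 2
    outcome = rng.choice(8, p=probs / probs.sum())
    m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
    keep = [(i >> 2) & 1 == m0 and (i >> 1) & 1 == m1 for i in range(8)]
    psi = np.where(keep, psi, 0)
    psi /= np.linalg.norm(psi)

    # ...then apply the classically communicated corrections to qubit 2.
    if m1:
        psi = op(X, 2) @ psi
    if m0:
        psi = op(Z, 2) @ psi

    base = 4 * m0 + 2 * m1
    print(np.round(psi[base:base + 2], 6))  # qubit 2 now holds ~[0.6, 0.8j]
    ```

    The two measured bits are the classical message; without them the receiving qubit is useless, which is why teleportation cannot carry information faster than light.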

    Teleportation is a building block for a wide range of technologies. “Long-distance teleportation has been recognized as a fundamental element in protocols such as large-scale quantum networks and distributed quantum computation,” says the Chinese team.

    In theory, there should be no maximum distance over which this can be done. But entanglement is a fragile thing because photons interact with matter in the atmosphere or inside optical fibers, causing the entanglement to be lost.

    As a result, the distance over which scientists have measured entanglement or performed teleportation is severely limited. “Previous teleportation experiments between distant locations were limited to a distance on the order of 100 kilometers, due to photon loss in optical fibers or terrestrial free-space channels,” says the team.

    But Micius changes all that because it orbits at an altitude of 500 kilometers, and for most of this distance, any photons making the journey travel through a vacuum. To minimize the amount of atmosphere in the way, the Chinese team set up its ground station in Ngari in Tibet at an altitude of over 4,000 meters. So the distance from the ground to the satellite varies from 1,400 kilometers when it is near the horizon to 500 kilometers when it is overhead.

    To perform the experiment, the Chinese team created entangled pairs of photons on the ground at a rate of about 4,000 per second. They then beamed one of these photons to the satellite, which passed overhead every day at midnight. They kept the other photon on the ground.

    Finally, they measured the photons on the ground and in orbit to confirm that entanglement was taking place, and that they were able to teleport photons in this way. Over 32 days, they sent millions of photons and found positive results in 911 cases. “We report the first quantum teleportation of independent single-photon qubits from a ground observatory to a low Earth orbit satellite—through an up-link channel—with a distance up to 1400 km,” says the Chinese team.

    This is the first time that any object has been teleported from Earth to orbit, and it smashes the record for the longest distance for entanglement.

    That’s impressive work that sets the stage for much more ambitious goals in the future. “This work establishes the first ground-to-satellite up-link for faithful and ultra-long-distance quantum teleportation, an essential step toward global-scale quantum internet,” says the team.

    It also shows China’s obvious dominance and lead in a field that, until recently, was led by Europe and the U.S.—Micius would surely have been impressed. But an important question now is how the West will respond.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

     
  • richardmitnick 1:27 pm on June 18, 2017
    Tags: China has taken the leadership in quantum communication, China Shatters 'Spooky Action at a Distance' Record, For now the system remains mostly a proof of concept, Global quantum communication is possible and will be achieved in the near future, Preps for Quantum Internet, Quantum entanglement

    From SA: “China Shatters ‘Spooky Action at a Distance’ Record, Preps for Quantum Internet” 

    Scientific American

    June 15, 2017
    Lee Billings

    Credit: Alfred Pasieka Getty Images

    In a landmark study, a team of Chinese scientists using an experimental satellite has tested quantum entanglement over unprecedented distances, beaming entangled pairs of photons to three ground stations across China—each separated by more than 1,200 kilometers. The test verifies a mysterious and long-held tenet of quantum theory, and firmly establishes China as the front-runner in a burgeoning “quantum space race” to create a secure, quantum-based global communications network—that is, a potentially unhackable “quantum internet” that would be of immense geopolitical importance. The findings were published Thursday in Science.

    “China has taken the leadership in quantum communication,” says Nicolas Gisin, a physicist at the University of Geneva who was not involved in the study. “This demonstrates that global quantum communication is possible and will be achieved in the near future.”

    The concept of quantum communications is considered the gold standard for security, in part because any compromising surveillance leaves its imprint on the transmission. Conventional encrypted messages require secret keys to decrypt, but those keys are vulnerable to eavesdropping as they are sent out into the ether. In quantum communications, however, these keys can be encoded in various quantum states of entangled photons—such as their polarization—and these states will be unavoidably altered if a message is intercepted by eavesdroppers. Ground-based quantum communications typically send entangled photon pairs via fiber-optic cables or open air. But collisions with ordinary atoms along the way disrupt the photons’ delicate quantum states, limiting transmission distances to a few hundred kilometers. Sophisticated devices called “quantum repeaters”—equipped with “quantum memory” modules—could in principle be daisy-chained together to receive, store and retransmit the quantum keys across longer distances, but this task is so complex and difficult that such systems remain largely theoretical.
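    The eavesdropping-leaves-an-imprint idea is easiest to see in the simplest prepare-and-measure protocol, BB84 (a toy model here; the entanglement-based schemes described in this article are more elaborate but rest on the same principle): an interceptor who guesses the wrong polarization basis disturbs the photon, and the resulting errors surface when the two parties compare a sample of their sifted key.

    ```python
    # Toy BB84 sketch: an intercept-and-resend eavesdropper who guesses the
    # wrong basis randomizes the photon, raising the sifted-key error rate.
    import random

    def bb84(n_photons, eavesdrop):
        kept_a, kept_b = [], []
        for _ in range(n_photons):
            bit = random.randint(0, 1)
            basis_a = random.choice("+x")
            state = (bit, basis_a)              # photon as (bit, basis)
            if eavesdrop:
                basis_e = random.choice("+x")
                if basis_e != state[1]:         # wrong basis randomizes the bit
                    state = (random.randint(0, 1), basis_e)
            basis_b = random.choice("+x")
            if basis_b == state[1]:
                measured = state[0]
            else:
                measured = random.randint(0, 1)
            if basis_a == basis_b:              # sifting: keep matching bases
                kept_a.append(bit)
                kept_b.append(measured)
        errors = sum(a != b for a, b in zip(kept_a, kept_b))
        return errors / max(len(kept_a), 1)

    print(f"error rate without eavesdropper: {bb84(20_000, False):.1%}")  # ~0%
    print(f"error rate with eavesdropper:    {bb84(20_000, True):.1%}")   # ~25%
    ```

    An error rate near zero means the channel was quiet; a rate approaching 25 percent is the classic signature of an intercept-and-resend attack.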

    “A quantum repeater has to receive photons from two different places, then store them in quantum memory, then interfere them directly with each other” before sending further signals along a network, says Paul Kwiat, a physicist at the University of Illinois in Urbana–Champaign who is unaffiliated with the Chinese team. “But in order to do all that, you have to know you’ve stored them without actually measuring them.” The situation, Kwiat says, is a bit like knowing what you have received in the mail without looking in your mailbox or opening the package inside. “You can shake the package—but that’s difficult to do if what you’re receiving is just photons. You want to make sure you’ve received them but you don’t want to absorb them. In principle it’s possible—no question—but it’s very hard to do.”

    To form a globe-girdling secure quantum communications network, then, the only available solution is to beam quantum keys through the vacuum of space then distribute them across tens to hundreds of kilometers using ground-based nodes. Launched into low Earth orbit in 2016 and named after an ancient Chinese philosopher, the 600-kilogram “Micius” satellite is China’s premier effort to do just that, and is only the first of a fleet the nation plans as part of its $100-million Quantum Experiments at Space Scale (QUESS) program.

    Micius carries in its heart an assemblage of crystals and lasers that generates entangled photon pairs then splits and transmits them on separate beams to ground stations in its line-of-sight on Earth. For the latest test, the three receiving stations were located in the cities of Delingha and Ürümqi—both on the Tibetan Plateau—as well as in the city of Lijiang in China’s far southwest. At 1,203 kilometers, the geographical distance between Delingha and Lijiang is the record-setting stretch over which the entangled photon pairs were transmitted.

    For now the system remains mostly a proof of concept, because the current reported data transmission rate between Micius and its receiving stations is too low to sustain practical quantum communications. Of the roughly six million entangled pairs that Micius’s crystalline core produced during each second of transmission, only about one pair per second reached the ground-based detectors after the beams weakened as they passed through Earth’s atmosphere and each receiving station’s light-gathering telescopes. Team leader Jian-Wei Pan—a physicist at the University of Science and Technology of China in Hefei who has pushed and planned for the experiment since 2003—compares the feat with detecting a single photon from a lone match struck by someone standing on the moon. Even so, he says, Micius’s transmission of entangled photon pairs is “a trillion times more efficient than using the best telecommunication fibers. … We have done something that was absolutely impossible without the satellite.” Within the next five years, Pan says, QUESS will launch more practical quantum communications satellites.
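    The quoted rates imply a striking link budget. A quick back-of-envelope check, using the standard benchmark of roughly 0.2 dB/km loss for good telecom fiber (the fiber figure is a general rule of thumb, not from the paper):

    ```python
    # Back-of-envelope link budget from the figures quoted above.
    import math

    generated, detected = 6e6, 1.0                  # entangled pairs per second
    loss_db = 10 * math.log10(generated / detected)
    print(f"satellite link loss ~ {loss_db:.0f} dB")            # ~68 dB

    fiber_db = 1200 * 0.2                           # ~0.2 dB/km telecom fiber
    print(f"1,200 km of fiber would lose ~ {fiber_db:.0f} dB")  # ~240 dB
    ```

    Losing 68 dB sounds brutal, but 1,200 km of fiber would cost roughly 240 dB, an overwhelmingly larger factor, which is the point of Pan’s comparison.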

    Although Pan and his team plan for Micius and its nascent network of sister satellites to eventually distribute quantum keys, their initial demonstration instead aimed to achieve a simpler task: proving Einstein wrong.

    Einstein famously derided as “spooky action at a distance” one of the most bizarre elements of quantum theory—the way that measuring one member of an entangled pair of particles seems to instantaneously change the state of its counterpart, even if that counterpart particle is on the other side of the galaxy. This was abhorrent to Einstein, because it suggests information might be transmitted between the particles faster than light, breaking the universal speed limit set by his theory of special relativity. Instead, he and others posited, perhaps the entangled particles somehow shared “hidden variables” that are inaccessible to experiment but would determine the particles’ subsequent behavior when measured. In 1964 the physicist John Bell devised a way to test Einstein’s idea, calculating a limit that physicists could statistically measure for how much hidden variables could possibly correlate with the behavior of entangled particles. If experiments showed this limit to be exceeded, then Einstein’s idea of hidden variables would be incorrect.

    Ever since the 1970s “Bell tests” by physicists across ever-larger swaths of spacetime have shown that Einstein was indeed mistaken, and that entangled particles do in fact surpass Bell’s strict limits. The most definitive test arguably occurred in the Netherlands in 2015, when a team at Delft University of Technology closed several potential “loopholes” that had plagued past experiments and offered slim-but-significant opportunities for the influence of hidden variables to slip through. That test, though, involved separating entangled particles by scarcely more than a kilometer. With Micius’s transmission of entangled photons between widely separated ground stations, Pan’s team has now performed a Bell test at distances a thousand times greater. Just as before, their results confirm that Einstein was wrong. The quantum realm remains a spooky place—although no one yet understands why.

    “Of course, no one who accepts quantum mechanics could possibly doubt that entanglement can be created over that distance—or over any distance—but it’s still nice to see it made concrete,” says Scott Aaronson, a physicist at The University of Texas at Austin. “Nothing we knew suggested this goal was unachievable. The significance of this news is not that it was unexpected or that it overturns anything previously believed, but simply that it’s a satisfying culmination of years of hard work.”

    That work largely began in the 1990s when Pan, leader of the Chinese team, was a graduate student in the lab of the physicist Anton Zeilinger at the University of Innsbruck in Austria. Zeilinger was Pan’s PhD adviser, and they collaborated closely to test and further develop ideas for quantum communication. Pan returned to China to start his own lab in 2001, and Zeilinger started one as well at the Austrian Academy of Sciences in Vienna. For the next seven years they would compete fiercely to break records for transmitting entangled photon pairs across ever-wider gaps, and in ever-more extreme conditions, in ground-based experiments. All the while each man lobbied his respective nation’s space agency to green-light a satellite that could be used to test the technique from space. But Zeilinger’s proposals perished in a bureaucratic swamp at the European Space Agency whereas Pan’s were quickly embraced by the China National Space Administration. Ultimately, Zeilinger chose to collaborate again with his old pupil rather than compete against him; today the Austrian Academy of Sciences is a partner in QUESS, and the project has plans to use Micius to perform an intercontinental quantum key distribution experiment between ground stations in Vienna and Beijing.

    “I am happy that the Micius works so well,” Zeilinger says. “But one has to realize that it is a missed opportunity for Europe and others, too.”

    For years now, other researchers and institutions have been scrambling to catch up, pushing governments for more funding for further experiments on the ground and in space—and many of them see Micius’s success as the catalytic event they have been waiting for. “This is a major milestone, because if we are ever to have a quantum internet in the future, we will need to send entanglement over these sorts of long distances,” says Thomas Jennewein, a physicist at the University of Waterloo in Canada who was not involved with the study. “This research is groundbreaking for all of us in the community—everyone can point to it and say, ‘see, it does work!’”

    Jennewein and his collaborators are pursuing a space-based approach from the ground up, partnering with the Canadian Space Agency to plan a smaller, simpler satellite that could launch as soon as five years from now to act as a “universal receiver” and redistribute entangled photons beamed up from ground stations. At the National University of Singapore, an international collaboration led by the physicist Alexander Ling has already launched cheap shoe box–size CubeSats to create, study and perhaps even transmit photon pairs that are “correlated”—a situation just shy of full entanglement. And in the U.S., Kwiat at the University of Illinois is using NASA funding to develop a device that could someday test quantum communications using “hyperentanglement” (the simultaneous entanglement of photon pairs in multiple ways) onboard the International Space Station.

    Perhaps most significantly, a team led by Gerd Leuchs and Christoph Marquardt at the Max Planck Institute for the Science of Light in Germany is developing quantum communications protocols for commercially available laser systems already in space onboard the European Copernicus and SpaceDataHighway satellites. Using one of these systems, the team successfully encoded and sent simple quantum states to ground stations using photons beamed from a satellite in geostationary orbit, some 38,000 kilometers above Earth. This approach, Marquardt explains, does not rely on entanglement and is very different from that of QUESS—but it could, with minimal upgrades, nonetheless be used to distribute quantum keys for secure communications in as little as five years. Their results appear in Optica.

    “Our purpose is really to find a shortcut into making things like quantum key distribution with satellites economically viable and employable, pretty fast and soon,” Marquardt says. “[Engineers] invested 20 years of hard work making these systems, so it’s easier to upgrade them than to design everything from scratch. … It is a very good advantage if you can rely on something that is already qualified in space, because space qualification is very complicated. It usually takes five to 10 years just to develop that.”

    Marquardt and others suspect, however, that this field could be much further advanced than has been publicly acknowledged, with developments possibly hidden behind veils of official secrecy in the U.S. and elsewhere. It may be that the era of quantum communication is already upon us. “Some colleague of mine made the joke, ‘the silence of the U.S. is very loud,’” Marquardt says. “They had some very good groups concerning free-space satellites and quantum key distribution at Los Alamos [National Laboratory] and other places, and suddenly they stopped publishing. So we always say there are two reasons that they stopped publishing: either it didn’t work, or it worked really well!”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 11:07 am on June 11, 2017
    Tags: Bell test, Cosmic Bell test, Experiment Reaffirms Quantum Weirdness, John Bell, Quantum entanglement, Superdeterminism

    From Quanta: “Experiment Reaffirms Quantum Weirdness” 

    Quanta Magazine

    February 7, 2017 [I wonder where this was hiding. It just appeared today in social media.]
    Natalie Wolchover

    Physicists are closing the door on an intriguing loophole around the quantum phenomenon Einstein called “spooky action at a distance.”

    Olena Shmahalo/Quanta Magazine

    There might be no getting around what Albert Einstein called “spooky action at a distance.” With an experiment described today in Physical Review Letters — a feat that involved harnessing starlight to control measurements of particles shot between buildings in Vienna — some of the world’s leading cosmologists and quantum physicists are closing the door on an intriguing alternative to “quantum entanglement.”

    “Technically, this experiment is truly impressive,” said Nicolas Gisin, a quantum physicist at the University of Geneva who has studied this loophole around entanglement.

    According to standard quantum theory, particles have no definite states, only relative probabilities of being one thing or another — at least, until they are measured, when they seem to suddenly roll the dice and jump into formation. Stranger still, when two particles interact, they can become “entangled,” shedding their individual probabilities and becoming components of a more complicated probability function that describes both particles together. This function might specify that two entangled photons are polarized in perpendicular directions, with some probability that photon A is vertically polarized and photon B is horizontally polarized, and some chance of the opposite. The two photons can travel light-years apart, but they remain linked: Measure photon A to be vertically polarized, and photon B instantaneously becomes horizontally polarized, even though B’s state was unspecified a moment earlier and no signal has had time to travel between them. This is the “spooky action” that Einstein was famously skeptical about in his arguments against the completeness of quantum mechanics in the 1930s and ’40s.

    In 1964, the Northern Irish physicist John Bell found a way to put this paradoxical notion to the test. He showed that if particles have definite states even when no one is looking (a concept known as “realism”) and if indeed no signal travels faster than light (“locality”), then there is an upper limit to the amount of correlation that can be observed between the measured states of two particles. But experiments have shown time and again that entangled particles are more correlated than Bell’s upper limit, favoring the radical quantum worldview over local realism.
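
    Bell’s limit is usually stated in the CHSH form. As a hedged illustration – one standard version of the inequality, not necessarily the exact form used in the experiments described here – with E(a,b) denoting the correlation between outcomes at detector settings a and b:

    S \equiv E(a,b) - E(a,b') + E(a',b) + E(a',b') \le 2 \quad \text{(local realism)}, \qquad |S|_{\mathrm{QM}} \le 2\sqrt{2} \approx 2.83.

    The quantum maximum 2\sqrt{2} (Tsirelson’s bound) is the sense in which entangled particles can be “more correlated than Bell’s upper limit.”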

    Only there’s a hitch: In addition to locality and realism, Bell made another, subtle assumption to derive his formula — one that went largely ignored for decades. “The three assumptions that go into Bell’s theorem that are relevant are locality, realism and freedom,” said Andrew Friedman of the Massachusetts Institute of Technology, a co-author of the new paper. “Recently it’s been discovered that you can keep locality and realism by giving up just a little bit of freedom.” This is known as the “freedom-of-choice” loophole.

    In a Bell test, entangled photons A and B are separated and sent to far-apart optical modulators — devices that either block photons or let them through to detectors, depending on whether the modulators are aligned with or against the photons’ polarization directions. Bell’s inequality puts an upper limit on how often, in a local-realistic universe, photons A and B will both pass through their modulators and be detected. (Researchers find that entangled photons are correlated more often than this, violating the limit.) Crucially, Bell’s formula assumes that the two modulators’ settings are independent of the states of the particles being tested. In experiments, researchers typically use random-number generators to set the devices’ angles of orientation. However, if the modulators are not actually independent — if nature somehow restricts the possible settings that can be chosen, correlating these settings with the states of the particles in the moments before an experiment occurs — this reduced freedom could explain the outcomes that are normally attributed to quantum entanglement.
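
    As a minimal numerical sketch – assuming ideal polarization-entangled photons, for which quantum mechanics predicts the correlation E(a,b) = cos 2(a − b), and one standard choice of CHSH angles – the quantum prediction exceeds the local-realist bound of 2:

    import math

    def E(a, b):
        # Ideal quantum correlation for polarization-entangled photons
        # (Phi+ state assumed), polarizers at angles a and b in radians.
        return math.cos(2 * (a - b))

    # A standard set of CHSH angles (an assumption; experiments calibrate these).
    a1, a2 = 0.0, math.pi / 4               # Alice: 0 and 45 degrees
    b1, b2 = math.pi / 8, 3 * math.pi / 8   # Bob: 22.5 and 67.5 degrees

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(f"S = {S:.3f} (local realism requires |S| <= 2)")   # ~ 2.828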

    The universe might be like a restaurant with 10 menu items, Friedman said. “You think you can order any of the 10, but then they tell you, ‘We’re out of chicken,’ and it turns out only five of the things are really on the menu. You still have the freedom to choose from the remaining five, but you were overcounting your degrees of freedom.” Similarly, he said, “there might be unknowns, constraints, boundary conditions, conservation laws that could end up limiting your choices in a very subtle way” when setting up an experiment, leading to seeming violations of local realism.

    This possible loophole gained traction in 2010, when Michael Hall, now of Griffith University in Australia, developed a quantitative way of reducing freedom of choice [Phys.Rev.Lett.]. In Bell tests, measuring devices have two possible settings (corresponding to one bit of information: either 1 or 0), and so it takes two bits of information to specify their settings when they are truly independent. But Hall showed that if the settings are not quite independent — if only one bit specifies them once in every 22 runs — this halves the number of possible measurement settings available in those 22 runs. This reduced freedom of choice correlates measurement outcomes enough to exceed Bell’s limit, creating the illusion of quantum entanglement.

    The idea that nature might restrict freedom while maintaining local realism has become more attractive in light of emerging connections between information and the geometry of space-time. Research on black holes, for instance, suggests that the stronger the gravity in a volume of space-time, the fewer bits can be stored in that region. Could gravity be reducing the number of possible measurement settings in Bell tests, secretly striking items from the universe’s menu?

    2
    Members of the cosmic Bell test team calibrating the telescope used to choose the settings of one of their two detectors located in far-apart buildings in Vienna. Jason Gallicchio

    Friedman, Alan Guth and colleagues at MIT were entertaining such speculations a few years ago when Anton Zeilinger, a famous Bell test experimenter at the University of Vienna, came for a visit.

    4
    Alan Guth, Highland Park High School and M.I.T., who first proposed cosmic inflation

    HPHS Owls

    Lambda-Cold Dark Matter, Accelerated Expansion of the Universe, Big Bang-Inflation (timeline of the universe). Date: 2010. Credit: Alex Mittelmann, Coldcreation

    5
    Alan Guth’s notes. http://www.bestchinanews.com/Explore/4730.html

    Zeilinger also had his sights on the freedom-of-choice loophole. Together, they and their collaborators developed an idea for how to distinguish between a universe that lacks local realism and one that curbs freedom.

    In the first of a planned series of “cosmic Bell test” experiments, the team sent pairs of photons from the roof of Zeilinger’s lab in Vienna through the open windows of two other buildings and into optical modulators, tallying coincident detections as usual. But this time, they attempted to lower the chance that the modulator settings might somehow become correlated with the states of the photons in the moments before each measurement. They pointed a telescope out of each window, trained each telescope on a bright and conveniently located (but otherwise random) star, and, before each measurement, used the color of an incoming photon from each star to set the angle of the associated modulator. The colors of these photons were decided hundreds of years ago, when they left their stars, increasing the chance that they (and therefore the measurement settings) were independent of the states of the photons being measured.

    And yet, the scientists found that the measurement outcomes still violated Bell’s upper limit, boosting their confidence that the polarized photons in the experiment exhibit spooky action at a distance after all.

    Nature could still exploit the freedom-of-choice loophole, but the universe would have had to delete items from the menu of possible measurement settings at least 600 years before the measurements occurred (when the closer of the two stars sent its light toward Earth). “Now one needs the correlations to have been established even before Shakespeare wrote, ‘Until I know this sure uncertainty, I’ll entertain the offered fallacy,’” Hall said.

    Next, the team plans to use light from increasingly distant quasars to control their measurement settings, probing further back in time and giving the universe an even smaller window to cook up correlations between future device settings and restrict freedoms. It’s also possible (though extremely unlikely) that the team will find a transition point where measurement settings become uncorrelated and violations of Bell’s limit disappear — which would prove that Einstein was right to doubt spooky action.

    “For us it seems like kind of a win-win,” Friedman said. “Either we close the loophole more and more, and we’re more confident in quantum theory, or we see something that could point toward new physics.”

    There’s a final possibility that many physicists abhor. It could be that the universe restricted freedom of choice from the very beginning — that every measurement was predetermined by correlations established at the Big Bang. “Superdeterminism,” as this is called, is “unknowable,” said Jan-Åke Larsson, a physicist at Linköping University in Sweden; the cosmic Bell test crew will never be able to rule out correlations that existed before there were stars, quasars or any other light in the sky. That means the freedom-of-choice loophole can never be completely shut.

    But given the choice between quantum entanglement and superdeterminism, most scientists favor entanglement — and with it, freedom. “If the correlations are indeed set [at the Big Bang], everything is preordained,” Larsson said. “I find it a boring worldview. I cannot believe this would be true.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 2:16 pm on May 16, 2017 Permalink | Reply
    Tags: , , , , , Quantum entanglement, Tim Maudlin   

    From Quanta: “A Defense of the Reality of Time” Tim Maudlin 

    Quanta Magazine

    May 16, 2017
    George Musser

    1
    Tim Maudlin. Edwin Tse for Quanta Magazine

    Time isn’t just another dimension, argues Tim Maudlin. To make his case, he’s had to reinvent geometry.

    Physicists and philosophers seem to like nothing more than telling us that everything we thought about the world is wrong. They take a peculiar pleasure in exposing common sense as nonsense. But Tim Maudlin thinks our direct impressions of the world are a better guide to reality than we have been led to believe.

    Not that he thinks they always are. Maudlin, who is a professor at New York University and one of the world’s leading philosophers of physics, made his name studying the strange behavior of “entangled” quantum particles, which display behavior that is as counterintuitive as can be; if anything, he thinks physicists have downplayed how transformative entanglement is.

    2
    Quantum entanglement. ATCA

    At the same time, though, he thinks physicists can be too hasty to claim that our conventional views are misguided, especially when it comes to the nature of time.

    He defends a homey and unfashionable view of time. It has a built-in arrow. It is fundamental rather than derived from some deeper reality. Change is real, as opposed to an illusion or an artifact of perspective. The laws of physics act within time to generate each moment. Mixing mathematics, physics and philosophy, Maudlin bats away the reasons that scientists and philosophers commonly give for denying this folk wisdom.

    The mathematical arguments are the target of his current project, the second volume of New Foundations for Physical Geometry (the first appeared in 2014). Modern physics, he argues, conceptualizes time in essentially the same way as space. Space, as we commonly understand it, has no innate direction — it is isotropic. When we apply spatial intuitions to time, we unwittingly assume that time has no intrinsic direction, either. New Foundations rethinks topology in a way that allows for a clearer distinction between time and space. Conventionally, topology — the first level of geometrical structure — is defined using open sets, which describe the neighborhood of a point in space or time. “Open” means a region has no sharp edge; every point in the set is surrounded by other points in the same set.

    Maudlin proposes instead to base topology on lines. He sees this as closer to our everyday geometrical intuitions, which are formed by thinking about motion. And he finds that, to match the results of standard topology, the lines need to be directed, just as time is. Maudlin’s approach differs from other approaches that extend standard topology to endow geometry with directionality; it is not an extension, but a rethinking that builds in directionality at the ground level.

    Maudlin discussed his ideas with Quanta Magazine in March. Here is a condensed and edited version of the interview.

    Why might one think that time has a direction to it? That seems to go counter to what physicists often say.

    I think that’s a little bit backwards. Go to the man on the street and ask whether time has a direction, whether the future is different from the past, and whether time doesn’t march on toward the future. That’s the natural view. The more interesting view is how the physicists manage to convince themselves that time doesn’t have a direction.

    They would reply that it’s a consequence of Einstein’s special theory of relativity, which holds that time is a fourth dimension.

    This notion that time is just a fourth dimension is highly misleading. In special relativity, the time directions are structurally different from the space directions. In the timelike directions, you have a further distinction into the future and the past, whereas any spacelike direction I can continuously rotate into any other spacelike direction. The two classes of timelike directions can’t be continuously transformed into one another.

    Standard geometry just wasn’t developed for the purpose of doing space-time. It was developed for the purpose of just doing spaces, and spaces have no directedness in them. And then you took this formal tool that you developed for this one purpose and then pushed it to this other purpose.
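
    Maudlin’s point about the sign structure can be made concrete with the Minkowski interval of special relativity (a textbook formula, included here only to ground the claim):

    ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2

    Directions with ds^2 < 0 are timelike and split into two classes, future and past, that no continuous rotation connects; directions with ds^2 > 0 are spacelike and can all be rotated continuously into one another.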

    When relativity was developed in the early part of the 20th century, did people begin to see this problem?

    I don’t think they saw it as a problem. The development was highly algebraic, and the more algebraic the technique, the further you get from having a geometrical intuition about what you’re doing. So if you develop the standard account of, say, the metric of space-time, and then you ask, “Well, what happens if I start putting negative numbers in this thing?” That’s a perfectly good algebraic question to ask. It’s not so clear what it means geometrically. And people do the same thing now when they say, “Well, what if time had two dimensions?” As a purely algebraic question, I can say that. But if you ask me what could it mean, physically, for time to have two dimensions, I haven’t the vaguest idea. Is it consistent with the nature of time that it be a two-dimensional thing? Because if you think that what time does is order events, then that order is a linear order, and you’re talking about a fundamentally one-dimensional kind of organization.

    And so you are trying to allow for the directionality of time by rethinking geometry. How does that work?

    I really was not starting from physics. I was starting from just trying to understand topology. When you teach, you’re forced to confront your own ignorance. I was trying to explain standard topology to some students when I was teaching a class on space and time, and I realized that I didn’t understand it. I couldn’t see the connection between the technical machinery and the concepts that I was using.

    Suppose I just hand you a bag of points. It doesn’t have a geometry. So I have to add some structure to give it anything that is recognizably geometrical. In the standard approach, I specify which sets of points are open sets. In my approach, I specify which sets of points are lines.

    How does this differ from ordinary geometry taught in high school?

    In this approach that’s based on lines, a very natural thing to do is to put directionality on the lines. It’s very easy to implement at the level of axioms. If you’re doing Euclidean geometry, this isn’t going to occur to you, because your idea in Euclidean geometry is if I have a continuous line from A to B, it’s just as well a continuous line B to A — that there’s no directionality in a Euclidean line.

    From the pure mathematical point of view, why might your approach be preferable?

    In my approach, you put down a linear structure on a set of points. If you put down lines according to my axioms, there’s then a natural definition of an open set, and it generates a topology.

    Another important conceptual advantage is that there’s no problem thinking of a line that’s discrete. People form lines where there are only finitely many people, and you can talk about who’s the next person in line, and who’s the person behind them, and so on. The notion of a line is neutral between it being discrete and being continuous. So you have this general approach.
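
    As a toy sketch only – a hypothetical simplification, not Maudlin’s actual axiom system – one can take directed lines as primitive on a discrete set of points and read off both the directionality and the segments from which a topology could be generated:

    # Toy sketch of a "linear structure" on a discrete set of points.
    # Hypothetical illustration, not Maudlin's actual axioms.
    line = ["p0", "p1", "p2", "p3", "p4"]   # one directed line; order matters

    def successor(point):
        """Next point along the line's direction, or None at the end."""
        i = line.index(point)
        return line[i + 1] if i + 1 < len(line) else None

    def segments(ln):
        """All directed segments (contiguous runs of two or more points)."""
        return [ln[i:j] for i in range(len(ln)) for j in range(i + 2, len(ln) + 1)]

    print(successor("p1"))      # 'p2' -- the direction is built in at ground level
    print(len(segments(line)))  # 10 directed segments on five points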

    Why is this kind of modification important for physics?

    As soon as you start talking about space-time, the idea that time has a directionality is obviously something we begin with. There’s a tremendous difference between the past and the future. And so, as soon as you start to think geometrically of space-time, of something that has temporal characteristics, a natural thought is that you are thinking of something that does now have an intrinsic directionality. And if your basic geometrical objects can have directionality, then you can use them to represent this physical directionality.

    Physicists have other arguments for why time doesn’t have a direction.

    Often one will hear that there’s a time-reversal symmetry in the laws. But the normal way you describe a time-reversal symmetry presupposes there’s a direction of time. Someone will say the following: “According to Newtonian physics, if the glass can fall off the table and smash on the floor, then it’s physically possible for the shards on the floor to be pushed by the concerted effort of the floor, recombine into the glass and jump back up on the table.” That’s true. But notice, both of those descriptions are ones that presuppose there’s a direction of time. That is, they presuppose that there’s a difference between the glass falling and the glass jumping, and there’s a difference between the glass shattering and the glass recombining. And the difference between those two is always which direction is the future, and which direction is the past.

    So I’m certainly not denying that there is this time-reversibility. But the time-reversibility doesn’t imply that there isn’t a direction of time. It just says that for every event that the laws of physics allow, there is a corresponding event in which various things have been reversed, velocities have been reversed and so on. But in both of these cases, you think of them as allowing a process that’s running forward in time.

    Now that raises a puzzle: Why do we often see the one kind of thing and not the other kind of thing? And that’s the puzzle about thermodynamics and entropy and so on.
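
    The time-reversal symmetry Maudlin is describing can be stated in one line for Newtonian mechanics with velocity-independent forces (a textbook observation, added only to make the symmetry explicit): if x(t) solves the equation of motion, so does the reversed trajectory

    \tilde{x}(t) = x(-t), \qquad \tilde{v}(t) = -v(-t), \qquad \tilde{a}(t) = a(-t),

    since F = m\,a is unchanged when the acceleration is unchanged – and, as Maudlin notes, describing either trajectory already presupposes a direction in which t increases.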

    If time has a direction, is the thermodynamic arrow of time still a problem?

    The problem there isn’t with the arrow. The problem is with understanding why things started out in a low-entropy state. Once you have that it starts in a low-entropy state, the normal thermodynamic arguments lead you to expect that most of the possible initial states are going to yield an increasing entropy. So the question is, why did things start out so low entropy?

    One choice is that the universe is only finite in time and had an initial state, and then there’s the question: “Can you explain why the initial state was low?” which is a subpart of the question, “Can you explain an initial state at all?” It didn’t come out of anything, so what would it mean to explain it in the first place?

    The other possibility is that there was something before the big bang. If you imagine the big bang is the bubbling-off of this universe from some antecedent proto-universe or from chaotically inflating space-time, then there’s going to be the physics of that bubbling-off, and you would hope the physics of the bubbling-off might imply that the bubbles would be of a certain character.

    Given that we still need to explain the initial low-entropy state, why do we need the internal directedness of time? If time didn’t have a direction, wouldn’t specification of a low-entropy state be enough to give it an effective direction?

    If time didn’t have a direction, it seems to me that would make time into just another spatial dimension, and if all we’ve got are spatial dimensions, then it seems to me nothing’s happening in the universe. I can imagine a four-dimensional spatial object, but nothing occurs in it. This is the way people often talk about the, quote, “block universe” as being fixed or rigid or unchanging or something like that, because they’re thinking of it like a four-dimensional spatial object. If you had that, then I don’t see how any initial condition put on it — or any boundary condition put on it; you can’t say “initial” anymore — could create time. How can a boundary condition change the fundamental character of a dimension from spatial to temporal?

    Suppose on one boundary there’s low entropy; from that I then explain everything. You might wonder: “But why that boundary? Why not go from the other boundary, where presumably things are at equilibrium?” The peculiar characteristics at this boundary are not low entropy — there’s high entropy there — but that the microstate is one of the very special ones that leads to a long period of decreasing entropy. Now it seems to me that it has the special microstate because it developed from a low-entropy initial state. But now I’m using “initial” and “final,” and I’m appealing to certain causal notions and productive notions to do the explanatory work. If you don’t have a direction of time to distinguish the initial from the final state and to underwrite these causal locutions, I’m not quite sure how the explanations are supposed to go.

    But all of this seems so — what can I say? It seems so remote from the physical world. We’re sitting here and time is going on, and we know what it means to say that time is going on. I don’t know what it means to say that time really doesn’t pass and it’s only in virtue of entropy increasing that it seems to.

    You don’t sound like much of a fan of the block universe.

    There’s a sense in which I believe a certain understanding of the block universe. I believe that the past is equally real as the present, which is equally real as the future. Things that happened in the past were just as real. Pains in the past were pains, and in the future they’ll be real too, and there was one past and there will be one future. So if that’s all it means to believe in a block universe, fine.

    People often say, “I’m forced into believing in a block universe because of relativity.” The block universe, again, is some kind of rigid structure. The totality of concrete physical reality is specifying that four-dimensional structure and what happens everywhere in it. In Newtonian mechanics, this object is foliated by these planes of absolute simultaneity. And in relativity you don’t have that; you have this light-cone structure instead. So it has a different geometrical character. But I don’t see how that different geometrical character gets rid of time or gets rid of temporality.

    The idea that the block universe is static drives me crazy. What is it to say that something is static? It’s to say that as time goes on, it doesn’t change. But it’s not that the block universe is in time; time is in it. When you say it’s static, it somehow suggests that there is no change, nothing really changes, change is an illusion. It blows your mind. Physics has discovered some really strange things about the world, but it has not discovered that change is an illusion.

    What does it mean for time to pass? Is that synonymous with “time has a direction,” or is there something in addition?

    There’s something in addition. For time to pass means for events to be linearly ordered, by earlier and later. The causal structure of the world depends on its temporal structure. The present state of the universe produces the successive states. To understand the later states, you look at the earlier states and not the other way around. Of course, the later states can give you all kinds of information about the earlier states, and, from the later states and the laws of physics, you can infer the earlier states. But you normally wouldn’t say that the later states explain the earlier states. The direction of causation is also the direction of explanation.

    Am I accurate in getting from you that there’s a generation or production going on here — that there’s a machinery that sits grinding away, one moment giving rise to the next, giving rise to the next?

    Well, that’s certainly a deep part of the picture I have. The machinery is exactly the laws of nature. That gives a constraint on the laws of nature — namely, that they should be laws of temporal evolution. They should be laws that tell you, as time goes on, how will new states succeed old ones. The claim would be there are no fundamental laws that are purely spatial and that where you find spatial regularities, they have temporal explanations.

    Does this lead you to a different view of what a law even is?

    It leads me to a different view than the majority view. I think of laws as having a kind of primitive metaphysical status, that laws are not derivative on anything else. It’s, rather, the other way around: Other things are derivative from, produced by, explained by, derived from the laws operating. And there, the word “operating” has this temporal characteristic.

    Why is yours a minority view? Because it seems to me, if you ask most people on the street what the laws of physics do, they would say, “It’s part of a machinery.”

    I often say my philosophical views are just kind of the naïve views you would have if you took a physics class or a cosmology class and you took seriously what you were being told. In a physics class on Newtonian mechanics, they’ll write down some laws and they’ll say, “Here are the laws of Newtonian mechanics.” That’s really the bedrock from which you begin.

    I don’t think I hold really bizarre views. I take “time doesn’t pass” or “the passage of time is an illusion” to be a pretty bizarre view. Not to say it has to be false, but one that should strike you as not what you thought.

    What does this all have to say about whether time is fundamental or emergent?

    I’ve never been able to quite understand what the emergence of time, in its deeper sense, is supposed to be. The laws are usually differential equations in time. They talk about how things evolve. So if there’s no time, then things can’t evolve. How do we understand — and is the emergence a temporal emergence? It’s like, in a certain phase of the universe, there was no time; and then in other phases, there is time, where it seems as though time emerges temporally out of non-time, which then seems incoherent.

    Where do you stop offering analyses? Where do you stop — where is your spade turned, as Wittgenstein would say? And for me, again, the notion of temporality or of time seems like a very good place to think I’ve hit a fundamental feature of the universe that is not explicable in terms of anything else.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 6:57 am on May 16, 2017 Permalink | Reply
    Tags: , , EPR paradox, , , , Quantum entanglement,   

    From COSMOS: “Using Einstein’s ‘spooky action at a distance’ to hear ripples in spacetime” 

    COSMOS

    16 May 2017
    Cathal O’Connell

    1
    The new technique will aid in the detection of gravitational waves caused by colliding black holes. Henze / NASA

    In new work that connects two of Albert Einstein’s ideas in a way he could scarcely have imagined, physicists have proposed a way to improve gravitational wave detectors, using the weirdness of quantum physics.

    The new proposal, published in Nature Physics, could double the sensitivity of future detectors listening out for ripples in spacetime caused by catastrophic collisions across the universe.

    When the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) detected gravitational waves in late 2015, it was the first direct evidence of the ripples in spacetime Einstein had predicted a century before.


    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project


    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    Now another of Einstein’s predictions – one he regarded as a failure – could potentially double the sensitivity of LIGO’s successors.

    The story starts with his distaste for quantum theory – or at least for the fundamental fuzziness of all things it seemed to demand.

    Einstein thought the universe would ultimately prove predictable and exact, a clockwork universe rather than one where God “plays dice”. In 1935 he teamed up with Boris Podolsky and Nathan Rosen to publish a paper they thought would be a sort of reductio ad absurdum. They hoped to disprove quantum mechanics by following it to its logical, ridiculous conclusion. Their ‘EPR paradox’ (named for their initials) described the instantaneous influence of one particle on another, what Einstein called “spooky action at a distance” because it seemed at first to be impossible.

    Yet this sally on the root of quantum physics failed, as the EPR effect turned out not to be a paradox after all. Quantum entanglement, as it’s now known, has been repeatedly proven to exist, and features in several proposed quantum technologies, including quantum computation and quantum cryptography.

    2
    Artistic rendering of the generation of an entangled pair of photons by spontaneous parametric down-conversion as a laser beam passes through a nonlinear crystal, showing the “figure 8” pattern typical of the process, with the pump beam continuing across the entire image. Inspired by an image in Dance of the Photons by Anton Zeilinger. Credit: J-Wiki at English Wikipedia, 31 March 2011.

    Now we can add gravitational wave detection to the list.

    LIGO works by measuring the minute wobbling of mirrors as a gravitational wave stretches and squashes spacetime around them. It is insanely sensitive – able to detect wobbling down to a 10,000th of the width of a single proton.
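
    To put rough numbers on that claim – the figures below are ballpark assumptions, not LIGO’s official specifications – a proton is about 1.7 × 10⁻¹⁵ m across and LIGO’s arms are 4 km long, so the quoted displacement corresponds to a strain of order 10⁻²³:

    proton_width_m = 1.7e-15    # approximate proton diameter (assumption)
    arm_length_m = 4.0e3        # LIGO arm length

    displacement = proton_width_m / 1.0e4   # "a 10,000th of the width of a proton"
    strain = displacement / arm_length_m    # fractional stretch of one arm
    print(f"displacement ~ {displacement:.1e} m")   # ~ 1.7e-19 m
    print(f"strain       ~ {strain:.1e}")           # ~ 4.2e-23, dimensionless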

    At this level of sensitivity, the quantum nature of light becomes a problem: the instrument is limited by the inherent fuzziness of the photons bouncing between its mirrors, and this quantum noise washes out weak signals.

    To get around this, physicists plan to use so-called squeezed light to dial down the level of quantum noise near the detector (while increasing it elsewhere).

    The new scheme aids this by adding two new, entangled laser beams to the mix. Because of the ‘spooky’ connection between the two entangled beams, their quantum noise is correlated – detecting one allows the prediction of the other.

    This way, the two beams can be used to probe the main LIGO beam, helping nudge it into a squeezed light state. This reduces the noise to a level that standard quantum theory would deem impossible.
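
    In the standard description of squeezed light – a general textbook relation, not the specifics of this proposal – the two quadratures X_1 and X_2 of the field obey an uncertainty relation, and squeezing trades noise between them:

    \Delta X_1\,\Delta X_2 \ge \tfrac{1}{4}, \qquad \Delta^2 X_1 = \tfrac{1}{4}e^{-2r}, \quad \Delta^2 X_2 = \tfrac{1}{4}e^{+2r}

    for squeezing parameter r, so the detector can read out the quieter quadrature while the excess noise is shunted into the one it ignores.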

    The authors of the new proposal write that it is “appropriate for all future gravitational-wave detectors for achieving sensitivities beyond the standard quantum limit”.

    Indeed, the proposal could as much as double the sensitivity of future detectors.

    Over the next 30 years, astronomers aim to improve the sensitivity of detectors like LIGO by 30-fold. At that level, we’d be able to hear all black hole mergers in the observable universe.

    ESA/eLISA, the future of gravitational wave research

    However, along with improved sensitivity, the proposed system would also increase the number of photons lost in the detector. As Raffaele Flaminio, a physicist at the National Astronomical Observatory of Japan, points out in a perspective piece for Nature Physics [no link], the team needs to do more work to understand how this loss will affect the ultimate performance.

    “But the idea of using Einstein’s most famous (mistaken) paradox to improve the sensitivity of gravitational-wave detectors, enabling new tests of his general theory of relativity, is certainly intriguing,” Flaminio writes. “Einstein’s ideas – whether wrong or right – continue to have a strong influence on physics and astronomy.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 2:40 pm on April 9, 2017 Permalink | Reply
    Tags: , NIST Team Proves 'Spooky Action at a Distance' is Really Real, , Quantum entanglement   

    From NIST: “NIST Team Proves ‘Spooky Action at a Distance’ is Really Real” 

    NIST

    March 28, 2017

    1
    US physicists have made a breakthrough in proving that quantum mechanics is indeed “spooky” and that trapped ions can be relied on for the quantum entanglement crucial for building super-fast futuristic computers. Credit: iStock

    Adding to strong recent demonstrations that particles of light perform what Einstein called “spooky action at a distance,” in which two separated objects can have a connection that exceeds everyday experience, physicists at the National Institute of Standards and Technology (NIST) have confirmed that particles of matter can act really spooky too.

    The NIST team entangled a pair of beryllium ions (charged atoms) in a trap, thus linking their properties, and then separated the pair and performed one of a set of possible manipulations on each ion’s properties before measuring them. Across thousands of runs, the pair’s measurement outcomes in certain cases matched, or in other cases differed, more often than everyday experience would predict. These strong correlations are hallmarks of quantum entanglement.

    What’s more, statistical calculations found the ion pairs displayed a rare high level of spookiness.

    “We are confident that the ions are 67 percent spooky,” said Ting Rei Tan, lead author of a new Physical Review Letters paper about the experiments.

    The experiments were “chained” Bell tests, meaning that they were constructed from a series of possible sets of manipulations on two ions. Unlike earlier experiments, these were enhanced Bell tests in which the number of possible manipulations for each ion was chosen randomly from sets of at least two and as many as 15 choices.

    This method produces stronger statistical results than conventional Bell tests. That’s because as the number of options grows for manipulating each ion, the chance automatically decreases that the ions are behaving by classical, or non-quantum, rules. According to classical rules, all objects must have definite “local” properties and can only influence each other at the speed of light or slower. Bell tests have long been used to show that through quantum physics, objects can break one or both of these rules, demonstrating spooky action.

    Conventional Bell tests produce data that are a mixture of local and spooky action. Perfect chained Bell tests can, in theory, prove there is zero chance of local influence. The NIST results got down to a 33 percent chance of local influence—lower than conventional Bell tests can achieve, although not the lowest ever reported for a chained test, Tan said.
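
    The advantage of adding settings can be sketched with the ideal chained Bell inequality – a textbook bound under ideal assumptions, not NIST’s actual statistical analysis. With N settings per ion and maximal quantum violation, the fraction of outcomes attributable to local influence is at most N(1 − cos(π/2N)), which shrinks toward zero as N grows:

    import math

    def max_local_fraction(n):
        # Upper bound on the "local" fraction compatible with maximal quantum
        # violation of the ideal chained Bell inequality with n settings.
        return n * (1 - math.cos(math.pi / (2 * n)))

    for n in (2, 3, 6, 15):
        print(f"N = {n:2d}: local fraction <= {max_local_fraction(n):.2f}")
    # N = 2 gives ~0.59 (the CHSH case); by N = 15 the bound is ~0.08.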

    However, the NIST experiment broke new ground by closing two of three “loopholes” that could undermine the results, the only chained Bell test to do this using three or more options for manipulating material particles. The results are good enough to infer the high quality of the entangled states using minimal assumptions about the experiment—a rare achievement, Tan said.

    Last year, a different group of NIST researchers and collaborators closed all three loopholes in conventional Bell tests with particles of light. The new ion experiments confirm again that spooky action is real.

    “Actually, I believed in quantum mechanics before this experiment,” Tan said with a chuckle. “Our motivation was we were trying to use this experiment to showcase how good our trapped ion quantum computing technology is, and what we can do with it.”

    The researchers used the same ion trap setup as in previous quantum computing experiments. With this apparatus, researchers use electrodes and lasers to perform all the basic steps needed for quantum computing, including preparing and measuring ions’ quantum states; transporting ions between multiple trap zones; and creating stable quantum bits (qubits), qubit rotations, and reliable two-qubit logic operations. All these features were needed to conduct the chained Bell tests. Quantum computers are expected to one day solve problems that are currently intractable such as simulating superconductivity (the flow of electricity without resistance) and breaking today’s most popular data encryption codes.

    In NIST’s chained Bell tests, the number of settings (options for different manipulations before measurement) ranged from two to 15. The manipulations acted on the ions’ internal energy states called “spin up” or “spin down.” The researchers used lasers to rotate the spins of the ions by specific angles before the final measurements.

    Researchers performed several thousand runs for each setting and collected two data sets 6 months apart. The measurements determined the ions’ spin states. There were four possible final results: (1) both ions spin up, (2) first ion spin up and second ion spin down, (3) first ion spin down and second ion spin up, or (4) both ions spin down. Researchers measured the states based on how much the ions fluoresced or scattered light—bright was spin up and dark was spin down.
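
    As a minimal sketch of how the rotation angles map onto the four outcome probabilities – assuming an ideal singlet state, whereas the actual NIST state preparation and analysis are more involved:

    import math

    def outcome_probs(theta_a, theta_b):
        """Probabilities of (up,up), (up,down), (down,up), (down,down) for two
        spins in an ideal singlet, each measured along an axis rotated by the
        given angle (radians)."""
        E = -math.cos(theta_a - theta_b)   # singlet correlation
        same = (1 + E) / 4                 # up,up and down,down each
        diff = (1 - E) / 4                 # up,down and down,up each
        return same, diff, diff, same

    print(outcome_probs(0.0, 0.0))          # (0.0, 0.5, 0.5, 0.0): always opposite
    print(outcome_probs(0.0, math.pi / 3))  # (0.125, 0.375, 0.375, 0.125)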

    The NIST experiment closed the detection and memory loopholes, which might otherwise allow ordinary classical systems to appear spooky.

    The detection loophole is opened if detectors are inefficient and a subset of the data are used to represent the entire data set. The NIST tests closed this loophole because the fluorescence detection was near 100 percent efficient, and the measurement outcomes of every trial in each experiment were recorded and used to calculate results.

    The memory loophole is opened if one assumes that the outcomes of the trials are identically distributed or there are no experimental drifts. Previous chained Bell tests have relied on this assumption, but the NIST test was able to drop it. The NIST team closed the memory loophole by performing thousands of extra trials over many hours with the set of six possible settings, using a randomly chosen setting for each trial and developing a more robust statistical analysis technique.

    The NIST experiments did not close the locality loophole, which is open if it is possible for the choice of settings to be communicated between the ions. To close this loophole, one would need to separate the ions by such a large distance that communication between them would be impossible, even at light speed. In the NIST experiment, the ions had to be positioned close together (at most, 340 micrometers apart) to be entangled and subsequently measured, Tan explained.

    This work was supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA) and the Office of Naval Research.

    Paper: T.R. Tan, Y. Wan, S. Erickson, P. Bierhorst, D. Kienzler, S. Glancy, E. Knill, D. Leibfried and D.J. Wineland. 2017. Chained Bell Inequality Experiment With High-Efficiency Measurements. Physical Review Letters. DOI: 10.1103/PhysRevLett.118.130403

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NIST Campus, Gaithersburg, MD, USA

    NIST Mission, Vision, Core Competencies, and Core Values

    NIST’s mission

    To promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.
    NIST’s vision

    NIST will be the world’s leader in creating critical measurement solutions and promoting equitable standards. Our efforts stimulate innovation, foster industrial competitiveness, and improve the quality of life.
    NIST’s core competencies

    Measurement science
    Rigorous traceability
    Development and use of standards

    NIST’s core values

    NIST is an organization with strong values, reflected both in our history and our current work. NIST leadership and staff will uphold these values to ensure a high performing environment that is safe and respectful of all.

    Perseverance: We take the long view, planning the future with scientific knowledge and imagination to ensure continued impact and relevance for our stakeholders.
    Integrity: We are ethical, honest, independent, and provide an objective perspective.
    Inclusivity: We work collaboratively to harness the diversity of people and ideas, both inside and outside of NIST, to attain the best solutions to multidisciplinary challenges.
    Excellence: We apply rigor and critical thinking to achieve world-class results and continuous improvement in everything we do.

     