Tagged: Quantum Physics

  • richardmitnick 12:50 pm on April 28, 2018 Permalink | Reply
    Tags: Quantum Physics, Thermodynamics

    From Kavli Institute for the Physics and Mathematics of the Universe: “Study Finds Way to Use Quantum Entanglement to Study Black Holes” 

    The Kavli Foundation

    Kavli IPMU

    April 23, 2018

    A team of researchers has found a relationship between quantum physics, the study of very small phenomena, and thermodynamics, the study of very large phenomena, reports a new study this week in Nature Communications.

    “Our function can describe a variety of systems from quantum states in electrons to, in principle, black holes,” says study author Masataka Watanabe.

    Quantum entanglement is a phenomenon fundamental to quantum mechanics, where two separated regions share the same information. It is invaluable to a variety of applications including being used as a resource in quantum computation, or quantifying the amount of information stored in a black hole.

    Quantum mechanics is known to preserve information, while thermal equilibrium seems to lose some part of it, so understanding the relationship between these microscopic and macroscopic concepts is important. A group of graduate students and researchers at the University of Tokyo, including members of the Kavli Institute for the Physics and Mathematics of the Universe, therefore investigated the role of quantum entanglement in thermal equilibrium in an isolated quantum system.

    Figure 1: Graph showing quantum entanglement and spatial distribution. When separating matter A and B, the vertical axis shows how much quantum entanglement there is, while the horizontal axis shows the length of matter A. (Credit: Nakagawa et al.)

    “A pure quantum state stabilizing into thermal equilibrium can be compared to water being poured into a cup. In a quantum-mechanical system, the colliding water molecules create quantum entanglements, and these quantum entanglements will eventually lead a cup of water to thermal equilibrium. However, it has been a challenge to develop a theory which predicts how much quantum entanglement was inside because lots of quantum entanglements are created in complicated manners at thermal equilibrium,” says Watanabe.

    In their study, the team identified a function predicting the spatial distribution of information stored in an equilibrated system, and they revealed that it was determined by thermodynamic entropy alone. Also, by carrying out computer simulations, they found that the spatial distribution remained the same regardless of what systems were used and regardless of how they reached thermal equilibrium.
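The flavor of this result can be illustrated with a small numerical experiment (a generic textbook calculation, not the authors' actual model): for a random pure state, a common stand-in for a thermally equilibrated system, the entanglement entropy of a region A can be computed from a singular value (Schmidt) decomposition and traced as a function of A's size.

```python
import numpy as np

def entanglement_entropy(state, n_a, n_total):
    """von Neumann entropy of the first n_a qubits of a pure state."""
    # Reshape the state vector into a (2^n_a, 2^(n_total - n_a)) matrix
    m = state.reshape(2**n_a, 2**(n_total - n_a))
    # Schmidt coefficients come from the singular value decomposition
    s = np.linalg.svd(m, compute_uv=False)
    p = s**2
    p = p[p > 1e-12]  # drop numerical zeros before taking logs
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
N = 10
# A random pure state: a standard proxy for a thermalized system
psi = rng.normal(size=2**N) + 1j * rng.normal(size=2**N)
psi /= np.linalg.norm(psi)

# The entropy grows with subsystem size, then returns to zero at n_a = N,
# since the total state is pure
for n_a in range(N + 1):
    print(n_a, round(entanglement_entropy(psi, n_a, N), 3))
```

The shape of this curve (rising, peaking near the half-cut, falling back to zero) is the kind of spatial distribution of entanglement the study characterizes.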

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Kavli IPMU (Kavli Institute for the Physics and Mathematics of the Universe) is an international research institute with English as its official language. The goal of the institute is to discover the fundamental laws of nature and to understand the Universe from the synergistic perspectives of mathematics, astronomy, and theoretical and experimental physics. The Institute for the Physics and Mathematics of the Universe (IPMU) was established in October 2007 under the World Premier International Research Center Initiative (WPI) of the Ministry of Education, Sports, Science and Technology in Japan with the University of Tokyo as the host institution. IPMU was designated as the first research institute within the University of Tokyo Institutes for Advanced Study (UTIAS) in January 2011. It received an endowment from The Kavli Foundation and was renamed the “Kavli Institute for the Physics and Mathematics of the Universe” in April 2012. Kavli IPMU is located on the Kashiwa campus of the University of Tokyo, and more than half of its full-time scientific members come from outside Japan. http://www.ipmu.jp/
    Stem Education Coalition
    The Kavli Foundation, based in Oxnard, California, is dedicated to the goals of advancing science for the benefit of humanity and promoting increased public understanding and support for scientists and their work.

    The Foundation’s mission is implemented through an international program of research institutes, professorships, and symposia in the fields of astrophysics, nanoscience, neuroscience, and theoretical physics as well as prizes in the fields of astrophysics, nanoscience, and neuroscience.

  • richardmitnick 10:19 am on April 5, 2018 Permalink | Reply
    Tags: A New State of Quantum Matter Has Been Found in a Material Scientists Thought Was All Chaos, Photoemission electron microscopy, Quantum Physics, Shakti geometry, Spin ice

    From Science Alert: “A New State of Quantum Matter Has Been Found in a Material Scientists Thought Was All Chaos” 

    Science Alert

    5 APR 2018
    MIKE MCRAE

    (enot-poloskun/istock)

    What else is lurking in there?

    Experiments carried out on a complex arrangement of magnetic particles have identified a completely new state of matter, and it can only be explained if scientists turn to quantum physics.

    The messy structures behind the research show strange properties that could allow us to study the chaos of exotic particles – if researchers can find order in there, it could help us understand these particles in greater detail, opening up a whole new landscape for quantum technology.

    Physicists from the US carried out their research on the geometrical arrangements of particles in a weird material known as spin ice.

    Like common old water ice, the particles making up spin ice sort themselves into geometric patterns as the temperature drops.

    There are a number of compounds that can be used to build this kind of material, but they all share the same kind of quantum property – their individual magnetic ‘spin’ sets up a bias in how the particles point to one another, creating complex structures.

    So, unlike the predictable crystalline patterns in water ice, the nanoscale magnetic particles making up spin ice can look disordered and chaotic under certain conditions, flipping back and forth wildly.

    The researchers focussed on one particular structure called a Shakti geometry, and measured how its magnetic arrangements fluctuated with changes in temperature.

    States of matter are usually broken down into categories such as solid, liquid, and gas. We’re taught on a fundamental level that a material’s volume and fluidity can change with shifts in its temperature and pressure.

    But there’s another way to think of a state of matter – by considering the points at which there’s a dramatic change in the way particles arrange themselves as they gain or lose energy.

    For example, the freezing of water is one such dramatic change – a sudden restructuring that occurs as pure water is chilled below 0 degrees Celsius (32 degrees Fahrenheit), where its molecules lose the energy they need to remain free and adopt another stable configuration.

    When researchers slowly lowered the temperature on spin ice arranged in a Shakti geometry, they got it to produce a similar behaviour – one that has never been seen before in other forms of spin ice.

    Using a process called photoemission electron microscopy, the team was then able to image the changes in pattern based on how their electrons emitted light.

    The team noticed points at which a specific arrangement persisted even as the temperature continued to drop.

    “The system gets stuck in a way that it cannot rearrange itself, even though a large-scale rearrangement would allow it to fall to a lower energy state,” says senior researcher Peter Schiffer, currently at Yale University.

    Such a ‘sticking point’ is a hallmark of a state of matter, and one that wasn’t expected in the flip-flopping madness of spin ice.

    Most states of matter can be described fairly efficiently using classical models of thermodynamics, with jiggling particles overcoming binding forces as they swap heat energy.

    In this case there was no clear model describing what was balancing the changes in energy with the material’s stable arrangement.

    So the team applied a quantum touch, looking at how entanglement between particles aligned to give rise to a particular topology, or pattern within a changing space.

    “Our research shows for the first time that classical systems such as artificial spin ice can be designed to demonstrate topological ordered phases, which previously have been found only in quantum conditions,” says physicist Cristiano Nisoli from Los Alamos National Laboratory.

    Ten years ago, quasiparticles that behaved like magnetic monopoles [Nature] were observed in another type of spin ice, also pointing at a weird kind of phase transition.

    Quasiparticles are becoming a big deal in the search for new kinds of matter that behave in odd but useful ways, as they have the potential to be used in quantum computing. So having better models for understanding this quantum landscape will no doubt come in handy.

    This research was published in Nature.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 7:14 am on March 20, 2018 Permalink | Reply
    Tags: Albert Einstein’s general theory of relativity, Cosmological-constant problem, In 1998 astronomers discovered that the expansion of the cosmos is in fact gradually accelerating, Quantum Physics, Saul Perlmutter UC Berkeley Nobel laureate, Why Does the Universe Need to Be So Empty?, Zero-point energy of the field

    From The Atlantic Magazine and Quanta: “Why Does the Universe Need to Be So Empty?” 

    Quanta Magazine

    The Atlantic Magazine

    Mar 19, 2018
    Natalie Wolchover

    Physicists have long grappled with the perplexingly small weight of empty space.

    The controversial idea that our universe is just a random bubble in an endless, frothing multiverse arises logically from nature’s most innocuous-seeming feature: empty space. Specifically, the seed of the multiverse hypothesis is the inexplicably tiny amount of energy infused in empty space—energy known as the vacuum energy, dark energy, or the cosmological constant. Each cubic meter of empty space contains only enough of this energy to light a light bulb for 11 trillionths of a second. “The bone in our throat,” as the Nobel laureate Steven Weinberg once put it [http://hetdex.org/dark_energy.html], is that the vacuum ought to be at least a trillion trillion trillion trillion trillion times more energetic, because of all the matter and force fields coursing through it.


    Somehow the effects of all these fields on the vacuum almost equalize, producing placid stillness. Why is empty space so empty?

    While we don’t know the answer to this question—the infamous “cosmological-constant problem”—the extreme vacuity of our vacuum appears necessary for our existence. In a universe imbued with even slightly more of this gravitationally repulsive energy, space would expand too quickly for structures like galaxies, planets, or people to form. This fine-tuned situation suggests that there might be a huge number of universes, all with different doses of vacuum energy, and that we happen to inhabit an extraordinarily low-energy universe because we couldn’t possibly find ourselves anywhere else.

    Some scientists bristle at the tautology of “anthropic reasoning” and dislike the multiverse for being untestable. Even those open to the multiverse idea would love to have alternative solutions to the cosmological constant problem to explore. But so far it has proved nearly impossible to solve without a multiverse. “The problem of dark energy [is] so thorny, so difficult, that people have not got one or two solutions,” says Raman Sundrum, a theoretical physicist at the University of Maryland.

    To understand why, consider what the vacuum energy actually is. Albert Einstein’s general theory of relativity says that matter and energy tell space-time how to curve, and space-time curvature tells matter and energy how to move. An automatic feature of the equations is that space-time can possess its own energy—the constant amount that remains when nothing else is there, which Einstein dubbed the cosmological constant. For decades, cosmologists assumed its value was exactly zero, given the universe’s reasonably steady rate of expansion, and they wondered why. But then, in 1998, astronomers discovered that the expansion of the cosmos is in fact gradually accelerating, implying the presence of a repulsive energy permeating space. Dubbed dark energy by the astronomers, it’s almost certainly equivalent to Einstein’s cosmological constant. Its presence causes the cosmos to expand ever more quickly, since, as it expands, new space forms, and the total amount of repulsive energy in the cosmos increases.

    However, the inferred density of this vacuum energy contradicts what quantum-field theory, the language of particle physics, has to say about empty space. A quantum field is empty when there are no particle excitations rippling through it. But because of the uncertainty principle in quantum physics, the state of a quantum field is never certain, so its energy can never be exactly zero. Think of a quantum field as consisting of little springs at each point in space. The springs are always wiggling, because they’re only ever within some uncertain range of their most relaxed length. They’re always a bit too compressed or stretched, and therefore always in motion, possessing energy. This is called the zero-point energy of the field. Force fields have positive zero-point energies while matter fields have negative ones, and these energies add to and subtract from the total energy of the vacuum.
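The “little springs” in this picture are quantum harmonic oscillators, and the standard back-of-the-envelope version of the mismatch (a textbook estimate, not a calculation from the article) goes as follows:

```latex
% Each field mode of frequency \omega carries a zero-point energy
E_0 = \tfrac{1}{2}\hbar\omega .
% Summing the modes up to a cutoff momentum \Lambda gives a vacuum energy density
\rho_{\mathrm{vac}} \sim \int^{\Lambda} \frac{d^3k}{(2\pi)^3}\,
    \tfrac{1}{2}\hbar\omega_k \sim \Lambda^4
% (in natural units). Taking \Lambda near the Planck scale overshoots the
% observed value by roughly 120 orders of magnitude, the range quoted in the text.
```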

    The total vacuum energy should roughly equal the largest of these contributing factors. (Say you receive a gift of $10,000; even after spending $100, or finding $3 in the couch, you’ll still have about $10,000.) Yet the observed rate of cosmic expansion indicates that its value is between 60 and 120 orders of magnitude smaller than some of the zero-point energy contributions to it, as if all the different positive and negative terms have somehow canceled out. Coming up with a physical mechanism for this equalization is extremely difficult for two main reasons.

    First, the vacuum energy’s only effect is gravitational, and so dialing it down would seem to require a gravitational mechanism. But in the universe’s first few moments, when such a mechanism might have operated, the universe was so physically small that its total vacuum energy was negligible compared to the amount of matter and radiation. The gravitational effect of the vacuum energy would have been completely dwarfed by the gravity of everything else. “This is one of the greatest difficulties in solving the cosmological-constant problem,” the physicist Raphael Bousso wrote in 2007. A gravitational feedback mechanism precisely adjusting the vacuum energy amid the conditions of the early universe, he said, “can be roughly compared to an airplane following a prescribed flight path to atomic precision, in a storm.”

    Compounding the difficulty, quantum-field theory calculations indicate that the vacuum energy would have shifted in value in response to phase changes in the cooling universe shortly after the Big Bang. This raises the question of whether the hypothetical mechanism that equalized the vacuum energy kicked in before or after these shifts took place. And how could the mechanism know how big their effects would be, to compensate for them?

    So far, these obstacles have thwarted attempts to explain the tiny weight of empty space without resorting to a multiverse lottery. But recently, some researchers have been exploring one possible avenue: If the universe did not bang into existence, but bounced instead, following an earlier contraction phase, then the contracting universe in the distant past would have been huge and dominated by vacuum energy. Perhaps some gravitational mechanism could have acted on the plentiful vacuum energy then, diluting it in a natural way over time. This idea motivated the physicists Peter Graham, David Kaplan, and Surjeet Rajendran to discover a new cosmic bounce model, though they’ve yet to show how the vacuum dilution in the contracting universe might have worked.

    In an email, Bousso called their approach “a very worthy attempt” and “an informed and honest struggle with a significant problem.” But he added that huge gaps in the model remain, and “the technical obstacles to filling in these gaps and making it work are significant. The construction is already a Rube Goldberg machine, and it will at best get even more convoluted by the time these gaps are filled.” He and other multiverse adherents see their answer as simpler by comparison.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 9:05 pm on March 19, 2018 Permalink | Reply
    Tags: , , , , , , Quantum Physics   

    From LLNL: “Breaking the Law: Lawrence Livermore, Department of Energy look to shatter Moore’s Law through quantum computing” 


    Lawrence Livermore National Laboratory

    March 19, 2018
    Jeremy Thomas
    thomas244@llnl.gov
    925-422-5539

    1
    Lawrence Livermore National Laboratory physicist Jonathan DuBois, who heads the Lab’s Quantum Coherent Device Physics (QCDP) group, examines a prototype quantum computing device designed to solve quantum simulation problems. The device is kept inside a refrigerated vacuum tube (gold-plated to provide solid thermal matching) at temperatures colder than outer space. Photos by Carrie Martin/LLNL.

    The laws of quantum physics impact daily life in rippling undercurrents few people are aware of, from the batteries in our smartphones to the energy generated from solar panels. As the Department of Energy and its national laboratories explore the frontiers of quantum science, such as calculating the energy levels of a single atom or how molecules fit together, more powerful tools are a necessity.

    “The problem basically gets worse the larger the physical system gets — if you get beyond a simple molecule we have no way of resolving those kinds of energy differences,” said Lawrence Livermore National Laboratory (LLNL) physicist Jonathan DuBois, who heads the Lab’s Quantum Coherent Device Physics (QCDP) group. “From a physics perspective, we’re getting more and more amazing, highly controlled physics experiments, and if you tried to simulate what they were doing on a classical computer, it’s almost at the point where it would be kind of impossible.”

    In classical computing, Moore’s Law postulates that the number of transistors in an integrated circuit doubles approximately every two years. However, there are indications that Moore’s Law is slowing down and will eventually hit a wall. That’s where quantum computing comes in. Besides busting through the barriers of Moore’s Law, some are banking on quantum computing as the next evolutionary step in computers. It’s on the priority list for the National Nuclear Security Administration’s Advanced Simulation and Computing (ASC) program, which is investigating quantum computing, among other emerging technologies, through its “Beyond Moore’s Law” project. At LLNL, staff scientists DuBois and Eric Holland are leading the effort to develop a comprehensive co-design strategy for near-term application of quantum computing technology to outstanding grand challenge problems in the NNSA mission space.

    Whereas the desktop computers we’re all familiar with store information in binary form, as either a 1 or a 0 (on or off), in a quantum system information can be stored in superpositions, meaning that for a brief moment, mere nanoseconds, the data in a quantum bit can exist as a combination of one and zero before being projected into a classical binary state. Theoretically, these machines could solve certain complex problems much faster than any computer created before. While classical computers perform functions in serial (generating one answer at a time), quantum computers could potentially perform functions and store data in a highly parallelized way, exponentially increasing speed, performance and storage capacity.
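As a hedged illustration of that contrast (a generic textbook sketch, not LLNL's system), a single qubit can be simulated classically: its state is a two-component complex vector, and each measurement projects it to 0 or 1 with probabilities given by the Born rule.

```python
import numpy as np

# A classical bit is 0 or 1. A qubit state is a complex 2-vector (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1; measurement yields 0 or 1 with Born-rule
# probabilities |alpha|^2 and |beta|^2.
rng = np.random.default_rng(1)

# Equal superposition of 0 and 1
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
probs = [abs(alpha)**2, abs(beta)**2]

# Repeated measurement: each shot projects the state into a classical bit
shots = rng.choice([0, 1], size=10_000, p=probs)
print("fraction of 1s:", shots.mean())  # close to 0.5 for this state
```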

    LLNL recently brought on line a full capability quantum computing lab and testbed facility under the leadership of quantum coherent device group member Eric Holland. Researchers are performing tests on a prototype quantum device birthed under the Lab’s Quantum Computing Strategic Initiative. The initiative, now in its third year, is funded by Laboratory Directed Research & Development (LDRD) and aims to design, fabricate, characterize and build quantum coherent devices. The building and demonstration piece is made possible by DOE’s Advanced Scientific Computing Research (ASCR), a program managed by DOE’s Office of Science that is actively engaged in exploring if and how quantum computation could be useful for DOE applications.

    LLNL researchers are developing algorithms for solving quantum simulation problems on the prototype device, which looks deceptively simple and very strange. It’s a cylindrical metal box, with a sapphire chip suspended in it. The box is kept inside a refrigerated vacuum tube (gold-plated to provide solid thermal matching) at temperatures colder than outer space — negative 460 degrees Fahrenheit. It’s highly superconductive and faces zero resistance in the vacuum, thus extending the lifetime of the superposition state.

    “It’s a perfect electrical conductor, so if you can send an excitation inside here, you’ll get electromagnetic (EM) modes inside the box,” DuBois explained. “We’re using the space inside the box, the quantized EM fields, to store and manipulate quantum information, and the little chip couples to fields and manipulates them, determining the fine splitting in energies between different quantum states. These energy differences are what you use to make changes in quantum space.”

    To “talk” to the box, researchers are using an arbitrary waveform generator, which creates an oscillating signal; the timing of the signal determines what computation is being done in the system. DuBois said the physicists are essentially building a quantum solver for Schrödinger’s equation, the basis for almost all physics and the determining factor for the dynamics of a quantum computing system.

    “It turns out that’s actually very hard to solve, and the bigger the system is, the size of what you need to keep track of blows up exponentially,” DuBois said. “The argument here is we can build a system that does that naturally — nature is basically keeping track of all those degrees of freedom for us, and so if we can control it carefully we can get it to basically emulate the quantum dynamics of some problem we’re interested in, a charge transfer in quantum chemistry or biology problem or scattering problem in nuclear physics.”
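What such a solver does can be sketched for the smallest possible case, a single two-level system under a resonant drive (an illustrative toy model, not the Lab's actual code). The exponential blow-up DuBois describes shows up in the state vector itself, which needs 2^N complex entries for N qubits.

```python
import numpy as np

# Schrodinger equation: i * hbar * d|psi>/dt = H |psi>.
# For a time-independent H the exact solution is
# |psi(t)> = exp(-i H t / hbar) |psi(0)>.
hbar = 1.0
omega = 2 * np.pi  # drive (Rabi) frequency, arbitrary units
H = 0.5 * hbar * omega * np.array([[0, 1], [1, 0]], dtype=complex)  # sigma_x drive

def evolve(psi0, t):
    # Matrix exponential via eigendecomposition (valid since H is Hermitian)
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t / hbar)) @ evecs.conj().T
    return U @ psi0

psi0 = np.array([1, 0], dtype=complex)  # start in |0>
for t in np.linspace(0, 1, 5):
    p1 = abs(evolve(psi0, t)[1])**2  # probability of finding |1>
    print(f"t={t:.2f}  P(1)={p1:.3f}")  # Rabi oscillation: sin^2(omega*t/2)
```

For N qubits the Hamiltonian becomes a 2^N by 2^N matrix, which is exactly the bookkeeping the quantum hardware is meant to do "for free."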

    Finding out how the device will work is part of the mission of DOE’s Advanced Quantum-Enabled Simulation (AQuES) Testbed Pathfinder program, which is analyzing several different approaches to creating a functional, useful quantum computer for basic science and use in areas such as determining nuclear scattering rates, the electronic structure in molecules or condensed matter or understanding the energy levels in solar panels. In 2017, DOE awarded $1.5 million over three years to a team including DuBois and Lawrence Berkeley National Laboratory physicists Irfan Siddiqi and Jonathan Carter. The team wants to determine the underlying technology for a quantum system, develop a practical, usable quantum computer and build quantum capabilities at the national labs to solve real-world problems.

    The science of quantum computing, according to DuBois, is “at a turning point.” Within the three-year timeframe, he said, the team should be able to assess what type of quantum system is worth pursuing as a testbed system. The researchers first want to demonstrate control over a quantum computer and solve specific quantum dynamics problems. Then, they want to set up a user facility or cloud-based system that any user could log into and solve complex quantum physics problems.

    “There are multiple competing approaches to quantum computing; trapping ions, semiconducting systems, etc., and all have their quirks — none of them are really at the point where it’s actually a quantum computer,” DuBois said. “The hardware side, which is what this is, the question is, ‘what are the first technologies that we can deploy that will help bridge the gap between what actually exists in the lab and how people are thinking of these systems as theoretical objects?'”

    Quantum computers have come a long way since the first superconducting quantum bit, or “qubit,” was created in 1999. In the nearly 20 years since, quantum systems have improved exponentially, as evidenced by the life span of the qubit’s superposition, or how long it takes the qubit to decay into 0 or 1. In 1999 that figure was a nanosecond. Current systems reach tens to hundreds of milliseconds, which may not sound like much, but the lifetime of the quantum bit has roughly doubled every year.

    For the Testbed project, LLNL’s first generation quantum device will be roughly 20 qubits, DuBois said, large enough to be interesting, but small enough to be useful. A system of that size could potentially reduce the time it takes for most current supercomputing systems to perform quantum dynamics calculations from about a day down to mere microseconds, DuBois said. To get to that point, LLNL and LBNL physicists will need to understand how to design systems that can extend the quantum state.

    “It needs to last long enough to be quantum and it needs to be controllable,” DuBois said. “There’s a spectrum to that; the bigger the space is, the more powerful it has to be. Then there’s how controllable it would be. The finest level of control would be to change the value to anything I want. That’s what we’re aiming for, but there’s a competition involved. We want to hit that sweet spot.”

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration


     
  • richardmitnick 8:59 am on January 11, 2018 Permalink | Reply
    Tags: Grasshopper Theory Might Map the Multiverse, Quantum Physics

    From Edgy Labs: “Grasshopper Theory Might Map the Multiverse” 

    Edgy Labs

    January 10, 2018
    William McKinney

    Genty | Pixabay.com

    New research suggests quantum theory doesn’t follow the rules of “reality”. Let’s see how hypothetical grasshoppers might lead us to the multiverse.

    Who knew that grasshoppers could help us understand quantum theory?

    Apparently, they can. At least, theoretically speaking they can. Recently, two physicists, Olga Goulko and Adrian Kent, have released a paper [Proceedings of The Royal Society A] that wrestles with something called the grasshopper problem.

    The grasshopper problem is a relatively new puzzle in the field of geometry. The problem is simple to state but very hard to solve. Solving it, however, may help us understand the Bell inequalities, which is why mathematicians and physicists worldwide have attempted to find an answer.

    It works like this:

    Let’s say that a grasshopper lands on a random point in a lawn, then jumps at a fixed distance in a random direction. What shape does the lawn have to be so that the grasshopper stays on the lawn after it jumps?

    Seems simple, right? In truth, it’s anything but. It sounds like something that Euclid (the Greek father of modern geometry) would have dreamed up. It’s not, though; the grasshopper problem is, surprisingly, pretty new.

    Because the problem is rather new, researchers have been looking at it through a modern lens. Instead of merely trying to solve the problem, they are getting deep into the variables. Those variables are pretty important, too, because they may help us resolve Bell’s inequalities.

    Let’s start with Goulko and Kent’s work.

    Today Grasshoppers, Tomorrow the Multiverse

    Conventional theories state that a disc-shaped lawn is optimal to solve the grasshopper problem, but Goulko and Kent know better.

    According to them, the optimal lawn shape changes depending on the distance of the jump. For distances smaller than 1/√π (the radius of a circle of area 1, approximately 0.56), for example, a cogwheel shape is best. For larger distances, other shapes, such as a ‘three-bladed fan’ or a row of stripes, are best.
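The setup is easy to probe numerically. Below is a hedged Monte Carlo sketch (the jump distance and the disc shape are illustrative choices, not the paper's optimal lawns) estimating the probability that the grasshopper stays on a naive disc-shaped lawn of area 1:

```python
import numpy as np

rng = np.random.default_rng(42)
R = 1 / np.sqrt(np.pi)  # radius of a disc with area 1
d = 0.3                 # fixed jump distance (illustrative)
n = 100_000

# Uniform random landing points on the disc (sqrt gives uniform area density)
r = R * np.sqrt(rng.random(n))
theta = 2 * np.pi * rng.random(n)
x, y = r * np.cos(theta), r * np.sin(theta)

# Jump of fixed length d in a uniformly random direction
phi = 2 * np.pi * rng.random(n)
x2, y2 = x + d * np.cos(phi), y + d * np.sin(phi)

# Fraction of jumps that land back on the lawn
p_stay = np.mean(x2**2 + y2**2 <= R**2)
print(f"P(stay on disc lawn) ~ {p_stay:.3f}")
```

Goulko and Kent's result is that for jumps shorter than 1/√π, shapes like the cogwheel retain the grasshopper with higher probability than this disc does.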


    Oh, and it makes a difference if the surface of the lawn is flat or spherical, but I’ll get back to that.

    Sometimes the pieces of the lawn are connected, sometimes they are not. It all depends on variables, which is where Bell’s inequalities come in.

    One of the open problems regarding the Bell inequalities is determining what the optimal bounds are. These bounds are violated by quantum theory when quantum correlations get measured on a sphere at any angle between 0 and 90 degrees.
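The "bounds" at issue can be made concrete with the CHSH form of the Bell inequality: local hidden-variable models obey |S| ≤ 2, while quantum mechanics on an entangled singlet pair reaches 2√2 ≈ 2.83 (Tsirelson's bound). A minimal sketch using the standard quantum prediction E(a, b) = −cos(a − b) for singlet correlations:

```python
import numpy as np

# Quantum-mechanical correlation for spin measurements at angles a and b
# on a singlet pair: E(a, b) = -cos(a - b)
def E(a, b):
    return -np.cos(a - b)

# Standard angle choices that maximize the CHSH combination
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4

S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(S)  # S ~ -2.828, magnitude beyond the classical bound of 2
```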

    As it turns out, that problem is pretty much equal to the problem of determining the shape of the lawn when it is spherical rather than flat. Goulko and Kent only analyzed the flat version in their paper, though they don’t think it’s a stretch to apply their method to the spherical case.

    The interesting part is that, when accounting for additional constraints, it might be possible to finally resolve the problem of optimal bounds for the Bell inequalities.

    Why is that so interesting? Well, if we can understand the optimal bounds for the Bell inequalities, we may be able to map out universes that we can’t see. How’s that for a final frontier?

    Exploring the Possibilities of the Multiverse

    Adding to that, we have theories about pocket universes, alternate dimensions, and the Upside Down. Okay, that last one was from Stranger Things, but just try and prove to me that the Upside Down isn’t there.

    One thing you always run into with multiverse theory, though, is quantum entanglement. See, many would believe that other universes and other dimensions are the same thing, but they aren’t. They are entirely different realities, but they could be linked through the quantum fabric of reality-at-large.

    For now, though, we can only speculate. Studying one universe is like an ant studying a deity. Studying the multiverse is going to be a much harder nut to crack. That said, we currently don’t have any better leads on it than quantum theory.

    And the latest leap in quantum theory is coming from a hypothetical grasshopper. Don’t you just love quantum physics?

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 1:49 pm on July 15, 2017 Permalink | Reply
    Tags: An advanced atomic cloud locked up in a small glass cage, Laser light to link caesium atoms and a vibrating membrane, Light 'kicks' object, QBA-Quantum Back Action, Quantum Physics, Smart atomic cloud solves Heisenberg's observation problem, U Copenhagen

    From U Copenhagen Niels Bohr Institute: “Smart atomic cloud solves Heisenberg’s observation problem” 

    University of Copenhagen

    Niels Bohr Institute

    13 July 2017
    Eugene Polzik
    polzik@nbi.dk
    +45 2338 2045

    Quantum physics: Scientists at the Niels Bohr Institute, University of Copenhagen have been instrumental in developing a ‘hands-on’ answer to a challenge intricately linked to a very fundamental principle in physics: Heisenberg’s Uncertainty Principle. The NBI-researchers used laser light to link caesium atoms and a vibrating membrane. The research, the first of its kind, points to sensors capable of measuring movement with unprecedented precision.

    From the left: Phd student Rodrigo Thomas, Professor Eugene Polzik and PhD student Christoffer Møller in front of the experiment demonstrating quantum measurement of motion. Photo: Ola J. Joensen.

    Our lives are packed with sensors gathering all sorts of information. Some of these sensors are integrated in our cell phones, which, for example, enables us to measure the distance we cover when we go for a walk – and thereby also calculate how many calories we have burned thanks to the exercise. To most people this seems rather straightforward.

    When measuring atomic structures or light emissions at the quantum level by means of advanced microscopes or other forms of special equipment, things do, however, get a little more complicated, due to a problem which during the 1920s had the full attention of Niels Bohr as well as Werner Heisenberg: inaccuracies inevitably taint certain measurements conducted at the quantum level. This problem is described in Heisenberg’s Uncertainty Principle.

    In a scientific report published in this week’s issue of Nature, NBI-researchers – based on a number of experiments – demonstrate that Heisenberg’s Uncertainty Principle to some degree can be neutralized. This has never been shown before, and the results may spark development of new measuring equipment as well as new and better sensors.

    Professor Eugene Polzik, head of Quantum Optics (QUANTOP) at the Niels Bohr Institute, has been in charge of the research – which has included the construction of a vibrating membrane and an advanced atomic cloud locked up in a small glass cage.

    If laser light used to measure motion of a vibrating membrane (left) is first transmitted through an atom cloud (center) the measurement sensitivity can be better than standard quantum limits envisioned by Bohr and Heisenberg. Photo: Bastian Leonhardt Strube and Mads Vadsholt.

    Light ‘kicks’ object

    Heisenberg’s Uncertainty Principle basically says that you cannot simultaneously know the exact position and the exact speed of an object.

    This has to do with the fact that observations conducted via a microscope operating with laser light inevitably lead to the object being ‘kicked’. Light is a stream of photons which, when reflected off the object, give it random ‘kicks’ – and as a result of those kicks the object begins to move in a random way.

    This phenomenon is known as Quantum Back Action (QBA) – and these random movements put a limit to the accuracy with which measurements can be carried out at quantum level.
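The uncertainty bound behind all of this can be checked numerically. The sketch below is mine, not part of the article: a toy Python/NumPy calculation (natural units, ħ = 1) that builds a Gaussian wavepacket and verifies that the product of its position and momentum spreads sits right at Heisenberg's limit ħ/2.

```python
import numpy as np

hbar = 1.0                      # natural units
sigma = 1.0                     # width parameter of the wavepacket
x = np.linspace(-10, 10, 4001)  # position grid
dx = x[1] - x[0]

# Normalized Gaussian wavefunction psi(x)
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# Position spread: sqrt(<x^2>), since <x> = 0 by symmetry
prob = psi**2
delta_x = np.sqrt(np.sum(x**2 * prob) * dx)

# Momentum spread: for a real psi, <p^2> = hbar^2 * integral of (dpsi/dx)^2
dpsi = np.gradient(psi, dx)
delta_p = hbar * np.sqrt(np.sum(dpsi**2) * dx)

print(delta_x * delta_p, hbar / 2)  # product sits at the bound, ~0.5
```

A Gaussian is the minimum-uncertainty state; any other wavepacket gives a strictly larger product, which is why the random QBA 'kicks' described above set a floor on measurement accuracy.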

    To conduct the experiments at NBI, professor Polzik and his team of “young, enthusiastic and very skilled NBI-researchers” used a ‘tailor-made’ membrane as the object observed at quantum level. The membrane was built by Ph.D. students Christoffer Møller and Yegishe Tsaturyan, whereas Rodrigo Thomas and Georgios Vasikalis – Ph.D. student and researcher, respectively – were in charge of the atomic aspects. Furthermore, Polzik relied on other NBI employees: assistant professor Mikhail Balabas, who built the minute glass cage for the atoms, and researcher Emil Zeuthen and professor Albert Schliesser, who – collaborating with German colleagues – were in charge of the substantial number of mathematical calculations needed before the project was ready for publication in Nature.

    The atomic part of the hybrid experiment. The atoms are contained in a micro-cell inside the magnetic shield seen in the middle. Photo: Ola J. Joensen.

    Over the last decades scientists have tried to find ways of ‘fooling’ Heisenberg’s Uncertainty Principle. Eugene Polzik and his colleagues came up with the idea of implementing the advanced atomic cloud a few years ago – and the cloud consists of 100 million caesium-atoms locked up in a hermetically closed cage, a glass cell, explains the professor:

    “The cell is just 1 centimeter long, 1/3 of a millimeter high and 1/3 of a millimeter wide, and in order to make the atoms work as intended, the inner cell walls have been coated with paraffin. The membrane – whose movements we were following at the quantum level – measures 0.5 millimeters, which actually is a considerable size in a quantum perspective”.

    The idea behind the glass cell is to deliberately send the laser light used to study the membrane movements at the quantum level through the encapsulated atomic cloud BEFORE the light reaches the membrane, explains Eugene Polzik: “This results in the laser light-photons ‘kicking’ the object – i.e. the membrane – as well as the atomic cloud, and these ‘kicks’ so to speak cancel out. This means that there is no longer any Quantum Back Action – and therefore no limitations as to how accurately measurements can be carried out at the quantum level”.

    The optomechanical part of the hybrid experiment. The cryostat seen in the middle houses the vibrating membrane whose quantum motion is measured. Photo: Ola J. Joensen.

    How can this be utilized?

    “For instance when developing new and much more advanced types of sensors for various analyses of movements than the types we know today from cell phones, GPS and geological surveys”, says professor Eugene Polzik: “Generally speaking sensors operating at the quantum level are receiving a lot of attention these days. One example is the Quantum Technologies Flagship, an extensive EU program which also supports this type of research”.

    The fact that it is indeed possible to ‘fool’ Heisenberg’s Uncertainty Principle may also prove significant in relation to better understanding gravitational waves – waves in the fabric of space-time itself.

    In September of 2015 the American LIGO experiment made the first direct detection and measurement of gravitational waves, stemming from a collision between two very large black holes.

    However, the equipment used by LIGO is influenced by Quantum Back Action, and the new research from NBI may prove capable of eliminating that problem, says Eugene Polzik.


    Niels Bohr Institute Campus

    The Niels Bohr Institute (Danish: Niels Bohr Institutet) is a research institute of the University of Copenhagen. The research of the institute spans astronomy, geophysics, nanotechnology, particle physics, quantum mechanics and biophysics.

    The Institute was founded in 1921, as the Institute for Theoretical Physics of the University of Copenhagen, by the Danish theoretical physicist Niels Bohr, who had been on the staff of the University of Copenhagen since 1914, and who had been lobbying for its creation since his appointment as professor in 1916. On the 80th anniversary of Niels Bohr’s birth – October 7, 1965 – the Institute officially became The Niels Bohr Institute.[1] Much of its original funding came from the charitable foundation of the Carlsberg brewery, and later from the Rockefeller Foundation.[2]

    During the 1920s, and 1930s, the Institute was the center of the developing disciplines of atomic physics and quantum physics. Physicists from across Europe (and sometimes further abroad) often visited the Institute to confer with Bohr on new theories and discoveries. The Copenhagen interpretation of quantum mechanics is named after work done at the Institute during this time.

    On January 1, 1993 the institute was fused with the Astronomic Observatory, the Ørsted Laboratory and the Geophysical Institute. The new resulting institute retained the name Niels Bohr Institute.

    The University of Copenhagen (UCPH) (Danish: Københavns Universitet) is the oldest university and research institution in Denmark. Founded in 1479 as a studium generale, it is the second oldest institution for higher education in Scandinavia after Uppsala University (1477). The university has 23,473 undergraduate students, 17,398 postgraduate students, 2,968 doctoral students and over 9,000 employees. The university has four campuses located in and around Copenhagen, with the headquarters located in central Copenhagen. Most courses are taught in Danish; however, many courses are also offered in English and a few in German. The university has several thousands of foreign students, about half of whom come from Nordic countries.

    The university is a member of the International Alliance of Research Universities (IARU), along with University of Cambridge, Yale University, The Australian National University, and UC Berkeley, amongst others. The 2016 Academic Ranking of World Universities ranks the University of Copenhagen as the best university in Scandinavia and 30th in the world, the 2016-2017 Times Higher Education World University Rankings as 120th in the world, and the 2016-2017 QS World University Rankings as 68th in the world. The university has had 9 alumni become Nobel laureates and has produced one Turing Award recipient.

     
  • richardmitnick 12:16 pm on February 11, 2017 Permalink | Reply
    Tags: , Beamsplitter, , Quantum Physics, , What shape are photons? Quantum holography sheds light   

    From COSMOS: “What shape are photons? Quantum holography sheds light” 


    COSMOS

    20 July 2016 [Just found this in social media]
    Cathal O’Connell

    Hologram of a single photon reconstructed from raw measurements (left) and theoretically predicted (right).
    FUW

    Imagine a shaft of yellow sunlight beaming through a window. Quantum physics tells us that beam is made of zillions of tiny packets of light, called photons, streaming through the air. But what does an individual photon “look” like? Does it have a shape? Are these questions even meaningful?

    Now, Polish physicists have created the first ever hologram of a single light particle. The feat, achieved by observing the interference of two intersecting light beams, is an important insight into the fundamental quantum nature of light.

    The result could also be important for technologies that require an understanding of the shape of single photons – such as quantum communication and quantum computers.

    “We performed a relatively simple experiment to measure and view something incredibly difficult to observe: the shape of wavefronts of a single photon,” says Radoslaw Chrapkiewicz, a physicist at the University of Warsaw and lead author of the new paper, published in Nature Photonics.

    For hundreds of years, physicists have been working to figure out what light is made of. In the 19th century, the debate seemed to be settled by Scottish physicist James Clerk Maxwell’s description of light as a wave of electromagnetism.

    But things got a bit more complicated at the turn of the 20th century, when the German physicist Max Planck, followed by his fellow countryman Albert Einstein, showed that light was made up of tiny indivisible packets called photons.

    In the 1920s, Austrian physicist Erwin Schrödinger elaborated on these ideas with his equation for the quantum wave function to describe what a wave looks like, which has proved incredibly powerful in predicting the results of experiments with photons. But, despite the success of Schrödinger’s theory, physicists still debate what the wave function really means.

    Now physicists at the University of Warsaw measured, for the first time, the shape described by Schrödinger’s equation in a real experiment.

    Photons, travelling as waves, can be in step (called having the same phase). If in-phase waves combine, they produce a bright signal; if they’re out of phase, they cancel each other out. It’s like sound waves from two speakers producing loud and quiet patches in a room.
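The speaker analogy is easy to sketch numerically. This toy example (mine, not from the article) adds two sine waves, first with zero phase difference and then shifted by half a cycle:

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 1000)

in_phase = np.sin(t) + np.sin(t)              # phase difference 0: in step
out_of_phase = np.sin(t) + np.sin(t + np.pi)  # phase difference pi: out of step

print(np.max(np.abs(in_phase)))      # ~2.0 -> 'bright' (constructive)
print(np.max(np.abs(out_of_phase)))  # ~0.0 -> 'dark' (destructive)
```

The in-step waves reinforce to twice the amplitude, while the out-of-step waves cancel almost exactly, which is the loud-and-quiet-patches effect in one dimension.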

    The image – which is called a hologram because it holds information on both the photon’s shape and phase – was created by firing two light beams at a beamsplitter, made of calcite crystal, at the same time.

    The beamsplitter acts a bit like a traffic intersection, where each photon can either pass straight on through or make a turn. The Polish team’s experiment hinged on measuring which path each photon took, which depends on the shape of their wave functions.

    Scheme of the experimental setup for measuring holograms of single photons. FUW / dualcolor.pl / jch

    For a photon on its own, each path is equally probable. But when two photons approach the intersection, they interact – and these odds change.

    The team realised that if they knew the wave function of one of the photons, they could figure out the shape of the second from the positions of flashes appearing on a detector.

    It’s a little like firing two bullets so that they glance off one another mid-air, then using the deflected trajectories to figure out the shape of each projectile.

    Each run of the experiment produced two flashes on a detector, one for each photon. After more than 2,000 repetitions, a pattern of flashes built up and the team were able to reconstruct the shape of the unknown photon’s wave function.

    The resulting image looks a bit like a Maltese cross, just like the wave function predicted from Schrödinger’s equation. In the arms of the cross, where the photons were in step, the image is bright – and where they weren’t, we see darkness.

    The experiment brings us “a step closer to understanding what the wave function really is,” says Michal Jachura, who co-authored the work, and it could be a new tool for studying the interaction between two photons, on which technologies such as quantum communication and some versions of quantum computing rely.

    The researchers also hope to recreate wave functions of more complex quantum objects, such as atoms.

    “It’s likely that real applications of quantum holography won’t appear for a few decades yet,” says Konrad Banaszek, who was also part of the team, “but if there’s one thing we can be sure of it’s that they will be surprising.”


     
  • richardmitnick 7:04 pm on February 7, 2017 Permalink | Reply
    Tags: , Quantum Physics, Reality at the atomic scale,   

    From The New Yorker: “Quantum Theory by Starlight” Gee, Actual Physical Science from The New Yorker 


    The New Yorker

    [Shock of shocks, The New Yorker remembers the physical sciences. Anyone remember Jeremy Bernstein?]

    2.7.17
    David Kaiser

    In parsing the strange dance of subatomic particles, it can be helpful to think of them as twins. IMAGE BY CHRONICLE / ALAMY

    The headquarters of the National Bank of Austria, in central Vienna, are exceptionally secure. During the week, in the basement of the building, employees perform quality-control tests on huge stacks of euros. One night last spring, however, part of the bank was given over to a different sort of testing. A group of young physicists, with temporary I.D. badges and sensitive electronics in tow, were allowed up to the top floor, where they assembled a pair of telescopes. One they aimed skyward, at a distant star in the Milky Way. The other they pointed toward the city, searching for a laser beam shot from a rooftop several blocks away. For all the astronomical equipment, though, their real quarry was a good deal smaller. They were there to conduct a new test of quantum theory, the branch of physics that seeks to explain reality at the atomic scale.

    It is difficult to overstate the weirdness of quantum physics. Even Albert Einstein and Erwin Schrödinger, both major architects of the theory, ultimately found it too outlandish to be wholly true. Throughout the summer of 1935, they aired their frustrations in a series of letters. For one thing, unlike Newtonian physics and Einstein’s relativity, which elegantly explained the behavior of everything from the fall of apples to the motion of galaxies, quantum theory offered only probabilities for various outcomes, not rock-solid predictions. It was an “epistemology-soaked orgy,” Einstein wrote, treating objects in the real world as mere puffs of possibility—both there and not there, or, in the case of Schrödinger’s famous imaginary cat, both alive and dead. Strangest of all was what Schrödinger dubbed “entanglement.” In certain situations, the equations of quantum theory implied that one subatomic particle’s behavior was bound up with another’s, even if the second particle was across the room, or on the other side of the planet, or in the Andromeda galaxy. They couldn’t be communicating, exactly, since the effect seemed to be instantaneous, and Einstein had already demonstrated that nothing could travel faster than light. In a letter to a friend, he dismissed entanglement as “spooky actions at a distance”—more ghost story than respectable science. But how to account for the equations?

    Physicists often invoke twins when trying to articulate the more fantastical elements of their theories. Einstein’s relativity, for instance, introduced the so-called twin paradox, which illustrates how a rapid journey through space and time can make one woman age more slowly than her twin. (Schrödinger’s interest in twins was rather less academic. His exploits with the Junger sisters, who were half his age, compelled his biographer to save a spot in the index for “Lolita complex.”) I am a physicist, and my wife and I actually have twins, so I find it particularly helpful to think about them when trying to parse the strange dance of entanglement.

    Let us call our quantum twins Ellie and Toby. Imagine that, at the same instant, Ellie walks into a restaurant in Cambridge, Massachusetts, and Toby walks into a restaurant in Cambridge, England. They ponder the menus, make their selections, and enjoy their meals. Afterward, their waiters come by to offer dessert. Ellie is given the choice between a brownie and a cookie. She has no real preference, being a fan of both, so she chooses one seemingly at random. Toby, who shares his sister’s catholic attitude toward sweets, does the same. Both siblings like their restaurants so much that they return the following week. This time, when their meals are over, the waiters offer ice cream or frozen yogurt. Again the twins are delighted—so many great options!—and again they choose at random.

    In the ensuing months, Ellie and Toby return to the restaurants often, alternating aimlessly between cookies or brownies and ice cream or frozen yogurt. But when they get together for Thanksgiving, looking rather plumper than last year, they compare notes and find a striking pattern in their selections. It turns out that when both the American and British waiters offered baked goods, the twins usually ordered the same thing—a brownie or a cookie for each. When the offers were different, Toby tended to order ice cream when Ellie ordered brownies, and vice versa. For some reason, though, when they were both offered frozen desserts, they tended to make opposite selections—ice cream for one, frozen yogurt for the other. Toby’s chances of ordering ice cream seemed to depend on what Ellie ordered, an ocean away. Spooky, indeed.

    Einstein believed that particles have definite properties of their own, independent of what we choose to measure, and that local actions produce only local effects—that what Toby orders has no bearing on what Ellie orders. In 1964, the Irish physicist John Bell identified the statistical threshold between Einstein’s world and the quantum world. If Einstein was right, then the outcomes of measurements on pairs of particles should line up only so often; there should be a strict limit on how frequently Toby’s and Ellie’s dessert orders are correlated. But if he was wrong, then the correlations should occur significantly more often. For the past four decades, scientists have tested the boundaries of Bell’s theorem. In place of Ellie and Toby, they have used specially prepared pairs of particles, such as photons of light. In place of friendly waiters recording dessert orders, they have used instruments that can measure some physical property, such as polarization—whether a photon’s electric field oscillates along or at right angles to some direction in space. To date, every single published test has been consistent with quantum theory.
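Bell's "strict limit" can be made concrete with a quick calculation. In the standard CHSH form of the theorem, any Einstein-style local model obeys |S| ≤ 2, while quantum mechanics, using the singlet-pair correlation E(a, b) = -cos(a - b), reaches 2√2 at suitably chosen measurement angles. A minimal sketch (mine, not from the article; the angles are the textbook CHSH-optimal choices):

```python
import numpy as np

def E(a, b):
    # Quantum-mechanical correlation of outcomes for a spin-singlet pair
    # measured along directions at angles a and b
    return -np.cos(a - b)

# Measurement angles that maximize the quantum violation of CHSH
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, -np.pi / 4

S = abs(E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2))
print(S)  # ~2.828 = 2*sqrt(2), beating the local-realist limit of 2
```

Every published experiment to date, including the starlight test described below in the original text, finds correlations on the quantum side of that line.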

    From the start, however, physicists have recognized that their experiments are subject to various loopholes, circumstances that could, in principle, account for the observed results even if quantum theory were wrong and entanglement merely a chimera. One loophole, known as locality, concerns information flow: could a particle on one side of the experiment, or the instrument measuring it, have sent some kind of message to the other side before the second measurement was completed? Another loophole concerns statistics: what if the particles that were measured somehow represented a biased sample, a few spooky dessert orders amid thousands of unseen boring ones? Physicists have found clever ways of closing one or the other of these loopholes over the years, and in 2015, in a beautiful experiment out of the Netherlands, one group managed to close both at once. But there is a third major loophole, one that Bell overlooked in his original analysis. Known as the freedom-of-choice loophole, it concerns whether some event in the past could have nudged both the choice of measurements to be performed and the behavior of the entangled particles—in our analogy, the desserts being offered and the selections that Ellie and Toby made. Where the locality loophole imagines Ellie and Toby, or their waiters, communicating with each other, the freedom-of-choice loophole supposes that some third party could have rigged things without any of them noticing. It was this loophole that my colleagues and I recently set out to address.

    We performed our experiment last April, spread out in three locations across Schrödinger’s native Vienna. A laser in Anton Zeilinger’s laboratory at the Institute for Quantum Optics and Quantum Information supplied our entangled photons. About three-quarters of a mile to the north, Thomas Scheidl and his colleagues set up two telescopes in a different university building. One was aimed at the institute, ready to receive the entangled photons, and one was pointed in the opposite direction, fixed on a star in the night sky. Several blocks south of the institute, at the National Bank of Austria, a second team, led by Johannes Handsteiner, had a comparable setup. Their second telescope, the one that wasn’t looking at the institute, was turned to the south.

    Our group’s goal was to measure pairs of entangled particles while insuring that the type of measurement we performed on one had nothing to do with how we assessed the other. In short, we wanted to turn the universe into a pair of random-number generators. Handsteiner’s target star was six hundred light-years from Earth, which meant that the light received by his telescope had been travelling for six hundred years. We selected the star carefully, such that the light it emitted at a particular moment all those centuries ago would reach Handsteiner’s telescope first, before it could cover the extra distance to either Zeilinger’s lab or the university. Scheidl’s target star, meanwhile, was nearly two thousand light-years away. Both teams’ telescopes were equipped with special filters, which could distinguish extremely rapidly between photons that were more red or more blue than a particular reference wavelength. If Handsteiner’s starlight in a given instant happened to be more red, then the instruments at his station would perform one type of measurement on the entangled photon, which was just then zipping through the night sky, en route from Zeilinger’s laboratory. If Handsteiner’s starlight happened instead to be blue, then the other type of measurement would be performed. The same went for Scheidl’s station. The detector settings on each side changed every few millionths of a second, based on new observations of the stars.

    With this arrangement, it was as if each time Ellie walked into the restaurant, her waiter offered her a dessert based on an event that had occurred several centuries earlier, trillions of miles from the Earth—which neither Ellie, nor Toby, nor Toby’s waiter could have foreseen. Meanwhile, by placing Handsteiner’s and Scheidl’s stations relatively far apart, we were able to close the locality loophole even as we addressed the freedom-of-choice loophole. (Since we only detected a small fraction of all the entangled particles that were emitted from Zeilinger’s lab, though, we had to assume that the photons we did measure represented a fair sample of the whole collection.) We conducted two experiments that night, aiming the stellar telescopes at one pair of stars for three minutes, then another pair for three more. In each case, we detected about a hundred thousand pairs of entangled photons. The results from each experiment showed beautiful agreement with the predictions from quantum theory, with correlations far exceeding what Bell’s inequality would allow. Our results were published on Tuesday in the journal Physical Review Letters.

    How might a devotee of Einstein’s ideas respond? Perhaps our assumption of fair sampling was wrong, or perhaps some strange, unknown mechanism really did exploit the freedom-of-choice loophole, in effect alerting one receiving station of what was about to occur at the other. We can’t rule out such a bizarre scenario, but we can strongly constrain it. In fact, our experiment represents an improvement by sixteen orders of magnitude—a factor of ten million billion—over previous efforts to address the freedom-of-choice loophole. In order to account for the results of our new experiment, the unknown mechanism would need to have been set in place before the emission of the starlight that Handsteiner’s group observed, back when Joan of Arc’s friends still called her Joanie.

    Experiments like ours—and follow-up versions we plan to conduct, using larger telescopes to spy even fainter, more distant astronomical objects—harness some of the largest scales in nature to test its tiniest, and most fundamental, phenomena. Beyond that, our explorations could help shore up the security of next-generation devices, such as quantum-encryption schemes, which depend on entanglement to protect against hackers and eavesdroppers. But, for me, the biggest motivation remains exploring the strange mysteries of quantum theory. The world described by quantum mechanics is fundamentally, stubbornly different from the worlds of Newtonian physics or Einsteinian relativity. If Ellie’s and Toby’s dessert orders are going to keep lining up so spookily, I want to know why.


     
  • richardmitnick 10:32 am on September 8, 2015 Permalink | Reply
    Tags: , , Quantum Physics   

    From Nature: “Quantum physics: What is really real?” 

    Nature

    20 May 2015
    Zeeya Merali

    An experiment showing that oil droplets can be propelled across a fluid bath by the waves they generate has prompted physicists to reconsider the idea that something similar allows particles to behave like waves. No image credit

    Owen Maroney worries that physicists have spent the better part of a century engaging in fraud.

    Ever since they invented quantum theory in the early 1900s, explains Maroney, who is himself a physicist at the University of Oxford, UK, they have been talking about how strange it is — how it allows particles and atoms to move in many directions at once, for example, or to spin clockwise and anticlockwise simultaneously. But talk is not proof, says Maroney. “If we tell the public that quantum theory is weird, we better go out and test that’s actually true,” he says. “Otherwise we’re not doing science, we’re just explaining some funny squiggles on a blackboard.”

    It is this sentiment that has led Maroney and others to develop a new series of experiments to uncover the nature of the wavefunction — the mysterious entity that lies at the heart of quantum weirdness. On paper, the wavefunction is simply a mathematical object that physicists denote with the Greek letter psi (Ψ) — one of Maroney’s funny squiggles — and use to describe a particle’s quantum behaviour. Depending on the experiment, the wavefunction allows them to calculate the probability of observing an electron at any particular location, or the chances that its spin is oriented up or down. But the mathematics shed no light on what a wavefunction truly is. Is it a physical thing? Or just a calculating tool for handling an observer’s ignorance about the world?

    The tests being used to work that out are extremely subtle, and have yet to produce a definitive answer. But researchers are optimistic that a resolution is close. If so, they will finally be able to answer questions that have lingered for decades. Can a particle really be in many places at the same time? Is the Universe continually dividing itself into parallel worlds, each with an alternative version of ourselves? Is there such a thing as an objective reality at all?

    “These are the kinds of questions that everybody has asked at some point,” says Alessandro Fedrizzi, a physicist at the University of Queensland in Brisbane, Australia. “What is it that is really real?”

    Debates over the nature of reality go back to physicists’ realization in the early days of quantum theory that particles and waves are two sides of the same coin. A classic example is the double-slit experiment, in which individual electrons are fired at a barrier with two openings: the electron seems to pass through both slits in exactly the same way that a light wave does, creating a banded interference pattern on the other side (see ‘Wave–particle weirdness’). In 1926, the Austrian physicist Erwin Schrödinger invented the wavefunction to describe such behaviour, and devised an equation that allowed physicists to calculate it in any given situation [1]. But neither he nor anyone else could say anything about the wavefunction’s nature.

    Ignorance is bliss

    From a practical perspective, its nature does not matter. The textbook Copenhagen interpretation of quantum theory, developed in the 1920s mainly by physicists Niels Bohr and Werner Heisenberg, treats the wavefunction as nothing more than a tool for predicting the results of observations, and cautions physicists not to concern themselves with what reality looks like underneath. “You can’t blame most physicists for following this ‘shut up and calculate’ ethos because it has led to tremendous developments in nuclear physics, atomic physics, solid-state physics and particle physics,” says Jean Bricmont, a statistical physicist at the Catholic University of Louvain in Belgium. “So people say, let’s not worry about the big questions.”

    But some physicists worried anyway. By the 1930s, Albert Einstein had rejected the Copenhagen interpretation — not least because it allowed two particles to entangle their wavefunctions, producing a situation in which measurements on one could instantaneously determine the state of the other even if the particles were separated by vast distances. Rather than accept such “spooky action at a distance”, Einstein preferred to believe that the particles’ wavefunctions were incomplete. Perhaps, he suggested, the particles have some kind of ‘hidden variables’ that determine the outcome of the measurement, but that quantum theories do not capture.

    Experiments since then have shown that this spooky action at a distance is quite real, which rules out the particular version of hidden variables that Einstein advocated. But that has not stopped other physicists from coming up with interpretations of their own. These interpretations fall into two broad camps. There are those that agree with Einstein that the wavefunction represents our ignorance — what philosophers call psi-epistemic models. And there are those that view the wavefunction as a real entity — psi-ontic models.

    To appreciate the difference, consider a thought experiment that Schrödinger described in a 1935 letter to Einstein. Imagine that a cat is enclosed in a steel box. And imagine that the box also contains a sample of radioactive material that has a 50% probability of emitting a decay product in one hour, along with an apparatus that will poison the cat if it detects such a decay. Because radioactive decay is a quantum event, wrote Schrödinger, the rules of quantum theory state that, at the end of the hour, the wavefunction for the box’s interior must be an equal mixture of live cat and dead cat.


    “Crudely speaking,” says Fedrizzi, “in a psi-epistemic model the cat in the box is either alive or it’s dead and we just don’t know because the box is closed.” But most psi-ontic models agree with the Copenhagen interpretation: until an observer opens the box and looks, the cat is both alive and dead.

    But this is where the debate gets stuck. Which of quantum theory’s many interpretations — if any — is correct? That is a tough question to answer experimentally, because the differences between the models are subtle: to be viable, they have to predict essentially the same quantum phenomena as the very successful Copenhagen interpretation. Andrew White, a physicist at the University of Queensland, says that for most of his 20-year career in quantum technologies “the problem was like a giant smooth mountain with no footholds, no way to attack it”.

    That changed in 2011, with the publication of a theorem about quantum measurements that seemed to rule out the wavefunction-as-ignorance models. On closer inspection, however, the theorem turned out to leave enough wiggle room for them to survive. Nonetheless, it inspired physicists to think seriously about ways to settle the debate by actually testing the reality of the wavefunction. Maroney had already devised an experiment that should work in principle, and he and others soon found ways to make it work in practice. The experiment was carried out last year by Fedrizzi, White and others [7].

    To illustrate the idea behind the test, imagine two stacks of playing cards. One contains only red cards; the other contains only aces. “You’re given a card and asked to identify which deck it came from,” says Martin Ringbauer, a physicist also at the University of Queensland. If it is a red ace, he says, “there’s an overlap and you won’t be able to say where it came from”. But if you know how many of each type of card are in each deck, you can at least calculate how often such ambiguous situations will arise.
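    Ringbauer’s analogy can be made quantitative with a hypothetical example using a standard 52-card deck (the specific deck compositions below are this sketch’s assumption, not Ringbauer’s): deck A holds the 26 red cards, deck B the 4 aces, and the 2 red aces form the ambiguous overlap.

    ```python
    from fractions import Fraction

    # Deck A: the 26 red cards. Deck B: the 4 aces. Overlap: the 2 red aces.
    red_cards, aces, red_aces = 26, 4, 2

    # How often is a card's origin ambiguous, given which deck it was drawn from?
    p_ambiguous_from_red = Fraction(red_aces, red_cards)  # 2/26 = 1/13
    p_ambiguous_from_ace = Fraction(red_aces, aces)       # 2/4  = 1/2

    # If the deck itself is chosen by a fair coin flip, the overall ambiguity
    # rate is the average of the two conditional rates.
    p_ambiguous = (p_ambiguous_from_red + p_ambiguous_from_ace) / 2
    ```

    Knowing the deck compositions fixes exactly how often the red-ace situation arises; the experiment applies the same accounting to overlapping quantum states.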

    Out on a limb

    A similar ambiguity occurs in quantum systems. It is not always possible for a single measurement in the lab to distinguish how a photon is polarized, for example. “In real life, it’s pretty easy to tell west from slightly south of west, but in quantum systems, it’s not that simple,” says White. According to the standard Copenhagen interpretation, there is no point in asking what the polarization is because the question does not have an answer — or at least, not until another measurement can determine that answer precisely. But according to the wavefunction-as-ignorance models, the question is perfectly meaningful; it is just that the experimenters — like the card-game player — do not have enough information from that one measurement to answer. As with the cards, it is possible to estimate how much ambiguity can be explained by such ignorance, and compare it with the larger amount of ambiguity allowed by standard theory.
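    White’s compass analogy corresponds to the overlap between two polarization states. For single photons, Malus’s law gives the probability that a measurement aligned with one polarization angle also fires for a photon prepared at another; a minimal sketch, with angles in radians and the “west” reference chosen arbitrarily for illustration:

    ```python
    import math

    def overlap(angle_a, angle_b):
        """Probability that an analyzer set to angle_a passes a photon
        polarized at angle_b: cos^2 of the angle between them (Malus's law).
        """
        return math.cos(angle_a - angle_b) ** 2

    west = math.pi                             # due west, arbitrary reference
    slightly_south = west + math.radians(10)   # 'slightly south of west'
    ```

    Here `overlap(west, slightly_south)` is about 0.97: a single measurement can almost never tell the two states apart, which is exactly the ambiguity the ignorance models try to attribute to missing information.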

    That is essentially what Fedrizzi’s team tested. The group measured polarization and other features in a beam of photons and found a level of overlap that could not be explained by the ignorance models. The results support the alternative view that, if objective reality exists, then the wavefunction is real. “It’s really impressive that the team was able to address a profound issue, with what’s actually a very simple experiment,” says Andrea Alberti, a physicist at the University of Bonn in Germany.

    The conclusion is still not ironclad, however: because the detectors picked up only about one-fifth of the photons used in the test, the team had to assume that the lost photons were behaving in the same way. That is a big assumption, and the group is currently working on closing the sampling gap to produce a definitive result. In the meantime, Maroney’s team at Oxford is collaborating with a group at the University of New South Wales in Australia, to perform similar tests with ions, which are easier to track than photons. “Within the next six months we could have a watertight version of this experiment,” says Maroney.

    But even if their efforts succeed and the wavefunction-as-reality models are favoured, those models come in a variety of flavours — and experimenters will still have to pick them apart.

    One of the earliest such interpretations was set out in the 1920s by French physicist Louis de Broglie [8], and expanded in the 1950s by US physicist David Bohm. According to de Broglie–Bohm models, particles have definite locations and properties, but are guided by some kind of ‘pilot wave’ that is often identified with the wavefunction. This would explain the double-slit experiment because the pilot wave would be able to travel through both slits and produce an interference pattern on the far side, even though the electron it guided would have to pass through one slit or the other.

    In 2005, de Broglie–Bohmian mechanics received an experimental boost from an unexpected source. Physicists Emmanuel Fort, now at the Langevin Institute in Paris, and Yves Couder at the University of Paris Diderot gave the students in an undergraduate laboratory class what they thought would be a fairly straightforward task: build an experiment to see how oil droplets falling into a tray filled with oil would coalesce as the tray was vibrated. Much to everyone’s surprise, ripples began to form around the droplets when the tray hit a certain vibration frequency. “The drops were self-propelled — surfing or walking on their own waves,” says Fort. “This was a dual object we were seeing — a particle driven by a wave.”

    Since then, Fort and Couder have shown that such waves can guide these ‘walkers’ through the double-slit experiment as predicted by pilot-wave theory, and can mimic other quantum effects, too [11]. This does not prove that pilot waves exist in the quantum realm, cautions Fort. But it does show how an atomic-scale pilot wave might work. “We were told that such effects cannot happen classically,” he says, “and here we are, showing that they do.”

    Another set of reality-based models, devised in the 1980s, tries to explain the strikingly different properties of small and large objects. “Why electrons and atoms can be in two different places at the same time, but tables, chairs, people and cats can’t,” says Angelo Bassi, a physicist at the University of Trieste, Italy. Known as ‘collapse models’, these theories postulate that the wavefunctions of individual particles are real, but can spontaneously lose their quantum properties and snap the particle into, say, a single location. The models are set up so that the odds of this happening are infinitesimal for a single particle, so that quantum effects dominate at the atomic scale. But the probability of collapse grows astronomically as particles clump together, so that macroscopic objects lose their quantum features and behave classically.
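    The amplification mechanism can be illustrated with order-of-magnitude numbers. The original GRW collapse model assigns each particle a spontaneous localization rate of roughly 1e-16 per second; treating collapses as independent events, the mean waiting time shrinks in proportion to particle number. The sketch below uses that standard GRW figure, but the particle counts and interpretation are illustrative assumptions, not results from the article.

    ```python
    GRW_RATE = 1e-16  # spontaneous localizations per particle per second (order of magnitude)

    def mean_collapse_time(n_particles, rate=GRW_RATE):
        """Mean waiting time before any particle in the system localizes.

        With independent collapse events, the total rate scales linearly
        with particle number, so the mean waiting time is 1 / (N * rate).
        """
        return 1.0 / (n_particles * rate)

    single_atom = mean_collapse_time(1)      # ~1e16 s: hundreds of millions of years
    dust_grain = mean_collapse_time(1e18)    # ~0.01 s: effectively instantaneous
    ```

    A lone atom essentially never collapses, so interference experiments succeed; a dust grain snaps to a definite position in a hundredth of a second, which is why tables, chairs and cats behave classically.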

    One way to test this idea is to look for quantum behaviour in larger and larger objects. If standard quantum theory is correct, there is no limit. And physicists have already carried out double-slit interference experiments with large molecules. But if collapse models are correct, then quantum effects will not be apparent above a certain mass. Various groups are planning to search for such a cut-off using cold atoms, molecules, metal clusters and nanoparticles. They hope to see results within a decade. “What’s great about all these kinds of experiments is that we’ll be subjecting quantum theory to high-precision tests, where it’s never been tested before,” says Maroney.

    Parallel worlds

    One wavefunction-as-reality model is already famous and beloved by science-fiction writers: the many-worlds interpretation developed in the 1950s by Hugh Everett, who was then a graduate student at Princeton University in New Jersey. In the many-worlds picture, the wavefunction governs the evolution of reality so profoundly that whenever a quantum measurement is made, the Universe splits into parallel copies. Open the cat’s box, in other words, and two parallel worlds will branch out — one with a living cat and another containing a corpse.

    Distinguishing Everett’s many-worlds interpretation from standard quantum theory is tough because both make exactly the same predictions. But last year, Howard Wiseman at Griffith University in Brisbane and his colleagues proposed a testable multiverse model [13]. Their framework does not contain a wavefunction: particles obey classical rules such as Newton’s laws of motion. The weird effects seen in quantum experiments arise because there is a repulsive force between particles and their clones in parallel universes. “The repulsive force between them sets up ripples that propagate through all of these parallel worlds,” Wiseman says.

    Using computer simulations with as many as 41 interacting worlds, they have shown that this model roughly reproduces a number of quantum effects, including the trajectories of particles in the double-slit experiment [13]. The interference pattern becomes closer to that predicted by standard quantum theory as the number of worlds increases. Because the theory predicts different results depending on the number of universes, says Wiseman, it should be possible to devise ways to check whether his multiverse model is right — meaning that there is no wavefunction, and reality is entirely classical.

    Because Wiseman’s model does not need a wavefunction, it will remain viable even if future experiments rule out the ignorance models. Also surviving would be models, such as the Copenhagen interpretation, that maintain there is no objective reality — just measurements.

    But then, says White, that is the ultimate challenge. Although no one knows how to do it yet, he says, “what would be really exciting is to devise a test for whether there is in fact any objective reality out there at all.”

    See the original article for References.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

     
  • richardmitnick 7:39 am on August 25, 2015 Permalink | Reply
    Tags: Quantum Physics

    From Oxford: “Randomness and order” 


    University of Oxford

    8.25.15
    This series grew out of The Oxford Research Centre in the Humanities (TORCH) conference ‘Randomness and Order’, at which academics in the fields of quantum physics, music, probability and medieval history discussed what randomness meant in their different disciplines.

    Professor Ian Walmsley, FRS

    Ian Walmsley is Hooke Professor of Experimental Physics, a Professorial Fellow of St Hugh’s College and was appointed Pro-Vice-Chancellor for Research in February 2009.

    I’m talking to Professor Ian Walmsley, Pro-Vice-Chancellor for Research and Hooke Professor of Experimental Physics.

    What does randomness and order mean in quantum physics?
    It’s more than the usual sort of randomness we might encounter in everyday life, where you’re not sure what the stock market’s going to do or what the weather’s going to be like. Because, although those things are definite – in the sense that there will be stocks that have real prices to the buyer and seller – the whole edifice is so complicated that it’s impossible to know what those tiny details are at any one time.

    But, in quantum mechanics, the notion of randomness is embedded very much in the theory itself: that it is intrinsically unknowable – not just that you don’t know it, but that it is not knowable itself.

    There’s been a long debate as to whether this is simply that we have an inadequate theory and that at the bottom of it all there’s really stuff there that we can talk about definitely, or whether it really is that unknowable. People are still doing experiments, still thinking of ways to test that very concept, which is remarkable given how successful we’ve been in applying that theory to do all sorts of things. So it’s strange that this very successful theory somehow seems to be built on foundations that we don’t properly understand.

    When you first came across the extent of randomness in the world’s structure, did it change your perspective?
    Certainly it’s something that is very starkly evident as you begin to learn quantum mechanics as an undergraduate, and it does affect how you understand the very nature of what physics is about.

    Yet one does wonder whether in a sense it’s a modern disease – that is, the reason it feels so strange is that we’re used to the idea that science dissects things to the point where you reach irreducible elements that are real things (and then you can build up concepts and ideas on top of those). Quantum mechanics seems to shake that picture. Then the question is: was our earlier picture just something we were comfortable with, not any more real?

    Nonetheless, there is a dichotomy between the concept that things are fuzzy at the foundations and yet in everyday life we find things apparently completely certain: this is a solid table; we know it’s there and we don’t seem to feel there’s an uncertainty about it at all. So the question as to how this certainty arises out of this picture of underlying fuzziness is of great interest to physicists.

    There’s always a tendency in physics to tie the concepts that appear in your calculations to things that actually exist in the world. That’s not a uniquely quantum mechanical thing: [Sir Isaac] Newton was challenged when he came up with his ideas about gravity, which required there to be a force – an action-at-a-distance – between planets, and people felt, because he couldn’t describe in physical terms what that connection was, that he was introducing ideas of ‘the occult’ into science. He had a very impressive tool to calculate orbits based on a concept that at the time people felt was just ridiculous – the objection that it didn’t have a correspondence in the universe is the same as what we find now. The idea that things in your equations must correspond to things in the real world is always a tension in physics, and quantum mechanics just raises that in a new and very profound way – a way that challenges our conception of what the scientific enterprise is about.

    Do you think there’s something problematic about human desire to find order, when there’s a lot about the structure of the universe that is random?
    This is outside my realm of expertise, but I think the enterprise of physics is about deeper understanding. Our understanding of the universe’s structure does give us a perspective of our place in the world. In the case of quantum mechanics, people have been working for hundreds of years to discover just what this structure is telling us. There are very creative ways to think about how randomness arises within our experience of quantum mechanics. One conception, for example, is embodied in the Many Worlds model.

    Outside of randomness, what is your general research interest?
    My research has been how to prepare, manipulate and probe quantum states of light and matter. Working in atomic physics and optical physics is nice because you can work in ambient conditions, with a laboratory of relatively small scale. When you want to explore quantum phenomena in such conditions, you have a couple of choices: one is you can work on very fast timescales, because when you create a quantum state, it tends to dissipate into the environment very quickly (that’s why you don’t generally see these things at room temperature); the other way is to use light itself to explore the quantum structure of the world.

    One of the big projects that we’re currently engaged in together with many colleagues across this university and several partner universities is to combine photons and atoms in order to try and build large-scale quantum states. That’s become feasible with some serious engineering, and it’s very exciting, for two reasons. First of all, when quantum states get big enough there’s no other way you can study them, other than to build them, because it’s not possible to calculate using a normal computer what they are, what their structure is, or what their dynamics are; they are too complicated. What that means, as Richard Feynman pointed out some 30 or so years ago, is that the information these states contain is vastly different from anything we know how to process.

    He hinted that we could also use these states to build computers whose power vastly exceeds any conventional computer you could imagine building. So you open this door to new discovery, new science and new technologies that could be truly amazing: fully secure communications, really precise sensors, simulation of new materials and molecules, perhaps leading to new drugs. This dual road, where you can see a really fruitful area, a new frontier of science, and new applications is really exciting.

    Has the potential for that sped up in the last decade, as technological improvement has?
    Yes, I think particularly in the UK the government has identified the technological potential of quantum science and felt it was something the UK could take a lead on, based on the long history of innovation in this country in the underpinning science. They’ve invested a lot of money and that’s really enabled us to begin to tackle some of the serious engineering and technology questions that weren’t possible before. It’s a good time to be in the field.

    Where in Oxford are you building these structures?
    There’s a new building being built in the Physics Department, just on Keble Road, and part of the laboratory space will be for this new technology centre – that’s where this machine will be built.

    You’re also Pro-Vice-Chancellor for research; what does that involve?
    My role as Pro-Vice-Chancellor is really to have sight of the research activities, and help drive some of the research policies and the overarching research strategy for the institution. It’s also to do with the wider engagement agenda, especially around innovation: how do we ensure that, where it’s appropriate and possible, the fruits of our research are utilised for the benefit of society? That’s also a very exciting part of the work: seeing this ferment of ideas and being able to facilitate where some of them, at the right time, have possible applications is really fantastic.

    Having worked at various different universities, is there anything you think is particularly distinctive about Oxford?
    Well, I think it’s a place that respects the creative autonomy of individuals and works hard to make sure that people can pursue the ideas they want to pursue. And the structure, whereby you can get to talk to people of many different backgrounds and expertise, is, I think, something that is different from many places. I think the scale of excellence across the University institution is something that gives Oxford a distinctive flavour.

    When you stop researching, what would you like to consider the ultimate legacy of your work to be?
    On the science end, if we’re able to really show how you can build these quantum machines and use them for new machines and applications – it would be great to have contributed something substantive towards that. Moreover, to have enabled the University to continue to excel and to realise its potential as a critical part of a modern society.

    See the full article here.



    Oxford is a collegiate university, consisting of the central University and colleges. The central University is composed of academic departments and research centres, administrative departments, libraries and museums. The 38 colleges are self-governing and financially independent institutions, which are related to the central University in a federal system. There are also six permanent private halls, which were founded by different Christian denominations and which still retain their Christian character.

    The different roles of the colleges and the University have evolved over time.

     