Tagged: Quantum Physics

  • richardmitnick 1:49 pm on July 15, 2017 Permalink | Reply
    Tags: An advanced atomic cloud locked up in a small glass cage, Laser light to link caesium atoms and a vibrating membrane, Light 'kicks' object, QBA-Quantum Back Action, Quantum Physics, Smart atomic cloud solves Heisenberg's observation problem, U Copenhagen

    From U Copenhagen Niels Bohr Institute: “Smart atomic cloud solves Heisenberg’s observation problem” 

    University of Copenhagen

    13 July 2017
    Eugene Polzik
    +45 2338 2045

    Quantum physics: Scientists at the Niels Bohr Institute, University of Copenhagen have been instrumental in developing a ‘hands-on’ answer to a challenge intricately linked to a very fundamental principle in physics: Heisenberg’s Uncertainty Principle. The NBI researchers used laser light to link caesium atoms and a vibrating membrane. The research, the first of its kind, points to sensors capable of measuring movement with unprecedented precision.

    From the left: Phd student Rodrigo Thomas, Professor Eugene Polzik and PhD student Christoffer Møller in front of the experiment demonstrating quantum measurement of motion. Photo: Ola J. Joensen.

    Our lives are packed with sensors gathering all sorts of information – and some of these sensors are integrated in our cell phones, which, for example, enables us to measure the distance we cover when we go for a walk – and thereby also calculate how many calories we have burned thanks to the exercise. To most people this seems rather straightforward.

    When measuring atomic structures or light emissions at the quantum level by means of advanced microscopes or other forms of special equipment, things do, however, get a little more complicated, due to a problem which during the 1920s had the full attention of Niels Bohr as well as Werner Heisenberg. The problem is that inaccuracies inevitably taint certain measurements conducted at the quantum level – and this is described in Heisenberg’s Uncertainty Principle.

    In a scientific report published in this week’s issue of Nature, NBI researchers – based on a number of experiments – demonstrate that Heisenberg’s Uncertainty Principle can, to some degree, be neutralized. This has never been shown before, and the results may spark the development of new measuring equipment as well as new and better sensors.

    Professor Eugene Polzik, head of Quantum Optics (QUANTOP) at the Niels Bohr Institute, has been in charge of the research – which has included the construction of a vibrating membrane and an advanced atomic cloud locked up in a small glass cage.

    If laser light used to measure motion of a vibrating membrane (left) is first transmitted through an atom cloud (center) the measurement sensitivity can be better than standard quantum limits envisioned by Bohr and Heisenberg. Photo: Bastian Leonhardt Strube and Mads Vadsholt.

    Light ‘kicks’ object

    Heisenberg’s Uncertainty Principle basically says that you cannot simultaneously know the exact position and the exact speed of an object.

    This has to do with the fact that observations conducted via a microscope operating with laser light inevitably lead to the object being ‘kicked’. Light is a stream of photons, which, when reflected off the object, give it random ‘kicks’ – and as a result of those kicks the object begins to move in a random way.

    This phenomenon is known as Quantum Back Action (QBA) – and these random movements put a limit to the accuracy with which measurements can be carried out at quantum level.
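    As a rough numerical illustration of the uncertainty relation discussed above (a self-contained sketch in natural units, not part of the NBI experiment), the snippet below builds a Gaussian wavepacket on a grid and checks that the product of its position and momentum spreads comes out at the Heisenberg minimum of ħ/2:

```python
import math

# Natural units: hbar = 1. A Gaussian wavepacket is the minimum-uncertainty
# state, so sigma_x * sigma_p should come out at hbar/2 = 0.5.
hbar = 1.0
sigma = 1.3                      # chosen packet width (illustrative)
N, L = 4000, 40.0                # grid points and box size
dx = L / N
xs = [-L / 2 + i * dx for i in range(N)]

def psi(x):
    """Normalized real Gaussian wavefunction."""
    return (2 * math.pi * sigma**2) ** -0.25 * math.exp(-x**2 / (4 * sigma**2))

# <x^2> from the probability density |psi|^2 (mean position is 0 by symmetry)
x2 = sum(x * x * psi(x) ** 2 for x in xs) * dx

# <p^2> = hbar^2 * integral of |dpsi/dx|^2 dx for a real wavefunction,
# with the derivative taken by central finite differences
dpsi = [(psi(x + dx) - psi(x - dx)) / (2 * dx) for x in xs]
p2 = hbar**2 * sum(d * d for d in dpsi) * dx

sigma_x, sigma_p = math.sqrt(x2), math.sqrt(p2)
print(sigma_x * sigma_p)  # ~0.5, i.e. hbar/2: the bound is saturated
```

    Any non-Gaussian wavepacket plugged into the same grid gives a product strictly above 0.5, which is the quantitative content of the principle.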

    To conduct the experiments at NBI, professor Polzik and his team of “young, enthusiastic and very skilled NBI researchers” used a ‘tailor-made’ membrane as the object observed at quantum level. The membrane was built by Ph.D. students Christoffer Møller and Yegishe Tsaturyan, whereas Rodrigo Thomas and Georgios Vasikalis – Ph.D. student and researcher, respectively – were in charge of the atomic aspects. Furthermore, Polzik relied on other NBI employees: assistant professor Mikhail Balabas, who built the minute glass cage for the atoms, and researcher Emil Zeuthen and professor Albert Schliesser, who – collaborating with German colleagues – were in charge of the substantial number of mathematical calculations needed before the project was ready for publication in Nature.

    The atomic part of the hybrid experiment. The atoms are contained in a micro-cell inside the magnetic shield seen in the middle. Photo: Ola J. Joensen.

    Over the last few decades, scientists have tried to find ways of ‘fooling’ Heisenberg’s Uncertainty Principle. Eugene Polzik and his colleagues came up with the idea of implementing the advanced atomic cloud a few years ago – and the cloud consists of 100 million caesium atoms locked up in a hermetically closed cage, a glass cell, explains the professor:

    “The cell is just 1 centimeter long, 1/3 of a millimeter high and 1/3 of a millimeter wide, and in order to make the atoms work as intended, the inner cell walls have been coated with paraffin. The membrane – whose movements we were following at the quantum level – measures 0.5 millimeter, which actually is a considerable size from a quantum perspective”.

    The idea behind the glass cell is to deliberately send the laser light used to study the membrane movements at the quantum level through the encapsulated atomic cloud BEFORE the light reaches the membrane, explains Eugene Polzik: “This results in the laser light photons ‘kicking’ the object – i.e. the membrane – as well as the atomic cloud, and these ‘kicks’, so to speak, cancel out. This means that there is no longer any Quantum Back Action – and therefore no limitations as to how accurately measurements can be carried out at the quantum level”.

    The optomechanical part of the hybrid experiment. The cryostat seen in the middle houses the vibrating membrane whose quantum motion is measured. Photo: Ola J. Joensen.

    How can this be utilized?

    “For instance, when developing new and much more advanced types of motion sensors than the types we know today from cell phones, GPS and geological surveys”, says professor Eugene Polzik: “Generally speaking, sensors operating at the quantum level are receiving a lot of attention these days. One example is the Quantum Technologies Flagship, an extensive EU program which also supports this type of research”.

    The fact that it is indeed possible to ‘fool’ Heisenberg’s Uncertainty Principle may also prove significant in relation to better understanding gravitational waves – ripples in the fabric of space-time itself.

    In September 2015 the American LIGO experiment made the first direct detection of gravitational waves, stemming from a collision between two very large black holes.

    However, the equipment used by LIGO is influenced by Quantum Back Action, and the new research from NBI may prove capable of eliminating that problem, says Eugene Polzik.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Niels Bohr Institute Campus

    The Niels Bohr Institute (Danish: Niels Bohr Institutet) is a research institute of the University of Copenhagen. The research of the institute spans astronomy, geophysics, nanotechnology, particle physics, quantum mechanics and biophysics.

    The Institute was founded in 1921, as the Institute for Theoretical Physics of the University of Copenhagen, by the Danish theoretical physicist Niels Bohr, who had been on the staff of the University of Copenhagen since 1914, and who had been lobbying for its creation since his appointment as professor in 1916. On the 80th anniversary of Niels Bohr’s birth – October 7, 1965 – the Institute officially became The Niels Bohr Institute.[1] Much of its original funding came from the charitable foundation of the Carlsberg brewery, and later from the Rockefeller Foundation.[2]

    During the 1920s, and 1930s, the Institute was the center of the developing disciplines of atomic physics and quantum physics. Physicists from across Europe (and sometimes further abroad) often visited the Institute to confer with Bohr on new theories and discoveries. The Copenhagen interpretation of quantum mechanics is named after work done at the Institute during this time.

    On January 1, 1993 the institute merged with the Astronomical Observatory, the Ørsted Laboratory and the Geophysical Institute. The resulting institute retained the name Niels Bohr Institute.

    The University of Copenhagen (UCPH) (Danish: Københavns Universitet) is the oldest university and research institution in Denmark. Founded in 1479 as a studium generale, it is the second oldest institution for higher education in Scandinavia after Uppsala University (1477). The university has 23,473 undergraduate students, 17,398 postgraduate students, 2,968 doctoral students and over 9,000 employees. The university has four campuses located in and around Copenhagen, with the headquarters located in central Copenhagen. Most courses are taught in Danish; however, many courses are also offered in English and a few in German. The university has several thousand foreign students, about half of whom come from Nordic countries.

    The university is a member of the International Alliance of Research Universities (IARU), along with University of Cambridge, Yale University, The Australian National University, and UC Berkeley, amongst others. The 2016 Academic Ranking of World Universities ranks the University of Copenhagen as the best university in Scandinavia and 30th in the world, the 2016-2017 Times Higher Education World University Rankings as 120th in the world, and the 2016-2017 QS World University Rankings as 68th in the world. The university has had 9 alumni become Nobel laureates and has produced one Turing Award recipient.

  • richardmitnick 12:16 pm on February 11, 2017 Permalink | Reply
    Tags: Beamsplitter, Quantum Physics, Wave function, What shape are photons? Quantum holography sheds light

    From COSMOS: “What shape are photons? Quantum holography sheds light” 

    20 July 2016 [Just found this in social media]
    Cathal O’Connell

    Hologram of a single photon reconstructed from raw measurements (left) and theoretically predicted (right).

    Imagine a shaft of yellow sunlight beaming through a window. Quantum physics tells us that beam is made of zillions of tiny packets of light, called photons, streaming through the air. But what does an individual photon “look” like? Does it have a shape? Are these questions even meaningful?

    Now, Polish physicists have created the first ever hologram of a single light particle. The feat, achieved by observing the interference of two intersecting light beams, is an important insight into the fundamental quantum nature of light.

    The result could also be important for technologies that require an understanding of the shape of single photons – such as quantum communication and quantum computers.

    “We performed a relatively simple experiment to measure and view something incredibly difficult to observe: the shape of wavefronts of a single photon,” says Radoslaw Chrapkiewicz, a physicist at the University of Warsaw and lead author of the new paper, published in Nature Photonics.

    For hundreds of years, physicists have been working to figure out what light is made of. In the 19th century, the debate seemed to be settled by Scottish physicist James Clerk Maxwell’s description of light as a wave of electromagnetism.

    But things got a bit more complicated at the turn of the 20th century when German physicist Max Planck, then fellow countryman Albert Einstein, showed light was made up of tiny indivisible packets called photons.

    In the 1920s, Austrian physicist Erwin Schrödinger elaborated on these ideas with his equation for the quantum wave function to describe what a wave looks like, which has proved incredibly powerful in predicting the results of experiments with photons. But, despite the success of Schrödinger’s theory, physicists still debate what the wave function really means.

    Now physicists at the University of Warsaw measured, for the first time, the shape described by Schrödinger’s equation in a real experiment.

    Photons, travelling as waves, can be in step (called having the same phase). If they interact, they produce a bright signal. If they’re out of phase, they cancel each other out. It’s like sound waves from two speakers producing loud and quiet patches in a room.
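    That in-step/out-of-step picture can be sketched numerically (a toy model of two overlapping waves, not the actual photon experiment): summing two equal-amplitude waves and averaging the squared field gives four times a lone wave's intensity when they are in phase, and zero when they are fully out of phase.

```python
import math

def mean_intensity(phase_difference, amplitude=1.0, samples=10000):
    """Average intensity of two superposed equal-amplitude waves
    with a fixed relative phase, sampled over one full cycle."""
    total = 0.0
    for i in range(samples):
        t = 2 * math.pi * i / samples
        field = amplitude * math.sin(t) + amplitude * math.sin(t + phase_difference)
        total += field * field
    return total / samples

print(mean_intensity(0.0))       # in phase: 2.0, four times a lone wave's 0.5
print(mean_intensity(math.pi))   # out of phase: ~0, complete cancellation
```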

    The image – which is called a hologram because it holds information on both the photon’s shape and phase – was created by firing two light beams at a beamsplitter, made of calcite crystal, at the same time.

    The beamsplitter acts a bit like a traffic intersection, where each photon can either pass straight on through or make a turn. The Polish team’s experiment hinged on measuring which path each photon took, which depends on the shape of their wave functions.

    Scheme of the experimental setup for measuring holograms of single photons. FUW / dualcolor.pl / jch

    For a photon on its own, each path is equally probable. But when two photons approach the intersection, they interact – and these odds change.

    The team realised that if they knew the wave function of one of the photons, they could figure out the shape of the second from the positions of flashes appearing on a detector.

    It’s a little like firing two bullets to glance off one another mid-air and using the deflected trajectories to figure out the shape of each projectile.

    Each run of the experiment produced two flashes on a detector, one for each photon. After more than 2,000 repetitions, a pattern of flashes built up and the team were able to reconstruct the shape of the unknown photon’s wave function.

    The resulting image looks a bit like a Maltese cross, just like the wave function predicted from Schrödinger’s equation. In the arms of the cross, where the photons were in step, the image is bright – and where they weren’t, we see darkness.

    The experiment brings us “a step closer to understanding what the wave function really is,” says Michal Jachura, who co-authored the work, and it could be a new tool for studying the interaction between two photons, on which technologies such as quantum communication and some versions of quantum computing rely.

    The researchers also hope to recreate wave functions of more complex quantum objects, such as atoms.

    “It’s likely that real applications of quantum holography won’t appear for a few decades yet,” says Konrad Banaszek, who was also part of the team, “but if there’s one thing we can be sure of it’s that they will be surprising.”

    See the full article here.


  • richardmitnick 7:04 pm on February 7, 2017 Permalink | Reply
    Tags: Quantum Physics, Reality at the atomic scale

    From The New Yorker: “Quantum Theory by Starlight” Gee, Actual Physical Science from The New Yorker 

    [Shock of shocks, The New Yorker remembers the physical sciences. Anyone remember Jeremy Bernstein?]

    David Kaiser

    In parsing the strange dance of subatomic particles, it can be helpful to think of them as twins. IMAGE BY CHRONICLE / ALAMY

    The headquarters of the National Bank of Austria, in central Vienna, are exceptionally secure. During the week, in the basement of the building, employees perform quality-control tests on huge stacks of euros. One night last spring, however, part of the bank was given over to a different sort of testing. A group of young physicists, with temporary I.D. badges and sensitive electronics in tow, were allowed up to the top floor, where they assembled a pair of telescopes. One they aimed skyward, at a distant star in the Milky Way. The other they pointed toward the city, searching for a laser beam shot from a rooftop several blocks away. For all the astronomical equipment, though, their real quarry was a good deal smaller. They were there to conduct a new test of quantum theory, the branch of physics that seeks to explain reality at the atomic scale.

    It is difficult to overstate the weirdness of quantum physics. Even Albert Einstein and Erwin Schrödinger, both major architects of the theory, ultimately found it too outlandish to be wholly true. Throughout the summer of 1935, they aired their frustrations in a series of letters. For one thing, unlike Newtonian physics and Einstein’s relativity, which elegantly explained the behavior of everything from the fall of apples to the motion of galaxies, quantum theory offered only probabilities for various outcomes, not rock-solid predictions. It was an “epistemology-soaked orgy,” Einstein wrote, treating objects in the real world as mere puffs of possibility—both there and not there, or, in the case of Schrödinger’s famous imaginary cat, both alive and dead. Strangest of all was what Schrödinger dubbed “entanglement.” In certain situations, the equations of quantum theory implied that one subatomic particle’s behavior was bound up with another’s, even if the second particle was across the room, or on the other side of the planet, or in the Andromeda galaxy. They couldn’t be communicating, exactly, since the effect seemed to be instantaneous, and Einstein had already demonstrated that nothing could travel faster than light. In a letter to a friend, he dismissed entanglement as “spooky actions at a distance”—more ghost story than respectable science. But how to account for the equations?

    Physicists often invoke twins when trying to articulate the more fantastical elements of their theories. Einstein’s relativity, for instance, introduced the so-called twin paradox, which illustrates how a rapid journey through space and time can make one woman age more slowly than her twin. (Schrödinger’s interest in twins was rather less academic. His exploits with the Junger sisters, who were half his age, compelled his biographer to save a spot in the index for “Lolita complex.”) I am a physicist, and my wife and I actually have twins, so I find it particularly helpful to think about them when trying to parse the strange dance of entanglement.

    Let us call our quantum twins Ellie and Toby. Imagine that, at the same instant, Ellie walks into a restaurant in Cambridge, Massachusetts, and Toby walks into a restaurant in Cambridge, England. They ponder the menus, make their selections, and enjoy their meals. Afterward, their waiters come by to offer dessert. Ellie is given the choice between a brownie and a cookie. She has no real preference, being a fan of both, so she chooses one seemingly at random. Toby, who shares his sister’s catholic attitude toward sweets, does the same. Both siblings like their restaurants so much that they return the following week. This time, when their meals are over, the waiters offer ice cream or frozen yogurt. Again the twins are delighted—so many great options!—and again they choose at random.

    In the ensuing months, Ellie and Toby return to the restaurants often, alternating aimlessly between cookies or brownies and ice cream or frozen yogurt. But when they get together for Thanksgiving, looking rather plumper than last year, they compare notes and find a striking pattern in their selections. It turns out that when both the American and British waiters offered baked goods, the twins usually ordered the same thing—a brownie or a cookie for each. When the offers were different, Toby tended to order ice cream when Ellie ordered brownies, and vice versa. For some reason, though, when they were both offered frozen desserts, they tended to make opposite selections—ice cream for one, frozen yogurt for the other. Toby’s chances of ordering ice cream seemed to depend on what Ellie ordered, an ocean away. Spooky, indeed.

    Einstein believed that particles have definite properties of their own, independent of what we choose to measure, and that local actions produce only local effects—that what Toby orders has no bearing on what Ellie orders. In 1964, the Irish physicist John Bell identified the statistical threshold between Einstein’s world and the quantum world. If Einstein was right, then the outcomes of measurements on pairs of particles should line up only so often; there should be a strict limit on how frequently Toby’s and Ellie’s dessert orders are correlated. But if he was wrong, then the correlations should occur significantly more often. For the past four decades, scientists have tested the boundaries of Bell’s theorem. In place of Ellie and Toby, they have used specially prepared pairs of particles, such as photons of light. In place of friendly waiters recording dessert orders, they have used instruments that can measure some physical property, such as polarization—whether a photon’s electric field oscillates along or at right angles to some direction in space. To date, every single published test has been consistent with quantum theory.
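    Bell's threshold is usually written in its CHSH form (a textbook sketch, not the measurement scheme of the Vienna experiment): local hidden-variable models force a certain combination S of four correlations to satisfy |S| ≤ 2, while quantum mechanics, whose prediction for a singlet pair measured at angles a and b is E(a, b) = −cos(a − b), pushes |S| up to 2√2.

```python
import math

def E(a, b):
    """Quantum-mechanical correlation for an entangled singlet pair
    measured along directions at angles a and b (radians)."""
    return -math.cos(a - b)

# Standard CHSH angle choices that maximize the quantum violation
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# Local hidden variables require |S| <= 2 (the Bell/CHSH bound)
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.83: quantum correlations exceed the bound
```

    Experiments like the ones described here measure S from photon counts rather than computing it from theory; observing |S| > 2 is what rules out Einstein-style local models.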

    From the start, however, physicists have recognized that their experiments are subject to various loopholes, circumstances that could, in principle, account for the observed results even if quantum theory were wrong and entanglement merely a chimera. One loophole, known as locality, concerns information flow: could a particle on one side of the experiment, or the instrument measuring it, have sent some kind of message to the other side before the second measurement was completed? Another loophole concerns statistics: what if the particles that were measured somehow represented a biased sample, a few spooky dessert orders amid thousands of unseen boring ones? Physicists have found clever ways of closing one or the other of these loopholes over the years, and in 2015, in a beautiful experiment out of the Netherlands, one group managed to close both at once. But there is a third major loophole, one that Bell overlooked in his original analysis. Known as the freedom-of-choice loophole, it concerns whether some event in the past could have nudged both the choice of measurements to be performed and the behavior of the entangled particles—in our analogy, the desserts being offered and the selections that Ellie and Toby made. Where the locality loophole imagines Ellie and Toby, or their waiters, communicating with each other, the freedom-of-choice loophole supposes that some third party could have rigged things without any of them noticing. It was this loophole that my colleagues and I recently set out to address.

    We performed our experiment last April, spread out in three locations across Schrödinger’s native Vienna. A laser in Anton Zeilinger’s laboratory at the Institute for Quantum Optics and Quantum Information supplied our entangled photons. About three-quarters of a mile to the north, Thomas Scheidl and his colleagues set up two telescopes in a different university building. One was aimed at the institute, ready to receive the entangled photons, and one was pointed in the opposite direction, fixed on a star in the night sky. Several blocks south of the institute, at the National Bank of Austria, a second team, led by Johannes Handsteiner, had a comparable setup. Their second telescope, the one that wasn’t looking at the institute, was turned to the south.

    Our group’s goal was to measure pairs of entangled particles while insuring that the type of measurement we performed on one had nothing to do with how we assessed the other. In short, we wanted to turn the universe into a pair of random-number generators. Handsteiner’s target star was six hundred light-years from Earth, which meant that the light received by his telescope had been travelling for six hundred years. We selected the star carefully, such that the light it emitted at a particular moment all those centuries ago would reach Handsteiner’s telescope first, before it could cover the extra distance to either Zeilinger’s lab or the university. Scheidl’s target star, meanwhile, was nearly two thousand light-years away. Both teams’ telescopes were equipped with special filters, which could distinguish extremely rapidly between photons that were more red or more blue than a particular reference wavelength. If Handsteiner’s starlight in a given instant happened to be more red, then the instruments at his station would perform one type of measurement on the entangled photon, which was just then zipping through the night sky, en route from Zeilinger’s laboratory. If Handsteiner’s starlight happened instead to be blue, then the other type of measurement would be performed. The same went for Scheidl’s station. The detector settings on each side changed every few millionths of a second, based on new observations of the stars.

    With this arrangement, it was as if each time Ellie walked into the restaurant, her waiter offered her a dessert based on an event that had occurred several centuries earlier, trillions of miles from the Earth—which neither Ellie, nor Toby, nor Toby’s waiter could have foreseen. Meanwhile, by placing Handsteiner’s and Scheidl’s stations relatively far apart, we were able to close the locality loophole even as we addressed the freedom-of-choice loophole. (Since we only detected a small fraction of all the entangled particles that were emitted from Zeilinger’s lab, though, we had to assume that the photons we did measure represented a fair sample of the whole collection.) We conducted two experiments that night, aiming the stellar telescopes at one pair of stars for three minutes, then another pair for three more. In each case, we detected about a hundred thousand pairs of entangled photons. The results from each experiment showed beautiful agreement with the predictions from quantum theory, with correlations far exceeding what Bell’s inequality would allow. Our results were published on Tuesday in the journal Physical Review Letters.

    How might a devotee of Einstein’s ideas respond? Perhaps our assumption of fair sampling was wrong, or perhaps some strange, unknown mechanism really did exploit the freedom-of-choice loophole, in effect alerting one receiving station of what was about to occur at the other. We can’t rule out such a bizarre scenario, but we can strongly constrain it. In fact, our experiment represents an improvement by sixteen orders of magnitude—a factor of ten million billion—over previous efforts to address the freedom-of-choice loophole. In order to account for the results of our new experiment, the unknown mechanism would need to have been set in place before the emission of the starlight that Handsteiner’s group observed, back when Joan of Arc’s friends still called her Joanie.

    Experiments like ours—and follow-up versions we plan to conduct, using larger telescopes to spy even fainter, more distant astronomical objects—harness some of the largest scales in nature to test its tiniest, and most fundamental, phenomena. Beyond that, our explorations could help shore up the security of next-generation devices, such as quantum-encryption schemes, which depend on entanglement to protect against hackers and eavesdroppers. But, for me, the biggest motivation remains exploring the strange mysteries of quantum theory. The world described by quantum mechanics is fundamentally, stubbornly different from the worlds of Newtonian physics or Einsteinian relativity. If Ellie’s and Toby’s dessert orders are going to keep lining up so spookily, I want to know why.

    See the full article here.


  • richardmitnick 10:32 am on September 8, 2015 Permalink | Reply
    Tags: Quantum Physics

    From Nature: “Quantum physics: What is really real?” 

    20 May 2015
    Zeeya Merali

    An experiment showing that oil droplets can be propelled across a fluid bath by the waves they generate has prompted physicists to reconsider the idea that something similar allows particles to behave like waves.

    Owen Maroney worries that physicists have spent the better part of a century engaging in fraud.

    Ever since they invented quantum theory in the early 1900s, explains Maroney, who is himself a physicist at the University of Oxford, UK, they have been talking about how strange it is — how it allows particles and atoms to move in many directions at once, for example, or to spin clockwise and anticlockwise simultaneously. But talk is not proof, says Maroney. “If we tell the public that quantum theory is weird, we better go out and test that’s actually true,” he says. “Otherwise we’re not doing science, we’re just explaining some funny squiggles on a blackboard.”

    It is this sentiment that has led Maroney and others to develop a new series of experiments to uncover the nature of the wavefunction — the mysterious entity that lies at the heart of quantum weirdness. On paper, the wavefunction is simply a mathematical object that physicists denote with the Greek letter psi (Ψ) — one of Maroney’s funny squiggles — and use to describe a particle’s quantum behaviour. Depending on the experiment, the wavefunction allows them to calculate the probability of observing an electron at any particular location, or the chances that its spin is oriented up or down. But the mathematics shed no light on what a wavefunction truly is. Is it a physical thing? Or just a calculating tool for handling an observer’s ignorance about the world?

    The tests being used to work that out are extremely subtle, and have yet to produce a definitive answer. But researchers are optimistic that a resolution is close. If so, they will finally be able to answer questions that have lingered for decades. Can a particle really be in many places at the same time? Is the Universe continually dividing itself into parallel worlds, each with an alternative version of ourselves? Is there such a thing as an objective reality at all?

    “These are the kinds of questions that everybody has asked at some point,” says Alessandro Fedrizzi, a physicist at the University of Queensland in Brisbane, Australia. “What is it that is really real?”

    Debates over the nature of reality go back to physicists’ realization in the early days of quantum theory that particles and waves are two sides of the same coin. A classic example is the double-slit experiment, in which individual electrons are fired at a barrier with two openings: the electron seems to pass through both slits in exactly the same way that a light wave does, creating a banded interference pattern on the other side (see ‘Wave–particle weirdness’). In 1926, the Austrian physicist Erwin Schrödinger invented the wavefunction to describe such behaviour, and devised an equation that allowed physicists to calculate it in any given situation [1]. But neither he nor anyone else could say anything about the wavefunction’s nature.

    Ignorance is bliss

    From a practical perspective, its nature does not matter. The textbook Copenhagen interpretation of quantum theory, developed in the 1920s mainly by physicists Niels Bohr and Werner Heisenberg, treats the wavefunction as nothing more than a tool for predicting the results of observations, and cautions physicists not to concern themselves with what reality looks like underneath. “You can’t blame most physicists for following this ‘shut up and calculate’ ethos because it has led to tremendous developments in nuclear physics, atomic physics, solid-state physics and particle physics,” says Jean Bricmont, a statistical physicist at the Catholic University of Louvain in Belgium. “So people say, let’s not worry about the big questions.”

    But some physicists worried anyway. By the 1930s, Albert Einstein had rejected the Copenhagen interpretation — not least because it allowed two particles to entangle their wavefunctions, producing a situation in which measurements on one could instantaneously determine the state of the other even if the particles were separated by vast distances. Rather than accept such “spooky action at a distance”, Einstein preferred to believe that the particles’ wavefunctions were incomplete. Perhaps, he suggested, the particles have some kind of ‘hidden variables’ that determine the outcome of the measurement, but that quantum theories do not capture.

    Experiments since then have shown that this spooky action at a distance is quite real, which rules out the particular version of hidden variables that Einstein advocated. But that has not stopped other physicists from coming up with interpretations of their own. These interpretations fall into two broad camps. There are those that agree with Einstein that the wavefunction represents our ignorance — what philosophers call psi-epistemic models. And there are those that view the wavefunction as a real entity — psi-ontic models.

    To appreciate the difference, consider a thought experiment that Schrödinger described in a 1935 letter to Einstein. Imagine that a cat is enclosed in a steel box. And imagine that the box also contains a sample of radioactive material that has a 50% probability of emitting a decay product in one hour, along with an apparatus that will poison the cat if it detects such a decay. Because radioactive decay is a quantum event, wrote Schrödinger, the rules of quantum theory state that, at the end of the hour, the wavefunction for the box’s interior must be an equal mixture of live cat and dead cat.
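
    In Dirac notation, the end-of-the-hour state Schrödinger describes is an equal superposition of the two decay outcomes, each correlated with the cat’s fate (a schematic sketch of the standard textbook form, not notation from the 1935 letter):

```latex
\lvert \Psi_{\text{box}} \rangle
= \tfrac{1}{\sqrt{2}}\Bigl(
    \lvert \text{no decay} \rangle \otimes \lvert \text{alive} \rangle
  + \lvert \text{decay} \rangle \otimes \lvert \text{dead} \rangle
\Bigr)
```

    Each branch carries probability $\lvert 1/\sqrt{2} \rvert^{2} = 1/2$, matching the 50% decay odds in the thought experiment.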


    “Crudely speaking,” says Fedrizzi, “in a psi-epistemic model the cat in the box is either alive or it’s dead and we just don’t know because the box is closed.” But most psi-ontic models agree with the Copenhagen interpretation: until an observer opens the box and looks, the cat is both alive and dead.

    But this is where the debate gets stuck. Which of quantum theory’s many interpretations — if any — is correct? That is a tough question to answer experimentally, because the differences between the models are subtle: to be viable, they have to predict essentially the same quantum phenomena as the very successful Copenhagen interpretation. Andrew White, a physicist at the University of Queensland, says that for most of his 20-year career in quantum technologies “the problem was like a giant smooth mountain with no footholds, no way to attack it”.

    That changed in 2011, with the publication of a theorem about quantum measurements that seemed to rule out the wavefunction-as-ignorance models. On closer inspection, however, the theorem turned out to leave enough wiggle room for them to survive. Nonetheless, it inspired physicists to think seriously about ways to settle the debate by actually testing the reality of the wavefunction. Maroney had already devised an experiment that should work in principle, and he and others soon found ways to make it work in practice. The experiment was carried out last year by Fedrizzi, White and others [7].

    To illustrate the idea behind the test, imagine two stacks of playing cards. One contains only red cards; the other contains only aces. “You’re given a card and asked to identify which deck it came from,” says Martin Ringbauer, a physicist also at the University of Queensland. If it is a red ace, he says, “there’s an overlap and you won’t be able to say where it came from”. But if you know how many of each type of card is in each deck, you can at least calculate how often such ambiguous situations will arise.
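
    A toy version of Ringbauer’s analogy (the deck compositions are his example; the code is mine): a drawn card is ambiguous exactly when it lies in the overlap of the two decks, and the frequency of ambiguity follows from the deck sizes alone.

```python
# Deck A: the 26 red cards; Deck B: the 4 aces.
# A card is ambiguous only if it belongs to both decks (the red aces).
red = {f"{r}{s}" for r in
       ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
       for s in ["H", "D"]}                       # hearts, diamonds
aces = {f"A{s}" for s in ["H", "D", "S", "C"]}

overlap = red & aces
print(sorted(overlap))                            # the two red aces

# If the deck is chosen by a fair coin flip and a card drawn at random,
# the chance of an ambiguous card follows from the compositions alone:
p_ambiguous = 0.5 * len(overlap) / len(red) + 0.5 * len(overlap) / len(aces)
print(round(p_ambiguous, 3))
```

    The quantum test works the same way in spirit: knowing the “deck compositions” (the ignorance model’s assumptions) fixes how much measurement ambiguity that model can explain.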

    Out on a limb

    A similar ambiguity occurs in quantum systems. It is not always possible for a single measurement in the lab to distinguish how a photon is polarized, for example. “In real life, it’s pretty easy to tell west from slightly south of west, but in quantum systems, it’s not that simple,” says White. According to the standard Copenhagen interpretation, there is no point in asking what the polarization is because the question does not have an answer — or at least, not until another measurement can determine that answer precisely. But according to the wavefunction-as-ignorance models, the question is perfectly meaningful; it is just that the experimenters — like the card-game player — do not have enough information from that one measurement to answer. As with the cards, it is possible to estimate how much ambiguity can be explained by such ignorance, and compare it with the larger amount of ambiguity allowed by standard theory.

    That is essentially what Fedrizzi’s team tested. The group measured polarization and other features in a beam of photons and found a level of overlap that could not be explained by the ignorance models. The results support the alternative view that, if objective reality exists, then the wavefunction is real. “It’s really impressive that the team was able to address a profound issue, with what’s actually a very simple experiment,” says Andrea Alberti, a physicist at the University of Bonn in Germany.

    The conclusion is still not ironclad, however: because the detectors picked up only about one-fifth of the photons used in the test, the team had to assume that the lost photons were behaving in the same way. That is a big assumption, and the group is currently working on closing the sampling gap to produce a definitive result. In the meantime, Maroney’s team at Oxford is collaborating with a group at the University of New South Wales in Australia, to perform similar tests with ions, which are easier to track than photons. “Within the next six months we could have a watertight version of this experiment,” says Maroney.

    But even if their efforts succeed and the wavefunction-as-reality models are favoured, those models come in a variety of flavours — and experimenters will still have to pick them apart.

    One of the earliest such interpretations was set out in the 1920s by French physicist Louis de Broglie [8], and expanded in the 1950s by US physicist David Bohm. According to de Broglie–Bohm models, particles have definite locations and properties, but are guided by some kind of ‘pilot wave’ that is often identified with the wavefunction. This would explain the double-slit experiment because the pilot wave would be able to travel through both slits and produce an interference pattern on the far side, even though the electron it guided would have to pass through one slit or the other.

    In 2005, de Broglie–Bohmian mechanics received an experimental boost from an unexpected source. Physicists Emmanuel Fort, now at the Langevin Institute in Paris, and Yves Couder at the University of Paris Diderot gave the students in an undergraduate laboratory class what they thought would be a fairly straightforward task: build an experiment to see how oil droplets falling into a tray filled with oil would coalesce as the tray was vibrated. Much to everyone’s surprise, ripples began to form around the droplets when the tray hit a certain vibration frequency. “The drops were self-propelled — surfing or walking on their own waves,” says Fort. “This was a dual object we were seeing — a particle driven by a wave.”

    Since then, Fort and Couder have shown that such waves can guide these ‘walkers’ through the double-slit experiment as predicted by pilot-wave theory, and can mimic other quantum effects, too [11]. This does not prove that pilot waves exist in the quantum realm, cautions Fort. But it does show how an atomic-scale pilot wave might work. “We were told that such effects cannot happen classically,” he says, “and here we are, showing that they do.”

    Another set of reality-based models, devised in the 1980s, tries to explain the strikingly different properties of small and large objects. “Why electrons and atoms can be in two different places at the same time, but tables, chairs, people and cats can’t,” says Angelo Bassi, a physicist at the University of Trieste, Italy. Known as ‘collapse models’, these theories postulate that the wavefunctions of individual particles are real, but can spontaneously lose their quantum properties and snap the particle into, say, a single location. The models are set up so that the odds of this happening are infinitesimal for a single particle, so that quantum effects dominate at the atomic scale. But the probability of collapse grows astronomically as particles clump together, so that macroscopic objects lose their quantum features and behave classically.
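
    To see the scales involved, here is back-of-envelope arithmetic with GRW-style numbers. The per-particle rate of $10^{-16}\,\mathrm{s}^{-1}$ is the classic choice from the original Ghirardi–Rimini–Weber collapse model, used purely for illustration; the article itself quotes no figures.

```python
# Illustrative GRW-style collapse rates (assumed values, not from the article).
rate_single = 1e-16          # spontaneous collapses per second, one particle
n_macroscopic = 1e23         # particles in a macroscopic object

mean_wait_single = 1 / rate_single                     # seconds, one particle
mean_wait_macro = 1 / (rate_single * n_macroscopic)    # seconds, whole object

years = mean_wait_single / 3.15e7                      # ~seconds per year
print(f"single particle: ~{years:.0e} years between collapses")
print(f"macroscopic object: ~{mean_wait_macro:.0e} s between collapses")
```

    With these numbers a lone particle collapses roughly once in hundreds of millions of years, while a macroscopic clump collapses within a fraction of a second — exactly the single-particle/macroscopic split the models are built to produce.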

    One way to test this idea is to look for quantum behaviour in larger and larger objects. If standard quantum theory is correct, there is no limit. And physicists have already carried out double-slit interference experiments with large molecules. But if collapse models are correct, then quantum effects will not be apparent above a certain mass. Various groups are planning to search for such a cut-off using cold atoms, molecules, metal clusters and nanoparticles. They hope to see results within a decade. “What’s great about all these kinds of experiments is that we’ll be subjecting quantum theory to high-precision tests, where it’s never been tested before,” says Maroney.

    Parallel worlds

    One wavefunction-as-reality model is already famous and beloved by science-fiction writers: the many-worlds interpretation developed in the 1950s by Hugh Everett, who was then a graduate student at Princeton University in New Jersey. In the many-worlds picture, the wavefunction governs the evolution of reality so profoundly that whenever a quantum measurement is made, the Universe splits into parallel copies. Open the cat’s box, in other words, and two parallel worlds will branch out — one with a living cat and another containing a corpse.

    Distinguishing Everett’s many-worlds interpretation from standard quantum theory is tough because both make exactly the same predictions. But last year, Howard Wiseman at Griffith University in Brisbane and his colleagues proposed a testable multiverse model [13]. Their framework does not contain a wavefunction: particles obey classical rules such as Newton’s laws of motion. The weird effects seen in quantum experiments arise because there is a repulsive force between particles and their clones in parallel universes. “The repulsive force between them sets up ripples that propagate through all of these parallel worlds,” Wiseman says.

    Using computer simulations with as many as 41 interacting worlds, they have shown that this model roughly reproduces a number of quantum effects, including the trajectories of particles in the double-slit experiment [13]. The interference pattern becomes closer to that predicted by standard quantum theory as the number of worlds increases. Because the theory predicts different results depending on the number of universes, says Wiseman, it should be possible to devise ways to check whether his multiverse model is right — meaning that there is no wavefunction, and reality is entirely classical.

    Because Wiseman’s model does not need a wavefunction, it will remain viable even if future experiments rule out the ignorance models. Also surviving would be models, such as the Copenhagen interpretation, that maintain there is no objective reality — just measurements.

    But then, says White, that is the ultimate challenge. Although no one knows how to do it yet, he says, “what would be really exciting is to devise a test for whether there is in fact any objective reality out there at all.”

    See the original article for References.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

  • richardmitnick 7:39 am on August 25, 2015 Permalink | Reply
    Tags: Quantum Physics

    From Oxford: “Randomness and order” 


    University of Oxford

    This series grew out of The Oxford Research Centre in the Humanities (TORCH) conference ‘Randomness and Order’, at which academics in the fields of quantum physics, music, probability and medieval history discussed what randomness meant in their different disciplines.

    Professor Ian Walmsley, FRS

    Ian Walmsley is Hooke Professor of Experimental Physics, a Professorial Fellow of St Hugh’s College and was appointed Pro-Vice-Chancellor for Research in February 2009.

    I’m talking to Professor Ian Walmsley, Pro-Vice-Chancellor for Research and Hooke Professor of Experimental Physics.

    What does randomness and order mean in quantum physics?
    It’s more than the usual sort of randomness we might encounter in everyday life, where you’re not sure what the stock market’s going to do or what the weather’s going to be like. Although those things are definite – in the sense that stocks have real prices for buyers and sellers – the whole edifice is so complicated that it’s impossible to know what those tiny details are at any one time.

    But, in quantum mechanics, the notion of randomness is embedded very much in the theory itself: that it is intrinsically unknowable – not just that you don’t know it, but that it is not knowable itself.

    There’s been a long debate as to whether this is simply that we have an inadequate theory and that at the bottom of it all there’s really stuff there that we can talk about definitely, or whether it really is that unknowable. People are still doing experiments, still thinking of ways to test that very concept, which is remarkable given how successful we’ve been in applying that theory to do all sorts of things. So it’s strange that this very successful theory somehow seems to be built on foundations that we don’t properly understand.

    When you first came across the extent of randomness in the world’s structure, did it change your perspective?
    Certainly it’s something that is very starkly evident as you begin to learn quantum mechanics as an undergraduate, and it does affect how you understand the very nature of what physics is about.

    Yet one does wonder whether in a sense it’s a modern disease – that is, the reason it feels so strange is that we’re used to the idea that science dissects things to the point where you reach irreducible elements that are real things (and then you can build up concepts and ideas on top of those). Quantum mechanics seems to shake that picture. Then the question is: was our earlier picture just something we were comfortable with, not any more real?

    Nonetheless, there is a dichotomy between the concept that things are fuzzy at the foundations and yet in everyday life we find things apparently completely certain: this is a solid table; we know it’s there and we don’t seem to feel there’s an uncertainty about it at all. So the question as to how this certainty arises out of this picture of underlying fuzziness is of great interest to physicists.

    There’s always a tendency in physics to tie the concepts that appear in your calculations to things that actually exist in the world. That’s not a uniquely quantum mechanical thing: [Sir Isaac] Newton was challenged when he came up with his ideas about gravity, which required there to be a force – an action-at-a-distance – between planets, and people felt, because he couldn’t describe in physical terms what that connection was, that he was introducing ideas of ‘the occult’ into science. He had a very impressive tool to calculate orbits based on a concept that at the time people felt was just ridiculous – the objection that it didn’t have a correspondence in the universe is the same as what we find now. The idea that things in your equations must correspond to things in the real world is always a tension in physics, and quantum mechanics just raises that in a new and very profound way – a way that challenges our conception of what the scientific enterprise is about.

    Do you think there’s something problematic about human desire to find order, when there’s a lot about the structure of the universe that is random?
    This is outside my realm of expertise, but I think the enterprise of physics is about deeper understanding. Our understanding of the universe’s structure does give us a perspective of our place in the world. In the case of quantum mechanics, people have been working for hundreds of years to discover just what this structure is telling us. There are very creative ways to think about how randomness arises within our experience of quantum mechanics. One conception, for example, is embodied in the Many Worlds model.

    Outside of randomness, what is your general research interest?
    My research has been how to prepare, manipulate and probe quantum states of light and matter. Working in atomic physics and optical physics is nice because you can work in ambient conditions, with a laboratory of relatively small scale. When you want to explore quantum phenomena in such conditions, you have a couple of choices: one is you can work on very fast timescales, because when you create a quantum state, it tends to dissipate into the environment very quickly (that’s why you don’t generally see these things at room temperature); the other way is to use light itself to explore the quantum structure of the world.

    One of the big projects that we’re currently engaged in together with many colleagues across this university and several partner universities is to combine photons and atoms in order to try and build large-scale quantum states. That’s become feasible with some serious engineering, and it’s very exciting, for two reasons. First of all, when quantum states get big enough there’s no other way you can study them, other than to build them. Because it’s not possible to calculate using a normal computer what they are, what their structure is, what their dynamics are; they are too complicated. What that means, which Richard Feynman pointed out some 30 or so years ago, is that the information these states contain is vastly different from anything we know how to process.

    He hinted that we could also use these states to build computers whose power vastly exceeds any conventional computer you could imagine building. So you open this door to new discovery, new science and new technologies that could be truly amazing: fully secure communications, really precise sensors, simulation of new materials and molecules, perhaps leading to new drugs. This dual road, where you can see a really fruitful area, a new frontier of science, and new applications is really exciting.

    Has the potential for that sped up in the last decade, as technological improvement has?
    Yes, I think particularly in the UK the government has identified the technological potential of quantum science and felt it was something the UK could take a lead on, based on the long history of innovation in this country in the underpinning science. They’ve invested a lot of money and that’s really enabled us to begin to tackle some of the serious engineering and technology questions that weren’t possible before. It’s a good time to be in the field.

    Where in Oxford are you building these structures?
    There’s a new building being built in the Physics Department, just on Keble Road, and part of the laboratory space will be for this new technology centre – that’s where this machine will be built.

    You’re also Pro-Vice-Chancellor for research; what does that involve?
    My role as Pro-Vice-Chancellor is really to have sight of the research activities, and help drive some of the research policies and the overarching research strategy for the institution. It’s also to do with the wider engagement agenda, especially around innovation: how do we ensure that, where it’s appropriate and possible, the fruits of our research are utilised for the benefit of society? That’s also a very exciting part of the work: seeing this ferment of ideas and being able to facilitate where some of them, at the right time, have possible applications is really fantastic.

    Having worked at various different universities, is there anything you think is particularly distinctive about Oxford?
    Well, I think it’s a place that respects the creative autonomy of individuals and works hard to make sure that people can pursue the ideas they want to pursue. And the structure, whereby you can get to talk to people of many different backgrounds and expertise, is, I think, something that is different from many places. I think the scale of excellence across the University institution is something that gives Oxford a distinctive flavour.

    When you stop researching, what would you like to consider the ultimate legacy of your work to be?
    On the science end, if we’re able to really show how you can build these quantum machines and put them to use in new applications – it would be great to have contributed something substantive towards that. Moreover, to have enabled the University to continue to excel and to realise its potential as a critical part of a modern society.

    See the full article here.



    Oxford is a collegiate university, consisting of the central University and colleges. The central University is composed of academic departments and research centres, administrative departments, libraries and museums. The 38 colleges are self-governing and financially independent institutions, which are related to the central University in a federal system. There are also six permanent private halls, which were founded by different Christian denominations and which still retain their Christian character.

    The different roles of the colleges and the University have evolved over time.

  • richardmitnick 5:03 pm on August 19, 2015 Permalink | Reply
    Tags: Quantum Physics

    From MIT Tech Review: “Physicists Unveil First Quantum Interconnect” 

    MIT Technology Review

    August 18, 2015
    No Writer Credit

    An international team of physicists has found a way to connect quantum devices in a way that transports entanglement between them.


    One of the unsung workhorses of modern technology is the humble interconnect. This is essentially a wire or set of wires that links one part of an electronic system to another. In ordinary silicon chips, interconnects can take up most of the area of a chip, and the speed and efficiency with which information travels along them is a major limiting factor in computing performance.

    So it’s no wonder that physicists and engineers are creating new generations of interconnect that will become the backbone of information processing machines of the future.

    One of the most promising forms of number crunching is the quantum computer and its various associate quantum technologies, such as quantum communication, quantum cryptography, quantum metrology, and so on.

    Physicists have made great strides in building proof-of-principle devices that exploit the laws of quantum physics to perform feats that would be impossible with purely classical mechanics. And yet a significant problem remains. These devices must work in isolation since nobody has perfected a way of joining them together effectively.

    Today, that changes thanks to the work of Mark Thompson at the University of Bristol in the U.K. and a few pals around the world. These guys have built and tested a quantum interconnect that links separate silicon photonic chips and carries photons and, crucially, entanglement between them.

    Quantum interconnect is a tricky proposition because of the fragile nature of entanglement, the bizarre way in which quantum particles share the same existence, even when they are far apart.

    However, this state is extremely brittle — sneeze and it disappears. So quantum interconnect must preserve entanglement while transporting it from one place to another.

    Thompson and co do this using a simple optical fiber and a clever quantum trick. Their silicon chips have two sources of photons that travel along photonic channels that overlap. When photons meet in the region of overlap, they become entangled and then carry this entanglement along separate paths through the device.

    The role of the quantum interconnect is to transmit the photons to another chip where they retain their path-encoded entanglement. But how can this be done when the interconnect consists of a single path along a fiber?

    The trick that Thompson and pals have perfected is to convert the path-entanglement into a different kind of entanglement, in this case involving polarization. They do this by allowing the path-entangled photons to interfere with newly created photons in a way that causes them to become polarized. This also entangles the newly created photons, which pass into the optical fiber and travel to the second silicon photonic chip.

    The second chip reverses this process. There, the polarized-entangled photons are converted back into the path-entangled variety which then continue into the device as if they had come directly from the first chip.
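
    Schematically (my notation, not the paper’s), the interconnect performs a round trip between two encodings of the same Bell state, with $a_i, b_i$ labelling the on-chip path modes and $H, V$ the fibre polarizations:

```latex
\tfrac{1}{\sqrt{2}}\bigl(\lvert a_1 b_1 \rangle + \lvert a_2 b_2 \rangle\bigr)_{\text{path}}
\;\xrightarrow{\text{chip 1}}\;
\tfrac{1}{\sqrt{2}}\bigl(\lvert HH \rangle + \lvert VV \rangle\bigr)_{\text{polarization}}
\;\xrightarrow{\text{chip 2}}\;
\tfrac{1}{\sqrt{2}}\bigl(\lvert a_1 b_1 \rangle + \lvert a_2 b_2 \rangle\bigr)_{\text{path}}
```

    The point is that only the encoding changes; the entanglement itself is carried through the fibre intact.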

    The team has experimented with this proof-of-principle device and shown that the entanglement is preserved throughout. “We demonstrate high-fidelity entanglement throughout the generation, manipulation, interconversion, distribution and measurement processes, across two integrated photonic circuits, successfully demonstrating the chip-to-chip quantum photonic interconnect,” they say.

    It’s not perfect of course. Thompson and co admit they need to reduce the losses in the machine. But they say all this can be improved in future by optimizing various aspects of the design.

    Overall, that’s an important step forward. Quantum interconnect is an enabling technology that should help to make possible a wide variety of new quantum devices that require different quantum subsystems to be linked together.

    Ref: arxiv.org/abs/1508.03214 : Quantum Photonic Interconnect

    See the full article here.


    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

  • richardmitnick 6:11 pm on February 9, 2015 Permalink | Reply
    Tags: Quantum Physics

    From phys.org: “No Big Bang? Quantum equation predicts universe has no beginning” 


    February 9, 2015
    Lisa Zyga

    This is an artist’s concept of the metric expansion of space, where space (including hypothetical non-observable portions of the universe) is represented at each time by the circular sections. Note on the left the dramatic expansion (not to scale) occurring in the inflationary epoch, and at the center the expansion acceleration. The scheme is decorated with WMAP images on the left and with the representation of stars at the appropriate level of development. Credit: NASA

    The universe may have existed forever, according to a new model that applies quantum correction terms to complement [Albert] Einstein’s theory of general relativity. The model may also account for dark matter and dark energy, resolving multiple problems at once.

    The widely accepted age of the universe, as estimated by general relativity, is 13.8 billion years. In the beginning, everything in existence is thought to have occupied a single infinitely dense point, or singularity. Only after this point began to expand in a “Big Bang” did the universe officially begin.

    Although the Big Bang singularity arises directly and unavoidably from the mathematics of general relativity, some scientists see it as problematic because the math can explain only what happened immediately after—not at or before—the singularity.

    “The Big Bang singularity is the most serious problem of general relativity because the laws of physics appear to break down there,” Ahmed Farag Ali at Benha University and the Zewail City of Science and Technology, both in Egypt, told Phys.org.

    Ali and coauthor Saurya Das at the University of Lethbridge in Alberta, Canada, have shown in a paper published in Physics Letters B that the Big Bang singularity can be resolved by their new model in which the universe has no beginning and no end.

    Old ideas revisited

    The physicists emphasize that their quantum correction terms are not applied ad hoc in an attempt to specifically eliminate the Big Bang singularity. Their work is based on ideas by the theoretical physicist David Bohm, who is also known for his contributions to the philosophy of physics. Starting in the 1950s, Bohm explored replacing classical geodesics (the shortest path between two points on a curved surface) with quantum trajectories.

    In their paper, Ali and Das applied these Bohmian trajectories to an equation developed in the 1950s by physicist Amal Kumar Raychaudhuri at Presidency University in Kolkata, India. Raychaudhuri was also Das’s teacher when he was an undergraduate student at that institution in the ’90s.

    Using the quantum-corrected Raychaudhuri equation, Ali and Das derived quantum-corrected Friedmann equations, which describe the expansion and evolution of the universe (including the Big Bang) within the context of general relativity. Although it’s not a true theory of quantum gravity, the model does contain elements from both quantum theory and general relativity. Ali and Das also expect their results to hold even if and when a full theory of quantum gravity is formulated.

    No singularities nor dark stuff

    In addition to not predicting a Big Bang singularity, the new model does not predict a “big crunch” singularity, either. In general relativity, one possible fate of the universe is that it starts to shrink until it collapses in on itself in a big crunch and becomes an infinitely dense point once again.

    Ali and Das explain in their paper that their model avoids singularities because of a key difference between classical geodesics and Bohmian trajectories. Classical geodesics eventually cross each other, and the points at which they converge are singularities. In contrast, Bohmian trajectories never cross each other, so singularities do not appear in the equations.

    In cosmological terms, the scientists explain that the quantum corrections can be thought of as a cosmological constant term (without the need for dark energy) and a radiation term. These terms keep the universe at a finite size, and therefore give it an infinite age. The terms also make predictions that agree closely with current observations of the cosmological constant and density of the universe.
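
    Schematically, the effect the authors describe can be written as extra terms in the Friedmann equation for the scale factor $a(t)$; this is an illustrative form of my own, not the paper’s exact expressions, with $\Lambda_Q$ and $\epsilon_Q$ as placeholder coefficients for the quantum corrections:

```latex
\left(\frac{\dot a}{a}\right)^{2}
= \frac{8\pi G}{3}\,\rho \;-\; \frac{k}{a^{2}}
\;+\; \underbrace{\frac{\Lambda_Q}{3}}_{\text{cosmological-constant-like}}
\;+\; \underbrace{\frac{\epsilon_Q}{a^{4}}}_{\text{radiation-like}}
```

    A constant term and an $a^{-4}$ term are exactly the signatures of a cosmological constant and of radiation, which is why the corrections can play the roles the text assigns them.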

    New gravity particle

    In physical terms, the model describes the universe as being filled with a quantum fluid. The scientists propose that this fluid might be composed of gravitons—hypothetical massless particles that mediate the force of gravity. If they exist, gravitons are thought to play a key role in a theory of quantum gravity.

    In a related paper, Das and another collaborator, Rajat Bhaduri of McMaster University, Canada, have lent further credence to this model. They show that gravitons can form a Bose-Einstein condensate (named after Einstein and another Indian physicist, Satyendranath Bose) at temperatures that were present in the universe at all epochs.

    Motivated by the model’s potential to resolve the Big Bang singularity and account for dark matter and dark energy, the physicists plan to analyze their model more rigorously in the future. Their future work includes redoing their study while taking into account small inhomogeneous and anisotropic perturbations, but they do not expect small perturbations to significantly affect the results.

    “It is satisfying to note that such straightforward corrections can potentially resolve so many issues at once,” Das said.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About Phys.org in 100 Words

    Phys.org™ (formerly Physorg.com) is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004, Phys.org’s readership has grown steadily to include 1.75 million scientists, researchers, and engineers every month. Phys.org publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quantcast 2009 includes Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

  • richardmitnick 9:54 am on December 18, 2014 Permalink | Reply
    Tags: , , Quantum Physics,   

    From Ethan Siegel: “Quantum Immortality” 

    Starts with a Bang

    This article was written by Paul Halpern, the author of Einstein’s Dice and Schrödinger’s Cat: How Two Great Minds Battled Quantum Randomness to Create a Unified Theory of Physics.

    Observers are the necessary, but unliked, bouncers in the elegant nightclub of quantum physics. While no one is entirely comfortable with doormen checking IDs, they persist; otherwise everyone and everything gets in, contrary to ordinary experience.

    Image credit: AIP Emilio Segre Visual Archives, Physics Today Collection of [Paul]Dirac and [Werner] Heisenberg;

    © Los Alamos National Laboratory of [John] von Neumann.

    In the late 1920s and early 1930s, Heisenberg, Dirac, and John von Neumann codified the formalism of quantum mechanics as a two-step process. One part involves the continuous evolution of states via the deterministic Schrödinger equation.

    Image credit: Wikimedia Commons user YassineMrabet.

    Map out a system’s potential energy distribution — in the form of a well, for example — and the spectrum of possible quantum states is set. If the states are time-dependent, then they predictably transform. That could describe, for instance, a superposition of states that spreads out in position space over time, like an expanding puddle of water.

    Yet experiments show that if an apparatus is designed to measure a particular quantity, such as the position, momentum or spin-state of a particle, quantum measurements yield specific values of that respective physical parameter. Such specificity requires a second type of quantum operation that is instantaneous and discrete, rather than gradual and continuous: the process of collapse.

    Image credit: A Friedman, via http://blogs.scientificamerican.com/the-curious-wavefunction/2014/01/15/what-scientific-idea-is-ready-for-retirement/.

    Collapse occurs when a measurement of a certain physical parameter — position, let’s say — precipitates a sudden transformation into one of the “eigenstates” (solution states) of the operator (mathematical function) corresponding to that parameter — the position operator, in that case.

    Image credit: Nick Trefethen, via http://www.chebfun.org/examples/ode-eig/Eigenstates.html.

    Then the measured value of that quantity is the “eigenvalue” associated with that eigenstate — the specific position of the particle, for instance. Eigenstates represent the spectrum of possible states and eigenvalues the measurements associated with those states.
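    The collapse recipe above (operator, eigenstates, eigenvalues, Born-rule probabilities) can be made concrete in a few lines of linear algebra. This is an illustrative sketch of the textbook formalism, not code from the article; it uses the spin-z operator of a spin-1/2 particle as the measured quantity.

```python
import numpy as np

# Spin-z operator for a spin-1/2 particle, in units of hbar/2.
Sz = np.array([[1.0, 0.0],
               [0.0, -1.0]])

# A superposition state: equal parts spin-up and spin-down.
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# The operator's eigenvalues are the possible measured values;
# its eigenvectors are the eigenstates the system can collapse into.
eigenvalues, eigenvectors = np.linalg.eigh(Sz)

# Born rule: the probability of each outcome is |<eigenstate|psi>|^2.
probabilities = np.abs(eigenvectors.conj().T @ psi) ** 2
print(dict(zip(eigenvalues, probabilities)))  # each outcome has probability 0.5

# A "measurement" picks one eigenvalue at random with those weights,
# and the state collapses onto the corresponding eigenstate.
rng = np.random.default_rng(0)
outcome = rng.choice(eigenvalues, p=probabilities)
collapsed = eigenvectors[:, list(eigenvalues).index(outcome)]
```

Running the sampling step repeatedly reproduces the statistics that experiments actually see: individual results are definite, while only their distribution reflects the original superposition.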

    We can imagine the situation of quantum collapse as being something like a slot machine with a mixture of dollar coins and quarters; some old enough to be valuable, others shining new.

    Image credit: © 2014 Marco Jewelers, via http://marcojewelers.net/sell-buy-silver-gold-coins.

    Its front panel has two buttons: one red and the other blue. Press the red button and the coins instantly become sorted according to denomination. A number of dollar coins drop out (a mixture of old and new). Press the blue button and the sorting is instantly done by date. A bunch of old coins (of both denominations) are released. While someone seeking quick bucks might press red, a coin collector might push blue. The machine is set up so that you cannot press both buttons at once. Similarly, in quantum physics, Heisenberg’s famous uncertainty principle dictates that certain pairs of quantities, such as position and momentum, cannot both be measured with arbitrary precision at the same time.
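    The impossibility of pressing both buttons has a precise mathematical counterpart: incompatible observables are represented by operators that do not commute. A quick numerical check (our own illustration, not the article’s) with two Pauli spin matrices:

```python
import numpy as np

# Pauli matrices: spin measurements along x and along z.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# The commutator [A, B] = AB - BA. It vanishes for compatible observables;
# here it does not, so spin-x and spin-z cannot both be sharp at once.
commutator = sigma_x @ sigma_z - sigma_z @ sigma_x
print(commutator)                   # a nonzero matrix (equal to -2i * sigma_y)
print(np.allclose(commutator, 0))   # False: the observables are incompatible
```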

    Over the years, a number of critics have attacked this interpretation.

    Albert Einstein
    Image credit: Oren Jack Turner, Princeton, N.J., via Wikimedia Commons user Jaakobou.

    Suggesting that quantum physics, though experimentally correct, must be incomplete, Einstein argued that random, instantaneous transitions had no place in a fundamental description of nature. Schrödinger cleverly developed his well-known feline thought experiment to demonstrate the absurdity of the observer’s role in quantum collapse. In his hypothetical scheme, he imagined a set-up in which a cat in a closed box, whose survival (or not) was tied to the random decay of a radioactive material, was in a mixed state of life and death until the box was opened and the system observed.

    Image credit: retrieved from Øystein Elgarøy at http://fritanke.no/index.php?page=vis_nyhet&NyhetID=8513.

    More recently, physicist Bryce DeWitt, who theorized how quantum mechanics might apply to gravity and the dynamics of the universe itself, argued that because there are presumably no observers outside the cosmos to view it (and trigger collapse into quantum gravity eigenstates), a complete accounting of quantum physics could not depend on observers.

    Instead, DeWitt, until his death in 2004, was an ardent advocate of an alternative to the Copenhagen (standard) interpretation of quantum mechanics that he dubbed the Many Worlds Interpretation (MWI).

    Image credit: University of Texas of Bryce DeWitt;

    Professor Jeffrey A. Barrett and UC Irvine, of Hugh Everett III.

    He based his views on the seminal work of Hugh Everett, who as a graduate student at Princeton, developed a way of avoiding the need in quantum mechanics for an observer. Instead, each time a quantum measurement is taken, the universe, including any observers, seamlessly and simultaneously splits into the spectrum of possible values for that measurement. For example, in the case of the measurement of the spin of an electron, in one branch it has spin up, and all observers see it that way; in the other it has spin down. Schrödinger’s cat would be happily alive in one reality, to the joy of its owner, while cruelly deceased in the other, much to the horror of the same owner (but in a different branch). Each observer in each branch would have no conscious awareness of his near-doppelgangers.

    As Everett wrote to DeWitt in explaining his theory:

    “The theory is in full accord with our experience (at least insofar as ordinary quantum mechanics is)… because it is possible to show that no observer would ever be aware of any ‘branching.’”

    If Schrödinger’s thought experiment were repeated each day, there would always be one branch of the universe in which the cat survives. Hypothetically, rather than the proverbial “nine lives,” the cat could have an indefinite number of “lives” or at least chances at life. There would always be one copy of the experimenter who is gratified, but perplexed, that his cat has beaten the odds and lived to see another day. The other copy, in mourning, would lament that the cat’s luck had finally run out.

    Image credit: Ethan Zuckerman, from Garrett Lisi’s talk (2008), via http://www.ethanzuckerman.com/blog/2008/02/28/ted2008-garrett-lisi-looks-for-balance/.

    What about human survival? We are each a collection of particles, governed on the deepest level by quantum rules. If each time a quantum transition took place, our bodies and consciousness split, there would be copies that experienced each possible result, including those that might determine our life or death. Suppose in one case a particular set of quantum transitions resulted in faulty cell division and ultimately a fatal form of cancer. For each of the transitions, there would always be an alternative that did not lead to cancer. Therefore, there would always be branches with survivors. Add in the assumption that our conscious awareness would flow only to the living copies, and we could survive any number of potentially hazardous events related to quantum transitions.

    Everett reportedly believed in this kind of “quantum immortality.” He died in 1982; fourteen years later, his daughter Liz took her own life, explaining in her suicide note that she hoped to reunite with her father in some branch of the universe.

    There are major issues with the prospects for quantum immortality, however. For one thing, the MWI is still a minority hypothesis. Even if it is true, how do we know that our stream of conscious thought would flow only to branches in which we survive? Are all possible modes of death escapable by an alternative array of quantum transitions? Remember that quantum events must obey conservation laws, so there could be situations in which there was no way out that follows natural rules. For example, if you fall out of a spaceship hatch into frigid space, there might be no permissible quantum events (according to energy conservation) that could lead you to stay warm enough to survive.

    Finally, suppose you do somehow manage to achieve quantum immortality — with your conscious existence following each auspicious branch. You would eventually outlive all your friends and family members — because in your web of branches you would eventually encounter copies of them that didn’t survive. Quantum immortality would be lonely indeed!

    See the full article here.


    Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible.

  • richardmitnick 8:35 pm on November 1, 2014 Permalink | Reply
    Tags: , , , , , , Quantum Physics   

    From AAAS: “Dark matter: Out with the WIMPs, in with the SIMPs?” 



    30 October 2014
    Adrian Cho

    Like cops tracking the wrong person, physicists seeking to identify dark matter—the mysterious stuff whose gravity appears to bind the galaxies—may have been stalking the wrong particle. In fact, a particle with some properties opposite to those of physicists’ current favorite dark matter candidate—the weakly interacting massive particle, or WIMP—would do just as good a job at explaining the stuff, a quartet of theorists says. Hypothetical strongly interacting massive particles—or SIMPs—would also better account for some astrophysical observations, they argue.

    “We’ve been searching for WIMPs for quite some time, but we haven’t found them yet, so I think it’s important to think outside the box,” says Yonit Hochberg, a theorist at Lawrence Berkeley National Laboratory and the University of California (UC), Berkeley, and an author of the new paper.

    Theorists dreamed up WIMPs 30 years ago to help explain why galaxies don’t just fly apart. The particles would have a mass between one and 1000 times that of a proton and, in addition to gravity, would interact with one another and with ordinary matter through only the weak nuclear force, one of two forces of nature that normally exert themselves only in the atomic nucleus.

    The infant universe would have produced a huge number of WIMPs as subatomic particles crashed into one another. Some of those WIMPs would then disappear when two of them collided and annihilated each other to produce two ordinary particles. As the universe expanded, such collisions would become ever rarer and, given the strength of the weak force, just enough WIMPs would survive to provide the right amount of dark matter today—about five times that of ordinary matter. That coincidence, or “WIMP miracle,” has made WIMPs a favorite of theorists, even if experimenters have yet to spot them floating about.
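    The freeze-out story in the last two paragraphs (production, annihilation, and dilution by expansion) can be sketched numerically with the standard textbook abundance equation. This is our own toy illustration with made-up parameter values, not the authors’ calculation; Y is the comoving particle abundance and x is the particle mass divided by the temperature.

```python
import math

# Toy freeze-out: dY/dx = -(lam/x^2) * (Y^2 - Yeq^2), where
# Yeq ~ x^(3/2) * e^(-x) is the equilibrium abundance (normalization dropped)
# and lam encodes the annihilation strength. All values are illustrative only.
def y_equilibrium(x):
    return x ** 1.5 * math.exp(-x)

lam = 1.0e3            # made-up interaction strength
x, dx = 1.0, 1.0e-3
Y = y_equilibrium(x)   # start in thermal equilibrium

while x < 50.0:
    Yeq = y_equilibrium(x)
    Y += -(lam / x**2) * (Y**2 - Yeq**2) * dx   # forward-Euler step
    x += dx

# After freeze-out, Y stops tracking the (exponentially dying) equilibrium
# value and levels off: annihilations become too rare to deplete the relic.
print(f"relic abundance Y = {Y:.2e}, equilibrium Yeq = {y_equilibrium(50.0):.2e}")
```

A stronger interaction (larger lam) keeps the particles in equilibrium longer and leaves fewer of them behind, which is why the measured dark matter density pins down the candidate’s interaction strength in both the WIMP and SIMP scenarios.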

    However, Hochberg and colleagues argue that dark matter could also consist of lighter particles that have a mass somewhere around one-tenth that of the proton and interact with one another—but not ordinary matter—very strongly. Such SIMPs would pull on one another almost as strongly as the quarks in a proton, which cling to each other so fiercely that it’s impossible to isolate a quark.

    SIMPs can also provide just the right amount of dark matter, assuming the theorists add a couple of wrinkles. The SIMPs must disappear primarily through collisions in which three SIMPs go in and only two SIMPs come out. These events must be more common than ones in which two SIMPs annihilate each other to produce two ordinary particles. Moreover, the theorists argue, SIMPs must interact with ordinary matter, although much more weakly than WIMPs. That’s because the three-to-two collisions would heat up the SIMPs if they could not interact and share heat with ordinary matter.

    That may seem like a lot to ask, but those conditions are easy to meet so long as the SIMPs aren’t too heavy, Hochberg says. So the WIMP miracle could easily be replaced with a SIMP miracle, as the team reports this month in Physical Review Letters.

    Moreover, the fact that SIMPs must interact with ordinary matter guarantees that, in principle, they should be detectable in some way, Hochberg says. Whereas physicists are now searching for signs of WIMPs colliding with massive atomic nuclei, researchers would probably have to look for SIMPs smacking into lighter electrons because the bantamweight particles would not pack enough punch to send a nucleus flying.

    Compared with WIMPy dark matter, SIMPy dark matter would also have another desirable property. As the universe evolved, dark matter coalesced into clumps, or halos, in which the galaxies then formed. But computer simulations suggest that dark matter that doesn’t interact with itself would form myriad little clumps that are very dense in the center. And little “dwarf galaxies” aren’t as abundant and the centers of galaxies aren’t as dense as the simulations suggest. But strongly interacting dark matter would smooth out the distribution of dark matter and solve those problems, Hochberg says. “This isn’t some independent thing that we’ve just forced into the model,” she says. “It just naturally happens.”

    The new analysis “has the flavor of the WIMP miracle, which is nice,” says Jonathan Feng, a theorist at UC Irvine who was not involved in the work. Feng says he’s been working on similar ideas and that the ability to reconcile the differences between dark matter simulations and the observed properties of galaxies makes strongly interacting dark matter attractive conceptually.

    However, he cautions, it may be possible that, feeble as they may be, the interactions between dark and ordinary matter might smooth out the dark matter distribution on their own. And Feng says he has some doubts about the claim that SIMPs must interact with ordinary matter strongly enough to be detected. So the SIMP probably won’t knock WIMP off its perch as the best guess for the dark matter particle just yet, Feng says: “At the moment, it’s not as well motivated as the WIMP, but it’s definitely worth exploring.”

    See the full article here.

    The American Association for the Advancement of Science is an international non-profit organization dedicated to advancing science for the benefit of all people.

    ScienceSprings relies on technology from

    MAINGEAR computers



  • richardmitnick 3:12 pm on October 17, 2014 Permalink | Reply
    Tags: , , , , Quantum Physics   

    From Perimeter: “The Last Gasp of a Black Hole” 

    Perimeter Institute

    October 17, 2014
    No Writer Credit

    New research from Perimeter shows that two of the strangest features of quantum mechanics – entanglement and negative energy – might be two faces of one coin.

    Quantum mechanics is, notoriously, weird. Take entanglement: when two or more particles are entangled, their states are linked together, no matter how far apart they go.

    If the idea makes your classical mind twitch, you’re in good company. At the heart of everything, according to quantum mechanics, nature has a certain amount of irreducible jitter. Even nothing – the vacuum of space – can jitter, or as physicists say, fluctuate. When it does, a particle and its anti-particle can pop into existence.

    For example, an electron and an anti-electron (called a positron) might pop into existence out of the vacuum. We know that they each have a spin of one half, which might be either up or down. We also know that these particles were created from nothing and so, to balance the books, the total spin must add up to zero. Finally, we know that the spin of either particle is not determined until it is measured.

    So suppose the electron and the positron fly apart a few metres or a few light years, and then a physicist comes by to measure the spin of, say, the electron. She discovers that the electron is spin up, and in that moment, the electron becomes spin up. Meanwhile, a few metres or a few light years away, the positron becomes spin down. Instantly. That is the strangeness of quantum entanglement.
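    The anticorrelation just described can be simulated directly. A minimal sketch (our own illustration, not from the article): sample joint measurement outcomes for the spin-singlet state and confirm that the two particles always disagree.

```python
import numpy as np

# Singlet state of two spin-1/2 particles: (|up,down> - |down,up>) / sqrt(2),
# written in the basis {|uu>, |ud>, |du>, |dd>}.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

# Born rule: joint outcome probabilities are the squared amplitudes.
joint_probs = np.abs(singlet) ** 2   # [0, 0.5, 0.5, 0]

rng = np.random.default_rng(42)
outcomes = rng.choice(4, size=1000, p=joint_probs)

# Index 1 = (up, down), index 2 = (down, up). The same-spin outcomes
# (indices 0 and 3) have zero probability: the spins are always opposite.
assert set(outcomes.tolist()) <= {1, 2}
print("spins anticorrelated in every one of 1000 trials")
```

Note that the sampling never produces a same-spin pair, no matter how far apart the particles are imagined to be; the separation never enters the calculation.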

    Negative energy is less well known than entanglement, but no less weird. It begins with the idea – perhaps already implied by the positron and electron popping out of nowhere – that empty space is not empty. It is filled with quantum fields, and the energy of those fields can fluctuate a little bit.

    In fact, the energy of these fields can dip under the zero mark, albeit briefly. When that happens, a small region of space can, for a short span of time, weigh less than nothing – or at least less than the vacuum. It’s a little bit like finding dry land below sea level.

    Despite their air of strangeness, entanglement and negative energy are both well-explored topics. But now, new research, published as a Rapid Communication in Physical Review D, is hinting that these two strange phenomena may be linked in a surprising way.

    The work was done by Perimeter postdoctoral fellow Matteo Smerlak and former postdoc Eugenio Bianchi (now on the faculty at Penn State and a Visiting Fellow at Perimeter). “Negative energy and entanglement are two of the most striking features of quantum mechanics,” says Smerlak. “Now, we think they might be two sides of the same coin.”

    Perimeter Postdoctoral Researcher Matteo Smerlak

    Perimeter Visiting Fellow Eugenio Bianchi

    Specifically, the researchers proved mathematically that any external influence that changes the entanglement of a system in its vacuum state must also produce some amount of negative energy. The reverse, they say, is also true: negative energy densities can never be produced without entanglement being directly affected.

    At the moment, the result only applies to certain quantum fields in two dimensions – to light pulses travelling up and down a thin cable, for instance. And it is with light that the Perimeter researchers hope that their new idea can be directly tested.

    “Some quantum states which have negative energy are known, and one of them is called a ‘squeezed state,’ and they can be produced in the lab, by optical devices called squeezers,” says Smerlak. The squeezers manipulate light to produce an observable pattern of negative energy.

    Remember that Smerlak and Bianchi’s basic argument is that if an external influence affects vacuum entanglement, it will also release some negative energy. In a quantum optics setup, the squeezers are the external influence.

    Experimentalists should be able to look for the correlation between the entanglement patterns and the negative energy densities which this new research predicts. If these results hold up – always a big if in brand new work – and if they can make the difficult leap from two dimensions to the real world, then there will be startling implications for black holes.

    Like optical squeezers, black holes also produce changes in entanglement and energy density. They do this by separating entangled pairs of particles and preferentially selecting the ones with negative energy.

    Remember that the vacuum is full of pairs of particles and antiparticles blinking into existence. Under normal circumstances, they blink out again just as quickly, as the particle and the antiparticle annihilate each other. But just at a black hole’s event horizon, it sometimes happens that one of the particles is sucked in, while the other escapes. The small stream of escaping particles is known as Hawking radiation.

    By emitting such radiation, black holes slowly give up their energy and mass, and eventually disappear. Black hole evaporation, as the process is known, is a hot topic in physics. This new research has the potential to change the way we think about it.

    “In the late stages of the evaporation of a black hole, the energy released from the black hole will turn negative,” says Smerlak. And if a black hole releases negative energy, then its total energy goes up, not down. “It means that the black hole will shrink and shrink and shrink – for zillions of years – but in the end, it will release its negative energy in a gasp before dying. Its mass will briefly go up.”

    Call it the last gasp of a black hole.
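    For scale, the “zillions of years” are no exaggeration. Using the standard textbook formulas for an uncharged, non-rotating black hole (a general illustration, not a calculation from the paper), a solar-mass black hole is absurdly cold and long-lived:

```python
import math

# Physical constants (SI units).
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J/K
M_sun = 1.989e30         # solar mass, kg

# Hawking temperature: T = hbar * c^3 / (8 * pi * G * M * k_B).
T = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)

# Evaporation time for a Schwarzschild black hole:
# t = 5120 * pi * G^2 * M^3 / (hbar * c^4).
t = 5120 * math.pi * G**2 * M_sun**3 / (hbar * c**4)

print(f"Hawking temperature: {T:.2e} K")                 # far colder than the CMB
print(f"Evaporation time:    {t / 3.156e7:.2e} years")   # vastly older than the universe
```

Because the temperature scales as 1/M and the lifetime as M³, the radiation only becomes appreciable in the very late stages of evaporation, which is exactly the regime where Smerlak and Bianchi’s negative-energy effect would appear.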

    See the full article here.

    About Perimeter

    Perimeter Institute is a leading centre for scientific research, training and educational outreach in foundational theoretical physics. Founded in 1999 in Waterloo, Ontario, Canada, its mission is to advance our understanding of the universe at the most fundamental level, stimulating the breakthroughs that could transform our future. Perimeter also trains the next generation of physicists through innovative programs, and shares the excitement and wonder of science with students, teachers and the general public.
