Tagged: Quantum Physics

  • richardmitnick 10:18 am on September 5, 2018
    Tags: Black body radiation, Planck's law of radiative heat transfer has held up well under a century of intense testing but a new analysis has found it fails on the smallest of scales, Quantum Physics, University of Michigan, William & Mary

    From University of Michigan and William & Mary via Science Alert: “A Fundamental Physics Law Just Failed a Test Using Nanoscale Objects” 

    (Image credit: Xanya69/istock)

    5 SEP 2018
    MIKE MCRAE

    Planck’s law of radiative heat transfer has held up well under a century of intense testing, but a new analysis has found it fails on the smallest of scales.

    Exactly what this means isn’t all that clear yet, but where laws fail, new discoveries can follow. Such a find wouldn’t just affect physics on an atomic scale – it could impact everything from climate models to our understanding of planetary formation.

    The foundational law of quantum physics was recently put to the test by researchers from William & Mary in Virginia and the University of Michigan, who were curious about whether the age-old rule could describe the way heat radiation was emitted by nanoscale objects.

    Not only does the law fail, the experimental result is 100 times greater than the predicted figure, suggesting nanoscale objects can emit and absorb heat with far greater efficiency than current models can explain.

    “That’s the thing with physics,” says William & Mary physicist Mumtaz Qazilbash.

    “It’s important to experimentally measure something, but also important to actually understand what is going on.”

    Planck is one of the big names in physics. While it’d be misleading to attribute the birth of quantum mechanics to a single individual, his work played a key role in getting the ball rolling.

    Humans have known since ancient times that hot things glow with light. We’ve also understood for quite a while that there’s a relationship between the colour of that light and its temperature.

    To study this in detail, physicists in the 19th century would measure the colour of light inside a black, heated box, watching through a tiny hole. This ‘black body radiation’ provided a reasonably precise measure of that relationship.

    Coming up with simple formulae to describe the wavelengths of colour and their temperatures proved to be rather challenging, and so Planck came at it from a slightly different angle.

    His approach was to treat the way light was absorbed and emitted like a pendulum’s swing, with discrete quantities of energy being soaked up and spat out. Not that he really thought this was the case – it was just a convenient way to model light.
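
    In modern notation (the standard relation, not spelled out in the article), each of Planck’s ‘oscillators’ could exchange energy with the light field only in discrete packets proportional to the light’s frequency:

    \[ E = h\nu, \qquad h \approx 6.626\times10^{-34}\ \mathrm{J\,s}. \]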

    As strange as it seemed at first, the model worked perfectly. This ‘quantity’ of energy approach generated decades of debate over the nature of reality, and has come to form the underpinnings of physics as we know it.

    Planck’s law of radiative heat transfer underpins a theory describing the maximum rate at which heat energy can be radiated from an object at a given temperature.
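
    For reference, the standard far-field form of the law gives the spectral radiance of a black body at temperature T:

    \[ B_\nu(T) = \frac{2h\nu^{3}}{c^{2}}\,\frac{1}{e^{h\nu/k_B T}-1}, \]

    which sets an upper bound on how much an ordinary object can radiate at each frequency; the peak of this spectrum shifts to higher frequencies as T rises (Wien’s displacement law).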

    This works extremely well for visible objects separated at a visible distance. But what if we push those objects together, so the space between them isn’t quite a single wavelength of the light being emitted? What happens to that ‘pendulum swing’?

    Physicists well versed in the dynamics of electromagnetism already know weird things happen here in this area, known as the ‘near field’ region.

    For one thing, the relationship between the electrical and magnetic aspects of the electromagnetic field becomes more complex.

    Just how this might affect the way heated objects interact has already been the focus of previous research, which has established some big differences in how heat moves in the near field as compared with the far field observed by Planck.

    But that’s just if the gap is confined to a distance smaller than the wavelength of emitted radiation. What about the size of the objects themselves?

    The researchers had quite a challenge ahead of them. They had to engineer objects smaller than about 10 microns in size – the approximate length of a wave of infrared light.
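
    That ‘10 micron’ figure follows from Wien’s displacement law, a standard consequence of Planck’s law (my own back-of-the-envelope check, not a number from the paper): the wavelength at which a warm object radiates most strongly is

    \[ \lambda_{\mathrm{peak}} = \frac{b}{T}, \qquad b \approx 2.898\times10^{-3}\ \mathrm{m\,K}, \]

    so for an object near room temperature (T ≈ 300 K) the peak sits at roughly 9.7 μm – mid-infrared light.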

    They settled on two membranes of silicon nitride a mere half micron thick, separated by a distance that put them well into the far field.

    Heating one and measuring the second allowed them to test Planck’s law with a fair degree of precision.

    “Planck’s radiation law says if you apply the ideas that he formulated to two objects, then you should get a defined rate of energy transfer between the two,” says Qazilbash.

    “Well, what we have observed experimentally is that rate is actually 100 times higher than Planck’s law predicts if the objects are very, very small.”

    Qazilbash likens it to the plucking of a guitar string at different places along its length. “If you pluck it in those places, it’s going to resonate at certain wavelengths more efficiently.”

    The analogy is a useful way to visualise the phenomenon, but understanding the details of the physics behind the discovery could have some big impacts. Not just in nanotechnology, but on a far bigger scale.

    This hyper-efficient rate of energy transfer could feasibly change how we understand heat transfer in the atmosphere, or in a cooling body the size of a planet. The extent of this difference is still a mystery, but one with some potentially profound implications.

    “Wherever you have radiation playing an important role in physics and science, that’s where this discovery is important,” says Qazilbash.

    This research was published in Nature.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition
     
  • richardmitnick 10:55 am on July 24, 2018
    Tags: Daniel Bowring at FNAL, Quantum Physics

    From Fermilab: “Daniel Bowring receives $2.5 million from DOE to search for axions with quantum sensors” 

    FNAL Art Image by Angela Gonzales

    From Fermilab, an enduring source of strength for the US contribution to scientific research worldwide.

    July 19, 2018
    Jordan Rice

    Daniel Bowring examines a superconducting qubit mounted in a copper microwave cavity. Photo: Reidar Hahn

    Dark matter makes up nearly 80 percent of all matter in the universe, yet its nature has eluded scientists.

    Dark matter cosmic web and the large-scale structure it forms. The Millennium Simulation, V. Springel et al.


    Caterpillar Project: a Milky-Way-size dark-matter halo and its subhalos (circled), from an enormous suite of simulations. Griffen et al. 2016.

    Scientists theorize that it could take the form of a subatomic particle, and one possible candidate comes in the form of a small, theoretical particle called the axion. If it exists, the axion will interact incredibly weakly with matter, so detecting one requires an incredibly sensitive detector.

    Fermilab scientist Daniel Bowring is planning to build just such an instrument. The Department of Energy has selected Bowring for a 2018 Early Career Research Award to build a detector that would ferret out the hypothesized particle. He will receive $2.5 million over five years to build and operate his experiment. The award funds equipment, engineers, technicians and a postdoctoral researcher.

    “We are very motivated to find the axion because it would solve several interesting problems for us in the particle physics community,” Bowring said.

    Not only would the axion’s discovery explain, at least in part, the nature of dark matter, it could also solve the strong CP problem, a long-standing thorn in the side of theoretical physics models.

    The strong CP problem is an inconsistency in particle physics. Particles behave differently from their mirror-reversed, antimatter counterparts — at least, they do under the influence of the electromagnetic force and the weak nuclear force (which governs nuclear decay).

    But under the influence of the strong force (which holds matter together), particles and their mirror-image antiparticles behave similarly. Or, in physics speak, they’re CP-symmetric under the strong force. (CP stands for charge-parity. It’s the property that’s flipped when you take a mirror image of a particle’s antimatter partner.) Why is the strong force the exception?

    One potential answer lies in the existence of the axion. In the math of strong interactions, the addition of the axion enables theoretical models to reflect the reality of strong-force CP symmetry.

    Bowring is following the axion math where it leads — to the construction of a device that can pick up the signal of the fundamental particle, whose mass is predicted to be vanishingly small, between 1 billion and 1 trillion times smaller than an electron.

    One way to look for the axion is to look for light: In the presence of a strong electromagnetic field — Bowring’s experiment will use about 14 Tesla, or roughly 10 times stronger than an MRI magnet — an axion should convert into a single particle of light, called a photon, which is more easily observed.

    “Physicists have gotten pretty good at detecting photons over the years,” Bowring said.

    When an axion enters the detector filled with the electromagnetic field, the particle will spontaneously convert into a photon with a specific frequency. The frequency corresponds to the axion’s mass, so scientists can measure the axion mass indirectly, thanks to the detection of particles of light.
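
    To make that frequency–mass correspondence concrete, here is a rough sketch (my own illustration using E = m_a c² = hν and the mass range quoted above; the function name and numbers are mine and reflect nothing specific about Bowring’s detector design):

        # Photon frequency produced by an axion of mass m_a, via E = m_a * c^2 = h * nu
        ELECTRON_MASS_EV = 511e3       # electron rest energy in eV
        H_EV_S = 4.135667e-15          # Planck constant in eV*s

        def axion_photon_frequency_hz(mass_suppression):
            """Frequency of the photon an axion converts into, if its mass is
            `mass_suppression` times smaller than the electron's."""
            m_a_ev = ELECTRON_MASS_EV / mass_suppression
            return m_a_ev / H_EV_S

        for factor in (1e9, 1e12):
            ghz = axion_photon_frequency_hz(factor) / 1e9
            print(f"{factor:.0e} times lighter than the electron -> ~{ghz:.2f} GHz")
        # ~123.6 GHz and ~0.12 GHz: the signal falls in the microwave band,
        # which is why the search amounts to slowly tuning a very sensitive radio.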

    Much like someone tuning a sensitive AM radio, researchers will scan slowly through the relevant range of photon frequencies until they pick up a signal, which would point to the presence of an axion.

    It’s a subtle business, one that requires being able to detect single photons. While photon detection is old hat for physicists, discerning a lone photon amid the experimental noise of a particle detector is a job for new technology. Bowring’s experiment will use supersensitive, superconducting quantum bits, or qubits, to pluck the solo photon signal from the noise and thus accurately count the number of detected photons.

    Bowring’s experiment will be an opportunity to bridge the gap between particle physics and the science behind quantum computing.


    “Daniel’s proposed experiment will demonstrate how qubits, the essential elements of quantum computing, can be used to detect a range of axion masses,” said Fermilab scientist Keith Gollwitzer. “Quantum computing may be the next large step in computing power and particle physics experiments.”

    In that respect, the application of technologies in their infancy to century-old problems is a reflection of the larger scientific field.

    “Fermilab’s mission is doing particle physics, and qubits are just a way for us to meet the requirements of that mission,” Bowring says. “It is a way for us to build new experiments that address the problems of particle physics at the forefront of where the field is.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition


    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.


    (Image gallery: Fermilab facilities and experiments, including MINERvA, DAMIC, Muon g-2, the Short-Baseline Near Detector, Mu2e, the Dark Energy Camera, DUNE/LBNF at SURF, MicroBooNE, MINOS, the Cryomodule Testing Facility, NOvA, ICARUS and the Holometer.)

     
  • richardmitnick 12:54 pm on July 23, 2018
    Tags: Quantum Physics

    From Niels Bohr Institute: “One more spin makes the whole difference. Success with complex quantum states at the Niels Bohr Institute” 


    From Niels Bohr Institute

    23 July 2018
    Kasper Grove-Rasmussen, Associate professor
    Niels Bohr Institute, University of Copenhagen
    Email: k_grove@nbi.ku.dk
    Phone: +45 21 32 86 15

    Gorm Ole Steffensen, Ph.D. student
    Niels Bohr Institute, University of Copenhagen
    Email: gorm.steffensen@nbi.ku.dk
    Phone: +45 35 33 38 04

    Publication:
    Scientists from the Niels Bohr Institute at the University of Copenhagen have, for the first time, succeeded in producing, controlling and understanding complex quantum states based on two electron spins connected to a superconductor. The result has been published in Nature Communications, and has come about in a collaboration between the scientists of the Niels Bohr Institute, a scientist from abroad and last, but not least, a Master’s thesis student.

    Scanning electron microscope micrograph of a semiconductor nanowire, made from indium arsenide, connected electrically to a superconductor and a normal metal. The locations on the nanowire of the two spins – the microscopic magnets – are indicated by the arrows. In this case the microscopic magnets are created by electron spins.

    Quantum technology is based on understanding and controlling quantum states in, for example, nanoelectronic devices with components at the nanoscale. The control could be via electrical signals, as in the components of a computer. The devices are just significantly more complex when we are dealing with quantum components at the nanoscale, and scientists are still examining and attempting to understand the phenomena that arise on this tiny scale. In this case it is about the quantum states in nanoelectronic devices made from semiconductor nanowires and superconducting material. This requires understanding two fundamental phenomena in modern physics: magnetism and superconductivity.

    Accumulating new knowledge is like playing with building blocks

    The scientists have defined microscopic magnets electrically along a semiconductor nanowire. This is done by placing an electron spin close to a superconductor and then observing how it changes the quantum states. By placing two microscopic magnets rather than one, as has been done before, the possibilities for observing new quantum states arise. In this way the scientists accumulate knowledge by adding more and more complexity to the systems. “It is a bit like playing with building blocks. Initially we control one single electron spin, then we expand to two, we can modify the coupling between them, tune the magnetic properties, etc. – somewhat like building a house, with each additional brick increasing our knowledge of these quantum states,” says Kasper Grove-Rasmussen, who has been in charge of the experimental part of the work.

    Quantum theory from 1960 revitalized in nano devices

    It is all about categorizing the different quantum states and their relations to one another, in order to achieve an overview of how the individual parts interact. The theoretical foundation for this work was laid during the 1960s, when three physicists, L. Yu, H. Shiba and A.I. Rusinov, published three independent theoretical works on how magnetic impurities on the surface of a superconductor can cause new types of quantum states. The states, now achieved experimentally by the scientists at the Niels Bohr Institute, are named after the physicists: Yu-Shiba-Rusinov states. But they are significantly more complex than the single-spin Yu-Shiba-Rusinov states achieved previously. This could be a step on the way to more complex structures that would enhance our understanding of potential quantum computer components based on semiconductor-superconductor materials. Kasper Grove-Rasmussen emphasizes that what they are doing now is basic research.

    3D model of the Yu-Shiba-Rusinov device. Two electron spins are defined along the nanowire by placing appropriate voltages on the tiny electrodes under the nanowire. By coupling the spins to the superconductor, Yu-Shiba-Rusinov states can be realized. Observation of these states is achieved by analyzing the current through the device from the normal metal to the superconductor.

    Theoretical basis provided by a Master’s thesis student

    Gorm Steffensen, now a PhD student at the Niels Bohr Institute, was writing his Master’s thesis at the time of the article and played an important role in the result. He was studying theoretical physics and collaborated with his supervisor, Jens Paaske, on describing the quantum phenomena theoretically. So the article also demonstrates that collaboration on a scientific result at the Niels Bohr Institute can include the students. The task for Gorm Steffensen was to develop, in collaboration with his supervisor and the Slovenian scientist Rok Žitko, a theoretical model that encompassed all the phenomena in the experiments. The nanowires in the experiment were developed by PhD students in the research group of Professor Jesper Nygaard. It is a common modus operandi for scientists at the Niels Bohr Institute to work together, applying many different competences across all scientific levels, from student to professor.

    The Scientific publication: “Yu–Shiba–Rusinov screening of spins in double quantum dots” https://www.nature.com/articles/s41467-018-04683-x

    See the full article here.


    Stem Education Coalition

    Niels Bohr Institute Campus

    The Niels Bohr Institute (Danish: Niels Bohr Institutet) is a research institute of the University of Copenhagen. The research of the institute spans astronomy, geophysics, nanotechnology, particle physics, quantum mechanics and biophysics.

    The Institute was founded in 1921, as the Institute for Theoretical Physics of the University of Copenhagen, by the Danish theoretical physicist Niels Bohr, who had been on the staff of the University of Copenhagen since 1914, and who had been lobbying for its creation since his appointment as professor in 1916. On the 80th anniversary of Niels Bohr’s birth – October 7, 1965 – the Institute officially became The Niels Bohr Institute.[1] Much of its original funding came from the charitable foundation of the Carlsberg brewery, and later from the Rockefeller Foundation.[2]

    During the 1920s, and 1930s, the Institute was the center of the developing disciplines of atomic physics and quantum physics. Physicists from across Europe (and sometimes further abroad) often visited the Institute to confer with Bohr on new theories and discoveries. The Copenhagen interpretation of quantum mechanics is named after work done at the Institute during this time.

    On January 1, 1993 the institute was fused with the Astronomic Observatory, the Ørsted Laboratory and the Geophysical Institute. The new resulting institute retained the name Niels Bohr Institute.

     
  • richardmitnick 12:50 pm on April 28, 2018
    Tags: Quantum Physics, Thermodynamics

    From Kavli Institute for the Physics and Mathematics of the Universe: “Study Finds Way to Use Quantum Entanglement to Study Black Holes” 


    April 23, 2018

    A team of researchers has found a relationship between quantum physics, the study of very tiny phenomena, and thermodynamics, the study of very large phenomena, reports a new study published this week in Nature Communications.

    “Our function can describe a variety of systems from quantum states in electrons to, in principle, black holes,” says study author Masataka Watanabe.

    Quantum entanglement is a phenomenon fundamental to quantum mechanics, in which two separated regions share the same information. It is invaluable to a variety of applications, from serving as a resource in quantum computation to quantifying the amount of information stored in a black hole.
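
    The standard quantity behind statements like this (not defined in the article itself) is the entanglement entropy of a region A,

    \[ S_A = -\mathrm{Tr}\,(\rho_A \log \rho_A), \]

    where ρ_A is the reduced density matrix obtained by tracing out everything outside A; it is this quantity, plotted against the size of A, that appears in the figure below.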

    Quantum mechanics is known to preserve information, while thermal equilibrium seems to lose some part of it, so understanding the relationship between these microscopic and macroscopic concepts is important. A group of graduate students and a researcher at the University of Tokyo, including members of the Kavli Institute for the Physics and Mathematics of the Universe, therefore investigated the role of quantum entanglement in thermal equilibrium in an isolated quantum system.

    Figure 1: Graph showing quantum entanglement and spatial distribution. When separating matter A and B, the vertical axis shows how much quantum entanglement there is, while the horizontal axis shows the length of matter A. (Credit: Nakagawa et al.)

    “A pure quantum state stabilizing into thermal equilibrium can be compared to water being poured into a cup. In a quantum-mechanical system, the colliding water molecules create quantum entanglements, and these quantum entanglements will eventually lead a cup of water to thermal equilibrium. However, it has been a challenge to develop a theory which predicts how much quantum entanglement was inside because lots of quantum entanglements are created in complicated manners at thermal equilibrium,” says Watanabe.

    In their study, the team identified a function predicting the spatial distribution of information stored in an equilibrated system, and they revealed that it was determined by thermodynamic entropy alone. Also, by carrying out computer simulations, they found that the spatial distribution remained the same regardless of what systems were used and regardless of how they reached thermal equilibrium.

    See the full article here.

    Please help promote STEM in your local schools.


    Kavli IPMU (Kavli Institute for the Physics and Mathematics of the Universe) is an international research institute with English as its official language. The goal of the institute is to discover the fundamental laws of nature and to understand the Universe from the synergistic perspectives of mathematics, astronomy, and theoretical and experimental physics. The Institute for the Physics and Mathematics of the Universe (IPMU) was established in October 2007 under the World Premier International Research Center Initiative (WPI) of the Ministry of Education, Sports, Science and Technology in Japan with the University of Tokyo as the host institution. IPMU was designated as the first research institute within the University of Tokyo Institutes for Advanced Study (UTIAS) in January 2011. It received an endowment from The Kavli Foundation and was renamed the “Kavli Institute for the Physics and Mathematics of the Universe” in April 2012. Kavli IPMU is located on the Kashiwa campus of the University of Tokyo, and more than half of its full-time scientific members come from outside Japan. http://www.ipmu.jp/
    Stem Education Coalition
    The Kavli Foundation, based in Oxnard, California, is dedicated to the goals of advancing science for the benefit of humanity and promoting increased public understanding and support for scientists and their work.

    The Foundation’s mission is implemented through an international program of research institutes, professorships, and symposia in the fields of astrophysics, nanoscience, neuroscience, and theoretical physics as well as prizes in the fields of astrophysics, nanoscience, and neuroscience.

     
  • richardmitnick 10:19 am on April 5, 2018
    Tags: A New State of Quantum Matter Has Been Found in a Material Scientists Thought Was All Chaos, Photoemission electron microscopy, Quantum Physics, Shakti geometry, Spin ice

    From Science Alert: “A New State of Quantum Matter Has Been Found in a Material Scientists Thought Was All Chaos” 


    5 APR 2018
    MIKE MCRAE

    (Image credit: enot-poloskun/istock)

    What else is lurking in there?

    Experiments carried out on a complex arrangement of magnetic particles have identified a completely new state of matter, and it can only be explained if scientists turn to quantum physics.

    The messy structures behind the research show strange properties that could allow us to study the chaos of exotic particles – if researchers can find order in there, it could help us understand these particles in greater detail, opening up a whole new landscape for quantum technology.

    Physicists from the US carried out their research on the geometrical arrangements of particles in a weird material known as spin ice.

    Like common old water ice, the particles making up spin ice sort themselves into geometric patterns as the temperature drops.

    There are a number of compounds that can be used to build this kind of material, but they all share the same kind of quantum property – their individual magnetic ‘spin’ sets up a bias in how the particles point to one another, creating complex structures.

    So, unlike the predictable crystalline patterns in water ice, the nanoscale magnetic particles making up spin ice can look disordered and chaotic under certain conditions, flipping back and forth wildly.

    The researchers focussed on one particular structure called a Shakti geometry, and measured how its magnetic arrangements fluctuated with changes in temperature.

    States of matter are usually broken down into categories such as solid, liquid, and gas. We’re taught on a fundamental level that a material’s volume and fluidity can change with shifts in its temperature and pressure.

    But there’s another way to think of a state of matter – by considering the points at which there’s a dramatic change in the way particles arrange themselves as they gain or lose energy.

    For example, the freezing of water is one such dramatic change – a sudden restructuring that occurs as pure water is chilled below 0 degrees Celsius (32 degrees Fahrenheit), where its molecules lose the energy they need to remain free and adopt another stable configuration.

    When researchers slowly lowered the temperature on spin ice arranged in a Shakti geometry, they got it to produce a similar behaviour – one that has never been seen before in other forms of spin ice.

    Using a process called photoemission electron microscopy, the team was then able to image the changes in pattern based on how their electrons emitted light.

    They were noticing points at which a specific arrangement persisted even as the temperature continued to drop.

    “The system gets stuck in a way that it cannot rearrange itself, even though a large-scale rearrangement would allow it to fall to a lower energy state,” says senior researcher Peter Schiffer, currently at Yale University.

    Such a ‘sticking point’ is a hallmark of a state of matter, and one that wasn’t expected in the flip-flopping madness of spin ice.

    Most states of matter can be described fairly efficiently using classical models of thermodynamics, with jiggling particles overcoming binding forces as they swap heat energy.

    In this case there was no clear model describing what was balancing the changes in energy with the material’s stable arrangement.

    So the team applied a quantum touch, looking at how entanglement between particles aligned to give rise to a particular topology, or pattern within a changing space.

    “Our research shows for the first time that classical systems such as artificial spin ice can be designed to demonstrate topological ordered phases, which previously have been found only in quantum conditions,” says physicist Cristiano Nisoli from Los Alamos National Laboratory.

    Ten years ago, quasiparticles that behaved like magnetic monopoles [Nature] were observed in another type of spin ice, also pointing at a weird kind of phase transition.

    Quasiparticles are becoming a big deal in our search for new kinds of matter that behave in odd but useful ways, as they have the potential to be used in quantum computing. So having better models for understanding this quantum landscape will no doubt come in handy.

    This research was published in Nature.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 7:14 am on March 20, 2018
    Tags: Cosmological-constant problem, In 1998 astronomers discovered that the expansion of the cosmos is in fact gradually accelerating, Quantum Physics, Saul Perlmutter UC Berkeley Nobel laureate, Why Does the Universe Need to Be So Empty?, Zero-point energy of the field

    From The Atlantic Magazine and Quanta: “Why Does the Universe Need to Be So Empty?” 


    Mar 19, 2018
    Natalie Wolchover

    Physicists have long grappled with the perplexingly small weight of empty space.

    The controversial idea that our universe is just a random bubble in an endless, frothing multiverse arises logically from nature’s most innocuous-seeming feature: empty space. Specifically, the seed of the multiverse hypothesis is the inexplicably tiny amount of energy infused in empty space—energy known as the vacuum energy, dark energy, or the cosmological constant. Each cubic meter of empty space contains only enough of this energy to light a light bulb for 11 trillionths of a second. “The bone in our throat,” as the Nobel laureate Steven Weinberg once put it [http://hetdex.org/dark_energy.html], is that the vacuum ought to be at least a trillion trillion trillion trillion trillion times more energetic, because of all the matter and force fields coursing through it.
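
    That light-bulb figure checks out on the back of an envelope (my own arithmetic, not the article’s): the measured dark-energy density is roughly

    \[ \rho_\Lambda \approx 6\times10^{-10}\ \mathrm{J/m^{3}}, \]

    so the energy in one cubic meter of empty space, run through a roughly 60-watt bulb, lasts about \( t \approx 6\times10^{-10}\ \mathrm{J} / 60\ \mathrm{W} \approx 10^{-11}\ \mathrm{s} \) – on the order of ten trillionths of a second.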


    Somehow the effects of all these fields on the vacuum almost equalize, producing placid stillness. Why is empty space so empty?

    While we don’t know the answer to this question—the infamous “cosmological-constant problem”—the extreme vacuity of our vacuum appears necessary for our existence. In a universe imbued with even slightly more of this gravitationally repulsive energy, space would expand too quickly for structures like galaxies, planets, or people to form. This fine-tuned situation suggests that there might be a huge number of universes, all with different doses of vacuum energy, and that we happen to inhabit an extraordinarily low-energy universe because we couldn’t possibly find ourselves anywhere else.

    Some scientists bristle at the tautology of “anthropic reasoning” and dislike the multiverse for being untestable. Even those open to the multiverse idea would love to have alternative solutions to the cosmological constant problem to explore. But so far it has proved nearly impossible to solve without a multiverse. “The problem of dark energy [is] so thorny, so difficult, that people have not got one or two solutions,” says Raman Sundrum, a theoretical physicist at the University of Maryland.

    To understand why, consider what the vacuum energy actually is. Albert Einstein’s general theory of relativity says that matter and energy tell space-time how to curve, and space-time curvature tells matter and energy how to move. An automatic feature of the equations is that space-time can possess its own energy—the constant amount that remains when nothing else is there, which Einstein dubbed the cosmological constant. For decades, cosmologists assumed its value was exactly zero, given the universe’s reasonably steady rate of expansion, and they wondered why. But then, in 1998, astronomers discovered that the expansion of the cosmos is in fact gradually accelerating, implying the presence of a repulsive energy permeating space. Dubbed dark energy by the astronomers, it’s almost certainly equivalent to Einstein’s cosmological constant. Its presence causes the cosmos to expand ever more quickly, since, as it expands, new space forms, and the total amount of repulsive energy in the cosmos increases.

    However, the inferred density of this vacuum energy contradicts what quantum-field theory, the language of particle physics, has to say about empty space. A quantum field is empty when there are no particle excitations rippling through it. But because of the uncertainty principle in quantum physics, the state of a quantum field is never certain, so its energy can never be exactly zero. Think of a quantum field as consisting of little springs at each point in space. The springs are always wiggling, because they’re only ever within some uncertain range of their most relaxed length. They’re always a bit too compressed or stretched, and therefore always in motion, possessing energy. This is called the zero-point energy of the field. Force fields have positive zero-point energies while matter fields have negative ones, and these energies add to and subtract from the total energy of the vacuum.
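
    To make the ‘little springs’ picture concrete (a standard textbook estimate, not a calculation from the article): each field mode of frequency ω carries a zero-point energy of \( \tfrac{1}{2}\hbar\omega \), and summing these contributions over all modes up to a wavenumber cutoff k_max gives a vacuum energy density of order

    \[ \rho_{\mathrm{vac}} \sim \frac{\hbar c\, k_{\max}^{4}}{16\pi^{2}}. \]

    Taking the cutoff at the Planck scale yields something like 10^111 J/m³, versus the observed ~10⁻⁹ J/m³ – the mismatch of up to 120 orders of magnitude described next.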

    The total vacuum energy should roughly equal the largest of these contributing factors. (Say you receive a gift of $10,000; even after spending $100, or finding $3 in the couch, you’ll still have about $10,000.) Yet the observed rate of cosmic expansion indicates that its value is between 60 and 120 orders of magnitude smaller than some of the zero-point energy contributions to it, as if all the different positive and negative terms have somehow canceled out. Coming up with a physical mechanism for this equalization is extremely difficult for two main reasons.

    First, the vacuum energy’s only effect is gravitational, and so dialing it down would seem to require a gravitational mechanism. But in the universe’s first few moments, when such a mechanism might have operated, the universe was so physically small that its total vacuum energy was negligible compared to the amount of matter and radiation. The gravitational effect of the vacuum energy would have been completely dwarfed by the gravity of everything else. “This is one of the greatest difficulties in solving the cosmological-constant problem,” the physicist Raphael Bousso wrote in 2007. A gravitational feedback mechanism precisely adjusting the vacuum energy amid the conditions of the early universe, he said, “can be roughly compared to an airplane following a prescribed flight path to atomic precision, in a storm.”

    Compounding the difficulty, quantum-field theory calculations indicate that the vacuum energy would have shifted in value in response to phase changes in the cooling universe shortly after the Big Bang. This raises the question of whether the hypothetical mechanism that equalized the vacuum energy kicked in before or after these shifts took place. And how could the mechanism know how big their effects would be, to compensate for them?

    So far, these obstacles have thwarted attempts to explain the tiny weight of empty space without resorting to a multiverse lottery. But recently, some researchers have been exploring one possible avenue: If the universe did not bang into existence, but bounced instead, following an earlier contraction phase, then the contracting universe in the distant past would have been huge and dominated by vacuum energy. Perhaps some gravitational mechanism could have acted on the plentiful vacuum energy then, diluting it in a natural way over time. This idea motivated the physicists Peter Graham, David Kaplan, and Surjeet Rajendran to discover a new cosmic bounce model, though they’ve yet to show how the vacuum dilution in the contracting universe might have worked.

    In an email, Bousso called their approach “a very worthy attempt” and “an informed and honest struggle with a significant problem.” But he added that huge gaps in the model remain, and “the technical obstacles to filling in these gaps and making it work are significant. The construction is already a Rube Goldberg machine, and it will at best get even more convoluted by the time these gaps are filled.” He and other multiverse adherents see their answer as simpler by comparison.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 9:05 pm on March 19, 2018
    Tags: Quantum Physics

    From LLNL: “Breaking the Law: Lawrence Livermore, Department of Energy look to shatter Moore’s Law through quantum computing” 


    Lawrence Livermore National Laboratory

    March 19, 2018
    Jeremy Thomas
    thomas244@llnl.gov
    925-422-5539

    Lawrence Livermore National Laboratory physicist Jonathan DuBois, who heads the Lab’s Quantum Coherent Device Physics (QCDP) group, examines a prototype quantum computing device designed to solve quantum simulation problems. The device is kept inside a refrigerated vacuum tube (gold-plated to provide solid thermal matching) at temperatures colder than outer space. Photos by Carrie Martin/LLNL.

    The laws of quantum physics impact daily life in rippling undercurrents few people are aware of, from the batteries in our smartphones to the energy generated from solar panels. As the Department of Energy and its national laboratories explore the frontiers of quantum science, such as calculating the energy levels of a single atom or how molecules fit together, more powerful tools are a necessity.

    “The problem basically gets worse the larger the physical system gets — if you get beyond a simple molecule we have no way of resolving those kinds of energy differences,” said Lawrence Livermore National Laboratory (LLNL) physicist Jonathan DuBois, who heads the Lab’s Quantum Coherent Device Physics (QCDP) group. “From a physics perspective, we’re getting more and more amazing, highly controlled physics experiments, and if you tried to simulate what they were doing on a classical computer, it’s almost at the point where it would be kind of impossible.”

    In classical computing, Moore’s Law postulates that the number of transistors in an integrated circuit doubles approximately every two years. However, there are indications that Moore’s Law is slowing down and will eventually hit a wall. That’s where quantum computing comes in. Besides busting through the barriers of Moore’s Law, some are banking on quantum computing as the next evolutionary step in computers. It’s on the priority list for the National Nuclear Security Administration’s Advanced Simulation and Computing (ASC) program, which is investigating quantum computing, among other emerging technologies, through its “Beyond Moore’s Law” project. At LLNL, staff scientists DuBois and Eric Holland are leading the effort to develop a comprehensive co-design strategy for near-term application of quantum computing technology to outstanding grand challenge problems in the NNSA mission space.

    Whereas the desktop computers we’re all familiar with store information in binary form as either a 1 or a 0 (on or off), in a quantum system information can be stored in superpositions, meaning that for a brief moment, mere nanoseconds, data in a quantum bit can exist as both one and zero simultaneously before being projected into a classical binary state. Theoretically, these machines could solve certain complex problems much faster than any computers ever created before. While classical computers perform functions in serial (generating one answer at a time), quantum computers could potentially perform functions and store data in a highly parallelized way, exponentially increasing speed, performance and storage capacity.
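
    In the usual notation (standard quantum mechanics, not anything specific to LLNL’s device), a qubit state is a superposition

    \[ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1, \]

    and a measurement projects it onto the classical value 0 with probability |α|² or 1 with probability |β|².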

    LLNL recently brought on line a full capability quantum computing lab and testbed facility under the leadership of quantum coherent device group member Eric Holland. Researchers are performing tests on a prototype quantum device birthed under the Lab’s Quantum Computing Strategic Initiative. The initiative, now in its third year, is funded by Laboratory Directed Research & Development (LDRD) and aims to design, fabricate, characterize and build quantum coherent devices. The building and demonstration piece is made possible by DOE’s Advanced Scientific Computing Research (ASCR), a program managed by DOE’s Office of Science that is actively engaged in exploring if and how quantum computation could be useful for DOE applications.

    LLNL researchers are developing algorithms for solving quantum simulation problems on the prototype device, which looks deceptively simple and very strange. It’s a cylindrical metal box, with a sapphire chip suspended in it. The box is kept inside a refrigerated vacuum tube (gold-plated to provide solid thermal matching) at temperatures colder than outer space — negative 460 degrees Fahrenheit. It’s highly superconductive and faces zero resistance in the vacuum, thus extending the lifetime of the superposition state.

    “It’s a perfect electrical conductor, so if you can send an excitation inside here, you’ll get electromagnetic (EM) modes inside the box,” DuBois explained. “We’re using the space inside the box, the quantized EM fields, to store and manipulate quantum information, and the little chip couples to fields and manipulates them, determining the fine splitting in energies between different quantum states. These energy differences are what you use to make changes in quantum space.”

    To “talk” to the box, researchers are using an arbitrary waveform generator, which creates an oscillating signal – the timing of the signal determines what computation is being done in the system. DuBois said the physicists are essentially building a quantum solver for Schrödinger’s equation, the basis for almost all physics and the determining factor for the dynamics of a quantum computing system.

    “It turns out that’s actually very hard to solve, and the bigger the system is, the size of what you need to keep track of blows up exponentially,” DuBois said. “The argument here is we can build a system that does that naturally — nature is basically keeping track of all those degrees of freedom for us, and so if we can control it carefully we can get it to basically emulate the quantum dynamics of some problem we’re interested in, a charge transfer in quantum chemistry or biology problem or scattering problem in nuclear physics.”
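
    A quick way to see that exponential blow-up (my own back-of-the-envelope sketch, not LLNL code): simulating n qubits on a classical machine means storing 2^n complex amplitudes.

        # Memory needed to hold a full n-qubit state vector classically
        def statevector_bytes(n_qubits, bytes_per_amplitude=16):  # complex128
            return (2 ** n_qubits) * bytes_per_amplitude

        for n in (20, 30, 40, 50):
            gib = statevector_bytes(n) / 2**30
            print(f"{n} qubits -> {gib:,.3f} GiB")
        # 20 qubits: ~0.016 GiB; 50 qubits: ~16.8 million GiB.
        # Each added qubit doubles the cost, which is why letting nature track
        # those degrees of freedom becomes attractive.

    The roughly 20-qubit prototype mentioned below can still be cross-checked against a classical simulation; a few dozen qubits beyond that, the cross-check stops being practical.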

    Finding out how the device will work is part of the mission of DOE’s Advanced Quantum-Enabled Simulation (AQuES) Testbed Pathfinder program, which is analyzing several different approaches to creating a functional, useful quantum computer for basic science and use in areas such as determining nuclear scattering rates, the electronic structure in molecules or condensed matter or understanding the energy levels in solar panels. In 2017, DOE awarded $1.5 million over three years to a team including DuBois and Lawrence Berkeley National Laboratory physicists Irfan Siddiqi and Jonathan Carter. The team wants to determine the underlying technology for a quantum system, develop a practical, usable quantum computer and build quantum capabilities at the national labs to solve real-world problems.

    The science of quantum computing, according to DuBois, is “at a turning point.” Within the three-year timeframe, he said, the team should be able to assess what type of quantum system is worth pursuing as a testbed system. The researchers first want to demonstrate control over a quantum computer and solve specific quantum dynamics problems. Then, they want to set up a user facility or cloud-based system that any user could log into and solve complex quantum physics problems.

    “There are multiple competing approaches to quantum computing; trapping ions, semiconducting systems, etc., and all have their quirks — none of them are really at the point where it’s actually a quantum computer,” DuBois said. “The hardware side, which is what this is, the question is, ‘what are the first technologies that we can deploy that will help bridge the gap between what actually exists in the lab and how people are thinking of these systems as theoretical objects?'”

    Quantum computers have come a long way since the first superconducting quantum bit, or “qubit,” was created in 1999. In the nearly 20 years since, quantum systems have improved exponentially, evidenced by the life span of the qubit’s superposition, or how long it takes the qubit to decay into 0 or 1. In 1999 that figure was a nanosecond. Currently, systems are up to tens to hundreds of milliseconds, which may not sound like much, but every year the lifetime of the quantum bit has doubled.

    For the Testbed project, LLNL’s first generation quantum device will be roughly 20 qubits, DuBois said, large enough to be interesting, but small enough to be useful. A system of that size could potentially reduce the time it takes for most current supercomputing systems to perform quantum dynamics calculations from about a day down to mere microseconds, DuBois said. To get to that point, LLNL and LBNL physicists will need to understand how to design systems that can extend the quantum state.

    “It needs to last long enough to be quantum and it needs to be controllable,” DuBois said. “There’s a spectrum to that; the bigger the space is, the more powerful it has to be. Then there’s how controllable it would be. The finest level of control would be to change the value to anything I want. That’s what we’re aiming for, but there’s a competition involved. We want to hit that sweet spot.”

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition
    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration


     
  • richardmitnick 8:59 am on January 11, 2018
    Tags: Grasshopper Theory Might Map the Multiverse, Quantum Physics

    From Edgy Labs: “Grasshopper Theory Might Map the Multiverse” 


    January 10, 2018
    William McKinney

    (Image credit: Genty | Pixabay.com)

    New research suggests quantum theory doesn’t follow the rules of “reality”. Let’s see how hypothetical grasshoppers might lead us to the multiverse.

    Who knew that grasshoppers could help us understand quantum theory?

    Apparently, they can. At least, theoretically speaking they can. Recently, two physicists, Olga Goulko and Adrian Kent, have released a paper [Proceedings of The Royal Society A] that wrestles with something called the grasshopper problem.

    The grasshopper problem is a relatively new puzzle for the field of geometry. The problem is simple to state, but very hard to solve. However, solving it may help us understand the Bell inequalities and so mathematicians and physicists worldwide have attempted to posit an answer.

    It works like this:

    Let’s say that a grasshopper lands on a random point in a lawn, then jumps at a fixed distance in a random direction. What shape does the lawn have to be so that the grasshopper stays on the lawn after it jumps?

    Seems simple, right? In truth, it’s anything but. It sounds like something that Euclid (the Greek father of modern geometry) would have dreamed up. It’s not, though; the grasshopper problem is, surprisingly, pretty new.

    Because the problem is rather new, researchers have been looking at it through a modern lens. Instead of merely trying to solve the problem, they are getting deep into the variables. Those variables are pretty important, too, because they may help us resolve Bell’s inequalities.

    Let’s start with Goulko and Kent’s work.

    Today Grasshoppers, Tomorrow the Multiverse

    Conventional theories state that a disc-shaped lawn is optimal to solve the grasshopper problem, but Goulko and Kent know better.

    According to them, the optimal lawn shape changes depending on the distance of the jump. For distances smaller than 1/√π (the radius of a circle of area 1, or approximately 0.56), for example, a cogwheel shape is best. For larger distances, other shapes, such as a ‘three-bladed fan’ or a row of stripes, are better.
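
    A minimal Monte Carlo sketch of the setup (my own illustration; Goulko and Kent’s actual optimization over lawn shapes is far more involved) estimates how often the grasshopper stays on a unit-area, disc-shaped lawn for a given jump distance d:

        # Monte Carlo estimate of P(grasshopper stays on a unit-area disc lawn)
        import math, random

        def stay_probability_disc(d, trials=200_000):
            R = 1.0 / math.sqrt(math.pi)          # radius of a disc with area 1
            stayed = 0
            for _ in range(trials):
                # uniform random starting point inside the disc
                r = R * math.sqrt(random.random())
                a = random.uniform(0.0, 2.0 * math.pi)
                x, y = r * math.cos(a), r * math.sin(a)
                # one jump of fixed length d in a uniform random direction
                b = random.uniform(0.0, 2.0 * math.pi)
                x, y = x + d * math.cos(b), y + d * math.sin(b)
                stayed += (x * x + y * y <= R * R)
            return stayed / trials

        for d in (0.1, 0.3, 0.56, 1.0):
            print(f"jump {d:.2f}: P(stay on disc) ~ {stay_probability_disc(d):.3f}")
        # Goulko and Kent's result is that the disc is NOT the best lawn: below
        # d ~ 0.56 a cogwheel-shaped lawn retains the grasshopper more often.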


    Oh, and it makes a difference if the surface of the lawn is flat or spherical, but I’ll get back to that.

    Sometimes the pieces of the lawn are connected, sometimes they are not. It all depends on variables, which is where Bell’s inequalities come in.

    One of the open problems regarding the Bell inequalities is determining what the optimal bounds are. These bounds are violated by quantum theory when quantum correlations get measured on a sphere at any angle between 0 and 90 degrees.

    As it turns out, that problem is pretty much equal to the problem of determining the shape of the lawn when it is spherical rather than flat. Goulko and Kent only analyzed the flat version in their paper, though they don’t think it’s a stretch to apply their method to the spherical case.

    The interesting part is that, when accounting for additional constraints, it might be possible to finally resolve the problem of optimal bounds for the Bell inequalities.

    Why is that so interesting? Well, if we can understand the optimal bounds for the Bell inequalities, we may be able to map out universes that we can’t see. How’s that for a final frontier?

    Exploring the Possibilities of the Multiverse

    Adding to that, we have theories about pocket universes, alternate dimensions, and the Upside Down. Okay, that last one was from Stranger Things, but just try and prove to me that the Upside Down isn’t there.

    One thing you always run into with multiverse theory, though, is quantum entanglement. See, many would believe that other universes and other dimensions are the same thing, but they aren’t. They are entirely different realities, but they could be linked through the quantum fabric of reality-at-large.

    For now, though, we can only speculate. Studying one universe is like an ant studying a deity. Studying the multiverse is going to be a much harder nut to crack. That said, we currently don’t have any better leads on it than quantum theory.

    And the latest leap in quantum theory is coming from a hypothetical grasshopper. Don’t you just love quantum physics?

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 1:49 pm on July 15, 2017
    Tags: An advanced atomic cloud locked up in a small glass cage, Laser light to link caesium atoms and a vibrating membrane, Light 'kicks' object, QBA-Quantum Back Action, Quantum Physics, Smart atomic cloud solves Heisenberg's observation problem, U Copenhagen

    From U Copenhagen Niels Bohr Institute: “Smart atomic cloud solves Heisenberg’s observation problem” 

    University of Copenhagen


    Niels Bohr Institute

    13 July 2017
    Eugene Polzik
    polzik@nbi.dk
    +45 2338 2045

    Quantum physics: Scientists at the Niels Bohr Institute, University of Copenhagen, have been instrumental in developing a ‘hands-on’ answer to a challenge intricately linked to a very fundamental principle in physics: Heisenberg’s Uncertainty Principle. The NBI researchers used laser light to link caesium atoms and a vibrating membrane. The research, the first of its kind, points to sensors capable of measuring movement with unprecedented precision.

    From the left: PhD student Rodrigo Thomas, Professor Eugene Polzik and PhD student Christoffer Møller in front of the experiment demonstrating quantum measurement of motion. Photo: Ola J. Joensen.

    Our lives are packed with sensors gathering all sorts of information. Some of these sensors are integrated in our cell phones, which, for example, lets us measure the distance we cover when we go for a walk – and thereby also calculate how many calories we have burned in the process. To most people this seems rather straightforward.

    When measuring atomic structures or light emissions at the quantum level by means of advanced microscopes or other special equipment, things do, however, get a little more complicated, due to a problem that during the 1920s had the full attention of Niels Bohr as well as Werner Heisenberg. This problem – the fact that inaccuracies inevitably taint certain measurements conducted at the quantum level – is described in Heisenberg’s Uncertainty Principle.

    In a scientific report published in this week’s issue of Nature, NBI-researchers – based on a number of experiments – demonstrate that Heisenberg’s Uncertainty Principle to some degree can be neutralized. This has never been shown before, and the results may spark development of new measuring equipment as well as new and better sensors.

    Professor Eugene Polzik, head of Quantum Optics (QUANTOP) at the Niels Bohr Institute, has been in charge of the research – which has included the construction of a vibrating membrane and an advanced atomic cloud locked up in a small glass cage.

    If laser light used to measure motion of a vibrating membrane (left) is first transmitted through an atom cloud (center), the measurement sensitivity can be better than standard quantum limits envisioned by Bohr and Heisenberg. Photo: Bastian Leonhardt Strube and Mads Vadsholt.

    Light ‘kicks’ object

    Heisenberg’s Uncertainty Principle basically says that you cannot simultaneously know the exact position and the exact speed of an object.
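
    In its textbook form (not written out in the article), the principle bounds the product of the uncertainties in an object’s position and momentum:

    \[ \Delta x\, \Delta p \geq \frac{\hbar}{2}, \]

    so sharpening one of the two necessarily blurs the other.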

    This has to do with the fact that observations conducted via a microscope operating with laser light inevitably lead to the object being ‘kicked’. It happens because light is a stream of photons, which, when reflected off the object, give it random ‘kicks’ – and as a result of those kicks the object begins to move in a random way.

    This phenomenon is known as Quantum Back Action (QBA) – and these random movements put a limit to the accuracy with which measurements can be carried out at quantum level.

    To conduct the experiments at NBI professor Polzik and his team of “young, enthusiastic and very skilled NBI-researchers” used a ‘tailor-made’ membrane as the object observed at quantum level. The membrane was built by Ph.D. Students Christoffer Møller and Yegishe Tsaturyan, whereas Rodrigo Thomas and Georgios Vasikalis – Ph.D. Student and researcher, respectively – were in charge of the atomic aspects. Furthermore Polzik relied on other NBI-employees, assistant professor Mikhail Balabas, who built the minute glass cage for the atoms, researcher Emil Zeuthen and professor Albert Schliesser who – collaborating with German colleagues – were in charge of the substantial number of mathematical calculations needed before the project was ready for publication in Nature.

    The atomic part of the hybrid experiment. The atoms are contained in a micro-cell inside the magnetic shield seen in the middle. Photo: Ola J. Joensen.

    Over the last decades scientists have tried to find ways of ‘fooling’ Heisenberg’s Uncertainty Principle. Eugene Polzik and his colleagues came up with the idea of implementing the advanced atomic cloud a few years ago – and the cloud consists of 100 million caesium-atoms locked up in a hermetically closed cage, a glass cell, explains the professor:

    “The cell is just 1 centimeter long, 1/3 of a millimeter high and 1/3 of a millimeter wide, and in order to make the atoms work as intended, the inner cell walls have been coated with paraffin. The membrane – whose movements we were following at quantum level – measures 0.5 millimeter, which actually is a considerable size in a quantum perspective”.

    The idea behind the glass cell is to deliberately send the laser light used to study the membrane-movements on quantum level through the encapsulated atomic cloud BEFORE the light reaches the membrane, explains Eugene Polzik: “This results in the laser light-photons ‘kicking’ the object – i.e. the membrane – as well as the atomic cloud, and these ‘kicks’ so to speak cancel out. This means that there is no longer any Quantum Back Action – and therefore no limitations as to how accurately measurements can be carried out at quantum level”.

    4
    The optomechanical part of the hybrid experiment. The cryostat seen in the middle houses the vibrating membrane whose quantum motion is measured. Photo: Ola J. Joensen.

    How can this be utilized?

    “For instance, when developing new and much more advanced types of motion sensors than the ones we know today from cell phones, GPS and geological surveys”, says professor Eugene Polzik: “Generally speaking, sensors operating at the quantum level are receiving a lot of attention these days. One example is the Quantum Technologies Flagship, an extensive EU program which also supports this type of research”.

    The fact that it is indeed possible to ‘fool’ Heisenberg’s Uncertainty Principle may also prove significant in relation to better understanding gravitational waves – ripples in the fabric of space-time itself.

    In September of 2015 the American LIGO experiment recorded the first direct detection and measurement of gravitational waves, stemming from a collision between two very large black holes; the result was announced and published in February 2016.

    However, the equipment used by LIGO is influenced by Quantum Back Action, and the new research from NBI may prove capable of eliminating that problem, says Eugene Polzik.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Niels Bohr Institute Campus

    The Niels Bohr Institute (Danish: Niels Bohr Institutet) is a research institute of the University of Copenhagen. The research of the institute spans astronomy, geophysics, nanotechnology, particle physics, quantum mechanics and biophysics.

    The Institute was founded in 1921, as the Institute for Theoretical Physics of the University of Copenhagen, by the Danish theoretical physicist Niels Bohr, who had been on the staff of the University of Copenhagen since 1914, and who had been lobbying for its creation since his appointment as professor in 1916. On the 80th anniversary of Niels Bohr’s birth – October 7, 1965 – the Institute officially became The Niels Bohr Institute.[1] Much of its original funding came from the charitable foundation of the Carlsberg brewery, and later from the Rockefeller Foundation.[2]

    During the 1920s and 1930s, the Institute was the center of the developing disciplines of atomic physics and quantum physics. Physicists from across Europe (and sometimes further abroad) often visited the Institute to confer with Bohr on new theories and discoveries. The Copenhagen interpretation of quantum mechanics is named after work done at the Institute during this time.

    On January 1, 1993 the institute was merged with the Astronomical Observatory, the Ørsted Laboratory and the Geophysical Institute. The resulting institute retained the name Niels Bohr Institute.

    The University of Copenhagen (UCPH) (Danish: Københavns Universitet) is the oldest university and research institution in Denmark. Founded in 1479 as a studium generale, it is the second oldest institution for higher education in Scandinavia after Uppsala University (1477). The university has 23,473 undergraduate students, 17,398 postgraduate students, 2,968 doctoral students and over 9,000 employees. The university has four campuses located in and around Copenhagen, with the headquarters located in central Copenhagen. Most courses are taught in Danish; however, many courses are also offered in English and a few in German. The university has several thousand foreign students, about half of whom come from Nordic countries.

    The university is a member of the International Alliance of Research Universities (IARU), along with University of Cambridge, Yale University, The Australian National University, and UC Berkeley, amongst others. The 2016 Academic Ranking of World Universities ranks the University of Copenhagen as the best university in Scandinavia and 30th in the world, the 2016-2017 Times Higher Education World University Rankings as 120th in the world, and the 2016-2017 QS World University Rankings as 68th in the world. The university has had 9 alumni become Nobel laureates and has produced one Turing Award recipient.

     
  • richardmitnick 12:16 pm on February 11, 2017 Permalink | Reply
    Tags: , Beamsplitter, , Quantum Physics, , What shape are photons? Quantum holography sheds light   

    From COSMOS: “What shape are photons? Quantum holography sheds light” 

    Cosmos Magazine bloc

    COSMOS

    20 July 2016 [Just found this in social media]
    Cathal O’Connell

    1
    Hologram of a single photon reconstructed from raw measurements (left) and theoretically predicted (right).
    FUW

    Imagine a shaft of yellow sunlight beaming through a window. Quantum physics tells us that beam is made of zillions of tiny packets of light, called photons, streaming through the air. But what does an individual photon “look” like? Does it have a shape? Are these questions even meaningful?

    Now, Polish physicists have created the first ever hologram of a single light particle. The feat, achieved by observing the interference of two intersecting light beams, is an important insight into the fundamental quantum nature of light.

    The result could also be important for technologies that require an understanding of the shape of single photons – such as quantum communication and quantum computers.

    ”We performed a relatively simple experiment to measure and view something incredibly difficult to observe: the shape of wavefronts of a single photon,” says Radoslaw Chrapkiewicz, a physicist at the University of Warsaw and lead author of the new paper, published in Nature Photonics.

    For hundreds of years, physicists have been working to figure out what light is made of. In the 19th century, the debate seemed to be settled by Scottish physicist James Clerk Maxwell’s description of light as a wave of electromagnetism.

    But things got a bit more complicated at the turn of the 20th century when German physicist Max Planck, then fellow countryman Albert Einstein, showed light was made up of tiny indivisible packets called photons.

    In the 1920s, Austrian physicist Erwin Schrödinger elaborated on these ideas with his equation for the quantum wave function, which describes what such a wave looks like and has proved incredibly powerful in predicting the results of experiments with photons. But, despite the success of Schrödinger’s theory, physicists still debate what the wave function really means.
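
    For reference, the equation in question – written here in its standard textbook form rather than as it appears in the Nature Photonics paper – relates how the wave function ψ changes in time to the system’s energy operator:

    \[
    i\hbar \,\frac{\partial \psi}{\partial t} \;=\; \hat H \,\psi
    \]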

    Now physicists at the University of Warsaw have measured, for the first time in a real experiment, the shape described by Schrödinger’s equation.

    Photons, travelling as waves, can be in step (having the same phase). When in-phase waves overlap, they reinforce each other and produce a bright signal. If they’re out of phase, they cancel each other out. It’s like sound waves from two speakers producing loud and quiet patches in a room.
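
    The loud-and-quiet analogy corresponds to the standard two-wave interference relation (quoted purely for illustration; it is not a formula from the paper): the combined intensity depends on the phase difference Δφ, peaking when the waves are in step and, for equal intensities, vanishing when they are exactly out of step.

    \[
    I \;=\; I_1 + I_2 + 2\sqrt{I_1 I_2}\,\cos\Delta\varphi
    \]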

    The image – which is called a hologram because it holds information on both the photon’s shape and phase – was created by firing two light beams at a beamsplitter, made of calcite crystal, at the same time.

    The beamsplitter acts a bit like a traffic intersection, where each photon can either pass straight on through or make a turn. The Polish team’s experiment hinged on measuring which path each photon took, which depends on the shape of their wave functions.

    2
    Scheme of the experimental setup for measuring holograms of single photons. FUW / dualcolor.pl / jch

    For a photon on its own, each path is equally probable. But when two photons approach the intersection, they interact – and these odds change.
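
    A textbook way of quantifying how these odds change – a sketch of the standard two-photon interference result for a 50:50 beamsplitter, not the exact position-resolved quantity analysed in the Warsaw experiment – is that the probability of the two photons leaving by different exits depends on how similar their wave functions ψ and φ are:

    \[
    P_{\text{different exits}} \;=\; \tfrac{1}{2}\Bigl(1 - \bigl|\langle \psi | \phi \rangle\bigr|^{2}\Bigr)
    \]

    Identical photons always leave together (the Hong–Ou–Mandel effect), while photons with completely dissimilar wave functions behave independently.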

    The team realised that if they knew the wave function of one of the photons, they could figure out the shape of the second from the positions of flashes appearing on a detector.

    It’s a little like firing two bullets so that they glance off one another mid-air, and then using the deflected trajectories to figure out the shape of each projectile.

    Each run of the experiment produced two flashes on a detector, one for each photon. After more than 2,000 repetitions, a pattern of flashes built up and the team were able to reconstruct the shape of the unknown photon’s wave function.
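
    As a purely illustrative sketch of that accumulation step – hypothetical code, not the Warsaw team’s analysis software – each run contributes one pair of flash positions, and binning the pairs into a two-dimensional histogram over a couple of thousand runs yields the raw hologram:

```python
import numpy as np

def accumulate_hologram(flash_pairs, bins=64, extent=(-1.0, 1.0)):
    """Bin (x1, x2) detector flash positions, one pair per run, into a 2D histogram."""
    edges = np.linspace(extent[0], extent[1], bins + 1)
    hist = np.zeros((bins, bins))
    for x1, x2 in flash_pairs:
        i = np.searchsorted(edges, x1) - 1   # bin index of the first flash
        j = np.searchsorted(edges, x2) - 1   # bin index of the second flash
        if 0 <= i < bins and 0 <= j < bins:  # discard flashes outside the detector window
            hist[i, j] += 1
    return hist

# Toy data standing in for roughly 2,000 experimental runs (a made-up distribution).
rng = np.random.default_rng(seed=0)
pairs = rng.normal(loc=0.0, scale=0.4, size=(2000, 2))
hologram = accumulate_hologram(pairs)
print(hologram.shape, int(hologram.sum()))
```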

    The resulting image looks a bit like a Maltese cross, just like the wave function predicted from Schrödinger’s equation. In the arms of the cross, where the photons were in step, the image is bright – and where they weren’t, we see darkness.

    The experiment brings us “a step closer to understanding what the wave function really is,” says Michal Jachura, who co-authored the work, and it could be a new tool for studying the interaction between two photons, on which technologies such as quantum communication and some versions of quantum computing rely.

    The researchers also hope to recreate wave functions of more complex quantum objects, such as atoms.

    “It’s likely that real applications of quantum holography won’t appear for a few decades yet,” says Konrad Banaszek, who was also part of the team, “but if there’s one thing we can be sure of it’s that they will be surprising.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     