Tagged: Quantum Mechanics

  • richardmitnick 11:16 am on December 23, 2018
    Tags: Hacking into a quantum mechanics message would cause it to self-destruct, Quantum Mechanics, Russian GLONASS satellites, Space Geodesy Centre on the ground run by the Italian Space Agency, Study Confirms: Global Quantum Internet Really Is Possible, The key to the successful data exchange here was the use of passive retro-reflectors mounted on the satellites to keep the long-distance light signals intact, Using quantum information protocols as quantum key distribution (QKD)

    From Science Alert: “Study Confirms: Global Quantum Internet Really Is Possible” 


    From Science Alert

    23 DEC 2018


    Quantum internet promises ultra-secure, next-generation communications, but is it actually feasible on a global scale?

    Absolutely, according to a new experiment carried out between satellites in orbit and a station on the ground.

    The team of scientists was able to exchange several carefully managed photons in pulses of infrared light, carried between Russian GLONASS satellites and the Space Geodesy Centre on the ground run by the Italian Space Agency.

    GLONASS-M Russian satellite depiction

    The Matera Space Geodesy Centre- Italian Space Agency

    Getting these signals to pass through some 20,000 kilometres (12,427 miles) of air and space without any interference or data loss is no easy task – but the signs are promising that such a global network could indeed be functional.

    “Space quantum communications (QC) represent a promising way to guarantee unconditional security for satellite-to-ground and inter-satellite optical links, by using quantum information protocols as quantum key distribution (QKD),” says one of the researchers, Giuseppe Vallone from the University of Padova in Italy.

    The quantum key distribution or QKD method Vallone mentions refers to data encrypted using the power of quantum mechanics: thanks to the delicate nature of the technology, any interference is quickly detected, making QKD communications impossible to intercept.

    In fact, hacking into a quantum mechanics message would cause it to self-destruct.
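The standard illustration of why interception is detectable is the BB84 protocol, the canonical QKD scheme. Below is a minimal toy simulation, not a model of the satellite experiment described here; the function names and the simplified photon model are my own. An eavesdropper who measures each photon in a randomly guessed basis and resends it introduces an unmistakable error rate of about 25% into the bits where sender and receiver used the same basis.

```python
import random

def measure(bit, prep_basis, meas_basis, rng):
    # Same basis: deterministic outcome; mismatched basis: 50/50 coin flip.
    return bit if prep_basis == meas_basis else rng.randint(0, 1)

def bb84_error_rate(n_photons, eavesdrop, seed=0):
    """Fraction of errors Alice and Bob see in their sifted (same-basis) bits."""
    rng = random.Random(seed)
    errors = sifted = 0
    for _ in range(n_photons):
        a_bit, a_basis = rng.randint(0, 1), rng.randint(0, 1)
        bit, basis = a_bit, a_basis
        if eavesdrop:
            e_basis = rng.randint(0, 1)              # Eve guesses a basis,
            bit = measure(bit, basis, e_basis, rng)  # measures the photon,
            basis = e_basis                          # and resends what she saw
        b_basis = rng.randint(0, 1)
        b_bit = measure(bit, basis, b_basis, rng)
        if a_basis == b_basis:                       # keep only matching-basis rounds
            sifted += 1
            errors += int(b_bit != a_bit)
    return errors / sifted

print(bb84_error_rate(20000, eavesdrop=False))  # 0.0: channel untouched
print(bb84_error_rate(20000, eavesdrop=True))   # ~0.25: Eve reveals herself
```

The quantum rules do the security work: because Eve cannot know the preparation basis, her measurement irreversibly disturbs roughly half the photons she guesses wrong on.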

    So far so good in theory, but keeping these secure channels open across long distances has proved tricky.

    The key to the successful data exchange here was the use of passive retro-reflectors mounted on the satellites to keep the long-distance light signals intact, breaking the previous record distance for this type of quantum communication by an extra 15,000 kilometres (9,321 miles).

    While satellites placed higher in orbit, like the GLONASS ones, are more difficult to communicate reliably with, they pass within sight of ground stations more regularly, potentially enabling an unhackable quantum network that can span the globe.

    We’re only just getting started with this type of technology – not least because scientists are still trying to figure out if it can actually work – and for the moment it’s not clear exactly what a quantum internet would be used for or how it might be operated.

    One idea is that it might become a specialised, very secure extension to the normal internet, used by a small selection of apps and devices.

    What we do now know is that quantum communications are possible between the ground and high orbit satellites, extending the potential reach of the new technology.

    That’s important as the satellite networks we rely on continue to get developed and upgraded.

    “Satellite-based technologies enable a wide range of civil, scientific and military applications like communications, navigation and timing, remote sensing, meteorology, reconnaissance, search and rescue, space exploration and astronomy,” says Vallone.

    “The core of these systems is to safely transmit information and data from orbiting satellites to ground stations on Earth. Protection of these channels from a malicious adversary is therefore crucial for both military and civilian operations.”

    The research has been published in Quantum Science and Technology.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

  • richardmitnick 12:56 pm on October 15, 2018
    Tags: Quantum Mechanics

    From The Guardian: “Black holes and soft hair: why Stephen Hawking’s final work is important” 


    From The Guardian

    Malcolm Perry, who worked with Hawking on his final paper, explains how it improves our understanding of one of the universe’s enduring mysteries.

    10 Oct 2018
    Ian Sample

    Black Hole Entropy and Soft Hair was completed in the days before the physicist’s death in March.

    An artist’s impression of a star being torn apart by a black hole. Photograph: NASA’s Goddard Space Flight Center.

    Stephen Hawking by Jason Bye/REX/Shutterstock

    The information paradox is perhaps the most puzzling problem in fundamental theoretical physics today. It was discovered by Stephen Hawking 43 years ago and has puzzled physicists ever since.

    Starting in 2015, Stephen, Andrew Strominger and I began to wonder if we could understand a way out of this difficulty by questioning the basic assumptions that underlie the difficulties. We published our first paper on the subject in 2016 and have been working hard on this problem ever since.

    The most recent work, and perhaps the last paper that Stephen was involved in, has just come out. While we have not solved the information paradox, we hope that we have paved the way, and we are continuing our intensive work in this area.

    Physics is really about being able to predict the future given how things are now. For example, if you throw a ball, once you know its initial position and velocity, then you can figure out where it will be in the future. That kind of reasoning is fine for what we call classical physics but for small things, like atoms and electrons, the rules need some modifications, as described by quantum mechanics. In quantum mechanics, instead of describing precise outcomes, one finds that one can only calculate the probabilities for various things to happen. In the case of a ball being thrown, one would not know its precise trajectory, but only the probability that it would be in some particular place given its initial conditions.
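The classical half of that contrast is easy to make concrete. A minimal sketch, illustrative only: given a thrown ball's initial position and velocity, Newtonian mechanics returns one definite future position, with no probabilities involved.

```python
def ball_position(x0, y0, vx, vy, t, g=9.81):
    """Classical prediction: the initial state fully determines the future state."""
    return (x0 + vx * t, y0 + vy * t - 0.5 * g * t**2)

# Thrown horizontally at 10 m/s: one second later the ball will be exactly here.
x, y = ball_position(0.0, 0.0, 10.0, 0.0, t=1.0)
print(x, y)  # 10.0 -4.905
```

Quantum mechanics replaces that single answer with a probability distribution over outcomes; there is no quantum analogue of this function that returns one certain position.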

    What Hawking discovered was that in black hole physics, there seemed to be even greater uncertainty than in quantum mechanics. However, this kind of uncertainty seemed to be completely unacceptable in that it resulted in many of the laws of physics appearing to break down. It would deprive us of the ability to predict anything about the future of a black hole.

    That might not have mattered – except that black holes are real physical objects. There are huge black holes at the centres of many galaxies. We know this because observations of the centre of our galaxy show that there is a compact object with a mass of a few million times that of our sun there; such a huge concentration of mass could only be a black hole. Quasars, extremely luminous objects at the centres of very distant galaxies, are powered by matter falling onto black holes. The observatory Ligo has recently discovered ripples in spacetime, gravitational waves, produced by the collision of black holes.

    The root of the problem is that it was once thought that black holes were completely described by their mass and their spin. If you threw something into a black hole, once it was inside you would be unable to tell what it was that was thrown in.

    These ideas were encapsulated in the phrase “a black hole has no hair”. We can often tell people apart by looking at their hair, but black holes seemed to be completely bald. Back in 1974, Stephen discovered that black holes, rather than being perfect absorbers, behave more like what we call “black bodies”. A black body is characterised by a temperature, and all bodies with a temperature produce thermal radiation.

    If you go to a doctor, it is quite likely your temperature will be measured by having a device pointed at you. This is an infrared sensor and it measures your temperature by detecting the thermal radiation you produce. A piece of metal heated up in a fire will glow because it produces thermal radiation.

    Black holes are no different. They have a temperature and produce thermal radiation. The formula for this temperature, universally known as the Hawking temperature, is inscribed on the memorial to Stephen’s life in Westminster Abbey. Any object that has a temperature also has an entropy. The entropy is a measure of how many different ways an object could be made from its microscopic ingredients and still look the same. So, for a particular piece of red hot metal, it would be the number of ways the atoms that make it up could be arranged so as to look like the lump of metal you were observing. Stephen’s formula for the temperature of a black hole allowed him to find the entropy of a black hole.
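The formulas behind this paragraph are standard textbook physics rather than anything specific to the new paper. As a numerical sketch (SI constants assumed; numbers are for illustration), the Hawking temperature T = ħc³ / (8πGMk_B) and the Bekenstein-Hawking entropy S = k_B·A·c³ / (4Għ) can be evaluated for a solar-mass black hole:

```python
import math

# SI constants (assumed standard values for this illustration)
hbar, c, G, kB = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23
M_sun = 1.989e30  # kg

def hawking_temperature(M):
    """T = hbar c^3 / (8 pi G M kB): bigger black holes are *colder*."""
    return hbar * c**3 / (8 * math.pi * G * M * kB)

def bh_entropy(M):
    """Bekenstein-Hawking entropy S = kB A c^3 / (4 G hbar), A = horizon area."""
    A = 16 * math.pi * (G * M / c**2) ** 2   # Schwarzschild horizon area
    return kB * A * c**3 / (4 * G * hbar)

T = hawking_temperature(M_sun)       # ~6e-8 K, far colder than the cosmic background
S_over_kB = bh_entropy(M_sun) / kB   # ~1e77: an enormous count of microstates
```

That entropy, vastly larger than the entropy of the star that collapsed, is exactly what makes the question "how many ways can a bald black hole be made?" so pressing.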

    The problem then was: how did this entropy arise? Since all black holes appear to be the same, the origin of the entropy was at the centre of the information paradox.

    What we have done recently is to discover a gap in the mathematics that led to the idea that black holes are totally bald. In 2016, Stephen, Andy and I found that black holes have an infinite collection of what we call “soft hair”. This discovery allows us to question the idea that black holes lead to a breakdown in the laws of physics.

    Stephen kept working with us up to the end of his life, and we have now published a paper that describes our current thoughts on the matter. In this paper, we describe a way of calculating the entropy of black holes. The entropy is basically a quantitative measure of what one knows about a black hole apart from its mass or spin.

    While this is not a resolution of the information paradox, we believe it provides some considerable insight into it. Further work is needed but we feel greatly encouraged to continue our research in this area. The information paradox is intimately tied up with our quest to find a theory of gravity that is compatible with quantum mechanics.

    Einstein’s general theory of relativity is extremely successful at describing spacetime and gravitation on large scales, but to see how the world works on small scales requires quantum theory. There are spectacularly successful theories of the non-gravitational forces of nature as explained by the “standard model” of particle physics. Such theories have been exhaustively tested and the recent discovery of the Higgs particle at Cern by the Large Hadron Collider is a marvellous confirmation of these ideas.

    Yet the incorporation of gravitation into this picture is still something that eludes us. As well as his work on black holes, Stephen was pursuing ideas that he hoped would lead to a unification of gravitation with the other forces of nature in a way that would unite Einstein’s ideas with those of quantum theory. Our work on black holes does indeed shed light on this other puzzle. Sadly, Stephen is no longer with us to share our excitement about the possibility of resolving these issues, which have now been around for half a century.

    The origins of the puzzle can be traced back to Albert Einstein. In 1915, Einstein published his theory of general relativity, a tour-de-force that described how gravity arises from the spacetime-bending effects of matter, and so why the planets circle the sun. But Einstein’s theory made important predictions about black holes too, notably that a black hole can be completely defined by only three features: its mass, charge, and spin.

    Nearly 60 years later, Hawking added to the picture. He argued that black holes also have a temperature. And because hot objects lose heat into space, the ultimate fate of a black hole is to evaporate out of existence. But this throws up a problem. The rules of the quantum world demand that information is never lost. So what happens to all the information contained in an object – the nature of a moon’s atoms, for instance – when it tumbles into a black hole?

    “The difficulty is that if you throw something into a black hole it looks like it disappears,” said Perry. “How could the information in that object ever be recovered if the black hole then disappears itself?”

    In the latest paper, Hawking and his colleagues show how some information at least may be preserved. Toss an object into a black hole and the black hole’s temperature ought to change. So too will a property called entropy, a measure of an object’s internal disorder, which rises the hotter it gets.

    The physicists, including Sasha Haco at Cambridge and Andrew Strominger at Harvard, show that a black hole’s entropy may be recorded by photons that surround the black hole’s event horizon, the point at which light cannot escape the intense gravitational pull. They call this sheen of photons “soft hair”.

    “What this paper does is show that ‘soft hair’ can account for the entropy,” said Perry. “It’s telling you that soft hair really is doing the right stuff.”

    It is not the end of the information paradox though. “We don’t know that Hawking entropy accounts for everything you could possibly throw at a black hole, so this is really a step along the way,” said Perry. “We think it’s a pretty good step, but there is a lot more work to be done.”

    Days before Hawking died, Perry was at Harvard working on the paper with Strominger. He was not aware how ill Hawking was and called to give the physicist an update. It may have been the last scientific exchange Hawking had. “It was very difficult for Stephen to communicate and I was put on a loudspeaker to explain where we had got to. When I explained it, he simply produced an enormous smile. I told him we’d got somewhere. He knew the final result.”

    Among the unknowns that Perry and his colleagues must now explore are how information associated with entropy is physically stored in soft hair and how that information comes out of a black hole when it evaporates.

    “If I throw something in, is all of the information about what it is stored on the black hole’s horizon?” said Perry. “That is what is required to solve the information paradox. If it’s only half of it, or 99%, that is not enough, you have not solved the information paradox problem.

    “It’s a step on the way, but it is definitely not the entire answer. We have slightly fewer puzzles than we had before, but there are definitely some perplexing issues left.”

    Marika Taylor, professor of theoretical physics at Southampton University and a former student of Hawking’s, said: “Understanding the microscopic origin of this entropy – what are the underlying quantum states that the entropy counts? – has been one of the great challenges of the last 40 years.

    “This paper proposes a way to understand entropy for astrophysical black holes based on symmetries of the event horizon. The authors have to make several non-trivial assumptions so the next steps will be to show that these assumptions are valid.”

    Juan Maldacena, a theoretical physicist at the Institute for Advanced Study in Princeton, where Einstein spent his final decades, said: “Hawking found that black holes have a temperature. For ordinary objects we understand temperature as due to the motion of the microscopic constituents of the system. For example, the temperature of air is due to the motion of the molecules: the faster they move, the hotter it is.

    “For black holes, it is unclear what those constituents are, and whether they can be associated to the horizon of a black hole. In some physical systems that have special symmetries, the thermal properties can be calculated in terms of these symmetries. This paper shows that near the black hole horizon we have one of these special symmetries.”

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

  • richardmitnick 9:06 am on October 11, 2018
    Tags: Quantum Mechanics, Turbulence unsolved, Werner Heisenberg

    From ars technica: “Turbulence, the oldest unsolved problem in physics” 

    From ars technica

    Lee Phillips

    The flow of water through a pipe is still in many ways an unsolved problem.

    Werner Heisenberg won the 1932 Nobel Prize for helping to found the field of quantum mechanics and developing foundational ideas like the Copenhagen interpretation and the uncertainty principle. The story goes that he once said that, if he were allowed to ask God two questions, they would be, “Why quantum mechanics? And why turbulence?” Supposedly, he was pretty sure God would be able to answer the first question.

    Werner Heisenberg from German Federal Archives

    The quote may be apocryphal, and there are different versions floating around. Nevertheless, it is true that Heisenberg banged his head against the turbulence problem for several years.

    His thesis advisor, Arnold Sommerfeld, assigned the turbulence problem to Heisenberg simply because he thought none of his other students were up to the challenge—and this list of students included future luminaries like Wolfgang Pauli and Hans Bethe. But Heisenberg’s formidable math skills, which allowed him to make bold strides in quantum mechanics, only afforded him a partial and limited success with turbulence.

    Nearly 90 years later, the effort to understand and predict turbulence remains of immense practical importance. Turbulence factors into the design of much of our technology, from airplanes to pipelines, and it factors into predicting important natural phenomena such as the weather. But because our understanding of turbulence over time has stayed largely ad hoc and limited, the development of technology that interacts significantly with fluid flows has long been forced to be conservative and incremental. If only we became masters of this ubiquitous phenomenon of nature, these technologies might be free to evolve in more imaginative directions.

    An undefined definition

    Here is the point at which you might expect us to explain turbulence, ostensibly the subject of the article. Unfortunately, physicists still don’t agree on how to define it. It’s not quite as bad as “I know it when I see it,” but it’s not the best defined idea in physics, either.

    So for now, we’ll make do with a general notion and try to make it a bit more precise later on. The general idea is that turbulence involves the complex, chaotic motion of a fluid. A “fluid” in physics talk is anything that flows, including liquids, gases, and sometimes even granular materials like sand.

    Turbulence is all around us, yet it’s usually invisible. Simply wave your hand in front of your face, and you have created incalculably complex motions in the air, even if you can’t see it. Motions of fluids are usually hidden to the senses except at the interface between fluids that have different optical properties. For example, you can see the swirls and eddies on the surface of a flowing creek but not the patterns of motion beneath the surface. The history of progress in fluid dynamics is closely tied to the history of experimental techniques for visualizing flows. But long before the advent of the modern technologies of flow sensors and high-speed video, there were those who were fascinated by the variety and richness of complex flow patterns.

    One of the first to visualize these flows was scientist, artist, and engineer Leonardo da Vinci, who combined keen observational skills with unparalleled artistic talent to catalog turbulent flow phenomena. Back in 1509, Leonardo was not merely drawing pictures. He was attempting to capture the essence of nature through systematic observation and description. In this figure, we see one of his studies of wake turbulence, the development of a region of chaotic flow as water streams past an obstacle.

    For turbulence to be considered a solved problem in physics, we would need to be able to demonstrate that we can start with the basic equation describing fluid motion and then solve it to predict, in detail, how a fluid will move under any particular set of conditions. That we cannot do this in general is the central reason that many physicists consider turbulence to be an unsolved problem.

    I say “many” because some think it should be considered solved, at least in principle. Their argument is that calculating turbulent flows is just an application of Newton’s laws of motion, albeit a very complicated one; we already know Newton’s laws, so everything else is just detail. Naturally, I hold the opposite view: the proof is in the pudding, and this particular pudding has not yet come out right.

    The lack of a complete and satisfying theory of turbulence based on classical physics has even led to suggestions that a full account requires some quantum mechanical ingredients: that’s a minority view, but one that can’t be discounted.

    An example of why turbulence is said to be an unsolved problem is that we can’t generally predict the speed at which an orderly, non-turbulent (“laminar”) flow will make the transition to a turbulent flow. We can do pretty well in some special cases—this was one of the problems that Heisenberg had some success with—but, in general, our rules of thumb for predicting the transition speeds are summaries of experiments and engineering experience.

    There are many phenomena in nature that illustrate the often sudden transformation from a calm, orderly flow to a turbulent flow.

    The transition to turbulence. Credit: Dr. Gary Settles

    This figure above is a nice illustration of this transition phenomenon. It shows the hot air rising from a candle flame, using a 19th century visualization technique that makes gases of different densities look different. Here, the air heated by the candle is less dense than the surrounding atmosphere.

    For another turbulent transition phenomenon familiar to anyone who frequents the beach, consider gentle, rolling ocean waves that become complex and foamy as they approach the shore and “break.” In the open ocean, wind-driven waves can also break if the windspeed is high or if multiple waves combine to form a larger one.

    For another visual aid, there is a centuries-old tradition in Japanese painting of depicting turbulent, breaking ocean waves. In these paintings, the waves are not merely part of the landscape but the main subjects. These artists seemed to be mainly concerned with conveying the beauty and terrible power of the phenomenon, rather than, as was Leonardo, being engaged in a systematic study of nature. One of the most famous Japanese artworks, and an iconic example of this genre, is Hokusai’s “Great Wave,” a woodblock print published in 1831.

    Hokusai’s “Great Wave.”

    For one last reason to consider turbulence an unsolved problem, turbulent flows exhibit a wide range of interesting behavior in time and space. Most of these have been discovered by measurement, not predicted, and there’s still no satisfying theoretical explanation for them.


    Reasons for and against “mission complete” aside, why is the turbulence problem so hard? The best answer comes from looking at both the history and current research directed at what Richard Feynman once called “the most important unsolved problem of classical physics.”

    The most commonly used formula for describing fluid flow is the Navier-Stokes equation. This is the equation you get if you apply Newton’s second law of motion, F = ma (force = mass × acceleration), to a fluid with simple material properties, excluding elasticity, memory effects, and other complications. Complications like these arise when we try to accurately model the flows of paint, polymers, and some biological fluids such as blood; many other substances also violate the assumptions of the Navier-Stokes equations. But for water, air, and other simple liquids and gases, it’s an excellent approximation.

    The Navier-Stokes equation is difficult to solve because it is nonlinear. This word is thrown around quite a bit, but here it means something specific. You can build up a complicated solution to a linear equation by adding up many simple solutions. An example you may be aware of is sound: the equation for sound waves is linear, so you can build up a complex sound by adding together many simple sounds of different frequencies (“harmonics”). Elementary quantum mechanics is also linear; the Schrödinger equation allows you to add together solutions to find a new solution.
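That superposition property can be checked directly. A small sketch, illustrative and not from the article: two exact traveling-wave solutions of the one-dimensional sound (wave) equation u_tt = c²u_xx are added together, and the sum still satisfies the equation.

```python
import math

c = 343.0  # speed of sound in air, m/s (illustrative value)

def harmonic(k):
    """Exact traveling wave u = sin(k(x - ct)): return its second derivatives."""
    u_tt = lambda x, t: -(k * c) ** 2 * math.sin(k * (x - c * t))
    u_xx = lambda x, t: -k ** 2 * math.sin(k * (x - c * t))
    return u_tt, u_xx

u1_tt, u1_xx = harmonic(1.0)   # a low "note"
u2_tt, u2_xx = harmonic(3.5)   # a higher harmonic
x, t = 0.7, 0.002

# The *sum* of the two solutions still satisfies u_tt = c^2 u_xx:
residual = (u1_tt(x, t) + u2_tt(x, t)) - c**2 * (u1_xx(x, t) + u2_xx(x, t))
print(abs(residual) < 1e-3)  # True: zero up to floating-point rounding
```

This is the precise sense in which a complex sound is "built up" from simple harmonics: each term solves the linear equation, so their sum does too.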

    But fluid dynamics doesn’t work this way: the nonlinearity of the Navier-Stokes equation means that you can’t build solutions by adding together simpler solutions. This is part of the reason that Heisenberg’s mathematical genius, which served him so well in helping to invent quantum mechanics, was put to such a severe test when it came to turbulence.
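A quick way to see the failure of superposition is the quadratic advection term N(u) = u·∂u/∂x that appears in the Navier-Stokes equation (and in its simpler relatives such as Burgers' equation). Evaluating it for two simple profiles, an illustrative sketch rather than anything from the article, shows that cross terms spoil additivity:

```python
import math

# Nonlinear advection term N(u) = u * du/dx, evaluated pointwise at x = 0.3
x = 0.3
u, ux = math.sin(x), math.cos(x)     # profile u and its derivative
v, vx = math.cos(x), -math.sin(x)    # a second profile v and its derivative

N_u, N_v = u * ux, v * vx
N_sum = (u + v) * (ux + vx)          # N applied to the *sum* of the profiles

# Cross terms u*vx + v*ux mean N(u + v) != N(u) + N(v):
print(N_u + N_v)   # 0.0
print(N_sum)       # ~0.825: superposition fails
```

For a linear operator those two printed numbers would have to be equal; here they differ by the cross terms, which is exactly what makes the equation resistant to solution by adding up simple pieces.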

    Heisenberg was forced to make various approximations and assumptions to make any progress with his thesis problem. Some of these were hard to justify; for example, the applied mathematician Fritz Noether (a brother of Emmy Noether) raised prominent objections to Heisenberg’s turbulence calculations for decades before finally admitting that they seemed to be correct after all.

    (The situation was so hard to resolve that Heisenberg himself said, while he thought his methods were justified, he couldn’t find the flaw in Fritz Noether’s reasoning, either!)

    The cousins of the Navier-Stokes equation that are used to describe more complex fluids are also nonlinear, as is a simplified form, the Euler equation, that omits the effects of friction. There are cases where a linear approximation does work well, such as flow at extremely slow speeds (imagine honey flowing out of a jar), but this excludes most problems of interest including turbulence.

    Who’s down with CFD?

    Despite the near impossibility of finding mathematical solutions to the equations for fluid flows under realistic conditions, science still needs to get some kind of predictive handle on turbulence. For this, scientists and engineers have turned to the only option available when pencil and paper failed them—the computer. These groups are trying to make the most of modern hardware to put a dent in one of the most demanding applications for numerical computing: calculating turbulent flows.

    The need to calculate these chaotic flows has benefited from (and been a driver of) improvements in numerical methods and computer hardware almost since the first giant computers appeared. The field is called computational fluid dynamics, often abbreviated as CFD.

    Early in the history of CFD, engineers and scientists applied straightforward numerical techniques in order to try to directly approximate solutions to the Navier-Stokes equations. This involves dividing up space into a grid and calculating the fluid variables (pressure, velocity) at each grid point. The large range of spatial scales immediately makes this approach expensive: you need a solution whose flow features are accurate from the largest scales (meters for pipes, thousands of kilometers for weather) down to something near the molecular scale. Even if you cut off the length scale at the small end at millimeters or centimeters, you will still need millions of grid points.
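The arithmetic behind "millions of grid points" is a one-liner. An illustrative sketch (the duct dimensions are made up for the example):

```python
def grid_points(lengths_m, spacing_m):
    """Cell count for a uniform grid covering a box with the given side lengths."""
    n = 1
    for length in lengths_m:
        n *= int(round(length / spacing_m))
    return n

# A 1 m section of a 10 cm x 10 cm duct, resolved at 1 mm:
n = grid_points((1.0, 0.1, 0.1), 1e-3)
print(n)  # 10000000 points -- and 1 mm is still nowhere near molecular scales
```

Halving the spacing multiplies the count by eight in 3D, which is why uniform grids hit a wall so quickly.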

    A possible grid for calculating the flow over an airfoil.

    One approach to getting reasonable accuracy with a manageable-sized grid begins with the realization that there are often large regions where not much is happening. Put another way, in regions far away from solid objects or other disturbances, the flow is likely to vary slowly in both space and time. All the action is elsewhere; the turbulent areas are usually found near objects or interfaces.

    A non-uniform grid for calculating the flow over an airfoil.

    If we take another look at our airfoil and imagine a uniform flow beginning at the left and passing over it, it can be more efficient to concentrate the grid points near the object, especially at the leading and trailing edges, and not “waste” grid points far away from the airfoil. The figure above shows one possible gridding for simulating this problem.
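One classic way to concentrate points near the ends of an interval is cosine (Chebyshev-style) clustering; this is a generic technique for illustration, not necessarily the gridding used in the figure. A minimal sketch:

```python
import math

def clustered_grid(n):
    """Cosine-clustered points on [0, 1]: spacing is finest near both endpoints."""
    return [(1 - math.cos(math.pi * i / (n - 1))) / 2 for i in range(n)]

xs = clustered_grid(11)
edge_dx = xs[1] - xs[0]   # ~0.024: fine spacing near the leading edge
mid_dx = xs[6] - xs[5]    # ~0.155: coarse spacing where little is happening
```

The same point budget thus buys several times more resolution where the flow actually varies rapidly.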

    This is the simplest type of 2D non-uniform grid, containing nothing but straight lines. The state of the art in nonuniform grids is called adaptive mesh refinement (AMR), where the mesh, or grid, actually changes and adapts to the flow during the simulation. This concentrates grid points where they are needed, not wasting them in areas of nearly uniform flow. Research in this field is aimed at optimizing the grid generation process while minimizing the artificial effects of the grid on the solution. Here it’s used in a NASA simulation of the flow around an oscillating rotor blade. The color represents vorticity, a quantity related to angular momentum.

    Using AMR to simulate the flow around a rotor blade. Credit: Neal M. Chaderjian, NASA/Ames

    The above image shows the computational grid, rendered as blue lines, as well as the airfoil and the flow solution, showing how the grid adapts itself to the flow. (The grid points are so close together at the areas of highest grid resolution that they appear as solid blue regions.) Despite the efficiencies gained by the use of adaptive grids, simulations such as this are still computationally intensive; a typical calculation of this type occupies 2,000 compute cores for about a week.

    Dimitri Mavriplis and his collaborators at the Mavriplis CFD Lab at the University of Wyoming have made available several videos of their AMR simulations.

    AMR simulation of flow past a sphere. Credit: Mavriplis CFD Lab

    Above is a frame from a video of a simulation of the flow past an object; the video is useful for getting an idea of how the AMR technique works, because it shows how the computational grid tracks the flow features.

    This work is an example of how state-of-the-art numerical techniques are capable of capturing some of the physics of the transition to turbulence, illustrated in the image of candle-heated air above.

    Another approach to getting the most out of finite computer resources involves making alterations to the equation of motion, rather than, or in addition to, altering the computational grid.

    Since the first direct numerical simulations of the Navier-Stokes equations were begun at Los Alamos in the late 1950s, the problem of the vast range of spatial scales has been attacked by some form of modeling of the flow at small scales. In other words, the actual Navier-Stokes equations are solved for motion on the medium and large scales, but, below some cutoff, a statistical or other model is substituted.

    The idea is that the interesting dynamics occur at larger scales, and grid points are placed to cover these. But the “subgrid” motions that happen between the gridpoints mainly just dissipate energy, or turn motion into heat, so don’t need to be tracked in detail. This approach is also called large-eddy simulation (LES), the term “eddy” standing in for a flow feature at a particular length scale.
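The oldest and best-known subgrid closure of this kind is the Smagorinsky model, in which the unresolved eddies act as an extra "eddy viscosity" proportional to the resolved strain rate. A sketch with typical textbook values (the constant C_s ≈ 0.17 and the sample numbers are illustrative, not from the article):

```python
def smagorinsky_nu_t(strain_rate_mag, delta, Cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs * Delta)^2 * |S| for subgrid eddies."""
    return (Cs * delta) ** 2 * strain_rate_mag

# Resolved strain rate |S| = 50 1/s on a 1 cm grid:
nu_t = smagorinsky_nu_t(strain_rate_mag=50.0, delta=0.01)
print(nu_t)  # ~1.4e-4 m^2/s, well above water's ~1e-6 m^2/s molecular viscosity
```

The model drains energy from the resolved scales at roughly the rate the missing small eddies would, which is exactly the dissipative role the paragraph above describes.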

    The development of subgrid modeling, although it began with the beginning of CFD, is an active area of research to this day. This is because we always want to get the most bang for the computer buck. No matter how powerful the computer, a sophisticated numerical technique that allows us to limit the required grid resolution will enable us to handle more complex problems.

    There are several other prominent approaches to modeling fluid flows on computers, some of which do not make use of grids at all. Perhaps the most successful of these is the technique called “smoothed particle hydrodynamics,” which, as its name suggests, models the fluid as a collection of computational “particles,” which are moved around without the use of a grid. The “smoothed” in the name comes from the smooth interpolations between particles that are used to derive the fluid properties at different points in space.
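The density estimate at the heart of SPH is just such a smoothed, kernel-weighted sum over neighboring particles. A minimal 1D sketch (a Gaussian kernel is used here for simplicity; production codes typically use compactly supported spline kernels):

```python
import math

def w_gauss(r, h):
    """1D Gaussian smoothing kernel; integrates to 1 over the whole line."""
    return math.exp(-(r / h) ** 2) / (h * math.sqrt(math.pi))

def sph_density(x_eval, particle_xs, m, h):
    """Smoothed density at x_eval: a weighted sum over particles, no grid needed."""
    return sum(m * w_gauss(x_eval - xp, h) for xp in particle_xs)

# Unit-mass particles spaced 0.1 apart should give density ~ mass/spacing = 10:
dx = 0.1
particles = [i * dx for i in range(-50, 51)]
rho = sph_density(0.0, particles, m=1.0, h=2 * dx)
print(rho)  # ~10.0
```

Because the particles carry the fluid with them, resolution automatically follows the mass, one reason SPH is popular for free-surface and astrophysical flows.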

    Theory and experiment

    Despite the impressive (and ever-improving) ability of fluid dynamicists to calculate complex flows with computers, the search for a better theoretical understanding of turbulence continues, for computers can only calculate flow solutions in particular situations, one case at a time. Only through the use of mathematics do physicists feel that they’ve achieved a general understanding of a group of related phenomena. Luckily, there are a few main theoretical approaches to turbulence, each with some interesting phenomena they seek to penetrate.

    Only a few exact solutions of the Navier-Stokes equations are known; these describe simple, laminar flows (and certainly not turbulent flows of any kind). For flow between two flat plates, the velocity is zero at the boundaries and reaches a maximum halfway between them. This parabolic flow profile (shown below) solves the equations, something that has been known for over a century. Laminar flow in a pipe is similar, with the maximum velocity occurring at the center.

    Exact solution for flow between plates.
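    The parabolic profile is simple enough to evaluate directly. A short sketch (function name hypothetical), writing the exact solution as u(y) = (-dp/dx / 2μ) · y(h − y) for plates at y = 0 and y = h, pressure gradient dp/dx, and viscosity μ:

```python
import numpy as np

def plane_poiseuille(y, h, dpdx, mu):
    """Exact laminar velocity profile between parallel plates at y=0 and y=h.

    u(y) = (-dpdx / (2*mu)) * y * (h - y): zero at both walls (no-slip),
    maximum at the midplane y = h/2.
    """
    return (-dpdx / (2.0 * mu)) * y * (h - y)
```

    Evaluating it confirms the features in the figure: no-slip at the walls and the peak velocity at the midpoint.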

    The interesting thing about this parabolic solution, and similar exact solutions, is that they are valid (mathematically speaking) at any flow velocity, no matter how high. However, experience shows that while this works at low speeds, the flow breaks up and becomes turbulent at some moderate “critical” speed. Using mathematical methods to try to find this critical speed is part of what Heisenberg was up to in his thesis work.

    Theorists describe what’s happening here by using the language of stability theory. Stability theory is the examination of the exact solutions to the Navier-Stokes equation and their ability to survive “perturbations,” which are small disturbances added to the flow. These disturbances can be in the form of boundaries that are less than perfectly smooth, variations in the pressure driving the flow, etc.

    The idea is that, while the low-speed solution is valid at any speed, near a critical speed another solution also becomes valid, and nature prefers that second, more complex solution. In other words, the simple solution has become unstable and is replaced by a second one. As the speed is ramped up further, each solution gives way to a more complicated one, until we arrive at the chaotic flow we call turbulence.

    In the real world, this will always happen, because perturbations are always present—and this is why laminar flows are much less common in everyday experience than turbulence.
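    A standard reduced model for this competition between growth and saturation, not discussed in the article but common in stability theory, is the Landau amplitude equation dA/dt = σA − ℓA³: below the critical speed (σ < 0) a perturbation decays; above it (σ > 0) the perturbation grows and saturates at A = √(σ/ℓ). A rough numerical sketch:

```python
def landau_amplitude(sigma, ell, a0=1e-6, dt=1e-3, steps=100000):
    """Integrate dA/dt = sigma*A - ell*A**3 by forward Euler.

    sigma < 0: the disturbance decays and laminar flow survives;
    sigma > 0: it grows exponentially, then saturates at sqrt(sigma/ell).
    """
    a = a0
    for _ in range(steps):
        a += dt * (sigma * a - ell * a ** 3)
    return a
```

    This is only a caricature of the first instability; in a real flow each saturated state can itself become unstable as the speed rises, producing the cascade toward turbulence described above.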

    Experiments to directly observe these instabilities are delicate, because the distance between the first instability and the onset of full-blown turbulence is usually quite small. You can see a version of the process in the figure above, showing the transition to turbulence in the heated air column above a candle. The straight column is unstable, but it takes a while before the sinuous instability grows large enough for us to see it as a visible wiggle. Almost as soon as this happens, the cascade of instabilities piles up, and we see a sudden explosion into turbulence.

    Another example of the common pattern is in the next illustration, which shows the typical transition to turbulence in a flow bounded by a single wall.

    Transition to turbulence in a wall-bounded flow. NASA.

    We can again see an approximately periodic disturbance to the laminar flow begin to grow, and after just a few wavelengths the flow suddenly becomes turbulent.

    Capturing, and predicting, the transition to turbulence is an ongoing challenge for simulations and theory; on the theoretical side, the effort begins with stability theory.

    In fluid flows close to a wall, the transition to turbulence can take a somewhat different form. As in the other examples illustrated here, small disturbances get amplified by the flow until they break down into chaotic, turbulent motion. But the turbulence does not involve the entire fluid, instead confining itself to isolated spots, which are surrounded by calm, laminar flow. Eventually, more spots develop, enlarge, and ultimately merge, until the entire flow is turbulent.

    The fascinating thing about these spots is that, somehow, the fluid can enter them, undergo a complex, chaotic motion, and emerge calmly as a non-turbulent, organized flow on the other side. Meanwhile, the spots persist as if they were objects embedded in the flow and attached to the boundary.

    Turbulent spot experiment: pressure fluctuation. (Credit: Katya Casper et al., Sandia National Labs)

    Despite a succession of first-rate mathematical minds puzzling over the Navier-Stokes equation since it was written down almost two centuries ago, exact solutions still are rare and cherished possessions, and basic questions about the equation remain unanswered. For example, we still don’t know whether the equation has solutions in all situations. We’re also not sure if its solutions, which supposedly represent the real flows of water and air, remain well-behaved and finite, or whether some of them blow up with infinite energies or become unphysically unsmooth.

    The scientist who can settle this, either way, has a cool million dollars waiting for them—this is one of the seven unsolved “Millennium Prize” mathematical problems set by the Clay Mathematics Institute.

    Fortunately, there are other ways to approach the theory of turbulence, some of which don’t depend on the knowledge of exact solutions to the equations of motion. The study of the statistics of turbulence uses the Navier-Stokes equation to deduce average properties of turbulent flows without trying to solve the equations exactly. It addresses questions like, “if the velocity of the flow here is so and so, then what is the probability that the velocity one centimeter away will be within a certain range?” It also answers questions about the average of quantities such as the resistance encountered when trying to push water through a pipe, or the lifting force on an airplane wing.

    These are the quantities of real interest to the engineer, who has little use for the physicist’s or mathematician’s holy grail of a detailed, exact description.
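    The kind of two-point statistic described above can be estimated directly from a velocity record. A minimal sketch with synthetic data (all names illustrative), treating the record as periodic:

```python
import numpy as np

def two_point_correlation(u, max_lag):
    """Two-point velocity correlation R(r) = <u'(x) u'(x+r)> for a 1D record.

    u' is the fluctuation about the mean; the record is treated as periodic,
    so the average at lag r uses a circular shift.
    """
    up = u - u.mean()
    return np.array([np.mean(up * np.roll(up, -r)) for r in range(max_lag)])
```

    R(0) is just the velocity variance, and by the Cauchy-Schwarz inequality no other lag can exceed it; how quickly R(r) falls off with separation defines the correlation length of the turbulence.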

    It turns out that the one great obstacle in the way of a statistical approach to turbulence theory is, once again, the nonlinear term in the Navier-Stokes equation. When you use this equation to derive another equation for the average velocity at a single point, it contains a term involving something new: the velocity correlation between two points. When you derive the equation for this velocity correlation, you get an equation with yet another new term: the velocity correlation involving three points. This process never ends, as the diabolical nonlinear term keeps generating higher-order correlations.

    The need to somehow terminate, or “close,” this infinite sequence of equations is known as the “closure problem” in turbulence theory and is still the subject of active research. Very briefly, to close the equations you need to step outside of the mathematical procedure and appeal to a physically motivated assumption or approximation.

    Despite its difficulty, some type of statistical solution to the fluid equations is essential for describing the many phenomena of fully developed turbulence. Turbulence need not be merely a random, featureless expanse of roiling fluid; in fact, it usually is more interesting than that. One of the most intriguing phenomena is the existence of persistent, organized structures within a violent, chaotic flow environment. We are all familiar with magnificent examples of these in the form of the storms on Jupiter, recognizable, even iconic, features that last for years, embedded within a highly turbulent flow.

    More down-to-Earth examples occur in almost any real-world case of a turbulent flow—in fact, experimenters have to take great pains if they want to create a turbulent flow field that is truly homogeneous, without any embedded structure.

    In the image below of a turbulent wake behind a cylinder, and in the wall-bounded transition shown earlier, you can see the echoes of the wave-like disturbance that precedes the onset of fully developed turbulence: a periodicity that persists even as the flow becomes chaotic.

    Cyclones at Jupiter’s north pole. NASA, JPL-Caltech, SwRI, ASI, INAF, JIRAM.

    Wake behind a cylinder. Joseph Straccia et al. (CC By NC-ND)

    When your basic governing equation is very hard to solve or even to simulate, it’s natural to look for a more tractable equation or model that still captures most of the important physics. Much of the theoretical effort to understand turbulence is of this nature.

    We’ve mentioned subgrid models above, used to reduce the number of grid points required in a numerical simulation. Another approach to simplifying the Navier-Stokes equation is a class of models called “shell models.” Roughly speaking, in these models you take the Fourier transform of the Navier-Stokes equation, leading to a description of the fluid as a large number of interacting waves at different wavelengths. Then, in a systematic way, you discard most of the waves, keeping just a handful of significant ones. You can then calculate, using a computer or, with the simplest models, by hand, the mode interactions and the resulting turbulent properties. While, naturally, much of the physics is lost in these types of models, they allow some aspects of the statistical properties of turbulence to be studied in situations where the full equations cannot be solved.
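    The first step of such a spectral description can be sketched by Fourier-transforming a periodic field and discarding all but a handful of modes. This toy truncation is not an actual shell model (which keeps one representative mode per logarithmic "shell" and evolves them dynamically), but it illustrates the drastic reduction involved:

```python
import numpy as np

def truncate_modes(u, n_keep):
    """Keep only the n_keep lowest-wavenumber Fourier modes of a periodic field.

    The field is decomposed into waves with a real FFT; all coefficients
    above the cutoff are zeroed before transforming back.
    """
    uhat = np.fft.rfft(u)
    uhat[n_keep:] = 0.0
    return np.fft.irfft(uhat, n=len(u))
```

    By Parseval's theorem the truncated field can only lose energy, never gain it; a shell model then adds simplified nonlinear couplings between the retained modes so their statistics can be studied cheaply.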

    Occasionally, we hear about the “end of physics”—the idea that we are approaching the stage where all the important questions will be answered, and we will have a theory of everything. But from another point of view, the fact that such a commonplace phenomenon as the flow of water through a pipe is still in many ways an unsolved problem means that we are unlikely to ever reach a point that all physicists will agree is the end of their discipline. There remains enough mystery in the everyday world around us to keep physicists busy far into the future.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Ars Technica was founded in 1998 when Founder & Editor-in-Chief Ken Fisher announced his plans for starting a publication devoted to technology that would cater to what he called “alpha geeks”: technologists and IT professionals. Ken’s vision was to build a publication with a simple editorial mission: be “technically savvy, up-to-date, and more fun” than what was currently popular in the space. In the ensuing years, with formidable contributions by a unique editorial staff, Ars Technica became a trusted source for technology news, tech policy analysis, breakdowns of the latest scientific advancements, gadget reviews, software, hardware, and nearly everything else found in between layers of silicon.

    Ars Technica innovates by listening to its core readership. Readers have come to demand devotion to accuracy and integrity, flanked by a willingness to leave each day’s meaningless, click-bait fodder by the wayside. The result is something unique: the unparalleled marriage of breadth and depth in technology journalism. By 2001, Ars Technica was regularly producing news reports, op-eds, and the like, but the company stood out from the competition by regularly providing long thought-pieces and in-depth explainers.

    And thanks to its readership, Ars Technica also accomplished a number of industry-leading moves. In 2001, Ars launched a digital subscription service when such things were non-existent for digital media. Ars was also the first IT publication to begin covering the resurgence of Apple, and the first to draw analytical and cultural ties between the world of high technology and gaming. Ars was also first to begin selling its long form content in digitally distributable forms, such as PDFs and eventually eBooks (again, starting in 2001).

  • richardmitnick 2:32 pm on October 5, 2018

    From Oak Ridge National Laboratory: “ORNL researchers advance quantum computing, science through six DOE awards” 


    From Oak Ridge National Laboratory

    October 3, 2018
    Scott Jones, Communications

    Oak Ridge National Laboratory will be working on new projects aimed at accelerating quantum information science. Credit: Andy Sproles/Oak Ridge National Laboratory, U.S. Dept. of Energy.

    ORNL researchers will leverage various microscopy platforms for quantum computing projects. Credit: Genevieve Martin/Oak Ridge National Laboratory, U.S. Dept. of Energy.

    The Department of Energy’s Oak Ridge National Laboratory is the recipient of six awards from DOE’s Office of Science aimed at accelerating quantum information science (QIS), a burgeoning field of research increasingly seen as vital to scientific innovation and national security.

    The awards, which were made in conjunction with the White House Summit on Advancing American Leadership in QIS, will leverage and strengthen ORNL’s established programs in quantum information processing and quantum computing.

    The application of quantum mechanics to computing and the processing of information has enormous potential for innovation across the scientific spectrum. Quantum technologies use units known as qubits to greatly increase the threshold at which information can be transmitted and processed. Whereas traditional “bits” have a value of either 0 or 1, qubits are encoded with values of both 0 and 1, or any combination thereof, at the same time, allowing for a vast number of possibilities for storing data.
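    The superposition the article describes can be written down concretely as a two-component state vector. A minimal sketch with plain numpy (no quantum library assumed), using the standard Hadamard gate to put a qubit into an equal mix of 0 and 1:

```python
import numpy as np

# A qubit state is a unit vector a|0> + b|1> with |a|^2 + |b|^2 = 1.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Hadamard gate: sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

psi = H @ ket0               # superposition state (1/sqrt(2)) * (|0> + |1>)
probs = np.abs(psi) ** 2     # measurement probabilities for outcomes 0 and 1
```

    Measuring this state yields 0 or 1 with equal probability; the "vast number of possibilities" in the text comes from the fact that n qubits live in a 2^n-dimensional space of such amplitude vectors.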

    While in its infancy, the technology is being harnessed to develop computers that, when mature, will be exponentially more powerful than today’s leading systems. Beyond computing, however, quantum information science shows great promise to advance a vast array of research domains, from encryption to artificial intelligence to cosmology.

    The ORNL awards represent three Office of Science programs.

    “Software Stack and Algorithms for Automating Quantum-Classical Computing,” a new project supported by the Office of Advanced Scientific Computing Research, will develop methods for programming quantum computers. Led by ORNL’s Pavel Lougovski, the team of researchers from ORNL, Johns Hopkins University Applied Physics Lab, University of Southern California, University of Maryland, Georgetown University, and Microsoft, will tackle translating scientific applications into functional quantum programs that return accurate results when executed on real-world faulty quantum hardware. The team will develop an open-source algorithm and software stack that will automate the process of designing, executing, and analyzing the results of quantum algorithms, thus enabling new discovery across many scientific domains with an emphasis on applications in quantum field theory, nuclear physics, condensed matter, and quantum machine learning.

    ORNL’s Christopher M. Rouleau will lead the “Thin Film Platform for Rapid Prototyping Novel Materials with Entangled States for Quantum Information Science” project, funded by Basic Energy Sciences. The project aims to establish an agile AI-guided synthesis platform coupling reactive pulsed laser deposition with quick decision-making diagnostics to enable the rapid exploration of a wide spectrum of candidate thin-film materials for QIS; understand the dynamics of photonic states by combining a novel cathodoluminescence scanning electron microscopy platform with ultrafast laser spectroscopy; and enable understanding of entangled spin states for topological quantum computing by developing a novel scanning tunneling microscopy platform.

    ORNL’s Stephen Jesse will lead the “Understanding and Controlling Entangled and Correlated Quantum States in Confined Solid-State Systems Created via Atomic Scale Manipulation,” a new project supported by Basic Energy Sciences that includes collaborators from Harvard and MIT. The goal of the project is to use advanced electron microscopes to engineer novel materials on an atom-by-atom basis for use in QIS. These microscopes, along with other powerful instrumentation, will also be used to assess emerging quantum properties in-situ to aid the assembly process. Collaborators from Harvard will provide theoretical and computational effort to design quantum properties on demand using ORNL’s high-performance computing resources.

    ORNL is also partnering with Pacific Northwest National Laboratory, Berkeley Laboratory, and the University of Michigan on a project funded by the Office of Basic Energy Sciences titled “Embedding Quantum Computing into Many-Body Frameworks for Strongly-Correlated Molecular and Materials Systems.” The research team will develop methods for solving problems in computational chemistry for highly correlated electronic states. ORNL’s contribution, led by Travis Humble, will support this collaboration by translating applications of computational chemistry into the language needed for running on quantum computers and testing these ideas on experimental hardware.

    ORNL will support multiple projects awarded by the Office of High Energy Physics to develop methods for detecting high-energy particles using quantum information science. They include:

    “Quantum-Enhanced Detection of Dark Matter and Neutrinos,” in collaboration with the University of Wisconsin, Tufts, and San Diego State University. This project will use quantum simulation to calculate detector responses to dark matter particles and neutrinos. A new simulation technique under development will require extensive work in error mitigation strategies to correctly evaluate scattering cross sections and other physical quantities. ORNL’s effort, led by Raphael Pooser, will help develop these simulation techniques and error mitigation strategies for the new quantum simulator device, thus ensuring successful detector calculations.

    “Particle Track Pattern Recognition via Content Addressable Memory and Adiabatic Quantum Optimization: OLYMPUS Experiment Revisited,” a collaboration with the Johns Hopkins Applied Physics Laboratory aimed at identifying rare events found in the data generated by experiments at particle colliders. ORNL principal investigator Travis Humble will apply new ideas for data analysis using experimental quantum computers that target faster response times and greater memory capacity for tracking signatures of high-energy particles.

    “HEP ML and Optimization Go Quantum,” in collaboration with Fermi National Accelerator Laboratory and Lockheed Martin Corporation, which will investigate how quantum machine learning methods may be applied to solving key challenges in optimization and data analysis. Advances in training machine learning networks using quantum computers promise greater accuracy and faster response times for data analysis. ORNL principal investigators Travis Humble and Alex McCaskey will help to develop these new methods for quantum machine learning for existing quantum computers by using the XACC programming tools, which offer a flexible framework by which to integrate quantum computing into scientific software.

    See the full article here.

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


  • richardmitnick 1:08 pm on September 30, 2018

    From Nautilus: “Is It Time to Get Rid of Time?” 


    From Nautilus

    September 20, 2018
    Marcia Bartusiak

    The crisis inside the physics of time.

    Poets often think of time as a river, a free-flowing stream that carries us from the radiant morning of birth to the golden twilight of old age. It is the span that separates the delicate bud of spring from the lush flower of summer.

    Physicists think of time in somewhat more practical terms. For them, time is a means of measuring change—an endless series of instants that, strung together like beads, turn an uncertain future into the present and the present into a definite past.

    The very concept of time allows researchers to calculate when a comet will round the sun or how a signal traverses a silicon chip. Each step in time provides a peek at the evolution of nature’s myriad phenomena.

    In other words, time is a tool. In fact, it was the first scientific tool. Time can now be sliced into slivers as thin as one ten-trillionth of a second.

    Planck Time. Universe Today


    But what is being sliced? Unlike mass and distance, time cannot be perceived by our physical senses. We don’t see, hear, smell, touch, or taste time. And yet we somehow measure it. As a cadre of theorists attempt to extend and refine the general theory of relativity, Einstein’s momentous law of gravitation, they have a problem with time. A big problem.

    Slicing it thin: A hydrogen maser clock keeps time by exploiting the so-called hyperfine transition. Wikimedia Commons

    “It’s a crisis,” says mathematician John Baez, of the University of California at Riverside, “and the solution may take physics in a new direction.” Not the physics of our everyday world. Stopwatches, pendulums, and hydrogen maser clocks will continue to keep track of nature quite nicely here in our low-energy earthly environs. The crisis arises when physicists attempt to merge the macrocosm—the universe on its grandest scale—with the microcosm of subatomic particles.

    Under Newton, time was special. Every moment was tallied by a universal clock that stood separate and apart from the phenomenon under study. In general relativity, this is no longer true. Einstein declared that time is not absolute—no particular clock is special—and his equations describing how the gravitational force works take this into account. His law of gravity looks the same no matter what timepiece you happen to be using as your gauge. “In general relativity time is completely arbitrary,” explains theoretical physicist Christopher Isham of Imperial College in London. “The actual physical predictions that come out of general relativity don’t depend on your choice of a clock.” The predictions will be the same whether you are using a clock traveling near the speed of light or one sitting quietly at home on a shelf.

    The choice of clock is still crucial, however, in other areas of physics, particularly quantum mechanics. It plays a central role in Erwin Schrödinger’s celebrated wave equation of 1926. The equation shows how a subatomic particle, whether traveling alone or circling an atom, can be thought of as a collection of waves, a wave packet that moves from point to point in space and from moment to moment in time.

    According to the vision of quantum mechanics, energy and matter are cut up into discrete bits, called quanta, whose motions are jumpy and blurry. They fluctuate madly. The behavior of these particles cannot be worked out exactly, the way a rocket’s trajectory can. Using Schrödinger’s wave equation, you can only calculate the probability that a particle—a wave packet—will attain a certain position or velocity. This is a picture so different from the world of classical physics that even Einstein railed against its indeterminacy. He declared that he could never believe that God would play dice with the world.

    You might say that quantum mechanics introduced a fuzziness into physics: You can pinpoint the precise position of a particle, but at a trade-off; its velocity cannot then be measured very well. Conversely, if you know how fast a particle is going, you won’t be able to know exactly where it is. Werner Heisenberg best summarized this strange and exotic situation with his famous uncertainty principle. But all this action, uncertain as it is, occurs on a fixed stage of space and time, a steadfast arena. A reliable clock is always around—is always needed, really—to keep track of the goings-on and thus enable physicists to describe how the system is changing. At least, that’s the way the equations of quantum mechanics are now set up.
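    Heisenberg's trade-off can be checked numerically for a Gaussian wave packet, which is the special case that exactly saturates the bound ΔxΔp = ħ/2. A sketch in units where ħ = 1 (grid range and spread are arbitrary choices):

```python
import numpy as np

hbar = 1.0
sigma = 1.3                          # position spread of the packet
x = np.linspace(-20.0, 20.0, 4001)   # spatial grid
dx = x[1] - x[0]

# Normalized real Gaussian wave packet psi(x).
psi = (2.0 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4.0 * sigma**2))

# Position spread from |psi|^2; momentum spread from the derivative of psi
# (for a real wavefunction, <p> = 0 and <p^2> = hbar^2 * integral (psi')^2 dx).
prob = psi**2
mean_x = np.sum(x * prob) * dx
dx_spread = np.sqrt(np.sum((x - mean_x) ** 2 * prob) * dx)
dpsi = np.gradient(psi, dx)
dp_spread = hbar * np.sqrt(np.sum(dpsi**2) * dx)

print(dx_spread * dp_spread)   # close to hbar/2, the Heisenberg minimum
```

    Squeezing the packet (smaller sigma) shrinks Δx but inflates Δp in exact compensation, which is the fuzziness the text describes.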

    And that is the crux of the problem. How are physicists expected to merge one law of physics—namely gravity—that requires no special clock to arrive at its predictions, with the subatomic rules of quantum mechanics, which continue to work within a universal, Newtonian time frame? In a way, each theory is marching to the beat of a different drummer (or the ticking of a different clock).

    That’s why things begin to go a little crazy when you attempt to blend these two areas of physics. Although the scale on which quantum gravity comes into play is so small that current technology cannot possibly measure these effects directly, physicists can imagine them. Place quantum particles on the springy, pliable mat of spacetime, and it will bend and fold like so much rubber. And that flexibility will greatly affect the operation of any clock keeping track of the particles. A timepiece caught in that tiny submicroscopic realm would probably resemble a pendulum clock laboring amid the quivers and shudders of an earthquake. “Here the very arena is being subjected to quantum effects, and one is left with nothing to stand on,” explains Isham. “You can end up in a situation where you have no notion of time whatsoever.” But quantum calculations depend on an assured sense of time.

    For Karel Kucha, a general relativist and professor emeritus at the University of Utah, the key to measuring quantum time is to devise, using clever math, an appropriate clock—something he has been attempting, off and on, for several decades. Conservative by nature, Kucha believes it is best to stick with what you know before moving on to more radical solutions. So he has been seeking what might be called the submicroscopic version of a Newtonian clock, a quantum timekeeper that can be used to describe the physics going on in the extraordinary realm ruled by quantum gravity, such as the innards of a black hole or the first instant of creation.

    Unlike the clocks used in everyday physics, Kucha’s hypothetical clock would not stand off in a corner, unaffected by what is going on around it. It would be set within the tiny, dense system where quantum gravity rules and would be part and parcel of it. This insider status has its pitfalls: The clock would change as the system changed—so to keep track of time, you would have to figure out how to monitor those variations. In a way, it would be like having to pry open your wristwatch and check its workings every time you wanted to refer to it.

    The most common candidates for this special type of clock are simply “matter clocks.” “This, of course, is the type of clock we’ve been used to since time immemorial. All the clocks we have around us are made up of matter,” Kucha points out. Conventional timekeeping, after all, means choosing some material medium, such as a set of particles or a fluid, and marking its changes. But with pen and paper, Kucha mathematically takes matter clocks into the domain of quantum gravity, where the gravitational field is extremely strong and those probabilistic quantum-mechanical effects begin to arise. He takes time where no clock has gone before.

    But as you venture into this domain, says Kucha, “matter becomes denser and denser.” And that’s the Achilles heel for any form of matter chosen to be a clock under these extreme conditions; it eventually gets squashed. That may seem obvious from the start, but Kucha needs to examine precisely how the clock breaks down so he can better understand the process and devise new mathematical strategies for constructing his ideal clock.

    More promising as a quantum clock is the geometry of space itself: monitoring spacetime’s changing curvature as the infant universe expands or a black hole forms. Kucha surmises that such a property might still be measurable in the extreme conditions of quantum gravity. The expanding cosmos offers the simplest example of this scheme. Imagine the tiny infant universe as an inflating balloon. Initially, its surface bends sharply around. But as the balloon blows up, the curvature of its surface grows shallower and shallower. “The changing geometry,” explains Kucha, “allows you to see that you are at one instant of time rather than another.” In other words, it can function as a clock.

    Unfortunately, each type of clock that Kucha has investigated so far leads to a different quantum description, different predictions of the system’s behavior. “You can formulate your quantum mechanics with respect to one clock that you place in spacetime and get one answer,” explains Kucha.

    “But if you choose another type of clock, perhaps one based on an electric field, you get a completely different result. It is difficult to say which of these descriptions, if any, is correct.”

    More than that, the clock that is chosen must not eventually crumble. Quantum theory suggests there is a limit to how fine you can cut up space. The smallest quantum grain of space imaginable is 10^-33 centimeter wide, the Planck length, named after Max Planck, inventor of the quantum. On that infinitesimal scale, the spacetime canvas turns choppy and jumbled, like the whitecaps on an angry sea. Space and time become unglued and start to wink in and out of existence in a probabilistic froth. Time and space, as we know them, are no longer easily defined. This is the point at which the physics becomes unknown and theorists start walking on shaky ground. As physicist Paul Davies points out in his book About Time, “You must imagine all possible geometries—all possible spacetimes, space warps and time warps—mixed together in a sort of cocktail, or ‘foam.’ ”
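    The Planck length follows from combining the three constants that govern gravity, quantum mechanics, and relativity: l_P = √(ħG/c³). A quick check with standard values:

```python
import math

G = 6.674e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s

# Planck length: the unique length built from G, hbar, and c.
l_planck = math.sqrt(hbar * G / c**3)   # roughly 1.6e-35 m, i.e. ~1.6e-33 cm
```

    It is the scale at which gravitational and quantum effects become comparably strong, which is why the smooth spacetime picture is expected to fail there.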

    Only a fully developed theory of quantum gravity will show what’s really happening at this unimaginably small level of spacetime. Kuchař conjectures that some property of general relativity (as yet unknown) will not undergo quantum fluctuations at this point. Something might hold on and not come unglued. If that’s true, such a property could serve as the reliable clock that Kuchař has been seeking for so long. And with that hope, Kuchař continues to explore, one by one, the varied possibilities.

    Kuchař has been trying to mold general relativity into the style of quantum mechanics, to find a special clock for it. But some other physicists trying to understand quantum gravity believe that the revision should happen the other way around—that quantum gravity should be made over in the likeness of general relativity, where time is pushed into the background. Carlo Rovelli is a champion of this view.

    “Forget time,” Rovelli declares emphatically. “Time is simply an experimental fact.” Rovelli, a physicist at the Center of Theoretical Physics in France, has been working on an approach to quantum gravity that is essentially timeless. To simplify the calculations, he and his collaborators, physicists Abhay Ashtekar and Lee Smolin, set up a theoretical space without a clock. In this way, they were able to rewrite Einstein’s general theory of relativity, using a new set of variables so that it could more easily be interpreted and adapted for use on the quantum level.

    Their formulation has allowed physicists to explore how gravity behaves on the subatomic scale in a new way. But is that really possible without any reference to time at all? “First with special relativity and then with general relativity, our classical notion of time has only gotten weaker and weaker,” answers Rovelli. “We think in terms of time. We need it. But the fact that we need time to carry out our thinking does not mean it is reality.”

    Rovelli believes if physicists ever find a unified law that links all the forces of nature under one banner, it will be written without any reference to time. “Then, in certain situations,” says Rovelli, “as when the gravitational field is not dramatically strong, reality organizes itself so that we perceive a flow that we call time.”

    Getting rid of time in the most fundamental physical laws, says Rovelli, will probably require a grand conceptual leap, the same kind of adjustment that 16th-century scientists had to make when Copernicus placed the sun, and not the Earth, at the center of the universe. In so doing, the Polish cleric effectively kicked the Earth into motion, even though back then it was difficult to imagine how the Earth could zoom along in orbit about the sun without its occupants being flung off the surface. “In the 1500s, people thought a moving earth was impossible,” notes Rovelli.

    But maybe the true rules are timeless, including those applied to the subatomic world. Indeed, a movement has been under way to rewrite the laws of quantum mechanics, a renovation that was spurred partly by the problem of time, among other quantum conundrums. As part of that program, theorists have been rephrasing quantum mechanics’ most basic equations to remove any direct reference to time.

    The roots of this approach can be traced to a procedure introduced by the physicist Richard Feynman in the 1940s, a method that has been extended and broadened by others, including James Hartle of the University of California at Santa Barbara and physics Nobel laureate Murray Gell-Mann.

    Basically, it’s a new way to look at Schrödinger’s equation. As originally set up, this equation allows physicists to compute the probability of a particle moving directly from point A to point B over specified slices of time. The alternate approach introduced by Feynman instead considers the infinite number of paths the particle could conceivably take to get from A to B, no matter how slim the chance. Time is removed as a factor; only the potential pathways are significant. Summing up these potential paths (some are more likely than others, depending on the initial conditions), a specific path emerges in the end.

    The process is sometimes compared to interference between waves. When two waves in the ocean combine, they may reinforce one another (leading to a new and bigger wave) or cancel each other out entirely. Likewise, you might think of these many potential paths as interacting with one another—some getting enhanced, others destroyed—to produce the final path. More important, the variable of time no longer enters into the calculations.
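    The cancellation-and-reinforcement picture can be sketched numerically. The quadratic action below is hypothetical (a one-parameter stand-in for a family of paths, not Feynman’s full formalism); it shows rapidly oscillating amplitudes canceling while paths near the stationary point survive:

```python
import cmath

# Toy "sum over paths": instead of one classical trajectory, sum a
# complex amplitude exp(i*S) over every candidate path from A to B,
# here parameterized by a single intermediate position x. The action
# S(x) is a made-up quadratic, stationary at x = 0.

def action(x):
    return 200.0 * x ** 2

xs = [i * 0.001 for i in range(-1000, 1001)]   # candidate paths
amps = [cmath.exp(1j * action(x)) for x in xs]
total = sum(amps)

# Paths far from the stationary point oscillate rapidly in phase and
# largely cancel; paths near x = 0 interfere constructively, so the
# classical path "emerges" from the sum.
near = sum(a for a, x in zip(amps, xs) if abs(x) < 0.05)
print(abs(total), abs(near))
```

The sum over all 2001 paths has a far smaller magnitude than the number of contributions, and most of what survives comes from the narrow band of paths near the stationary point.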

    Hartle has been adapting this technique to his pursuits in quantum cosmology, an endeavor in which the laws of quantum mechanics are applied to the young universe to discern its evolution. Instead of dealing with individual particles, though, he works with all the configurations that could possibly describe an evolving cosmos, an infinite array of potential universes. When he sums up these varied configurations—some enhancing one another, others canceling each other out—a particular spacetime ultimately emerges. In this way, Hartle hopes to obtain clues to the universe’s behavior during the era of quantum gravity. Conveniently, he doesn’t have to choose a special clock to carry out the physics: Time disappears as an essential variable.

    Of course, as Isham points out, “having gotten rid of time, we’re then obliged to explain how we get back to the ordinary world, where time surrounds us.” Quantum gravity theorists have their hunches. Like Rovelli, many are coming to suspect that time is not fundamental at all. This theme resounds again and again in the various approaches aimed at solving the problem of time. Time, they say, may more resemble a physical property such as temperature or pressure. Pressure has no meaning when you talk about one particle or one atom; the concept of pressure arises only when we consider trillions of atoms. The notion of time could very well share this statistical feature. If so, reality would then resemble a pointillist painting. On the smallest of scales—the Planck length—time would have no meaning, just as a pointillist painting, built up from dabs of paint, cannot be fathomed close up.

    Quantum gravity theorists like to compare themselves to archeologists. Each investigator is digging away at a different site, finding a separate artifact of some vast subterranean city. The full extent of the find is not yet realized. What theorists desperately need are data, experimental evidence that could help them decide between the different approaches.

    It seems an impossible task, one that would appear to require recreating the hellish conditions of the Big Bang. But not necessarily. For instance, future generations of “gravity-wave telescopes,” instruments that detect ripples in the rubberlike mat of spacetime, might someday sense the Big Bang’s reverberating thunder, relics from the instant of creation when the force of gravity first emerged. Such waves could provide vital clues to the nature of space and time.

    “We wouldn’t have believed just [decades] ago that it would be possible to say what happened in the first 10 minutes of the Big Bang,” points out Kuchař. “But we can now do that by looking at the abundances of the elements. Perhaps if we understand physics on the Planck scale well enough, we’ll be able to search for certain consequences—remnants—that are observable today.” If found, such evidence would bring us the closest ever to our origins and possibly allow us to perceive at last how space and time came to well up out of nothingness some 14 billion years ago.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

  • richardmitnick 4:00 pm on September 16, 2018 Permalink | Reply
    Tags: Quantum Mechanics, TheatreWorks

    From Symmetry: “A play in parallel universes”

    10/10/17 [From the past]
    Kathryn Jepsen

    Artwork by Sandbox Studio, Chicago with Ana Kova

    Constellations illustrates the many-worlds interpretation of quantum mechanics—with a love story.

    The play Constellations begins with two people, Roland and Marianne, meeting for the first time. It’s a short scene, and it doesn’t go well. Then the lights go down, come back up, and it’s as if the scene has reset itself. The characters meet for the first time, again, but with slightly different (still unfortunate) results.

    The entire play progresses this way, showing multiple versions of different scenes between Roland, a beekeeper, and Marianne, an astrophysicist.

    In the script, each scene is divided from the next by an indented line. As the stage notes explain: “An indented rule indicates a change in universe.”

    To scientist Richard Partridge, who recently served as a consultant for a production of Constellations at TheatreWorks Silicon Valley, it’s a play about quantum mechanics.

    “Quantum mechanics is about everything happening at once,” he says.

    We don’t experience our lives this way, but atoms and particles do.

    In 1927, physicists Niels Bohr and Werner Heisenberg wrote that, on the scale of atoms and smaller, the properties of physical systems remain undefined until they are measured. Light, for example, can behave as a particle or a wave. But until someone observes it to be one or the other, it exists in a state of quantum superposition: It is both a particle and a wave at the same time. When a scientist takes a measurement, the two possibilities collapse into a single truth.

    Physicist Erwin Schrödinger illustrated this with a cat. He created a thought experiment in which the decay of an atom—an event ruled by quantum mechanics—would trigger toxic gas to be released in a steel chamber with a cat inside. By the rules of quantum mechanics, until someone opened the chamber, the cat existed in a state of superposition: simultaneously alive and dead.
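    The chamber-opening scenario can be mimicked in a few lines. This is only an illustrative sketch of the Born rule for a two-outcome state; the equal amplitudes and the open_chamber helper are invented for the example:

```python
import math
import random

# A two-outcome "cat" state alpha*|alive> + beta*|dead>. Before the
# chamber is opened, both amplitudes coexist; opening it (measurement)
# yields one outcome with probability |amplitude|^2 (the Born rule).

alpha = beta = 1 / math.sqrt(2)          # equal superposition
p_alive, p_dead = abs(alpha) ** 2, abs(beta) ** 2
assert math.isclose(p_alive + p_dead, 1.0)

def open_chamber(rng):
    """One measurement: collapse to a single definite outcome."""
    return "alive" if rng.random() < p_alive else "dead"

rng = random.Random(0)                   # seeded for reproducibility
outcomes = [open_chamber(rng) for _ in range(10_000)]
print(outcomes.count("alive") / len(outcomes))   # close to 0.5
```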

    Some interpretations of quantum mechanics dispute the idea that observing a system can determine its true state. In the many-worlds interpretation, every possibility exists in a giant collection of parallel realities.

    In some, the cat lives. In others, it does not.

    In some Constellations universes, the astrophysicist and the beekeeper fall in love. In others, they do not. “So it’s not really about physics,” Partridge says.

    Constellations director Robert Kelley, who founded TheatreWorks in 1970, agrees. He says he was intimidated by the physics concepts in the play at first but that he was eventually drawn to the relationship at its core.

    “With all of these things swirling around in the play, what really counts is the relationship between two people and the love that grows between them,” he says. “I found that a very charming message for Silicon Valley. We’re surrounded by a whole lot of technology, but probably for most people what counts is when you get home and you’re on the couch and your one-and-a-half-year-old shows up.”

    TheatreWorks in Silicon Valley production of Constellations

    Cosmologist Marianne (Carie Kawa) and beekeeper Roland (Robert Gilbert) explore the ever-changing mystery of “what ifs” in the regional premiere of Constellations presented by TheatreWorks Silicon Valley, August 23-September 17, at the Mountain View Center for the Performing Arts.
    Photo by Kevin Berne

    Kelley says that he found something familiar in the many timelines of the play. “It’s really kind of fun to see all that happen because it’s common ground for us as human beings: You hang up the phone and think, ‘If only I’d said that or hadn’t said that.’ It’s a fascinating thought that every single thing that happens will then determine every single other thing that happens.”

    Constantly resetting and replaying the same scenes “was very acrobatic,” says Los Angeles-based actress Carie Kawa, who played Marianne in the TheatreWorks production, which concluded in September. “And there were emotional acrobatics—just jumping into different emotional states. Usually you get a little longer arc; this play is just all middles, almost like shooting a film.”

    To her, the repeats and jumps were familiar in a different way: They were an encapsulation of the experience of acting.

    “We do the play over and over again,” she says. “It’s the same scene, but it’s different every single time. And if we’re doing it right, we’re not thinking about the scene that just happened or the scene that’s to come, we’re in the moment.”

    The play will mean different things to different people, Kawa says.

    “A teacher once told me a story about theater and a perspective that he had,” she says. “At first he said, ‘Theater is important because everybody can come together and feel the same feeling at the same time and know that we’re all okay.’

    “But as he progressed in this artistry he realized that, no, what’s happening is everybody is feeling a slightly different feeling at the same time. And that’s OK. That’s what helps us experience our humanity and the humanity of the other people around us. We’re all alone in this together.”

    See the full article here.



    Symmetry is a joint Fermilab/SLAC publication.

  • richardmitnick 3:05 pm on September 12, 2018 Permalink | Reply
    Tags: A novel quantum state of matter that can be manipulated at will with a weak magnetic field, Quantum Mechanics, Scanning tunneling spectromicroscope operating in conjunction with a rotatable vector magnetic field capability, This could indeed be evidence of a new quantum phase of matter

    From Princeton University: “Princeton scientists discover a ‘tuneable’ novel quantum state of matter” 


    Sept. 12, 2018
    Liz Fuller-Wright, Office of Communications

    Quantum particles can be difficult to characterize, and almost impossible to control if they strongly interact with each other — until now.

    An international team of researchers led by Princeton physicist Zahid Hasan has discovered a novel quantum state of matter that can be manipulated at will with a weak magnetic field, which opens new possibilities for next-generation nano- or quantum technologies. Researchers in Hasan’s lab include (from left): Jia-Xin Yin, Zahid Hasan, Songtian Sonia Zhang, Daniel Multer, Maksim Litskevich and Guoqing Chang. Photo by Nick Barberio, Office of Communications.

    An international team of researchers led by Princeton physicist Zahid Hasan has discovered a quantum state of matter that can be “tuned” at will — and it’s 10 times more tuneable than existing theories can explain. This level of manipulability opens enormous possibilities for next-generation nanotechnologies and quantum computing.

    “We found a new control knob for the quantum topological world,” said Hasan, the Eugene Higgins Professor of Physics. “We expect this is tip of the iceberg. There will be a new subfield of materials or physics grown out of this. … This would be a fantastic playground for nanoscale engineering.”

    Hasan and his colleagues, whose research appears in the current issue of Nature, are calling their discovery a “novel” quantum state of matter because it is not explained by existing theories of material properties.

    Hasan’s interest in operating beyond the edges of known physics is what attracted Jia-Xin Yin, a postdoctoral research associate and one of three co-first-authors on the paper, to his lab. Other researchers had encouraged him to tackle one of the well-defined questions in modern physics, Yin said.

    “But when I talked to Professor Hasan, he told me something very interesting,” Yin said. “He’s searching for new phases of matter. The question is undefined. What we need to do is search for the question rather than the answer.”

    The classical phases of matter — solids, liquids and gases — arise from interactions between atoms or molecules. In a quantum phase of matter, the interactions take place between electrons, and are much more complex.

    “This could indeed be evidence of a new quantum phase of matter — and that’s, for me, exciting,” said David Hsieh, a professor of physics at the California Institute of Technology and a 2009 Ph.D. graduate of Princeton, who was not involved in this research. “They’ve given a few clues that something interesting may be going on, but a lot of follow-up work needs to be done, not to mention some theoretical backing to see what really is causing what they’re seeing.”

    Hasan has been working in the groundbreaking subfield of topological materials, an area of condensed matter physics, where his team discovered topological quantum magnets a few years ago. In the current research, he and his colleagues “found a strange quantum effect on the new type of topological magnet that we can control at the quantum level,” Hasan said.

    The key was looking not at individual particles but at the ways they interact with each other in the presence of a magnetic field. Some quantum particles, like humans, act differently alone than in a community, Hasan said. “You can study all the details of the fundamentals of the particles, but there’s no way to predict the culture, or the art, or the society, that will emerge when you put them together and they start to interact strongly with each other,” he said.

    To study this quantum “culture,” he and his colleagues arranged atoms on the surface of crystals in many different patterns and watched what happened. They used various materials prepared by collaborating groups in China, Taiwan and Princeton. One particular arrangement, a six-fold honeycomb shape called a “kagome lattice” for its resemblance to a Japanese basket-weaving pattern, led to something startling — but only when examined under a spectromicroscope in the presence of a strong magnetic field, equipment found in Hasan’s Laboratory for Topological Quantum Matter and Advanced Spectroscopy, located in the basement of Princeton’s Jadwin Hall.

    All the known theories of physics predicted that the electrons would adhere to the six-fold underlying pattern, but instead, the electrons hovering above their atoms decided to march to their own drummer — in a straight line, with two-fold symmetry.

    “The electrons decided to reorient themselves,” Hasan said. “They ignored the lattice symmetry. They decided that to hop this way and that way, in one line, is easier than sideways. So this is the new frontier. … Electrons can ignore the lattice and form their own society.”

    This is a very rare effect, noted Caltech’s Hsieh. “I can count on one hand” the number of quantum materials showing this behavior, he said.

    The researchers were shocked to discover this two-fold arrangement, said Songtian Sonia Zhang, a graduate student in Hasan’s lab and another co-first-author on the paper. “We had expected to find something six-fold, as in other topological materials, but we found something completely unexpected,” she said. “We kept investigating — Why is this happening? — and we found more unexpected things. It’s interesting because the theorists didn’t predict it at all. We just found something new.”

    When the researchers turn an external magnetic field in different directions (indicated with arrows), they change the orientation of the linear electron flow above the kagome (six-fold) magnet, as seen in these electron wave interference patterns on the surface of a topological quantum kagome magnet. Each pattern is created by a particular direction of the external magnetic field applied on the sample.
    Image by M. Z. Hasan, Jia-Xin Yin, Songtian Sonia Zhang, Princeton University.

    The decoupling between the electrons and the arrangement of atoms was surprising enough, but then the researchers applied a magnetic field and discovered that they could turn that one line in any direction they chose. Without moving the crystal lattice, Zhang could rotate the line of electrons just by controlling the magnetic field around them.

    “Sonia noticed that when you apply the magnetic field, you can reorient their culture,” Hasan said. “With human beings, you cannot change their culture so easily, but here it looks like she can control how to reorient the electrons’ many-body culture.”

    The researchers can’t yet explain why.

    “It is rare that a magnetic field has such a dramatic effect on electronic properties of a material,” said Subir Sachdev, the Herchel Smith Professor of Physics at Harvard University and chair of the physics department, who was not involved in this study.

    Even more surprising than this decoupling — called anisotropy — is the scale of the effect, which is 100 times more than what theory predicts. Physicists characterize quantum-level magnetism with a term called the “g factor,” which has no units. The g factor of an electron in a vacuum has been precisely calculated as very slightly more than two, but in this novel material, the researchers found an effective g factor of 210, when the electrons strongly interact with each other.
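    To see why a large effective g factor matters, compare the Zeeman energy scale E = g·μ_B·B for g ≈ 2 and g ≈ 210 at a modest one-tesla field. This back-of-the-envelope sketch uses standard constants; the field strength and the temperature comparison are illustrative, not figures from the paper:

```python
# Zeeman energy E = g * mu_B * B: why an effective g factor of ~210
# (vs ~2 for a free electron) makes a modest field produce a large effect.

MU_B = 5.7883818060e-5  # Bohr magneton, eV/T
K_B = 8.617333262e-5    # Boltzmann constant, eV/K

def zeeman_energy_ev(g, b_tesla):
    return g * MU_B * b_tesla

B = 1.0                                 # a modest 1-tesla laboratory field
e_free = zeeman_energy_ev(2.0, B)       # free-electron g ~ 2
e_kagome = zeeman_energy_ev(210.0, B)   # effective g reported ~ 210

# Compare to the thermal energy at the experiment's 0.4 K
e_thermal = K_B * 0.4
print(f"g=2:   {e_free:.2e} eV")
print(f"g=210: {e_kagome:.2e} eV ({e_kagome / e_free:.0f}x larger)")
print(f"k_B T at 0.4 K: {e_thermal:.2e} eV")
```

At one tesla the g ≈ 210 splitting is 105 times the free-electron value, which is the sense in which a modest field brings a significant effect.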

    “Nobody predicted that in topological materials,” said Hasan.

    “There are many things we can calculate based on the existing theory of quantum materials, but this paper is exciting because it’s showing an effect that was not known,” he said. This has implications for nanotechnology research, especially in developing sensors. At the scale of quantum technology, efforts to combine topology, magnetism and superconductivity have been stymied by the low effective g factors of the tiny materials.

    “The fact that we found a material with such a large effective g factor, meaning that a modest magnetic field can bring a significant effect in the system — this is highly desirable,” said Hasan. “This gigantic and tunable quantum effect opens up the possibilities for new types of quantum technologies and nanotechnologies.”

    The discovery was made using a two-story, multi-component instrument known as a scanning tunneling spectromicroscope, operating in conjunction with a rotatable vector magnetic field capability, in the sub-basement of Jadwin Hall. The spectromicroscope has a resolution less than half the size of an atom, allowing it to scan individual atoms and detect details of their electrons while measuring the electrons’ energy and spin distribution. The instrument is cooled to near absolute zero and decoupled from the floor and the ceiling to prevent even atom-sized vibrations.

    “We’re going down to 0.4 Kelvin. It’s colder than intergalactic space, which is 2.7 Kelvin,” said Hasan. “And not only that, the tube where the sample is — inside that tube we create a vacuum condition that’s more than a trillion times thinner than Earth’s upper atmosphere. It took about five years to achieve these finely tuned operating conditions of the multi-component instrument necessary for the current experiment,” he said.

    “All of us, when we do physics, we’re looking to find how exactly things are working,” said Zhang. “This discovery gives us more insight into that because it’s so unexpected.”

    By finding a new type of quantum organization, Zhang and her colleagues are making “a direct contribution to advancing the knowledge frontier — and in this case, without any theoretical prediction,” said Hasan. “Our experiments are advancing the knowledge frontier.”

    The team included numerous researchers from Princeton’s Department of Physics, including present and past graduate students Songtian Sonia Zhang, Ilya Belopolski, Tyler Cochran and Suyang Xu; and present and past postdoctoral research associates Jia-Xin Yin, Guoqing Chang, Hao Zheng, Guang Bian and Biao Lian. Other co-authors were Hang Li, Kun Jiang, Bingjing Zhang, Cheng Xiang, Kai Liu, Tay-Rong Chang, Hsin Lin, Zhongyi Lu, Ziqiang Wang, Shuang Jia and Wenhong Wang.

    See the full article here.




    About Princeton: Overview

    Princeton University is a vibrant community of scholarship and learning that stands in the nation’s service and in the service of all nations. Chartered in 1746, Princeton is the fourth-oldest college in the United States. Princeton is an independent, coeducational, nondenominational institution that provides undergraduate and graduate instruction in the humanities, social sciences, natural sciences and engineering.

    As a world-renowned research university, Princeton seeks to achieve the highest levels of distinction in the discovery and transmission of knowledge and understanding. At the same time, Princeton is distinctive among research universities in its commitment to undergraduate teaching.

    Today, more than 1,100 faculty members instruct approximately 5,200 undergraduate students and 2,600 graduate students. The University’s generous financial aid program ensures that talented students from all economic backgrounds can afford a Princeton education.


  • richardmitnick 7:35 am on September 7, 2018 Permalink | Reply
    Tags: Fish-eye lens may entangle pairs of atoms, Quantum Mechanics

    From MIT News: “Fish-eye lens may entangle pairs of atoms” 


    September 5, 2018
    Jennifer Chu

    James Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges. No image credit.

    Scientists find a theoretical optical device may have uses in quantum computing.

    Nearly 150 years ago, the physicist James Maxwell proposed that a circular lens that is thickest at its center, and that gradually thins out at its edges, should exhibit some fascinating optical behavior. Namely, when light is shone through such a lens, it should travel around in perfect circles, creating highly unusual, curved paths of light.

    He also noted that such a lens, at least broadly speaking, resembles the eye of a fish. The lens configuration he devised has since been known in physics as Maxwell’s fish-eye lens — a theoretical construct that is only slightly similar to commercially available fish-eye lenses for cameras and telescopes.

    Now scientists at MIT and Harvard University have for the first time studied this unique, theoretical lens from a quantum mechanical perspective, to see how individual atoms and photons may behave within the lens. In a study published Wednesday in Physical Review A, they report that the unique configuration of the fish-eye lens enables it to guide single photons through the lens, in such a way as to entangle pairs of atoms, even over relatively long distances.

    Entanglement is a quantum phenomenon in which the properties of one particle are linked, or correlated, with those of another particle, even over vast distances. The team’s findings suggest that fish-eye lenses may be a promising vehicle for entangling atoms and other quantum bits, which are the necessary building blocks for designing quantum computers.

    “We found that the fish-eye lens has something that no other two-dimensional device has, which is maintaining this entangling ability over large distances, not just for two atoms, but for multiple pairs of distant atoms,” says first author Janos Perczel, a graduate student in MIT’s Department of Physics. “Entanglement and connecting these various quantum bits can be really the name of the game in making a push forward and trying to find applications of quantum mechanics.”

    The team also found that the fish-eye lens, contrary to recent claims, does not produce a perfect image. Scientists have thought that Maxwell’s fish-eye may be a candidate for a “perfect lens” — a lens that can go beyond the diffraction limit, meaning that it can focus light to a point that is smaller than the light’s own wavelength. This perfect imaging, scientists predict, should produce an image with essentially unlimited resolution and extreme clarity.

    However, by modeling the behavior of photons through a simulated fish-eye lens, at the quantum level, Perczel and his colleagues concluded that it cannot produce a perfect image, as originally predicted.

    “This tells you that there are these limits in physics that are really difficult to break,” Perczel says. “Even in this system, which seemed to be a perfect candidate, this limit seems to be obeyed. Perhaps perfect imaging may still be possible with the fish eye in some other, more complicated way, but not as originally proposed.”

    Perczel’s co-authors on the paper are Peter Komar and Mikhail Lukin from Harvard University.

    A circular path

    Maxwell was the first to realize that light is able to travel in perfect circles within the fish-eye lens because the density of the lens changes, with material being thickest at the middle and gradually thinning out toward the edges. The denser a material, the slower light moves through it. This explains the optical effect when a straw is placed in a glass half full of water. Because the water is so much denser than the air above it, light suddenly moves more slowly, bending as it travels through water and creating an image that looks as if the straw is disjointed.

    In the theoretical fish-eye lens, the differences in density are much more gradual and are distributed in a circular pattern, in such a way that the lens curves light rather than bending it, guiding light in perfect circles within the lens.
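    This density gradient corresponds to the textbook refractive-index profile for Maxwell’s fish-eye, n(r) = n0 / (1 + (r/R)^2): the index is highest (light is slowest) at the center and falls toward the rim. A minimal sketch, with illustrative values for n0 and R rather than figures from the paper:

```python
# Maxwell fish-eye refractive-index profile: densest (highest index,
# slowest light) at the center, thinning toward the edge. This smooth
# gradient is what bends rays into closed circles.

def fisheye_index(r, n0=2.0, radius=1.0):
    """Refractive index at distance r from the lens center."""
    return n0 / (1.0 + (r / radius) ** 2)

for r in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"r = {r:.2f}  n = {fisheye_index(r):.3f}")
```

With n0 = 2 the index falls smoothly from 2.0 at the center to 1.0 at the rim (r = R).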

    In 2009, Ulf Leonhardt, a physicist at the Weizmann Institute of Science in Israel, was studying the optical properties of Maxwell’s fish-eye lens and observed that, when photons are released through the lens from a single point source, the light travels in perfect circles through the lens and collects at a single point at the opposite end, with very little loss of light.

    “None of the light rays wander off in unwanted directions,” Perczel says. “Everything follows a perfect trajectory, and all the light will meet at the same time at the same spot.”

    Leonhardt, in reporting his results, made a brief mention as to whether the fish-eye lens’ single-point focus might be useful in precisely entangling pairs of atoms at opposite ends of the lens.

    “Mikhail [Lukin] asked him whether he had worked out the answer, and he said he hadn’t,” Perczel says. “That’s how we started this project and started digging deeper into how well this entangling operation works within the fish-eye lens.”

    Playing photon ping-pong

    To investigate the quantum potential of the fish-eye lens, the researchers modeled the lens as the simplest possible system, consisting of two atoms, one at either end of a two-dimensional fish-eye lens, and a single photon, aimed at the first atom. Using established equations of quantum mechanics, the team tracked the photon at any given point in time as it traveled through the lens, and calculated the state of both atoms and their energy levels through time.

    They found that when a single photon is shone through the lens, it is temporarily absorbed by an atom at one end of the lens. It then circles through the lens, to the second atom at the precise opposite end of the lens. This second atom momentarily absorbs the photon before sending it back through the lens, where the light collects precisely back on the first atom.

    “The photon is bounced back and forth, and the atoms are basically playing ping pong,” Perczel says. “Initially only one of the atoms has the photon, and then the other one. But between these two extremes, there’s a point where both of them kind of have it. It’s this mind-blowing quantum mechanics idea of entanglement, where the photon is completely shared equally between the two atoms.”
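    The ping-pong picture can be caricatured with a toy model (an illustrative assumption on our part, not the authors' full lens-plus-atoms simulation): restrict attention to the single-excitation subspace spanned by |eg⟩ ("atom 1 has the photon") and |ge⟩ ("atom 2 has it"), and let an effective coupling `g` swap the excitation between the atoms. Halfway through a swap, the photon is shared equally, which is exactly the maximally entangled midpoint Perczel describes.

```python
import numpy as np

g = 1.0  # effective photon-mediated coupling (arbitrary units, assumed)

def evolve(t):
    """State in the {|eg>, |ge>} basis after time t under H = g(|eg><ge| + h.c.).
    On this subspace H^2 = g^2 * I, so exp(-iHt) = cos(gt) I - i sin(gt) X."""
    return np.array([np.cos(g * t), -1j * np.sin(g * t)])

half_swap = evolve(np.pi / (4 * g))   # quarter period: photon shared equally
probs = np.abs(half_swap) ** 2        # 50/50 between the two atoms
full_swap = evolve(np.pi / (2 * g))   # excitation fully handed to atom 2
```

    At the quarter-period mark neither atom "has" the photon outright, so the two-atom state cannot be written as a product of single-atom states: that is the entanglement the fish-eye geometry makes possible over long distances.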

    Perczel says that the photon is able to entangle the atoms because of the unique geometry of the fish-eye lens. The lens’ density is distributed in such a way that it guides light in a perfectly circular pattern and can cause even a single photon to bounce back and forth between two precise points along a circular path.

    “If the photon just flew away in all directions, there wouldn’t be any entanglement,” Perczel says. “But the fish-eye gives this total control over the light rays, so you have an entangled system over long distances, which is a precious quantum system that you can use.”

    As they increased the size of the fish-eye lens in their model, the atoms remained entangled, even over relatively large distances of tens of microns. They also observed that, even if some light escaped the lens, the atoms were able to share enough of a photon’s energy to remain entangled. Finally, as they placed more pairs of atoms in the lens, opposite to one another, along with corresponding photons, these atoms also became simultaneously entangled.

    “You can use the fish eye to entangle multiple pairs of atoms at a time, which is what makes it useful and promising,” Perczel says.

    Fishy secrets

    In modeling the behavior of photons and atoms in the fish-eye lens, the researchers also found that, as light collected on the opposite end of the lens, it did so within an area that was larger than the wavelength of the photon’s light, meaning that the lens likely cannot produce a perfect image.

    “We can precisely ask the question during this photon exchange, what’s the size of the spot to which the photon gets recollected? And we found that it’s comparable to the wavelength of the photon, and not smaller,” Perczel says. “Perfect imaging would imply it would focus on an infinitely sharp spot. However, that is not what our quantum mechanical calculations showed us.”

    Going forward, the team hopes to work with experimentalists to test the quantum behaviors they observed in their modeling. In fact, in their paper, the team also briefly proposes a way to design a fish-eye lens for quantum entanglement experiments.

    “The fish-eye lens still has its secrets, and remarkable physics buried in it,” Perczel says. “But now it’s making an appearance in quantum technologies where it turns out this lens could be really useful for entangling distant quantum bits, which is the basic building block for building any useful quantum computer or quantum information processing device.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

  • richardmitnick 12:27 pm on September 6, 2018 Permalink | Reply
    Tags: , For The First Time, , Quantum gates, Quantum Mechanics, , , Scientists Have Teleported And Measured a Quantum Gate in Real Time, Teleporting a special quantum operation between two locations,   

    From Yale University via Science Alert: “For The First Time, Scientists Have Teleported And Measured a Quantum Gate in Real Time” 

    Yale University bloc

    From Yale University


    Science Alert

    6 SEP 2018


    Welcome to the future.

    Around 20 years ago, two computer scientists proposed a technique for teleporting a special quantum operation between two locations with the goal of making quantum computers more reliable.

    Now a team of researchers from Yale University has successfully turned that idea into reality, demonstrating a practical approach to making this incredibly delicate form of technology scalable.

    These physicists have developed a practical method for teleporting a quantum operation – or gate – across a distance and measuring its effect. While this feat has been done before, it’s never been done in real time. This paves the way for developing a process that can make quantum computing modular, and therefore more reliable.

    Unlike regular computers, which perform their calculations with states of reality called bits (on or off, 1 or 0), quantum computers operate with qubits – a strange state of reality we can’t wrap our heads around, but which taps into some incredibly useful mathematics.

    In classical computers, bits interact with operations called logic gates. Like the world’s smallest gladiatorial arena, two bits enter, one bit leaves. Gates come in different forms, selecting a winner depending on their particular rule.

    These bits, channelled through gates, form the basis of just about any calculation you can think of, as far as classical computers are concerned.

    But qubits offer an alternative unit to base algorithms on. More than just a 1 or a 0, they also provide a special blend of the two states. It’s like a coin held in a hand before you see whether it’s heads or tails.

    In conjunction with a quantum version of a logic gate, qubits can do what classical bits can’t. There’s just one problem – that indeterminate state of 1 and 0 turns into a definite 1 or 0 when it becomes part of a measured system.
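    The coin analogy can be made concrete with a few lines of linear algebra (a generic illustration, not tied to the Yale hardware): a qubit in an equal superposition yields 50/50 measurement statistics, and the act of measuring forces it into a definite 0 or 1.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Equal superposition: the coin held in a hand before you look.
psi = (ket0 + ket1) / np.sqrt(2)
probs = np.abs(psi) ** 2            # 50/50 chance of measuring 0 or 1

# Measurement collapses the superposition to a definite basis state.
rng = np.random.default_rng(0)
outcome = rng.choice([0, 1], p=probs)
collapsed = ket0 if outcome == 0 else ket1
```

    Once `collapsed` replaces `psi`, the special blend of 1 and 0 is gone for good, which is why shielding qubits from stray interactions matters so much.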

    Worse still, it doesn’t take much to collapse the qubit’s maybe into a definitely, which means a quantum computer can become an expensive paperweight if those delicate components aren’t adequately hidden from their noisy environment.

    Right now, quantum computer engineers are super excited by devices that can wrangle just over 70 qubits – which is impressive, but quantum computers will really only earn their keep as they stock up on hundreds, if not thousands of qubits all hovering on the brink of reality at the same time.

    To make this kind of scaling a reality, scientists need additional tricks. One option would be to make the technology as modular as possible, networking smaller quantum systems into a bigger one in order to offset errors.

    But for that to work, quantum gates – those special operations that deal with the heavy lifting of qubits – also need to be shared.

    Teleporting information, such as a quantum gate, sounds pretty sci-fi. But we’re obviously not talking about Star Trek transport systems here.

    In reality, it simply refers to the fact that objects can have their histories entangled, so that when one is measured the other immediately collapses into a related state, no matter how far away it is.

    This has technically been demonstrated experimentally already [Physical Review Letters], but, until now, the process hasn’t been reliably performed and measured in real time, which is crucial if it’s to become part of a practical computer.

    “Our work is the first time that this protocol has been demonstrated where the classical communication occurs in real-time, allowing us to implement a ‘deterministic’ operation that performs the desired operation every time,” says lead author Kevin Chou.

    The researchers used qubits in sapphire chips inside a cutting-edge setup to teleport a type of quantum operation called a controlled-NOT gate. Importantly, with error-correctable coding applied, the process worked 79 percent of the time.
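    For readers unfamiliar with the gate itself: a controlled-NOT flips the target qubit only when the control qubit is 1, and applied to a superposed control it produces exactly the kind of entangled state discussed above. A minimal matrix sketch (illustrative only, and unrelated to the actual teleportation protocol):

```python
import numpy as np

# CNOT in the computational basis |00>, |01>, |10>, |11>:
# flip the target (second) qubit iff the control (first) qubit is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # control in superposition
zero = np.array([1.0, 0.0])                # target in |0>
state_in = np.kron(plus, zero)

# CNOT entangles them into the Bell state (|00> + |11>) / sqrt(2).
bell = CNOT @ state_in
```

    It is this entangling power that makes the CNOT a natural benchmark for teleporting gates between modules.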

    “It is a milestone toward quantum information processing using error-correctable qubits,” says principal investigator Robert Schoelkopf.

    It’s a baby step on the road to making quantum modules, but this proof-of-concept shows modules could still be the way to go in growing quantum computers to the scale we need.

    This research was published in Nature.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Yale University Campus

    Yale University comprises three major academic components: Yale College (the undergraduate program), the Graduate School of Arts and Sciences, and the professional schools. In addition, Yale encompasses a wide array of centers and programs, libraries, museums, and administrative support offices. Approximately 11,250 students attend Yale.

  • richardmitnick 12:26 am on August 29, 2018 Permalink | Reply
    Tags: A Programmable Quantum Chip, , Packs in more than 200 separate photonic components, , Quantum Mechanics, Two-qubit quantum processor, , via Silicon Photonics   

    From U Bristol via Optics & Photonics: “A Programmable Quantum Chip, via Silicon Photonics” 

    From University of Bristol


    From Optics & Photonics

    28 August 2018
    Stewart Wills

    [Image: Xiaogang Qiang/University of Bristol]

    A team led by researchers at the University of Bristol, U.K., has demonstrated a silicon-photonics-based chip that reportedly implements a fully programmable, two-qubit quantum processor [Nature Photonics]. The researchers were able to use the chip, which packs in more than 200 separate photonic components, to program and run 98 different two-qubit operations, as well as an optimization algorithm and a quantum simulation.

    The system—which relies on a different model of quantum information processing (QIP) than the conventional “quantum circuit” model—isn’t ready to scale to a full-fledged universal quantum computer. But the Bristol-led group suggests that its proof-of-concept device concretely demonstrates the potential of silicon as a platform for “full-scale universal quantum technologies using light.”

    Problematic entanglements

    From one perspective, silicon photonics seems the ideal QIP vehicle. It’s compatible with long-standing CMOS manufacturing processes honed in the classical computing business, for example, and features an increasing array of compact, reconfigurable optical components on which to draw for quantum operations. One challenge, though, lies in combining all of the elements necessary for QIP—generating photons, encoding quantum information on them, manipulating them and reading out their quantum state—on a single, programmable device.

    Even more fundamental than these engineering challenges is the issue of using photons for operations that include quantum entanglement, a fundamental requirement for QIP. That’s because, in the conventional circuit model of quantum computing, each arbitrary two-qubit operation requires the equivalent of three consecutive entangling logic gates for control of the quantum system. That’s something that’s been too complex to implement practically using free-space optics or combinations of free-space and integrated photonic systems.

    Changing the model

    The research team, which included not only Bristol scientists but researchers in China and Australia, attacked the problem by focusing on a different QIP model. Rather than one that implements quantum processing as a multiplication of quantum logic gates in series, as under the conventional circuit model, the group used a “linear combination of quantum operators” scheme.

    In this approach, an arbitrary two-qubit unitary operation is reframed as a linear combination, or weighted sum, of four easier-to-implement unitaries. Previous work at Bristol and elsewhere had suggested that the linear-combination approach could simplify control of quantum operations in a programmable QIP framework [Nature Communications]. Even under that ostensibly simpler model, however, putting arbitrarily programmable QIP into practice calls for a formidably complex array of optical components. To achieve the required complexity, the research team turned to silicon photonics, and set their sights on a single photonic chip that could handle the quantum functions end to end.
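    To give a concrete flavour of the linear-combination idea (an illustrative identity of our choosing, not the team's actual decomposition): even a CNOT can be written as a weighted sum of four tensor products of Pauli operators, each of which is far easier to realise on its own than a controlled gate.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# CNOT = 1/2 (I(x)I + Z(x)I + I(x)X - Z(x)X):
# a linear combination of four easy-to-implement unitaries.
lcu = 0.5 * (np.kron(I, I) + np.kron(Z, I) + np.kron(I, X) - np.kron(Z, X))

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
```

    The catch, as the article notes later, is that physically realising such a sum is probabilistic, with success probability shrinking as the number of terms grows.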

    The monolithically integrated, silicon-based device that the researchers fabricated to do the job includes four spontaneous four-wave-mixing photon-pair sources, four pump rejection filters, 58 thermo-optical phase shifters, 82 multimode interferometer beamsplitters, 18 waveguide crossers and 40 optical-grating couplers. All of these elements are crowded onto a device with an effective footprint of less than 14 mm^2.

    Nearly 100,000 reprogrammed settings

    This small platform packs in sufficient complexity, according to the researchers, to allow them to perform end-to-end reprogrammable two-qubit operations—generating two photons; turning the photons into qubits by encoding quantum information on them; performing arbitrary unitary operations on those qubits, including entanglement; and reading out the resulting quantum state via quantum tomography. (The light source for the experiments was an externally connected, tunable laser, tied to the chip via a fiber array.)

    The Bristol-led team found that it could program the device to implement 98 different unitary quantum operations, with an average quantum process fidelity of 93.2 ± 4.5 percent. The researchers also programmed the chip to realize a previously developed quantum optimization algorithm, and to simulate a specific kind of “quantum walk,” the quantum analogue to a classical mathematical random walk. Together, the experiments involved some 98,480 different reprogrammed settings.

    Taking on larger tasks

    The team is careful to stress that the chip—or, more precisely, the linear-combination protocol it’s based on—can’t, in its present form, scale to universal quantum computing. That’s because the protocol’s probability of success is inversely proportional to the number of terms, so that “it would achieve exponentially small success probability for a universal quantum computer.” Still, the researchers believe that, through certain optimization and scaling efforts, the platform could expand to handle families of large-scale QIP tasks “with considerable success probability.”

    The researchers also see the system as an important step toward proving the mettle of silicon photonics in the drive for a universal quantum computer. “What we’ve demonstrated is a programmable machine that can do lots of different tasks,” the paper’s lead author, Xiaogang Qiang—previously a University of Bristol Ph.D. student, and now a researcher at the National University of Defence Technology, China—said in a press release accompanying the work.

    “It’s a very primitive processor, because it only works on two qubits, which means there is still a long way to go before we can do useful computations with this technology,” Qiang continued. “But what is exciting is that the different properties of silicon photonics that can be used for making a quantum computer have been combined in one device. This is just too complicated to physically implement with light using previous approaches.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Optics & Photonics News (OPN) is The Optical Society’s monthly news magazine. It provides in-depth coverage of recent developments in the field of optics and offers busy professionals the tools they need to succeed in the optics industry, as well as informative pieces on a variety of topics such as science and society, education, technology and business. OPN strives to make the various facets of this diverse field accessible to researchers, engineers, businesspeople and students. Contributors include scientists and journalists who specialize in the field of optics. We welcome your submissions.

    Bristol is one of the most popular and successful universities in the UK and was ranked within the top 50 universities in the world in the QS World University Rankings 2018.

    The University of Bristol is at the cutting edge of global research. We have made innovations in areas ranging from cot death prevention to nanotechnology.

    The University has had a reputation for innovation since its founding in 1876. Our research tackles some of the world’s most pressing issues in areas as diverse as infection and immunity, human rights, climate change, and cryptography and information security.

    The University currently has 40 Fellows of the Royal Society and 15 of the British Academy – a remarkable achievement for a relatively small institution.

    We aim to bring together the best minds in individual fields, and encourage researchers from different disciplines and institutions to work together to find lasting solutions to society’s pressing problems.

    We are involved in numerous international research collaborations and integrate practical experience in our curriculum, so that students work on real-life projects in partnership with business, government and community sectors.
