Tagged: Quantum Mechanics

  • richardmitnick 6:52 am on July 22, 2019 Permalink | Reply
    Tags: QMCPACK, Quantum Mechanics

    From insideHPC: “Supercomputing Complex Materials with QMCPACK” 

    From insideHPC

    July 21, 2019

    In this special guest feature, Scott Gibson from the Exascale Computing Project writes that computer simulations based on quantum mechanics are getting a boost through QMCPACK.


    The theory of quantum mechanics underlies explorations of the behavior of matter and energy in the atomic and subatomic realms. Computer simulations based on quantum mechanics are consequently essential in designing, optimizing, and understanding the properties of materials that have, for example, unusual magnetic or electrical properties. Such materials would have potential for use in highly energy-efficient electrical systems and faster, more capable electronic devices that could vastly improve our quality of life.

    Quantum mechanics-based simulation methods render robust data by describing materials in a truly first-principles manner. This means they calculate electronic structure in the most basic terms and thus allow speculative study of material systems without reference to experiment, unless researchers choose to add parameters. The quantum Monte Carlo (QMC) family of these approaches is capable of delivering the most highly accurate calculations of complex materials without biasing the results toward a particular property of interest.

    An effort within the US Department of Energy’s Exascale Computing Project (ECP) is developing QMC software named QMCPACK to find, predict, and control materials and their properties at the quantum level. The ultimate aim is to achieve unprecedented and systematically improvable accuracy by leveraging the memory and power capabilities of the forthcoming exascale computing systems.

    Greater Accuracy, Versatility, and Performance

    One of the primary objectives of the QMCPACK project is to reduce errors in calculations so that predictions concerning complex materials can be made with greater assurance.

    “We would like to be able to tell our colleagues in experimentation that we have confidence that a certain short list of materials is going to have all the properties that we think they will,” said Paul Kent of Oak Ridge National Laboratory and principal investigator of QMCPACK. “Many ways of cross-checking calculations with experimental data exist today, but we’d like to go further and make predictions where there aren’t experiments yet, such as a new material or where taking a measurement is difficult—for example, in conditions of high pressure or under an intense magnetic field.”

    The methods the QMCPACK team is developing are fully atomistic and material specific. This refers to having the capability to address all of the atoms in the material—whether it be silver, carbon, cerium, or oxygen, for example—compared with more simplified lattice model calculations where the full details of the atoms are not included.

    The team’s current activities are restricted to simpler, bulk-like materials, but exascale computing is expected to greatly widen the range of possibilities.

    “At exascale not only the increase in compute power but also important changes in the memory on the machines will enable us to explore material defects and interfaces, more-complex materials, and many different elements,” Kent said.

    With the software engineering, design, and computational aspects of delivering the science as the main focus, the project plans to improve QMCPACK’s performance by at least 50x. Based on experimentation using a mini-app version of the software, and incorporating new algorithms, the team achieved a 37x improvement on the pre-exascale Summit supercomputer versus the Titan system.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    ORNL Cray XK7 Titan Supercomputer, once the fastest in the world, to be decommissioned

    One Robust Code

    “We’re taking the lessons we’ve learned from developing the mini app and this proof of concept, the 37x, to update the design of the main application to support this high efficiency, high performance for a range of problem sizes,” Kent said. “What’s crucial for us is that we can move to a single version of the code with no internal forks, to have one source supporting all architectures. We will use all the lessons we’ve learned with experimentation to create one version where everything will work everywhere—then it’s just a matter of how fast. Moreover, in the future we will be able to optimize. But at least we won’t have a gap in the feature matrix, and the student who is running QMCPACK will always have all features work.”

    As an open-source and openly developed product, QMCPACK is improving via the help of many contributors. The QMCPACK team recently published the master citation paper for the software’s code; the publication has 48 authors with a variety of affiliations.

    “Developing these large science codes is an enormous effort,” Kent said. “QMCPACK has contributors from ECP researchers, but it also has many past developers. For example, a great deal of development was done for the Knights Landing processor on the Theta supercomputer with Intel. This doubled the performance on all CPU-like architectures.”

    ANL ALCF Theta Cray XC40 supercomputer

    A Synergistic Team

    The QMCPACK project’s collaborative team draws talent from Argonne, Lawrence Livermore, Oak Ridge, and Sandia National Laboratories.

    It also benefits from collaborations with Intel and NVIDIA.


    The composition of the staff is nearly equally divided between scientific domain specialists and people centered on the software engineering and computer science aspects.

    “Bringing all of this expertise together through ECP is what has allowed us to perform the design study, reach the 37x, and improve the architecture,” Kent said. “All the materials we work with have to be doped, which means incorporating additional elements in them. We can’t run those simulations on Titan but are beginning to do so on Summit with improvements we have made as part of our ECP project. We are really looking forward to the opportunities that will open up when the exascale systems are available.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

  • richardmitnick 9:14 am on July 13, 2019 Permalink | Reply
    Tags: Quantum Mechanics, University of Glasgow

    From University of Glasgow via Science Alert: “Scientists Just Unveiled The First-Ever Photo of Quantum Entanglement” 


    From University of Glasgow



    Science Alert

    13 JUL 2019

    (University of Glasgow)

    In an incredible first, scientists have captured the world’s first actual photo of quantum entanglement – a phenomenon so strange Einstein famously described it as ‘spooky action at a distance’.

    The image was captured by physicists at the University of Glasgow in Scotland, and it’s so breathtaking we can’t stop staring.

    It might not look like much, but just stop and think about it for a second: this fuzzy grey image is the first time we’ve seen the particle interaction that underpins the strange science of quantum mechanics and forms the basis of quantum computing.

    Quantum entanglement occurs when two particles become inextricably linked, and whatever happens to one immediately affects the other, regardless of how far apart they are. Hence the ‘spooky action at a distance’ description.

    This particular photo shows entanglement between two photons – two particles of light. They’re interacting and for a brief moment sharing physical states.

    Paul-Antoine Moreau, first author on the paper where the image was unveiled, told the BBC the image was “an elegant demonstration of a fundamental property of nature”.

    To capture the incredible photo, Moreau and a team of physicists created a system that blasted out streams of entangled photons at what they described as ‘non-conventional objects’.

    The experiment actually involved capturing four images of the photons under four different phase transitions. You can see the full image below:

    (Moreau et al., Science Advances, 2019)

    What you’re looking at here is actually a composite of multiple images of the photons as they go through a series of four phase transitions.

    Basically, the physicists split the entangled photons up and ran one beam through a liquid-crystal device, triggering four phase transitions. (The entangled photon pairs themselves were generated in a nonlinear crystal of β-barium borate.)

    At the same time, they captured photos of the other half of each entangled pair going through the same phase transitions, even though it hadn’t passed through the liquid crystal.

    You can see the setup below: the entangled beam of photons comes from the bottom left, and one half of each entangled pair splits off to the left and passes through the four phase filters. The others go straight ahead and didn’t pass through the filters, but underwent the same phase changes.

    (Moreau et al., Science Advances, 2019)

    The camera was able to capture images of these at the same time, showing that they’d both shifted the same way despite being split. In other words, they were entangled.

    While Einstein made quantum entanglement famous, the late physicist John Stewart Bell helped define it rigorously and established a test now known as a ‘Bell inequality’. Basically, if an experiment violates a Bell inequality, it confirms true quantum entanglement.
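    To make the Bell test concrete, here is a minimal numerical sketch (not the Glasgow team's analysis) of the CHSH form of a Bell inequality. For the maximally entangled two-photon singlet state, quantum mechanics predicts a CHSH correlation value of 2√2 ≈ 2.83, exceeding the classical bound of 2; the measurement angles below are the standard choices that maximize the violation.

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state |ψ−> = (|01> − |10>)/√2
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(angle):
    """Spin measurement operator along an axis in the x-z plane."""
    return np.cos(angle) * Z + np.sin(angle) * X

def correlation(a, b):
    """E(a, b) = <ψ| A(a) ⊗ B(b) |ψ> for the singlet state."""
    op = np.kron(spin(a), spin(b))
    return np.real(singlet.conj() @ op @ singlet)

# Standard CHSH angle choices that maximize the quantum value
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4
S = correlation(a, b) - correlation(a, b2) \
    + correlation(a2, b) + correlation(a2, b2)
print(abs(S))  # ≈ 2√2 ≈ 2.828, beyond the classical bound of 2
```

    For the singlet state the correlation works out to E(a, b) = −cos(a − b), which is why no single pair of angles, only the CHSH combination of four, reveals the violation.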

    “Here, we report an experiment demonstrating the violation of a Bell inequality within observed images,” the team write in Science Advances.

    “This result both opens the way to new quantum imaging schemes … and suggests promise for quantum information schemes based on spatial variables.”

    The research was published in Science Advances.

    See the full article here.




    The University of Glasgow (Scottish Gaelic: Oilthigh Ghlaschu, Latin: Universitas Glasguensis) is the fourth oldest university in the English-speaking world and one of Scotland’s four ancient universities. It was founded in 1451. Along with the University of Edinburgh, the University was part of the Scottish Enlightenment during the 18th century. It is currently a member of Universitas 21, the international network of research universities, and the Russell Group.

    In common with universities of the pre-modern era, Glasgow originally educated students primarily from wealthy backgrounds; however, it became a pioneer in British higher education in the 19th century by also providing for the needs of students from the growing urban and commercial middle class. Glasgow University served all of these students by preparing them for professions: the law, medicine, civil service, teaching, and the church. It also trained smaller but growing numbers for careers in science and engineering.

    Originally located in the city’s High Street, since 1870 the main University campus has been located at Gilmorehill in the West End of the city. Additionally, a number of university buildings are located elsewhere, such as the University Marine Biological Station Millport on the Island of Cumbrae in the Firth of Clyde and the Crichton Campus in Dumfries.

    Alumni or former staff of the University include philosopher Francis Hutcheson, engineer James Watt, philosopher and economist Adam Smith, physicist Lord Kelvin, surgeon Joseph Lister, 1st Baron Lister, seven Nobel laureates, and two British Prime Ministers.

  • richardmitnick 1:49 pm on July 10, 2019 Permalink | Reply
    Tags: Quantum Mechanics, Computational materials science, DFT (density functional theory), Atomic force microscopy, Kelvin probe force microscopy, Coupled cluster theory

    From Argonne Leadership Computing Facility: “Predicting material properties with quantum Monte Carlo” 

    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    July 9, 2019
    Nils Heinonen

    For one of their efforts, the team used diffusion Monte Carlo to compute how doping affects the energetics of nickel oxide. Their simulations revealed the spin density difference between bulks of potassium-doped nickel oxide and pure nickel oxide, showing the effects of substituting a potassium atom (center atom) for a nickel atom on the spin density of the bulk. Credit: Anouar Benali, Olle Heinonen, Joseph A. Insley, and Hyeondeok Shin, Argonne National Laboratory.

    Recent advances in quantum Monte Carlo (QMC) methods have the potential to revolutionize computational materials science, a discipline traditionally driven by density functional theory (DFT). While DFT—an approach that uses quantum-mechanical modeling to examine the electronic structure of complex systems—provides convenience to its practitioners and has unquestionably yielded a great many successes throughout the decades since its formulation, it is not without shortcomings, which have placed a ceiling on the possibilities of materials discovery. QMC is poised to break this ceiling.

    The key challenge is to solve the quantum many-body problem accurately and reliably enough for a given material. QMC solves this problem via stochastic sampling—that is, by using random numbers to sample all possible solutions. The use of stochastic methods allows the full many-body problem to be treated while circumventing large approximations. Compared to traditional methods, they offer extraordinary potential accuracy, strong suitability for high-performance computing, and—with few known sources of systematic error—transparency. For example, QMC satisfies the variational principle, which guarantees that its computed energy is an upper bound on a given system’s ground-state energy (the lowest-energy, most stable state).
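    As an illustration of these ideas (a toy sketch, not QMCPACK itself), the following variational Monte Carlo calculation uses Metropolis sampling to estimate the ground-state energy of a 1D harmonic oscillator (units ħ = m = ω = 1) with the trial wavefunction ψ_α(x) = exp(−αx²). Every trial energy bounds the true ground-state energy (0.5 in these units) from above, and the bound is saturated at the exact parameter α = 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_energy(x, alpha):
    # E_L(x) = −ψ''/(2ψ) + x²/2 for the trial ψ_α(x) = exp(−α x²)
    return alpha + x**2 * (0.5 - 2.0 * alpha**2)

def vmc_energy(alpha, n_steps=200_000, step=1.0):
    """Metropolis-sample |ψ_α|² and average the local energy."""
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Acceptance probability min(1, |ψ(x_new)/ψ(x)|²)
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        energies.append(local_energy(x, alpha))
    return np.mean(energies)

# Variational principle in action: every trial energy lies at or above
# the exact ground-state energy 0.5; alpha = 0.5 is exact.
for alpha in (0.3, 0.5, 0.8):
    print(f"alpha={alpha}: E ≈ {vmc_energy(alpha):.4f}")
```

    Real QMC codes apply the same stochastic-sampling idea to many interacting electrons, where the wavefunction has no closed form and the sampling is what makes the problem tractable.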

    QMC’s accurate treatment of quantum mechanics is very computationally demanding, necessitating the use of leadership-class computational resources and thus limiting its application. Access to the computing systems at the Argonne Leadership Computing Facility (ALCF) and the Oak Ridge Leadership Computing Facility (OLCF)—U.S. Department of Energy (DOE) Office of Science User Facilities—has enabled a team of researchers led by Paul Kent of Oak Ridge National Laboratory (ORNL) to meet the steep demands posed by QMC. Supported by DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, the team’s goal is to simulate promising materials that elude DFT’s investigative and predictive powers.

    To conduct their work, the researchers employ QMCPACK, an open-source QMC code developed by the team. It is written specifically for high-performance computers and runs on all the DOE machines. It has been run at the ALCF since 2011.

    Functional materials

    The team’s efforts are focused on studies of materials combining transition metal elements with oxygen. Many of these transition metal oxides are functional materials that have striking and useful properties. Small perturbations in the make-up or structure of these materials can cause them to switch from metallic to insulating, and greatly change their magnetic properties and ability to host and transport other atoms. Such attributes make the materials useful for technological applications while posing fundamental scientific questions about how these properties arise.

    The computational challenge has been to simulate the materials with sufficient accuracy: the materials’ properties are sensitive to small changes due to complex quantum mechanical interactions, which make them very difficult to model.

    The computational performance and large memory of the ALCF’s Theta system have been particularly helpful to the team. Theta’s storage capacity has enabled studies of material changes caused by small perturbations such as additional elements or vacancies. Over three years the team developed a new technique to more efficiently store the quantum mechanical wavefunctions used by QMC, greatly increasing the range of materials that could be run on Theta.

    ANL ALCF Theta Cray XC40 supercomputer

    Experimental Validation

    Kent noted that experimental validation is a key component of the INCITE project. “The team is leveraging facilities located at Argonne and Oak Ridge National Laboratories to grow high-quality thin films of transition-metal oxides,” he said, including vanadium oxide (VO2) and variants of nickel oxide (NiO) that have been modified with other compounds.

    For VO2, the team combined atomic force microscopy, Kelvin probe force microscopy, and time-of-flight secondary ion mass spectroscopy on VO2 grown at ORNL’s Center for Nanophase Materials Science (CNMS) to demonstrate how oxygen vacancies suppress the transition from metallic to insulating VO2. A combination of QMC, dynamical mean field theory, and DFT modeling was deployed to identify the mechanism by which this phenomenon occurs: oxygen vacancies leave positively charged holes that are localized around the vacancy site and end up distorting the structure of certain vanadium orbitals.

    For NiO, the challenge was to understand how a small quantity of dopant atoms, in this case potassium, modifies the structure and optical properties. Molecular beam epitaxy at Argonne’s Materials Science Division was used to create high quality films that were then probed via techniques such as x-ray scattering and x-ray absorption spectroscopy at Argonne’s Advanced Photon Source (APS) [below] for direct comparison with computational results. These experimental results were subsequently compared against computational models employing QMC and DFT. The APS and CNMS are DOE Office of Science User Facilities.

    So far the team has been able to compute, understand, and experimentally validate how the band gap of materials containing a single transition metal element varies with composition. Band gaps determine a material’s usefulness as a semiconductor—a substance that can alternately conduct or cease the flow of electricity (which is important for building electronic sensors or devices). The next steps of the study will be to tackle more complex materials, with additional elements and more subtle magnetic properties. While more challenging, these materials could lead to greater discoveries.
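    As a back-of-the-envelope illustration of what computing a band gap involves (a toy tight-binding model, far simpler than the QMC calculations described here; the on-site energies and hopping amplitude are made-up parameters), one can diagonalize the Bloch Hamiltonian of a 1D diatomic chain at each crystal momentum and read off the gap between its two bands:

```python
import numpy as np

# Illustrative parameters (eV): two alternating on-site energies
# and a nearest-neighbor hopping amplitude.
eps_a, eps_b, t = 0.0, 2.0, 1.0

def bands(k):
    """Eigenvalues of the 2x2 Bloch Hamiltonian at crystal momentum k."""
    h = np.array([[eps_a, t * (1 + np.exp(-1j * k))],
                  [t * (1 + np.exp(1j * k)), eps_b]])
    return np.linalg.eigvalsh(h)  # sorted ascending

ks = np.linspace(-np.pi, np.pi, 1001)
lower, upper = np.array([bands(k) for k in ks]).T

# Band gap: bottom of the upper band minus top of the lower band.
gap = upper.min() - lower.max()
print(f"band gap ≈ {gap:.3f} eV")
```

    In this simple model the gap equals the on-site energy difference |eps_b − eps_a|, so "doping" the chain (changing the site energies) directly tunes the gap, a cartoon of the composition dependence the team validates against experiment.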

    New chemistry applications

    Many of the features that make QMC attractive for materials also make it attractive for chemistry applications. An outside colleague—quantum chemist Kieron Burke of the University of California, Irvine—provided the impetus for a paper published in Journal of Chemical Theory and Computation. Burke approached the team’s collaborators with a problem he had encountered while trying to formulate a new method for DFT. Moving forward with his attempt required benchmarks against which to test his method’s accuracy. As QMC was the only means by which sufficiently precise benchmarks could be obtained, the team produced a series of calculations for him.

    The reputed gold standard for many-body system numerical techniques in quantum chemistry is known as coupled cluster theory. While it is extremely accurate for many molecules, some are so strongly correlated quantum-mechanically that they can be thought of as existing in a superposition of quantum states. The conventional coupled cluster method cannot handle something so complicated. Co-principal investigator Anouar Benali, a computational scientist at the ALCF and Argonne’s Computational Sciences Division, spent some three years collaborating on efforts to expand QMC’s capability so as to include both low-cost and highly efficient support for these states that will in future also be needed for materials problems. Performing analysis on the system for which Burke needed benchmarks required this superposition support; he verified the results of his newly developed DFT approach against the calculations generated with Benali’s QMC expansion. They were in close agreement with each other, but not with the results conventional coupled cluster had generated—which, for one particular compound, contained significant errors.

    “This collaboration and its results have therefore identified a potential new area of research for the team and QMC,” Kent said. “That is, tackling challenging quantum chemical problems.”

    The research was supported by DOE’s Office of Science. ALCF and OLCF computing time and resources were allocated through the INCITE program.

    See the full article here.



    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science


  • richardmitnick 12:07 pm on July 7, 2019 Permalink | Reply
    Tags: "What Does It Mean That Quantum Gravity Has No Symmetry?", , , Quantum Mechanics   

    From Ethan Siegel: “What Does It Mean That Quantum Gravity Has No Symmetry?” 

    From Ethan Siegel
    July 6, 2019

    A diagram used to prove that quantum gravity cannot have any global symmetry. Such a symmetry, if it existed, could act only on the shaded regions in the diagram and would cause no change around the black spot in the middle. The shaded regions can be made as small as we like by dividing the boundary circle more and more; thus, the alleged symmetry would not act anywhere inside the circle. (DANIEL HARLOW AND HIROSI OOGURI, PRL, 122, 191601 (2019))

    The quest for a quantum theory of gravity is the holy grail of physics. Here’s why it’s murkier than anyone expected.

    If you want to fully describe how the Universe works at a fundamental level, you have to look at it in two different — and incompatible — ways. To describe the particles and their electromagnetic and nuclear interactions, you need to use the framework of quantum field theory (QFT), where quantum fields permeate the Universe and their excitations give rise to the particles we know of. To describe how every quantum of matter and energy moves through the Universe, we need the framework of General Relativity (GR), where matter and energy define how spacetime is curved, and curved spacetime tells matter and energy how to move.

    Yet these two theories are mutually incompatible; to make them work together, we’d need to develop a working theory of quantum gravity. Now a new paper, just published in Physical Review Letters, has Alex Knapp puzzled, leading him to ask:

    “What does it mean that quantum gravity doesn’t have symmetry?”

    It’s a fascinating find with big implications. Let’s find out what it means.

    Feynman diagrams (top) are based on point particles and their interactions. Converting them into their string theory analogues (bottom) gives rise to surfaces which can have non-trivial curvature. In string theory, all particles are simply different vibrating modes of an underlying, more fundamental structure: strings. But does a quantum theory of gravity, which string theory aspires to be, have symmetries, and by association, conservation laws? (PHYS. TODAY 68, 11, 38 (2015))

    When you hear the word “symmetry,” there are probably all sorts of images that pop into your mind. Some letters of the alphabet — like “A” or “T” — display a symmetry where if you drew a vertical line down their centers, the left sides and the right sides are symmetric. Other letters — like “B” or “E” — have a similar symmetry but in a different direction: horizontally, where the tops and bottoms are symmetric. Still others — such as “O” — have rotational symmetry, where no matter how many degrees you rotate it, its appearance is unchanged.

    These are some examples of symmetry that are easy to visualize, but they’re not exhaustive. Sure, some systems have no differences from their mirror reflections, known as a parity symmetry. Others demonstrate rotational symmetries, where it doesn’t matter what angle you view it from. But there are many others, all of vital importance.

    There are many letters of the alphabet that exhibit particular symmetries. Note that the capital letters shown here have one and only one line of symmetry; letters like “I” or “O” have more than one. (MATH-ONLY-MATH.COM)

    Some systems are the same for matter as they are for antimatter: they exhibit charge conjugation symmetry. Some systems obey the same laws if you evolve them forwards in time as they do if you evolve them backwards in time: time-reversal symmetry. Still others don’t depend on your physical location (translational symmetry) or on when you’re viewing your system (time-translational symmetry) or on which non-accelerating reference frame you occupy (Lorentz symmetry).

    Some physical systems have these symmetries; others don’t. Dropping a ball off of a cliff obeys time-reversal symmetry; cooking scrambled eggs does not. Flying through space with your engines turned off obeys Lorentz symmetry; accelerating, with your engines firing at full power, does not.
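    The ball-drop example can be checked numerically (a simple sketch with made-up parameters): integrate the motion forward in time, reverse the velocity, and integrate again. Without drag the trajectory retraces itself back to the start; adding drag, a dissipative, scrambled-eggs-like process, breaks the time-reversal symmetry.

```python
# Semi-implicit Euler integration of a ball under gravity,
# with optional linear drag. Parameters are illustrative.
g, dt, n = 9.81, 1e-3, 1000  # m/s², step size (s), steps (1 s total)

def simulate(x, v, drag):
    """Integrate position x and velocity v for n steps."""
    for _ in range(n):
        a = -g - drag * v   # drag opposes the motion in both directions
        v += a * dt
        x += v * dt
    return x, v

for drag in (0.0, 0.5):
    x1, v1 = simulate(100.0, 0.0, drag)  # run forward in time
    x2, v2 = simulate(x1, -v1, drag)     # reverse velocity, run again
    print(f"drag={drag}: returns to x = {x2:.4f} m (started at 100 m)")
</antml>```

    With drag = 0 the reversed run climbs back to (essentially) 100 m; with drag it falls visibly short, because energy lost to friction cannot be recovered by running the motion backwards.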

    The DEEP laser-sail concept relies on a large laser array striking and accelerating a relatively large-area, low-mass spacecraft. This has the potential to accelerate non-living objects to speeds approaching the speed of light, making an interstellar journey possible within a single human lifetime. The work done by the laser, applying a force as an object moves a certain distance, is an example of energy transfer from one form into another. An accelerating reference frame is an example of a non-inertial system; for these systems, the Lorentz symmetry does not strictly hold. (© 2016 UCSB EXPERIMENTAL COSMOLOGY GROUP)

    It isn’t just physical systems that can obey (or disobey) symmetries. Whenever you have an equation (or a quantitative theory in general), you can test them to see which symmetries they obey and which ones they don’t.

    Within various QFTs, for example, particles experiencing the electromagnetic force obey parity, charge conjugation, and time-reversal symmetries all independently of one another. Electromagnetism is the same for particles regardless of their direction of motion; the same for particles and antiparticles; the same forwards in time as backwards in time.

    Particles experiencing the weak nuclear force, on the other hand, violate parity, charge conjugation, and time-reversal individually. Left-handed muons decay differently from right-handed muons. Neutral kaons and neutral anti-kaons have different properties. And the decays of B-mesons have time-asymmetric transformation rates [Physical Review Letters]. But even the weak interactions obey the combination of all three symmetries: if you perform an experiment on a particle in motion that moves forward in time and an antiparticle with its motion reflected moving backwards in time, you get the same physical results.

    Changing particles for antiparticles and reflecting them in a mirror simultaneously represents CP symmetry. If the anti-mirror decays are different from the normal decays, CP is violated. Time reversal symmetry, known as T, is violated if CP is violated. The combined symmetries of C, P, and T, all together, must be conserved under our present laws of physics, with implications for the types of interactions that are and aren’t allowed. (E. SIEGEL / BEYOND THE GALAXY)

    Within GR, various spacetimes obey different sets of symmetry. The (Schwarzschild) spacetime describing a non-rotating black hole exhibits time-translation, mirror, and full rotational symmetries. The (Kerr) spacetime describing a rotating black hole exhibits time-translation symmetry, but only has rotational symmetries about one axis.

    The (Friedmann-Lemaitre-Robertson-Walker) spacetime describing the expanding Universe, on the other hand, has a slew of symmetries it does obey, but time-translation isn’t one of them: an expanding Universe is different from one moment in time to the next.

    If you had a static spacetime that weren’t changing, energy conservation would be guaranteed. But if the fabric of space changes as the objects you’re interested in move through them, there is no longer an energy conservation law under the laws of General Relativity. (DAVID CHAMPION, MAX PLANCK INSTITUTE FOR RADIO ASTRONOMY)

    In general, these symmetries are profoundly important to our understanding of the Universe, and have enormous additional implications for reality. You see, there’s a brilliant theorem at the intersection of physics and mathematics that states the following: every unique mathematical symmetry exhibited by a physical theory necessarily implies an associated conserved quantity. This theorem — known as Noether’s theorem after its discoverer, the incomparable mathematician Emmy Noether — is the root of why certain quantities are or aren’t conserved.

    A time-translation symmetry leads to the conservation of energy, which explains why energy is not conserved in an expanding Universe. Spatial translation symmetry leads to the conservation of momentum; rotational symmetry leads to the conservation of angular momentum. Even CPT conservation — where charge conjugation, parity, and time-reversal symmetry are all combined — is a consequence of Lorentz symmetry.
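    Noether's link between time-translation symmetry and energy conservation can be seen in a toy simulation (a sketch with illustrative parameters, not a cosmological model): a harmonic oscillator with a constant spring constant conserves energy, while one whose spring constant varies in time, loosely analogous to fields in an expanding Universe, does not.

```python
import numpy as np

def energy_drift(spring, t_max=20.0, dt=1e-3):
    """Leapfrog-integrate x'' = -k(t) x; return relative energy change."""
    x, v, t = 1.0, 0.0, 0.0
    e0 = 0.5 * v**2 + 0.5 * spring(t) * x**2
    for _ in range(int(t_max / dt)):
        v += 0.5 * dt * (-spring(t) * x)   # half kick
        x += dt * v                         # drift
        t += dt
        v += 0.5 * dt * (-spring(t) * x)   # half kick
    e1 = 0.5 * v**2 + 0.5 * spring(t) * x**2
    return abs(e1 - e0) / e0

# Constant k: the law is the same at every instant (time-translation
# symmetry), so energy is conserved up to tiny integration error.
static = energy_drift(lambda t: 1.0)
# Time-varying k: the symmetry is broken and energy is not conserved.
driven = energy_drift(lambda t: 1.0 + 0.5 * np.sin(2.0 * t))
print(f"relative energy drift: static {static:.2e}, driven {driven:.2e}")
```

    The modulation frequency chosen here (twice the oscillator's natural frequency) even pumps energy into the system, a parametric-resonance version of the bookkeeping failure that makes "energy conservation" subtle in an expanding spacetime.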

    Quantum gravity tries to combine Einstein’s General theory of Relativity with quantum mechanics. Quantum corrections to classical gravity are visualized as loop diagrams, as the one shown here in white. Whether space (or time) itself is discrete or continuous is not yet decided, as is the question of whether gravity is quantized at all, or particles, as we know them today, are fundamental or not. But if we hope for a fundamental theory of everything, it must include quantized fields. (SLAC NATIONAL ACCELERATOR LAB)

    Some symmetries are inherent to specific QFTs or to QFTs in general; some symmetries are inherent to specific solutions in GR or to GR in general. But these two descriptions of the Universe are both incomplete. There are many questions we can ask about reality that require us to understand what’s happening where gravity is important or where the curvature of spacetime is extremely strong (where we need GR), but also when distance scales are very small or where individual quantum effects are at play (where we need QFT).

    These include questions such as the following:

    What happens to the gravitational field of an electron when it passes through a double slit?
    What happens to the information of the particles that form a black hole, if the black hole’s eventual state is thermal radiation?
    And what is the behavior of a gravitational field/force at and around a singularity?

    To address them, GR and QFT individually are both insufficient. We need something more: an understanding of gravity at the quantum level.

We don’t have a working theory of quantum gravity, of course, or we’d be able to understand what symmetries it does (and doesn’t) exhibit. But even without a full theory, we have a tremendous clue: the holographic principle. Just as a two-dimensional hologram encodes three-dimensional information on its surface, the holographic principle allows physicists to relate what happens in a spacetime with N dimensions to a conformal field theory with N-1 dimensions: the AdS/CFT correspondence.

The AdS stands for anti-de Sitter space, which is frequently used to describe quantum gravity in the context of string theory, while the CFT stands for conformal field theory, such as the QFTs we use to describe three of the four fundamental interactions. While no one is certain whether this is applicable to our Universe, there are many good reasons to think it is.

    In the Standard Model, the neutron’s electric dipole moment is predicted to be a factor of ten billion larger than our observational limits show. The only explanation is that somehow, something beyond the Standard Model is protecting this CP symmetry in the strong interactions. We can demonstrate a lot of things in science, but proving that CP is conserved in the strong interactions can never be done. Which is too bad; we need more CP-violation to explain the matter-antimatter asymmetry present in our Universe. There can be no global symmetries if the AdS/CFT correspondence is correct. (PUBLIC DOMAIN WORK FROM ANDREAS KNECHT)

The new result, which is very far-reaching in its implications, is this: within the framework of AdS/CFT, there are no global symmetries. The paper itself, published on May 17, 2019 [above], is titled Constraints on Symmetries from Holography and was written by Daniel Harlow and Hirosi Ooguri. In particular, it showed that, again within the context of AdS/CFT, the following three conjectures are true.

    Quantum gravity does not allow global symmetries of any type.
    Quantum gravity requires that any internal gauge symmetry (which implies conservation laws like electric charge, color charge, or weak isospin) is mathematically compact.
    Quantum gravity requires that any internal gauge symmetry necessarily comes along with dynamical objects that transform in all irreducible representations.

Each of these deserves elaboration, but the first one is the most powerful and profound.

Different frames of reference, including different positions and motions, would see different laws of physics (and would disagree on reality) if a theory is not relativistically invariant. The fact that we have a symmetry under spatial translations tells us we have a conserved quantity: linear momentum. This is much more difficult to comprehend when momentum isn’t simply a quantity associated with a particle, but is rather a quantum mechanical operator. This symmetry, if the holographic principle is correct, cannot exist globally. (WIKIMEDIA COMMONS USER KREA)

    All three of these conjectures have been around for a long time, and none of them are strictly true in either QFT or GR (or any form of classical physics) on their own. The classic arguments for all of them, in fact, are rooted in black hole physics and are known to require certain assumptions that, if violated, admit various loopholes. But if the AdS/CFT correspondence is true, and the holographic principle is applicable to quantum gravity in our Universe, all three of these conjectures are valid.

The first one means that there are no conservation laws that always necessarily hold. There might be good approximate conservation laws that are still valid, but nothing — not energy, not angular momentum, not linear momentum — is explicitly or strictly conserved under all conditions. Even CPT and Lorentz invariance can be violated. The other two are more subtle, but help extend global symmetries to local conditions: they help prevent things like the instantaneous teleportation of electric charge from one location to another, disconnected location, and they require the existence of all possible charges allowed by the theory, such as magnetic monopoles.

In 1982, an experiment running under the leadership of Blas Cabrera, one with eight turns of wire, detected a flux change of eight flux quanta: indications of a magnetic monopole. Unfortunately, no one was present at the time of detection, and no one has ever reproduced this result or found a second monopole. Still, if string theory and this new result are correct, magnetic monopoles, being not forbidden by any law, must exist at some level. (CABRERA B. (1982). FIRST RESULTS FROM A SUPERCONDUCTIVE DETECTOR FOR MOVING MAGNETIC MONOPOLES, PHYSICAL REVIEW LETTERS, 48 (20) 1378–1381)

    The three quantum gravity conjectures that are demonstrated to hold for a holographic Universe have been around, in some form, since 1957 [Annals of Physics], but they were only conjectures until now [Physical Review D]. If the holographic principle (and AdS/CFT, and possibly string theory, by extension) is correct, all of these conjectures are necessarily true. There are no global symmetries; nothing in the Universe is always conserved under all imaginable circumstances (even if you need to reach the Planck scale to see violations), and all non-forbidden charges must exist. It would be revolutionary for our understanding of the quantum Universe.

    Despite the results and implications of this study, it’s still limited. We don’t know whether the holographic principle is true or not, or whether these assumptions about quantum gravity are correct. If it’s right, however, it means that once you include gravity, many of the symmetries that we hold so dear in the physics we know today are not global and fundamental. Paradoxically, if string theory is right, our expectations about hidden symmetries revealing themselves at a more fundamental level are not only wrong, but nature has no global symmetries at all.

    Update: First author of the paper, Daniel Harlow, has reached out to clarify a point that was not sufficiently appreciated by the author. He relates the following:

    “I wanted to point out that there is one technical problem in your description… our theorem does not apply to any of the symmetries you mention here! And indeed in AdS/CFT they all can be unbroken. The reason is that they are all actually gauge symmetries, not global symmetries. For electric charge I guess you are familiar with that, but in gravitational theory such as general relativity then translations, Lorentz transformations, CPT, etc are also gauge symmetries: they are just diffeomorphisms.

The difference between a gauge symmetry and a global symmetry is that the presence of gauge charge can be measured from far away, while the presence of a global charge cannot. For example in electromagnetism if we want to know the total charge in a region, we just have to measure the electric flux through its boundary. Similarly in gravity if we want to know the energy of something, we can measure the fall-off of the metric far away (basically looking for the M in the Schwarzschild metric). This should be compared with for example the Z_2 global symmetry of the Ising model, where there is no way to know that the spins are up in a region without going there and looking at them.

It isn’t widely appreciated, but in the standard model of particle physics coupled to gravity there is actually only one global symmetry: the one described by the conservation of B-L (baryon number minus lepton number). So this is the only known symmetry we are actually saying must be violated!”

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 11:11 am on July 6, 2019 Permalink | Reply
    Tags: , Quantum Mechanics,   

    From Science Alert: “If You Thought Quantum Mechanics Was Weird, You Need to Check Out Entangled Time” 


    From Science Alert

    6 JULY 2019


    In the summer of 1935, the physicists Albert Einstein and Erwin Schrödinger engaged in a rich, multifaceted and sometimes fretful correspondence about the implications of the new theory of quantum mechanics.

    The focus of their worry was what Schrödinger later dubbed entanglement: the inability to describe two quantum systems or particles independently, after they have interacted.

    Until his death, Einstein remained convinced that entanglement showed how quantum mechanics was incomplete. Schrödinger thought that entanglement was the defining feature of the new physics, but this didn’t mean that he accepted it lightly.

    “I know of course how the hocus pocus works mathematically,” he wrote to Einstein on 13 July 1935. “But I do not like such a theory.”

    Schrödinger’s famous cat, suspended between life and death, first appeared in these letters, a byproduct of the struggle to articulate what bothered the pair.

The problem is that entanglement seems to violate how the world ought to work: information can’t travel faster than the speed of light, for one.

    But in a 1935 paper, Einstein and his co-authors showed how entanglement leads to what’s now called quantum nonlocality, the eerie link that appears to exist between entangled particles.

    If two quantum systems meet and then separate, even across a distance of thousands of lightyears, it becomes impossible to measure the features of one system (such as its position, momentum and polarity) without instantly steering the other into a corresponding state.

To date, most experiments have tested entanglement over spatial gaps.

    The assumption is that the ‘nonlocal’ part of quantum nonlocality refers to the entanglement of properties across space. But what if entanglement also occurs across time? Is there such a thing as temporal nonlocality?

    The answer, as it turns out, is yes.

    Just when you thought quantum mechanics couldn’t get any weirder, a team of physicists at the Hebrew University of Jerusalem reported in 2013 that they had successfully entangled photons that never coexisted.

Previous experiments involving a technique called ‘entanglement swapping’ had already shown quantum correlations across time, by delaying the measurement of one of the coexisting entangled particles; but Eli Megidish and his collaborators were the first to show entanglement between photons whose lifespans did not overlap at all.

    Here’s how they did it.

    First, they created an entangled pair of photons, ‘1-2’ (step I in the diagram below). Soon after, they measured the polarisation of photon 1 (a property describing the direction of light’s oscillation) – thus ‘killing’ it (step II).


    Photon 2 was sent on a wild goose chase while a new entangled pair, ‘3-4’, was created (step III). Photon 3 was then measured along with the itinerant photon 2 in such a way that the entanglement relation was ‘swapped’ from the old pairs (‘1-2’ and ‘3-4’) onto the new ‘2-3’ combo (step IV).

    Some time later (step V), the polarisation of the lone survivor, photon 4, is measured, and the results are compared with those of the long-dead photon 1 (back at step II).

    The upshot? The data revealed the existence of quantum correlations between ‘temporally nonlocal’ photons 1 and 4. That is, entanglement can occur across two quantum systems that never coexisted.
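The swapping step at the heart of the experiment can be sketched in a few lines of linear algebra. This is a toy calculation, not the actual optics: it ignores the timing of the measurements (which is the experiment's real novelty) and assumes the Bell measurement on photons 2 and 3 happens to return the |Φ+⟩ outcome. All variable names are mine.

```python
import numpy as np

# |Phi+> = (|00> + |11>)/sqrt(2), written as a 2x2 table of amplitudes.
phi = np.array([[1.0, 0.0],
                [0.0, 1.0]]) / np.sqrt(2)

# Steps I and III: two independent entangled pairs, photons 1-2 and 3-4.
# psi[q1, q2, q3, q4] is the joint amplitude of the four polarisation qubits.
psi = np.einsum('ab,cd->abcd', phi, phi)

# Step IV: project photons 2 and 3 onto the Bell state |Phi+> (one possible
# outcome of a Bell-state measurement), leaving an unnormalised state of 1, 4.
a = np.einsum('bc,abcd->ad', phi.conj(), psi)

p_outcome = float(np.sum(np.abs(a) ** 2))  # probability of this Bell outcome
a = a / np.sqrt(p_outcome)                 # post-measurement state of 1 and 4

print(np.round(a, 3))        # identical to phi: photons 1 and 4 are entangled
print(round(p_outcome, 3))   # 0.25
```

The projected state of photons 1 and 4 comes out as exactly |Φ+⟩: the correlations have been "swapped" onto a pair that, in the lab, never coexisted.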

    What on Earth can this mean? Prima facie, it seems as troubling as saying that the polarity of starlight in the far-distant past – say, greater than twice Earth’s lifetime – nevertheless influenced the polarity of starlight falling through your amateur telescope this winter.

    Even more bizarrely: maybe it implies that the measurements carried out by your eye upon starlight falling through your telescope this winter somehow dictated the polarity of photons more than 9 billion years old.

    Lest this scenario strike you as too outlandish, Megidish and his colleagues can’t resist speculating on possible and rather spooky interpretations of their results.

    Perhaps the measurement of photon 1’s polarisation at step II somehow steers the future polarisation of 4, or the measurement of photon 4’s polarisation at step V somehow rewrites the past polarisation state of photon 1.

    In both forward and backward directions, quantum correlations span the causal void between the death of one photon and the birth of the other.

    Just a spoonful of relativity helps the spookiness go down, though.

    In developing his theory of special relativity, Einstein deposed the concept of simultaneity from its Newtonian pedestal.

As a consequence, simultaneity went from being an absolute property to being a relative one. There is no single timekeeper for the Universe; precisely when something occurs depends on your frame of reference, that is, on your location and motion relative to what you are observing.

    So the key to avoiding strange causal behaviour (steering the future or rewriting the past) in instances of temporal separation is to accept that calling events ‘simultaneous’ carries little metaphysical weight.

    It is only a frame-specific property, a choice among many alternative but equally viable ones – a matter of convention, or record-keeping.

    The lesson carries over directly to both spatial and temporal quantum nonlocality.

    Mysteries regarding entangled pairs of particles amount to disagreements about labelling, brought about by relativity.

    Einstein showed that no sequence of events can be metaphysically privileged – can be considered more real – than any other. Only by accepting this insight can one make headway on such quantum puzzles.

    The various frames of reference in the Hebrew University experiment (the lab’s frame, photon 1’s frame, photon 4’s frame, and so on) have their own ‘historians’, so to speak.

    While these historians will disagree about how things went down, not one of them can claim a corner on truth. A different sequence of events unfolds within each one, according to that spatiotemporal point of view.

    Clearly, then, any attempt at assigning frame-specific properties generally, or tying general properties to one particular frame, will cause disputes among the historians.

    But here’s the thing: while there might be legitimate disagreement about which properties should be assigned to which particles and when, there shouldn’t be disagreement about the very existence of these properties, particles, and events.

    These findings drive yet another wedge between our beloved classical intuitions and the empirical realities of quantum mechanics.

    As was true for Schrödinger and his contemporaries, scientific progress is going to involve investigating the limitations of certain metaphysical views.

    Schrödinger’s cat, half-alive and half-dead, was created to illustrate how the entanglement of systems leads to macroscopic phenomena that defy our usual understanding of the relations between objects and their properties: an organism such as a cat is either dead or alive. No middle ground there.

    Most contemporary philosophical accounts of the relationship between objects and their properties embrace entanglement solely from the perspective of spatial nonlocality.

    But there’s still significant work to be done on incorporating temporal nonlocality – not only in object-property discussions, but also in debates over material composition (such as the relation between a lump of clay and the statue it forms), and part-whole relations (such as how a hand relates to a limb, or a limb to a person).

    For example, the ‘puzzle’ of how parts fit with an overall whole presumes clear-cut spatial boundaries among underlying components, yet spatial nonlocality cautions against this view. Temporal nonlocality further complicates this picture: how does one describe an entity whose constituent parts are not even coexistent?

    Discerning the nature of entanglement might at times be an uncomfortable project. It’s not clear what substantive metaphysics might emerge from scrutiny of fascinating new research by the likes of Megidish and other physicists.

    In a letter to Einstein, Schrödinger notes wryly (and deploying an odd metaphor): “One has the feeling that it is precisely the most important statements of the new theory that can really be squeezed into these Spanish boots – but only with difficulty.”

    We cannot afford to ignore spatial or temporal nonlocality in future metaphysics: whether or not the boots fit, we’ll have to wear ’em.

See the full article here.



  • richardmitnick 10:37 am on July 4, 2019 Permalink | Reply
    Tags: "What Is The Smallest Possible Distance In The Universe?", , , , , , , , , Planck length, Quantum Mechanics   

    From Ethan Siegel: “What Is The Smallest Possible Distance In The Universe?” 

    From Ethan Siegel
    July 3, 2019

    The Planck length is a lot smaller than anything we’ve ever accessed. But is it a true limit?

Black holes may be our best option for exploring quantum gravitational effects, as the space very close to the central singularity is where those effects are expected to be most important. However, below a certain distance scale, we are unable to accurately describe the Universe, even in theory. The existence of a smallest distance scale at which the laws of physics presently make sense is a yet-to-be-solved puzzle for physicists. (NASA/AMES RESEARCH CENTER/C. HENZE)

If you wanted to understand how our Universe operates, you’d have to examine it at a fundamental level. Macroscopic objects are made up of particles, which can only themselves be detected by going to subatomic scales. To examine the Universe’s properties, you must look at the smallest constituents on the smallest possible scales. Only by understanding how they behave at this fundamental level can we hope to understand how they join together to create the human-scale Universe we’re familiar with.

But you can’t extrapolate what we know about even the small-scale Universe to arbitrarily small distance scales. If we decide to go down to below about 10^-35 meters — the Planck distance scale — our conventional laws of physics give only nonsensical answers. Here’s the story of why, below a certain length scale, we cannot say anything physically meaningful.

    We often visualize space as a 3D grid, even though this is a frame-dependent oversimplification when we consider the concept of spacetime. The question of whether space and time are discrete or continuous, and whether there’s a smallest possible length scale, is still unanswered. However, we do know that below the Planck distance scale, we cannot predict anything with any accuracy at all. (REUNMEDIA / STORYBLOCKS)

    Imagine, if you like, one of the classic problems of quantum physics: the particle-in-a-box. Imagine any particle you like, and imagine that it’s somehow confined to a certain small volume of space. Now, in this quantum game of peek-a-boo, we’re going to ask the most straightforward question you can imagine: “where is this particle?”

    You can make a measurement to determine the particle’s position, and that measurement will give you an answer. But there will be an inherent uncertainty associated with that measurement, where the uncertainty is caused by the quantum effects of nature.

How large is that uncertainty? It’s related to both ħ and L, where ħ is Planck’s constant and L is the size of the box: the smaller the box, the larger the minimum uncertainty in the particle’s momentum.
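The article keeps the relation qualitative; as a minimal sketch, the standard Heisenberg bound with Δx ≈ L gives Δp ≥ ħ/(2L), so the momentum uncertainty blows up as the box shrinks. The function name and example box sizes are my own choices.

```python
HBAR = 1.054571817e-34  # reduced Planck constant, in J*s

def min_momentum_uncertainty(box_size):
    """Heisenberg bound: confining a particle to a box of width L
    (so delta-x ~ L) forces delta-p >= hbar / (2 L)."""
    return HBAR / (2 * box_size)

# An atom, a nucleus, and roughly the Planck length:
for L in (1e-10, 1e-15, 1.6e-35):
    dp = min_momentum_uncertainty(L)
    print(f"L = {L:.1e} m  ->  delta-p >= {dp:.2e} kg*m/s")
```

Each factor-of-ten reduction in L raises the minimum momentum uncertainty by the same factor, which is why the box size eventually dominates everything else in the problem.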

    This diagram illustrates the inherent uncertainty relation between position and momentum. When one is known more accurately, the other is inherently less able to be known accurately. (WIKIMEDIA COMMONS USER MASCHEN)

For most of the experiments we perform, the box size L is large enough that the inherent quantum uncertainty, which depends on both ħ and L, is tiny compared to anything we can measure.

But what if L is small? What if L is so small that the uncertainty it forces becomes comparable to, or even larger than, L itself?

This is where you can see the problem start to arise. The quantum corrections that occur in nature don’t consist of just the main, classical effect plus a single correction of order ~ħ. There are corrections of all orders: ~ħ, ~ħ², ~ħ³, and so on. There’s a certain length scale, known as the Planck length, where if you reach it, the higher-order terms (which we usually ignore) become just as important as, or even more important than, the first-order quantum corrections we normally apply.

    The energy levels and electron wavefunctions that correspond to different states within a hydrogen atom, although the configurations are extremely similar for all atoms. The energy levels are quantized in multiples of Planck’s constant, but the sizes of the orbitals and atoms are determined by the ground-state energy and the electron’s mass. Additional effects may be subtle, but shift the energy levels in measurable, quantifiable fashions. Note that the potential created by the nucleus acts like a ‘box’ that confines the electron’s physical extent, similar to the particle-in-a-box thought experiment. (POORLENO OF WIKIMEDIA COMMONS)

    What is that critical length scale, then? The Planck scale was first put forth by physicist Max Planck more than 100 years ago. Planck took the three constants of nature:

    G, the gravitational constant of Newton’s and Einstein’s theories of gravity,
    ħ, Planck’s constant, or the fundamental quantum constant of nature, and
    c, the speed of light in a vacuum,

    and realized that you could combine them in different ways to get a single value for mass, another value for time, and another value for distance. These three quantities are known as the Planck mass (which comes out to about 22 micrograms), the Planck time (around 10^-43 seconds), and the Planck length (about 10^-35 meters). If you put a particle in a box that’s the Planck length or smaller, the uncertainty in its position becomes greater than the size of the box.
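Planck's combinations can be checked directly from the constants themselves. A quick numerical sketch (standard CODATA values; the variable names are mine):

```python
import math

G    = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C    = 2.99792458e8     # speed of light in vacuum, m/s

planck_mass   = math.sqrt(HBAR * C / G)     # ~2.18e-8 kg, i.e. ~22 micrograms
planck_length = math.sqrt(HBAR * G / C**3)  # ~1.62e-35 m
planck_time   = math.sqrt(HBAR * G / C**5)  # ~5.39e-44 s

print(f"Planck mass:   {planck_mass:.3e} kg")
print(f"Planck length: {planck_length:.3e} m")
print(f"Planck time:   {planck_time:.3e} s")
```

Dimensional analysis alone fixes these three combinations: no other products of powers of G, ħ, and c yield a mass, a length, and a time.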

    If you confine a particle to a space, and try to measure its properties, there will be quantum effects proportional to Planck’s constant and the size of the box. If the box is very small, below a certain length scale, these properties become impossible to calculate. (ANDY NGUYEN / UT-MEDICAL SCHOOL AT HOUSTON)

    But there’s a lot more to the story than that. Imagine you had a particle of a certain mass. If you compressed that mass down into a small enough volume, you’d get a black hole, just like you would for any mass. If you took the Planck mass — which is defined by the combination of those three constants in the form of √(ħc/G) — and asked that question, what sort of answer would you get?

You’d find that the volume of space you needed that mass to occupy would be a sphere whose Schwarzschild radius is double the Planck length. If you asked how long it would take light to cross from one end of the black hole to the other, the length of time is four times the Planck time. It’s no coincidence that these quantities are related. But what might be surprising is what it implies when you start asking questions about the Universe at those tiny distance and time scales.
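Those two ratios can be verified numerically from the Schwarzschild formula r_s = 2GM/c² applied to the Planck mass (a sketch with my own variable names, using standard constant values):

```python
import math

G, HBAR, C = 6.67430e-11, 1.054571817e-34, 2.99792458e8

planck_mass   = math.sqrt(HBAR * C / G)
planck_length = math.sqrt(HBAR * G / C**3)
planck_time   = math.sqrt(HBAR * G / C**5)

r_s = 2 * G * planck_mass / C**2   # Schwarzschild radius of a Planck-mass hole
crossing_time = 2 * r_s / C        # time for light to cross the diameter

print(round(r_s / planck_length, 6))          # 2.0: twice the Planck length
print(round(crossing_time / planck_time, 6))  # 4.0: four times the Planck time
```

Algebraically, r_s = 2G·√(ħc/G)/c² = 2·√(ħG/c³), which is exactly twice the Planck length, so the factor of two is built into the definitions rather than being a numerical accident.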

The energy of a photon depends on the wavelength it has; longer wavelengths are lower in energy and shorter wavelengths are higher. In principle, there is no limit to how short a wavelength can be, but there are other physics concerns that cannot be ignored. (WIKIMEDIA COMMONS USER MAXHURTZ)

    In order to measure anything at the Planck scale, you’d need a particle with sufficiently high energy to probe it. The energy of a particle corresponds to a wavelength (either a photon wavelength for light or a de Broglie wavelength for matter), and to get down to Planck lengths, you need a particle at the Planck energy: ~10¹⁹ GeV, or approximately a quadrillion times greater than the maximum LHC energy.
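The Planck energy quoted above is just the Planck mass converted via E = mc², i.e. √(ħc⁵/G). A quick check of both the value and the comparison to the LHC (13 TeV maximum collision energy; variable names are mine):

```python
import math

G, HBAR, C = 6.67430e-11, 1.054571817e-34, 2.99792458e8
J_PER_GEV = 1.602176634e-10   # joules per GeV

planck_energy_gev = math.sqrt(HBAR * C**5 / G) / J_PER_GEV  # ~1.22e19 GeV
lhc_energy_gev = 13_000.0                                   # 13 TeV in GeV
ratio = planck_energy_gev / lhc_energy_gev

print(f"Planck energy: {planck_energy_gev:.2e} GeV")
print(f"Ratio to LHC:  {ratio:.1e}")   # ~1e15, i.e. about a quadrillion
```

So "a quadrillion times the maximum LHC energy" is not a loose figure of speech; the ratio really does land near 10¹⁵.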

    If you had a particle that actually achieved that energy, its momentum would be so large that the energy-momentum uncertainty would render that particle indistinguishable from a black hole. This is truly the scale at which our laws of physics break down.

    The simulated decay of a black hole not only results in the emission of radiation, but the decay of the central orbiting mass that keeps most objects stable. Black holes are not static objects, but rather change over time. For the lowest-mass black holes, evaporation happens the fastest. (EU’S COMMUNICATE SCIENCE)

    When you examine the situation in greater detail, it only gets worse. If you start thinking about quantum fluctuations inherent to space (or spacetime) itself, you’ll recall there’s also an energy-time uncertainty relation. The smaller the distance scale, the smaller the corresponding timescale, which implies a larger energy uncertainty.

    At the Planck distance scale, this implies the appearance of black holes and quantum-scale wormholes, which we cannot investigate. If you performed higher-energy collisions, you’d simply create larger mass (and larger size) black holes, which would then evaporate via Hawking radiation.

    An illustration of the concept of quantum foam, where quantum fluctuations are large, varied, and important on the smallest of scales. The energy inherent to space fluctuates in large amounts on these scales. If you view scales that are small enough, such as approaching the Planck scale, the fluctuations become large enough that they create black holes spontaneously. (NASA/CXC/M.WEISS)

    You might argue that, perhaps, this is why we need quantum gravity. That when you take the quantum rules we know and apply them to the law of gravity we know, this is simply highlighting a fundamental incompatibility between quantum physics and General Relativity. But it’s not so simple.

Energy is energy, and we know it causes space to curve. If you start attempting to perform quantum field theory calculations at or near the Planck scale, you no longer know what type of spacetime to perform your calculations in. Even in quantum electrodynamics or quantum chromodynamics, we can treat the background spacetime where these particles exist as flat. Even around a black hole, we can use a known spatial geometry. But at these ultra-intense energies, the curvature of space is unknown. We cannot calculate anything meaningful.

    Quantum gravity tries to combine Einstein’s General theory of Relativity with quantum mechanics. Quantum corrections to classical gravity are visualized as loop diagrams, as the one shown here in white. Whether space (or time) itself is discrete or continuous is not yet decided, as is the question of whether gravity is quantized at all, or particles, as we know them today, are fundamental or not. But if we hope for a fundamental theory of everything, it must include quantized fields. (SLAC NATIONAL ACCELERATOR LAB)

    At energies that are sufficiently high, or (equivalently) at sufficiently small distances or short times, our current laws of physics break down. The background curvature of space that we use to perform quantum calculations is unreliable, and the uncertainty relation ensures that our uncertainty is larger in magnitude than any prediction we can make. The physics that we know can no longer be applied, and that’s what we mean when we say that “the laws of physics break down.”

    But there might be a way out of this conundrum. There’s an idea that’s been floating around for a long time — since Heisenberg, actually — that could provide a solution: perhaps there’s a fundamentally minimal length scale to space itself.

    A representation of flat, empty space with no matter, energy or curvature of any type. If this space is fundamentally discrete, meaning there’s a minimum length scale to the Universe, we should be able to design an experiment that, at least in theory, shows that behavior. (AMBER STUVER, FROM HER BLOG, LIVING LIGO)

    Of course, a finite, minimum length scale would create its own set of problems. In Einstein’s theory of relativity, you can put down an imaginary ruler, anywhere, and it will appear to shorten based on the speed at which you move relative to it. If space were discrete and had a minimum length scale, different observers — i.e., people moving at different velocities — would now measure a different fundamental length scale from one another!

    That strongly suggests there would be a “privileged” frame of reference, where one particular velocity through space would have the maximum possible length, while all others would be shorter. This implies that something that we currently think is fundamental, like Lorentz invariance or locality, must be wrong. Similarly, discretized time poses big problems for General Relativity.

    This illustration, of light passing through a dispersive prism and separating into clearly defined colors, is what happens when many medium-to-high energy photons strike a crystal. If we were to set this up with just a single photon, the amount the crystal moved could be in a discrete number of spatial ‘steps.’ (WIKIMEDIA COMMONS USER SPIGGET)

    Still, there may actually be a way to test whether there is a smallest length scale or not. Three years before he died, physicist Jacob Bekenstein put forth a brilliant idea for an experiment. If you pass a single photon through a crystal, you’ll cause it to move by a slight amount.

    Because photons can be tuned in energy (continuously) and crystals can be very massive compared to a photon’s momentum, we could detect whether the crystal moves in discrete “steps” or continuously. With low-enough energy photons, if space is quantized, the crystal would either move a single quantum step or not at all.

The fabric of spacetime, illustrated, with ripples and deformations due to mass. However, even though there are many things happening in this space, it does not need to be broken up into individual quanta itself. (EUROPEAN GRAVITATIONAL OBSERVATORY, LIONEL BRET/EUROLIOS)

    At present, there is no way to predict what’s going to happen on distance scales that are smaller than about 10^-35 meters, nor on timescales that are smaller than about 10^-43 seconds. These values are set by the fundamental constants that govern our Universe. In the context of General Relativity and quantum physics, we can go no farther than these limits without getting nonsense out of our equations in return for our troubles.

    It may yet be the case that a quantum theory of gravity will reveal properties of our Universe beyond these limits, or that some fundamental paradigm shifts concerning the nature of space and time could show us a new path forward. If we base our calculations on what we know today, however, there’s no way to go below the Planck scale in terms of distance or time. There may be a revolution coming on this front, but the signposts have yet to show us where it will occur.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 10:01 am on June 18, 2019 Permalink | Reply
    Tags: "Scientists Have Found Evidence a Strange Group of Quantum Particles Are Basically Immortal", Phonons, Polarons, Quantum Mechanics, Quasiparticles do decay however new identical particle entities emerge from the debris., Researchers believe this quasiparticle immortality imbues it with strong potential for long-lasting data storage in quantum computing systems., , Scientists have found that quasiparticles in quantum systems could be effectively immortal, Strong interactions can even stop decay entirely.,   

    From Technische Universität München via Science Alert: “Scientists Have Found Evidence a Strange Group of Quantum Particles Are Basically Immortal” 

    Technische Universität München

    From Technische Universität München


    Science Alert

    17 JUN 2019

    (Verresen et al., Nature Physics, 2019)

    Nothing lasts forever. Humans, planets, stars, galaxies, maybe even the Universe itself, everything has an expiration date. But things in the quantum realm don’t always follow the rules. Now, scientists have found that quasiparticles in quantum systems could be effectively immortal.

    That doesn’t mean they don’t decay, which is reassuring. But once these quasiparticles have decayed, they are able to reorganise themselves back into existence, possibly ad infinitum.

    This seemingly flies right in the face of the second law of thermodynamics, which asserts that entropy in an isolated system can only move in an increasing direction: things can only break down, not build back up again.

    Of course, quantum physics can get weird with the rules; but even quantum scientists didn’t know quasiparticles were weird in this particular manner.

    “Until now, the assumption was that quasiparticles in interacting quantum systems decay after a certain time,” said physicist Frank Pollmann of the Technical University of Munich.

    “We now know that the opposite is the case: strong interactions can even stop decay entirely.”

    Quasiparticles aren’t particles the way we typically think of them, like electrons and quarks. Rather, they’re the disturbances or excitations in a solid caused by electrical or magnetic forces that, collectively, behave like particles.

    Phonons – the discrete units of vibrational energy that oscillate the atoms in a crystal lattice, for example – are often classified as quasiparticles, as are polarons, electrons trapped in a lattice surrounded by a cloud of polarisation.

    The researchers involved with this latest study developed numerical methods for calculating the complex interactions of these quasiparticles, and ran simulations on a powerful computer to observe how they decay.

    “The result of the elaborate simulation: admittedly, quasiparticles do decay, however new, identical particle entities emerge from the debris,” said physicist Ruben Verresen of the Technical University of Munich and the Max Planck Institute for the Physics of Complex Systems.

    “If this decay proceeds very quickly, an inverse reaction will occur after a certain time and the debris will converge again. This process can recur endlessly and a sustained oscillation between decay and rebirth emerges.”

    And, the physicists pointed out, it doesn’t violate the second law of thermodynamics after all. That’s because the oscillation is a wave that is transformed into matter, which is covered under the quantum mechanical concept of wave-particle duality.

    Their entropy is not decreasing, but remaining constant. That’s still pretty weird, but not physics-breaking weird.

    In fact, the finding has solved a couple of other head-scratchers. For example, there’s a magnetic compound Ba3CoSb2O9 used in experiments that’s been previously found to be unexpectedly stable. Now it looks like the key might be the magnetic quasiparticles it contains, called magnons. According to the simulation, they rearrange themselves after decay.

    Another potential example is helium: it becomes a resistance-free superfluid at temperatures near absolute zero, and this peculiar property could be explained by the fact that this liquid is full of quasiparticles called rotons.

    At the moment, the work is only in the theoretical realm, but the researchers believe this quasiparticle immortality imbues them with strong potential for long-lasting data storage in quantum computing systems.

    The research has been published in Nature Physics.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Technische Universität München Campus

    Technische Universität München is one of Europe’s top universities. It is committed to excellence in research and teaching, interdisciplinary education and the active promotion of promising young scientists. The university also forges strong links with companies and scientific institutions across the world. TUM was one of the first universities in Germany to be named a University of Excellence. Moreover, TUM regularly ranks among the best European universities in international rankings.

  • richardmitnick 2:16 pm on June 10, 2019 Permalink | Reply
    Tags: "Quantum Leaps, Bohr and Heisenberg began to develop a mathematical theory of these quantum phenomena in the 1920s., In this way what seemed to the quantum pioneers to be unavoidable randomness in the physical world is now shown to be amenable to control., Long Assumed to Be Instantaneous, , Quantum Mechanics, Take Time", The abruptness of quantum jumps was a central pillar of the way quantum theory was formulated by Niels Bohr Werner Heisenberg and their colleagues in the mid-1920s, The “quantum leap.”, the Copenhagen interpretation, The researchers could spot when a quantum jump was about to appear- “catch” it halfway through and reverse it sending the system back to the state in which it started.,   

    From Quanta: “Quantum Leaps, Long Assumed to Be Instantaneous, Take Time” 

    Quanta Magazine
    From Quanta Magazine

    June 5, 2019
    Philip Ball

    A quantum leap is a rapid but gradual process. Quanta Magazine; source: qoncha

    When quantum mechanics was first developed a century ago as a theory for understanding the atomic-scale world, one of its key concepts was so radical, bold and counter-intuitive that it passed into popular language: the “quantum leap.” Purists might object that the common habit of applying this term to a big change misses the point that jumps between two quantum states are typically tiny, which is precisely why they weren’t noticed sooner. But the real point is that they’re sudden. So sudden, in fact, that many of the pioneers of quantum mechanics assumed they were instantaneous.

    A new experiment [Nature] shows that they aren’t. By making a kind of high-speed movie of a quantum leap, the work reveals that the process is as gradual as the melting of a snowman in the sun. “If we can measure a quantum jump fast and efficiently enough,” said Michel Devoret of Yale University, “it is actually a continuous process.” The study, which was led by Zlatko Minev, a graduate student in Devoret’s lab, was published on Monday in Nature [noted above]. Already, colleagues are excited. “This is really a fantastic experiment,” said the physicist William Oliver of the Massachusetts Institute of Technology, who wasn’t involved in the work. “Really amazing.”

    But there’s more. With their high-speed monitoring system, the researchers could spot when a quantum jump was about to appear, “catch” it halfway through, and reverse it, sending the system back to the state in which it started. In this way, what seemed to the quantum pioneers to be unavoidable randomness in the physical world is now shown to be amenable to control. We can take charge of the quantum.

    All Too Random

    The abruptness of quantum jumps was a central pillar of the way quantum theory was formulated by Niels Bohr, Werner Heisenberg and their colleagues in the mid-1920s, in a picture now commonly called the Copenhagen interpretation. Bohr had argued earlier that the energy states of electrons in atoms are “quantized”: Only certain energies are available to them, while all those in between are forbidden. He proposed that electrons change their energy by absorbing or emitting quantum particles of light — photons — that have energies matching the gap between permitted electron states. This explained why atoms and molecules absorb and emit very characteristic wavelengths of light — why many copper salts are blue, say, and sodium lamps yellow.
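The relation behind those characteristic colors is E = hc/λ: a photon’s wavelength is fixed by the size of the energy gap it bridges. A small sketch (the 2.105 eV figure for sodium’s 3p → 3s transition is a standard textbook value):

```python
# A photon's wavelength is set by the energy gap it bridges: E = h*c/lambda.
# Sodium's famous yellow light comes from its ~2.1 eV 3p -> 3s transition.
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def gap_to_wavelength_nm(gap_ev: float) -> float:
    """Wavelength (nm) of a photon matching an energy gap (eV)."""
    return HC_EV_NM / gap_ev

print(gap_to_wavelength_nm(2.105))  # sodium D lines: ~589 nm (yellow)
print(gap_to_wavelength_nm(1.89))   # hydrogen H-alpha: ~656 nm (red)
```

Because only these gap energies are allowed, only these wavelengths appear, which is why each element has its own fingerprint of spectral lines.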

    Bohr and Heisenberg began to develop a mathematical theory of these quantum phenomena in the 1920s. Heisenberg’s quantum mechanics enumerated all the allowed quantum states, and implicitly assumed that jumps between them are instant — discontinuous, as mathematicians would say. “The notion of instantaneous quantum jumps … became a foundational notion in the Copenhagen interpretation,” historian of science Mara Beller has written.

    U Chicago Press Books

    Another of the architects of quantum mechanics, the Austrian physicist Erwin Schrödinger, hated that idea. He devised what seemed at first to be an alternative to Heisenberg’s math of discrete quantum states and instant jumps between them. Schrödinger’s theory represented quantum particles in terms of wavelike entities called wave functions, which changed only smoothly and continuously over time, like gentle undulations on the open sea. Things in the real world don’t switch suddenly, in zero time, Schrödinger thought — discontinuous “quantum jumps” were just a figment of the mind. In a 1952 paper called “Are there quantum jumps?,” [BJPS] Schrödinger answered with a firm “no,” his irritation all too evident in the way he called them “quantum jerks.”

    The argument wasn’t just about Schrödinger’s discomfort with sudden change. The problem with a quantum jump was also that it was said to just happen at a random moment — with nothing to say why that particular moment. It was thus an effect without a cause, an instance of apparent randomness inserted into the heart of nature. Schrödinger and his close friend Albert Einstein could not accept that chance and unpredictability reigned at the most fundamental level of reality. According to the German physicist Max Born, the whole controversy was therefore “not so much an internal matter of physics, as one of its relation to philosophy and human knowledge in general.” In other words, there’s a lot riding on the reality (or not) of quantum jumps.

    Seeing Without Looking

    To probe further, we need to see quantum jumps one at a time. In 1986, three teams of researchers reported [Physical Review Letters] them happening in individual atoms suspended in space by electromagnetic fields. The atoms flipped at random moments between a “bright” state, where they could emit a photon of light, and a “dark” state that did not emit, remaining in one state or the other for periods of between a few tenths of a second and a few seconds before jumping again. Since then, such jumps have been seen in various systems, ranging from photons switching between quantum states to atoms in solid materials jumping between quantized magnetic states. In 2007 a team in France reported [Nature] jumps that correspond to what they called “the birth, life and death of individual photons.”

    In these experiments the jumps indeed looked abrupt and random — there was no telling, as the quantum system was monitored, when they would happen, nor any detailed picture of what a jump looked like. The Yale team’s setup, by contrast, allowed them to anticipate when a jump was coming, then zoom in close to examine it. The key to the experiment is the ability to collect just about all of the available information about it, so that none leaks away into the environment before it can be measured. Only then can they follow single jumps in such detail.

    The quantum systems the researchers used are much larger than atoms, consisting of wires made from a superconducting material — sometimes called “artificial atoms” because they have discrete quantum energy states analogous to the electron states in real atoms. Jumps between the energy states can be induced by absorbing or emitting a photon, just as they are for electrons in atoms.

    Michel Devoret (left) and Zlatko Minev in front of the cryostat holding their experiment. Yale Quantum Institute

    Devoret and colleagues wanted to watch a single artificial atom jump between its lowest-energy (ground) state and an energetically excited state. But they couldn’t monitor that transition directly, because making a measurement on a quantum system destroys the coherence of the wave function — its smooth wavelike behavior — on which quantum behavior depends. To watch the quantum jump, the researchers had to retain this coherence. Otherwise they’d “collapse” the wave function, which would place the artificial atom in one state or the other. This is the problem famously exemplified by Schrödinger’s cat, which is allegedly placed in a coherent quantum “superposition” of live and dead states but becomes only one or the other when observed.

    To get around this problem, Devoret and colleagues employ a clever trick involving a second excited state. The system can reach this second state from the ground state by absorbing a photon of a different energy. The researchers probe the system in a way that only ever tells them whether the system is in this second “bright” state, so named because it’s the one that can be seen. The state to and from which the researchers are actually looking for quantum jumps is, meanwhile, the “dark” state — because it remains hidden from direct view.

    The researchers placed the superconducting circuit in an optical cavity (a chamber in which photons of the right wavelength can bounce around) so that, if the system is in the bright state, the way that light scatters in the cavity changes. Every time the bright state decays by emission of a photon, the detector gives off a signal akin to a Geiger counter’s “click.”

    The key here, said Oliver, is that the measurement provides information about the state of the system without interrogating that state directly. In effect, it asks whether the system is in, or is not in, the ground and dark states collectively. That ambiguity is crucial for maintaining quantum coherence during a jump between these two states. In this respect, said Oliver, the scheme that the Yale team has used is closely related to those employed for error correction in quantum computers. There, too, it’s necessary to get information about quantum bits without destroying the coherence on which the quantum computation relies. Again, this is done by not looking directly at the quantum bit in question but probing an auxiliary state coupled to it.

    The strategy reveals that quantum measurement is not about the physical perturbation induced by the probe but about what you know (and what you leave unknown) as a result. “Absence of an event can bring as much information as its presence,” said Devoret. He compares it to the Sherlock Holmes story in which the detective infers a vital clue from the “curious incident” in which a dog did not do anything in the night. Borrowing from a different (but often confused) dog-related Holmes story, Devoret calls it “Baskerville’s Hound meets Schrödinger’s Cat.”

    To Catch a Jump

    The Yale team saw a series of clicks from the detector, each signifying a decay of the bright state, arriving typically every few microseconds. This stream of clicks was interrupted approximately every few hundred microseconds, apparently at random, by a hiatus in which there were no clicks. Then after a period of typically 100 microseconds or so, the clicks resumed. During that silent time, the system had presumably undergone a transition to the dark state, since that’s the only thing that can prevent flipping back and forth between the ground and bright states.

    So here in these switches from “click” to “no-click” states are the individual quantum jumps — just like those seen in the earlier experiments on trapped atoms and the like. However, in this case Devoret and colleagues could see something new.

    Before each jump to the dark state, there would typically be a short spell where the clicks seemed suspended: a pause that acted as a harbinger of the impending jump. “As soon as the length of a no-click period significantly exceeds the typical time between two clicks, you have a pretty good warning that the jump is about to occur,” said Devoret.
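The warning logic Devoret describes can be sketched as a simple threshold test on a stream of click times. This toy simulation (the click rate and threshold are invented for illustration, not taken from the experiment) flags a gap that is much longer than the typical spacing between clicks:

```python
import random

random.seed(1)

TAU = 3.0        # assumed mean time between clicks (microseconds)
THRESHOLD = 6.0  # flag a gap this many times the mean (assumed)

def click_gaps(n, tau=TAU):
    """Simulate inter-click waiting times for a Poisson click stream."""
    return [random.expovariate(1.0 / tau) for _ in range(n)]

def warn_on_gap(gaps, tau=TAU, k=THRESHOLD):
    """Return indices of gaps long enough to herald a jump to the dark state."""
    return [i for i, g in enumerate(gaps) if g > k * tau]

gaps = click_gaps(200)
gaps[120] = 60.0  # inject a long hiatus, as if the system went dark
print(warn_on_gap(gaps))  # the injected hiatus at index 120 is flagged
```

For exponential waiting times, a gap longer than 6τ occurs by chance only about e⁻⁶ ≈ 0.25% of the time, so a sufficiently long silence is strong evidence the system has left the clicking (bright/ground) manifold.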

    That warning allowed the researchers to study the jump in greater detail. When they saw this brief pause, they switched off the input of photons driving the transitions. Surprisingly, the transition to the dark state still happened even without photons driving it — it is as if, by the time the brief pause sets in, the fate is already fixed. So although the jump itself comes at a random time, there is also something deterministic in its approach.

    With the photons turned off, the researchers zoomed in on the jump with fine-grained time resolution to see it unfold. Does it happen instantaneously — the sudden quantum jump of Bohr and Heisenberg? Or does it happen smoothly, as Schrödinger insisted it must? And if so, how?

    The team found that jumps are in fact gradual. That’s because, even though a direct observation could reveal the system only as being in one state or another, during a quantum jump the system is in a superposition, or mixture, of these two end states. As the jump progresses, a direct measurement would be increasingly likely to yield the final rather than the initial state. It’s a bit like the way our decisions may evolve over time. You can only either stay at a party or leave it — it’s a binary choice — but as the evening wears on and you get tired, the question “Are you staying or leaving?” becomes increasingly likely to get the answer “I’m leaving.”

    The techniques developed by the Yale team reveal the changing mindset of a system during a quantum jump. Using a method called tomographic reconstruction, the researchers could figure out the relative weightings of the dark and ground states in the superposition. They saw these weights change gradually over a period of a few microseconds. That’s pretty fast, but it’s certainly not instantaneous.
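One simple way to see why the weights shift smoothly rather than all at once is a classical Bayesian analogy: given continued silence from the detector, the probability that the system is dark grows along an S-curve, because a bright system would almost certainly have clicked by now. This toy model is only an analogy, not the full quantum-trajectory treatment used in the paper, and the rate and prior below are made up:

```python
import math

# Toy Bayesian picture: conditioned on hearing no clicks for time t,
# the posterior odds of "dark" vs "bright" grow by a factor e^(gamma*t),
# because silence is ever less likely if the system were still bright.
GAMMA = 1.0  # assumed click rate of the bright state (1/microsecond)
P0 = 0.01    # assumed prior probability of being dark

def p_dark_given_silence(t, gamma=GAMMA, p0=P0):
    """Posterior probability of the dark state after t units of silence."""
    return p0 / (p0 + (1.0 - p0) * math.exp(-gamma * t))

for t in [0.0, 2.0, 4.0, 6.0, 8.0]:
    print(f"t={t:3.0f}  P(dark) = {p_dark_given_silence(t):.3f}")
```

The result is a smooth sigmoid climbing from near 0 to near 1 over a few multiples of 1/γ: gradual, not instantaneous, just as the measured superposition weights are.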

    What’s more, this electronic system is so fast that the researchers could “catch” the switch between the two states as it is happening, then reverse it by sending a pulse of photons into the cavity to boost the system back to the dark state. They can persuade the system to change its mind and stay at the party after all.

    Flash of Insight

    The experiment shows that quantum jumps “are indeed not instantaneous if we look closely enough,” said Oliver, “but are coherent processes”: real physical events that unfold over time.

    The gradualness of the “jump” is just what is predicted by a form of quantum theory called quantum trajectories theory, which can describe individual events like this. “It is reassuring that the theory matches perfectly with what is seen,” said David DiVincenzo, an expert in quantum information at Aachen University in Germany, “but it’s a subtle theory, and we are far from having gotten our heads completely around it.”

    The possibility of predicting quantum jumps just before they occur, said Devoret, makes them somewhat like volcanic eruptions. Each eruption happens unpredictably, but some big ones can be anticipated by watching for the atypically quiet period that precedes them. “To the best of our knowledge, this precursory signal [to a quantum jump] has not been proposed or measured before,” he said.

    Devoret said that an ability to spot precursors to quantum jumps might find applications in quantum sensing technologies. For example, “in atomic clock measurements, one wants to synchronize the clock to the transition frequency of an atom, which serves as a reference,” he said. But if you can detect right at the start if the transition is about to happen, rather than having to wait for it to be completed, the synchronization can be faster and therefore more precise in the long run.

    DiVincenzo thinks that the work might also find applications in error correction for quantum computing, although he sees that as “quite far down the line.” To achieve the level of control needed for dealing with such errors, though, will require this kind of exhaustive harvesting of measurement data — rather like the data-intensive situation in particle physics, said DiVincenzo.

    The real value of the result is not, though, in any practical benefits; it’s a matter of what we learn about the workings of the quantum world. Yes, it is shot through with randomness — but no, it is not punctuated by instantaneous jerks. Schrödinger, aptly enough, was both right and wrong at the same time.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 6:55 am on April 22, 2019 Permalink | Reply
    Tags: "New experiment dives into quantum physics in a liquid", , Kastler Brossel Laboratory in France, , Quantum Mechanics, Superfluid liquid helium,   

    From Yale University: “New experiment dives into quantum physics in a liquid” 

    Yale University bloc

    From Yale University

    April 18, 2019
    Jim Shelton

    The space between two optical fibers (yellow) is filled with liquid helium (blue). Laser light (red) is trapped in this space, and interacts with sound waves in the liquid (blue ripples). (Image credit: Harris Lab)

    For the first time, Yale physicists have directly observed quantum behavior in the vibrations of a liquid body.

    A great deal of ongoing research is currently devoted to discovering and exploiting quantum effects in the motion of macroscopic objects made of solids and gases. This new experiment opens a potentially rich area of further study into the way quantum principles work on liquid bodies.

    The findings come from the Yale lab of physics and applied physics professor Jack Harris, along with colleagues at the Kastler Brossel Laboratory in France. A study about the research appears in the journal Physical Review Letters.

    “We filled a specially designed cavity with superfluid liquid helium,” Harris explained. “Then we use laser light to monitor an individual sound wave in the liquid helium. The volume of helium in which this sound wave lives is fairly large for a macroscopic object — equal to a cube whose sides are one-thousandth of an inch.”
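To get a feel for how macroscopic that volume is, here is a rough estimate of the number of helium atoms it contains (assuming a superfluid helium density of about 145 kg/m³, a standard handbook figure):

```python
# Rough scale of the helium volume quoted above. Assumptions: superfluid
# helium density ~145 kg/m^3, He-4 atomic mass ~4.0026 u.
INCH = 0.0254          # meters per inch
U = 1.66053906660e-27  # atomic mass unit, kg

side = INCH / 1000.0   # cube side: one-thousandth of an inch
volume = side**3       # m^3
density = 145.0        # kg/m^3
mass = density * volume
atoms = mass / (4.0026 * U)

print(f"volume ~ {volume:.1e} m^3, roughly {atoms:.1e} helium atoms")
```

Hundreds of trillions of atoms moving as a single quantum sound wave is what makes the observation remarkable: quantum behavior usually belongs to one or a few particles, not to an object this large.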

    Harris and his team discovered they could detect the sound wave’s quantum properties: its zero-point motion, which is the quantum motion that exists even when the temperature is lowered to absolute zero; and its quantum “back-action,” which is the effect of a detector on the measurement itself.

    The co-first authors of the study are Yale postdoctoral fellows Alexey Shkarin and Anya Kashkanova. Additional authors are Charles Brown of Yale and Jakob Reichel, Sébastien Garcia, and Konstantin Ott of the Kastler Brossel Laboratory.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Yale University Campus

    Yale University comprises three major academic components: Yale College (the undergraduate program), the Graduate School of Arts and Sciences, and the professional schools. In addition, Yale encompasses a wide array of centers and programs, libraries, museums, and administrative support offices. Approximately 11,250 students attend Yale.

  • richardmitnick 11:41 am on April 13, 2019 Permalink | Reply
    Tags: , , , Quantum Mechanics   

    From Nature: “A realist takes on quantum mechanics” 

    Nature Mag
    From Nature

    Graham Farmelo parses Lee Smolin’s takedown of the most successful physics theory ever.


    09 April 2019
    Graham Farmelo

    Quantum mechanics is perhaps the most successful theory ever formulated. For almost 90 years, experimenters have subjected it to rigorous tests, none of which has called its foundations into question. It is one of the triumphs of twentieth-century science. The only problem with it, argues Lee Smolin in Einstein’s Unfinished Revolution, is that it is wrong. In this challenging book, he attempts to examine other options for a theory of the atomic world.

    Lee Smolin (/ˈsmoʊlɪn/; born June 6, 1955) is an American theoretical physicist, a faculty member at the Perimeter Institute for Theoretical Physics, an adjunct professor of physics at the University of Waterloo and a member of the graduate faculty of the philosophy department at the University of Toronto.

    U Toronto

    Smolin is a theoretical physicist at the Perimeter Institute in Waterloo, Canada, and an outspoken critic of the direction his subject has taken over the past four decades. A fount of provocative ideas, he has showcased them in several popular books, including The Trouble with Physics (2006) and Time Reborn (2013). He is perhaps best known for his rejection of string theory, a widely used framework for fundamental physics that he dismisses as misguided. Although Smolin’s spirited opposition to some mainstream developments in modern physics irritates quite a few of his peers, I have a soft spot for him and anyone else who is unafraid to question the standard way of doing things. As the journalist Malcolm Muggeridge observed: “Only dead fish swim with the stream.”

    Smolin’s book is in many ways ambitious. It goes right back to square one, introducing quantum mechanics at a level basic enough for high-school science students to grasp. He points out that the field gave a truly revolutionary account of the atomic world, something that had proved impossible with the theories (retrospectively labelled ‘classical’) that preceded it. The mathematical structure of quantum mechanics arrived before physicists were able to interpret it, and Smolin gives a clear account of subsequent arguments about the nature of the theory, before finally setting out his own ideas.

    For me, the book demonstrates that it is best to regard Smolin as a natural philosopher, most interested in reflecting on the fundamental meanings of space, time, reality, existence and related topics. James Clerk Maxwell, leading nineteenth-century pioneer of the theory of electricity and magnetism, might be described in the same way — he loved to debate philosophical matters with colleagues in a range of disciplines. Maxwell’s way of thinking had a profound impact on Albert Einstein, who might also be considered part natural philosopher, part theoretical physicist.

    Like Einstein, Smolin is a philosophical ‘realist’ — someone who thinks that the real world exists independently of our minds and can be described by deterministic laws. These enable us, in principle, to predict the future of any particle if we have enough information about it. This view of the world is incompatible with the conventional interpretation of quantum mechanics, in which key features are unpredictability and the role of observers in the outcome of experiments. Thus, Einstein never accepted that quantum mechanics was anything but an impressive placeholder for a more fundamental theory conforming to his realist credo. Smolin agrees.

    He conducts his search for other ways of setting out quantum mechanics in language intelligible to a lay audience, with scarcely an equation in sight. Smolin is a lucid expositor, capable of freshening up material that has been presented thousands of times. Non-experts might, however, struggle as he delves into some of the modern interpretations of quantum mechanics, only to dismiss them. These include, for instance, the superdeterminism approach of the theoretician Gerard ’t Hooft.

    The book is, however, upbeat and, finally, optimistic. Unapologetically drawing on historical tradition and even modern philosophy, Smolin proposes a new set of principles that applies to both quantum mechanics and space-time. He then explores how these principles might be realized as part of a fundamental theory of nature, although he stops short of supplying details of the implementation.

    Smolin concludes with the implications of all this for our understanding of space and time. He suggests that time is irreversible and fundamental, in the sense that the processes by which future events are produced from present ones are truly basic: they do not need to be explained in terms of more basic ideas. Space, however, is different. He argues that it emerges from something deeper.

    Yet it is far from clear whether Smolin’s new methods allow space and time to be investigated effectively. In recent decades, there have been many exciting advances in this subject, almost all made using standard quantum mechanics and Einstein’s theory of relativity. In my opinion, Smolin downplays the extraordinary success of this conservative approach. It is the basis of modern quantum field theory (a descendant of Maxwell’s theory of electricity and magnetism), which accounts for the results of all subatomic experiments, some of them to umpteen decimal places. Despite the impression that Smolin gives, modern theoretical physics is thriving, with potentially revolutionary ideas about space and time emerging from a combination of the standard quantum mechanics and relativity theory taught in universities for generations. Maybe the upheaval in physics that Smolin yearns for is simply unnecessary.

    Rewarding as it is, I doubt Einstein’s Unfinished Revolution will convert many of Smolin’s critics. To do that, he will need to present his ideas more rigorously than he could reasonably do in a popular book.

    One thing on which every physicist in Smolin’s field can agree is that there is a crying need for more juicy clues from nature. There have been no surprises concerning the inner workings of atoms for some 20 years. It is experimental results that will decide whether Smolin is correct, or whether he protests too much. After all, although quantum mechanics might not satisfy the philosophically minded, it has proved to be a completely dependable tool for physicists — even those who have no interest in debates about its interpretation.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.
