Tagged: Superposition Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 11:11 am on January 29, 2023 Permalink | Reply
    Tags: "How Quantum Physicists ‘Flipped Time’ (and How They Didn’t)", "Time’s arrow", Before being measured a particle acts more like a wave., Physicists have coaxed particles of light into undergoing opposite transformations simultaneously like a human turning into a werewolf as the werewolf turns into a human., Superposition, The essence of quantum strangeness, The perplexing phenomenon could lead to new kinds of quantum technology.

    From “Quanta Magazine” : “How Quantum Physicists ‘Flipped Time’ (and How They Didn’t)” 

    From “Quanta Magazine”

    1.27.23
    Charlie Wood


    The quantum time flip circuit is like a metronome swinging both ways at once. Kristina Armitage/Quanta Magazine.

    Physicists have coaxed particles of light into undergoing opposite transformations simultaneously, like a human turning into a werewolf as the werewolf turns into a human. In carefully engineered circuits, the photons act as if time were flowing in a quantum combination of forward and backward.

    “For the first time ever, we kind of have a time-traveling machine going in both directions,” said Sonja Franke-Arnold, a quantum physicist at the University of Glasgow in Scotland who was not involved in the research.

    Regrettably for science fiction fans, the devices have nothing in common with a 1982 DeLorean. Throughout the experiments, which were conducted by two independent teams in China and Austria, laboratory clocks continued to tick steadily forward. Only the photons flitting through the circuitry experienced temporal shenanigans. And even for the photons, researchers debate whether the flipping of “time’s arrow” is real or simulated.

    Either way, the perplexing phenomenon could lead to new kinds of quantum technology.

    “You could conceive of circuits in which your information could flow both ways,” said Giulia Rubino, a researcher at the University of Bristol.

    Anything Anytime All at Once

    Physicists first realized a decade ago that the strange rules of quantum mechanics topple commonsense notions of time.

    The essence of quantum strangeness is this: When you look for a particle, you’ll always detect it in a single, pointlike location. But before being measured, a particle acts more like a wave; it has a “wave function” that spreads out and ripples over multiple routes. In this undetermined state, a particle exists in a quantum blend of possible locations known as a superposition.

    In a paper published in 2013, Giulio Chiribella, a physicist now at the University of Hong Kong, and co-authors proposed a circuit that would put events into a superposition of temporal orders, going a step beyond the superposition of locations in space. Four years later, Rubino and her colleagues directly experimentally demonstrated the idea [Science Advances (below)]. They sent a photon down a superposition of two paths: one in which it experienced event A and then event B, and another where it experienced B then A. In some sense, each event seemed to cause the other, a phenomenon that came to be called “indefinite causality”.

    Not content to mess merely with the order of events while time marched onward, Chiribella and a colleague, Zixuan Liu, next took aim at the marching direction, or arrow, of time itself. They sought a quantum apparatus in which time entered a superposition of flowing from the past to the future and vice versa — an indefinite arrow of time.

    To do this, Chiribella and Liu realized they needed a system that could undergo opposite changes, like a metronome whose arm can swing left or right. They imagined putting such a system in a superposition, akin to a musician simultaneously flicking a quantum metronome rightward and leftward. They described a scheme for setting up such a system in 2020.

    Optics wizards immediately started constructing dueling arrows of time in the lab. Last fall, two teams declared success.

    A Two-Timing Game

    Chiribella and Liu had devised a game at which only a quantum two-timer could excel. Playing the game with light involves firing photons through two crystal gadgets, A and B. Passing forward through a gadget rotates a photon’s polarization by an amount that depends on the gadget’s settings. Passing backward through the gadget rotates the polarization in precisely the opposite way.
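    The claim that a backward pass exactly undoes a forward rotation can be sketched with 2x2 rotation matrices. This is a toy model of the polarization algebra, not the experimental optics:

```python
import numpy as np

def rotation(theta):
    """Polarization rotation by angle theta (a 2x2 real rotation matrix)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# A photon polarized horizontally.
photon = np.array([1.0, 0.0])

theta_A = 0.7                   # arbitrary gadget setting (radians)
forward = rotation(theta_A)     # forward pass through gadget A
backward = rotation(theta_A).T  # backward pass: the inverse rotation

# Forward then backward through the same gadget restores the polarization.
out = backward @ (forward @ photon)
print(np.allclose(out, photon))  # True
```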

    Before each round of the game, a referee secretly sets the gadgets in one of two ways: The path forward through A, then backward through B, will either shift a photon’s wave function relative to the time-reversed path (backward through A, then forward through B), or it won’t. The player must figure out which choice the referee made. After the player arranges the gadgets and other optical elements however they want, they send a photon through the maze, perhaps splitting it into a superposition of two paths using a half-silvered mirror. The photon ends up at one of two detectors. If the player has set up their maze in a sufficiently clever way, the click of the detector that has the photon will reveal the referee’s choice.

    When the player sets up the circuit so that the photon moves in only one direction through each gadget, then even if A and B are in an indefinite causal order, the detector’s click will match the secret gadget settings at most about 90% of the time. Only when the photon experiences a superposition that takes it forward and backward through both gadgets — a tactic dubbed the “quantum time flip” — can the player theoretically win every round.
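    How recombining a forward path with a time-reversed path can reveal the referee's choice with certainty is easiest to see in a toy beam-splitter model; the real circuits are more involved, and the unitary U below is an arbitrary stand-in:

```python
import numpy as np

def play(process_fwd, process_rev, psi):
    """Send a photon in superposition down two paths, apply the forward
    process on one path and the time-reversed process on the other, then
    recombine the paths on a 50/50 beam splitter."""
    amp0 = process_fwd @ psi / np.sqrt(2)
    amp1 = process_rev @ psi / np.sqrt(2)
    # Recombining interferes the paths: detector 0 sees the sum of the
    # amplitudes, detector 1 the difference.
    p_detector0 = np.linalg.norm(amp0 + amp1) ** 2 / 2
    p_detector1 = np.linalg.norm(amp0 - amp1) ** 2 / 2
    return p_detector0, p_detector1

psi = np.array([1.0, 0.0])               # input polarization
U = np.array([[0.0, -1.0], [1.0, 0.0]])  # some polarization rotation

# Referee setting 1: the two paths implement the same overall process.
p0, p1 = play(U, U, psi)    # detector 0 always clicks
# Referee setting 2: the two processes differ by a sign.
q0, q1 = play(U, -U, psi)   # detector 1 always clicks
print(round(p0, 3), round(q1, 3))  # 1.0 1.0
```

In this idealized setting the detector click identifies the referee's choice every round; the 90% ceiling applies only to strategies without the superposition.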

    Merrill Sherman/Quanta Magazine

    Last year, a team in Hefei, China, advised by Chiribella and one in Vienna advised by the physicist Časlav Brukner set up quantum time-flip circuits. Over 1 million rounds, the Vienna team guessed correctly 99.45% of the time. Chiribella’s group won 99.6% of its rounds. Both teams shattered the theoretical 90% limit, proving that their photons experienced a superposition of two opposing transformations and hence an indefinite arrow of time.

    Interpreting the Time Flip

    While the researchers have executed and named the quantum time flip, they’re not in perfect agreement regarding which words best capture what they’ve done.

    In Chiribella’s eyes, the experiments have simulated a flipping of time’s arrow. Actually flipping it would require arranging the fabric of space-time itself into a superposition of two geometries where time points in different directions. “Obviously, the experiment is not implementing the inversion of the arrow of time,” he said.

    Brukner, meanwhile, feels that the circuits take a modest step beyond simulation. He points out that the measurable properties of the photons change exactly as they would if they passed through a true superposition of two space-time geometries. And in the quantum world, there is no reality beyond what can be measured. “From the state itself, there is no difference between the simulation and the real thing,” he said.

    Granted, he admits, the circuit can time-flip only photons undergoing polarization changes; if space-time were truly in a superposition, dueling time directions would affect everything.

    Two-Arrow Circuits

    Whatever their philosophical inclinations, physicists hope that the ability to design quantum circuits that flow two ways at once might enable new devices for quantum computing, communication and metrology.

    “This allows you to do more things than just implementing the operations in one order or another,” said Cyril Branciard, a quantum information theorist at the Néel Institute in France.


    Some researchers speculate that the time-travel flavor of the quantum time flip might enable a future quantum “undo” function. Others anticipate that circuits operating in two directions at once could allow quantum machines to run more efficiently. “You could use this for games where you want to reduce the so-called query complexity,” Rubino said, referring to the number of steps it takes to carry out some task.

    Such practical applications are far from assured. While the time-flip circuits broke a theoretical performance limit in Chiribella and Liu’s guessing game, that was a highly contrived task dreamt up only to highlight their advantage over one-way circuits.

    But bizarre, seemingly niche quantum phenomena have a knack for proving useful. The eminent physicist Anton Zeilinger used to believe that quantum entanglement — a link between separated particles — wasn’t good for anything. Today, entanglement threads together nodes in nascent quantum networks and qubits in prototype quantum computers, and Zeilinger’s work on the phenomenon won him a share of the 2022 Nobel Prize in Physics. For the flippable nature of quantum time, Franke-Arnold said, “it’s very early days.”

    a paper published in 2013
    Science Advances 2017
    described a scheme 2022

    See the full article here .

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 3:19 pm on December 11, 2022 Permalink | Reply
    Tags: "Using ‘cat states’ to realize fault-tolerant quantum computers", A proposal to use "cat states" promises to make quantum computers less prone to errors., Superposition

    From RIKEN[理](JP): “Using ‘cat states’ to realize fault-tolerant quantum computers” 


    From RIKEN[理](JP)

    A proposal to use “cat states” promises to make quantum computers less prone to errors.

    Error correction in quantum computers could be simplified by a new protocol proposed by an all-RIKEN team based on “cat states”. It could cut the computing resources needed to fix errors to the same level as conventional computers, making quantum computers cheaper and more compact.

    Quantum computers are looming ever larger on the horizon of computing. They have already demonstrated the ability to outperform traditional computers for certain kinds of calculations. But they are more prone to errors than conventional computers.

    Since traditional computers are based on bits that are either 0 or 1, the only error they are susceptible to is when a bit accidentally flips from 0 to 1 or vice versa.

    But quantum computers use qubits, which can be in a superposition of two states. When the states are depicted on a sphere, the angle between the two states is known as the qubit’s phase. This phase can also be flipped in quantum computers. They thus need more computing resources to correct for this additional source of error.
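    The two error types can be illustrated with the standard Pauli matrices. This is a textbook sketch of bit flips versus phase flips, not the RIKEN protocol itself:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])   # bit flip: swaps |0> and |1>
Z = np.array([[1, 0], [0, -1]])  # phase flip: flips the sign of |1>

zero = np.array([1, 0])
one = np.array([0, 1])
plus = (zero + one) / np.sqrt(2)   # equal superposition of |0> and |1>
minus = (zero - one) / np.sqrt(2)

# A bit flip swaps the basis states, the only error a classical bit has...
assert np.allclose(X @ zero, one)
# ...while a phase flip leaves |0> and |1> alone but changes the relative
# phase of a superposition, turning |+> into |->. Only qubits suffer it.
assert np.allclose(Z @ plus, minus)
```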

    An attractive way to sidestep this problem is to use qubits based on so-called “cat states”. These states are named after Schrödinger’s hypothetical cat, which is simultaneously dead and alive until observed. By analogy, “cat states” are superpositions of two states with opposite phase.

    Figure 1: In a thought experiment, Schrödinger conjectured that a cat could be both alive and dead until observed if a quantum event, such as the decay of a radioactive particle (right), triggered an event that would kill the cat. Now, by using “cat states”, RIKEN researchers have proposed a scheme for realizing fault-tolerant gates to entangle multiple qubits. © Rhoeo/iStock/Getty Images.

    Unlike other qubits, “cat-state” qubits cannot undergo phase flips, so engineers making quantum computers based on them need only worry about bit flips, just as in conventional computers. Researchers are now exploring how to use these cat-state qubits to perform computations.

    Now, Ye-Hong Chen and four co-workers, all at the RIKEN Center for Quantum Computing, have theoretically demonstrated a way to use cat states to realize fault-tolerant gates for connecting multiple qubits in a process known as entanglement.

    “Conventional computers can only process data one bit at a time, but entanglement allows quantum computers to process a lot of data simultaneously,” explains Chen. “The gates can rapidly generate entangled cat states with high accuracy.”

    The team showed that such fault-tolerant quantum gates could be used to implement a quantum search algorithm with high efficiency. The algorithm would allow databases to be searched faster than is currently possible using conventional computers.

    “Let us assume that you are searching for the one key that will open a box among 100 keys. On average, you would need to try 50 keys using a conventional search algorithm to identify the one key that opens that box,” says Chen. “But with the quantum search algorithm, the average is only 10 attempts.”
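    Chen’s numbers follow the square-root scaling of quantum search (Grover’s algorithm): roughly (π/4)√N queries for N items instead of N/2. A small simulation of the amplitude dynamics, with the key index 42 as an arbitrary stand-in:

```python
import numpy as np

N = 100      # 100 keys, one of which opens the box
marked = 42  # index of the right key (arbitrary choice for illustration)

# Start in a uniform superposition over all keys.
state = np.full(N, 1 / np.sqrt(N))

iterations = int(round(np.pi / 4 * np.sqrt(N)))  # 8 queries for N = 100
for _ in range(iterations):
    state[marked] *= -1               # oracle: flip the marked key's sign
    state = 2 * state.mean() - state  # diffusion: inversion about the mean

# After ~(pi/4)*sqrt(N) queries the marked key is found with high probability.
print(iterations, round(state[marked] ** 2, 3))
```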

    The team is now exploring how to develop other useful quantum algorithms based on fault-tolerant quantum codes.

    Science paper:
    Physical Review Applied


    RIKEN [理研](JP) is Japan’s largest comprehensive research institution, renowned for high-quality research in a diverse range of scientific disciplines. Founded in 1917 as a private research foundation in Tokyo, RIKEN has grown rapidly in size and scope, today encompassing a network of world-class research centers and institutes across Japan, with about 3,000 scientists on seven campuses, including the main site at Wakō, Saitama Prefecture, just outside Tokyo. RIKEN is a Designated National Research and Development Institute and was formerly an Independent Administrative Institution.
    RIKEN conducts research in many areas of science, including physics, chemistry, biology, genomics, medical science, engineering, and high-performance computing and computational science, ranging from basic research to practical applications, with 485 partners worldwide. It is almost entirely funded by the Japanese government, and its annual budget is about ¥88 billion (US$790 million).

    Organizational structure:

    The main divisions of RIKEN are listed here. Purely administrative divisions are omitted.

    Headquarters (mostly in Wako)
    Wako Branch
    Center for Emergent Matter Science (research on new materials for reduced power consumption)
    Center for Sustainable Resource Science (research toward a sustainable society)
    Nishina Center for Accelerator-Based Science (site of the Radioactive Isotope Beam Factory, a heavy-ion accelerator complex)
    Center for Brain Science
    Center for Advanced Photonics (research on photonics including terahertz radiation)
    Research Cluster for Innovation
    Cluster for Pioneering Research (chief scientists)
    Interdisciplinary Theoretical and Mathematical Sciences Program
    Tokyo Branch
    Center for Advanced Intelligence Project (research on artificial intelligence)
    Tsukuba Branch
    BioResource Research Center
    Harima Institute
    Riken SPring-8 Center (site of the SPring-8 synchrotron and the SACLA x-ray free electron laser)

    Riken SPring-8 synchrotron, located in Hyōgo Prefecture, Japan.

    RIKEN/HARIMA (JP) X-ray Free Electron Laser
    Yokohama Branch (site of the Yokohama Nuclear magnetic resonance facility)
    Center for Sustainable Resource Science
    Center for Integrative Medical Sciences (research toward personalized medicine)
    Center for Biosystems Dynamics Research (also based in Kobe and Osaka)
    Program for Drug Discovery and Medical Technology Platform
    Structural Biology Laboratory
    Sugiyama Laboratory
    Kobe Branch
    Center for Biosystems Dynamics Research (developmental biology and nuclear medicine medical imaging techniques)
    Center for Computational Science (R-CCS, home of the K computer and The post-K (Fugaku) computer development plan)

    Riken Fujitsu K supercomputer manufactured by Fujitsu, installed at the Riken Advanced Institute for Computational Science campus in Kobe, Hyōgo Prefecture, Japan.

    Fugaku is a supercomputer at the RIKEN Center for Computational Science in Kobe, Japan. It has been described as exascale, reaching that mark on a mixed-precision benchmark while remaining at petascale on the mainstream benchmark. Its development started in 2014 as the successor to the K computer; it made its debut in 2020, becoming the fastest supercomputer in the world in the June 2020 TOP500 list, and entered full operation in 2021. As of April 2021, Fugaku was still the fastest supercomputer in the world.

     
  • richardmitnick 5:33 pm on December 5, 2022 Permalink | Reply
    Tags: "Detecting dark matter with quantum computers", Dark matter makes up about 27% of the matter and energy budget in the universe but scientists do not know much about it., How quantum computers could detect dark matter, It is difficult to detect dark matter directly because it does not interact with light., Scientists at the DOE's Fermi National Accelerator Laboratory have found a way to look for dark matter using quantum computers., Superposition, Using qubits-the main component of quantum computing systems-to detect single photons produced by dark matter in the presence of a strong magnetic field., When dark matter particles traverse a strong magnetic field they may produce photons scientists can measure with superconducting qubits inside aluminum photon cavities.

    From The DOE’s Fermi National Accelerator Laboratory: “Detecting dark matter with quantum computers” 

    FNAL Art Image by Angela Gonzales

    From The DOE’s Fermi National Accelerator Laboratory, an enduring source of strength for the US contribution to scientific research worldwide.

    12.5.22
    Emily Driehaus

    Dark matter makes up about 27% of the matter and energy budget in the universe but scientists do not know much about it. They do know that it is cold, meaning that the particles that make up dark matter are slow-moving. It is also difficult to detect dark matter directly because it does not interact with light. However, scientists at the U.S. Department of Energy’s Fermi National Accelerator Laboratory have found a way to look for dark matter using quantum computers.

    Aaron Chou, a senior scientist at Fermilab, works on detecting dark matter through quantum science. As part of DOE’s Office of High Energy Physics QuantISED program, he has developed a way to use qubits, the main component of quantum computing systems, to detect single photons produced by dark matter in the presence of a strong magnetic field.

    How quantum computers could detect dark matter

    A classical computer processes information with binary bits set to either 1 or 0. The specific pattern of ones and zeros makes it possible for the computer to perform certain functions and tasks. In quantum computing, however, qubits exist at both 1 and 0 simultaneously until they are read, due to a quantum mechanical property known as superposition. This property allows quantum computers to efficiently perform complex calculations that a classical computer would take an enormous amount of time to complete.

    “Qubits work by manipulating single excitations of information, for example, single photons,” said Chou. “So, if you’re working with such small packets of energy as single excitations, you’re far more susceptible to external disturbances.”

    Akash Dixit works on the team that uses quantum computers to look for dark matter. Here, Dixit holds a microwave cavity containing a superconducting qubit. The cavity has holes in its side in the same way the screen on a microwave oven door has holes; the holes are simply too small for microwaves to escape. Photo: Ryan Postel, Fermilab.

    In order for qubits to operate at these quantum levels, they must reside in carefully controlled environments that protect them from outside interference and keep them at consistently cold temperatures. Even the slightest disturbance can throw off a program in a quantum computer. With their extreme sensitivity, Chou realized quantum computers could provide a way to detect dark matter. He recognized that other dark matter detectors need to be shielded in the same way quantum computers are, further solidifying the idea.

    “Both quantum computers and dark matter detectors have to be heavily shielded, and the only thing that can jump through is dark matter,” Chou said. “So, if people are building quantum computers with the same requirements, we asked ‘why can’t you just use those as dark matter detectors?’”

    Where errors are most welcome

    When dark matter particles traverse a strong magnetic field, they may produce photons that Chou and his team can measure with superconducting qubits inside aluminum photon cavities. Because the qubits have been shielded from all other outside disturbances, when scientists detect a disturbance from a photon, they can infer that it was the result of dark matter flying through the protective layers.

    “These disturbances manifest as errors where you didn’t load any information into the computer, but somehow information appeared, like zeroes that flip into ones from particles flying through the device,” he said.

    Scientist Aaron Chou leads the experiment that searches for dark matter using superconducting qubits and cavities. Photo: Ryan Postel, Fermilab.

    So far, Chou and his team have demonstrated how the technique works and that the device is incredibly sensitive to these photons. Their method has advantages over other sensors, such as being able to make multiple measurements of the same photon to ensure a disturbance was not just caused by another fluke. The device also has an ultra-low noise level, which allows for a heightened sensitivity to dark matter signals.

    “We know how to make these tunable boxes from the high-energy physics community, and we worked together with the quantum computing people to understand and transfer the technology for these qubits to be used as sensors,” Chou said.

    From here, they plan to develop a dark matter detection experiment and continue improving upon the design of the device.

    Using sapphire cavities to catch dark matter

    These new sapphire photon cavities will help lead the team closer to running dark matter experiments that combine aspects from both physics and quantum science. Photo: Ankur Agrawal, University of Chicago.

    “This apparatus tests the sensor in the box, which holds photons with a single frequency,” Chou said. “The next step is to modify this box to turn it into kind of a radio receiver in which we can change the dimensions of the box.”

    By altering the dimensions of the photon cavity, it will be able to sense different wavelengths of photons produced by dark matter.

    “The waves that can live in the box are determined by the overall size of the box. In order to change what frequencies and which wavelengths of dark matter we want to look for, we actually have to change the size of the box,” said Chou. “That’s the work we’re currently doing; we’ve created boxes in which we can change the lengths of different parts of it in order to be able to tune into dark matter at different frequencies.”
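    The relation Chou describes, that the box's size sets which wavelengths can live inside it, is captured by the textbook formula for the resonant modes of a rectangular cavity. The dimensions below are illustrative, not the experiment's actual geometry:

```python
import numpy as np

c = 299_792_458.0  # speed of light, m/s

def cavity_frequency(a, b, d, m=1, n=1, p=0):
    """Resonant frequency (Hz) of the (m, n, p) mode of an a x b x d
    rectangular cavity: f = (c/2) * sqrt((m/a)^2 + (n/b)^2 + (p/d)^2)."""
    return (c / 2) * np.sqrt((m / a) ** 2 + (n / b) ** 2 + (p / d) ** 2)

# Shrinking the cavity raises the frequency it "tunes into".
f_big = cavity_frequency(0.030, 0.030, 0.010)    # 3 cm box: ~7.1 GHz
f_small = cavity_frequency(0.020, 0.020, 0.010)  # 2 cm box: ~10.6 GHz
print(round(f_big / 1e9, 2), round(f_small / 1e9, 2))
```

Tuning the boxes to different sizes therefore scans different candidate dark matter frequencies, much like turning a radio dial.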

    The researchers are also developing cavities made from different materials. The traditional aluminum photon cavities lose their superconductivity in the presence of the magnetic field necessary for producing photons from dark matter particles.

    “These cavities cannot live in high magnetic fields,” he said. “High magnetic fields destroy the superconductivity, so we’ve made a new cavity made out of synthetic sapphire.”

    Developing these new, tunable sapphire photon cavities will lead the team closer to running dark matter experiments that combine aspects from both physics and quantum science.

    __________________________________
    Dark Matter Background
    Fritz Zwicky discovered dark matter in the 1930s while observing the motion of the Coma Cluster. Vera Rubin, a woman in STEM who was denied the Nobel Prize, did most of the work on dark matter some 30 years later.

    Fritz Zwicky.
    Coma cluster via NASA/ESA Hubble, the original example of Dark Matter discovered during observations by Fritz Zwicky and confirmed 30 years later by Vera Rubin.

    In modern times, it was astronomer Fritz Zwicky, in the 1930s, who made the first observations of what we now call dark matter. His 1933 observations of the Coma Cluster of galaxies seemed to indicate that it had a mass 500 times greater than that previously calculated by Edwin Hubble. Furthermore, this extra mass seemed to be completely invisible. Although Zwicky’s observations were initially met with much skepticism, they were later confirmed by other groups of astronomers.

    Thirty years later, astronomer Vera Rubin provided a huge piece of evidence for the existence of dark matter. She discovered that the outer regions of galaxies rotate just as fast as the regions near their centers, whereas, if visible matter were all there is, the outer stars should orbit more slowly, the way the outer planets of the solar system orbit the Sun more slowly than the inner ones. The only way to explain the flat rotation she measured is if each visible galaxy is embedded in some much larger unseen structure whose gravity keeps the rotation speed roughly constant from center to edge.

    Vera Rubin, following Zwicky, postulated that the missing structure in galaxies is dark matter. Her ideas were met with much resistance from the astronomical community, but her observations have been confirmed and are seen today as pivotal proof of the existence of dark matter.
    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science).

    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970.

    Vera Rubin measuring spectra, worked on Dark Matter(Emilio Segre Visual Archives AIP SPL).
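    The fall-off that should have appeared, were visible matter all there is, follows from Newtonian orbits around a central mass: v = sqrt(GM/r). The mass value below is a rough illustrative assumption, not a measured galactic mass:

```python
import numpy as np

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_visible = 1e41  # kg, rough stand-in for a galaxy's visible mass (assumed)

r = np.linspace(5e19, 5e20, 5)            # orbital radii, m
v_keplerian = np.sqrt(G * M_visible / r)  # expected if mass sits at center

# Keplerian speeds fall as 1/sqrt(r); observed galactic rotation curves
# stay flat, which is the discrepancy Rubin's measurements exposed.
print(np.round(v_keplerian / 1000))  # km/s, decreasing with radius
```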

    Dark Matter Research

    Super Cryogenic Dark Matter Search from DOE’s SLAC National Accelerator Laboratory at Stanford University at SNOLAB (Vale Inco Mine, Sudbury, Canada).

    LBNL LZ Dark Matter Experiment xenon detector at Sanford Underground Research Facility Credit: Matt Kapust.


    DAMA at Gran Sasso uses sodium iodide housed in copper to hunt for dark matter LNGS-INFN.

    Yale HAYSTAC axion dark matter experiment at Yale’s Wright Lab.

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB (CA) deep in Sudbury’s Creighton Mine.

    The LBNL LZ Dark Matter Experiment at SURF, Lead, SD.

    DAMA-LIBRA Dark Matter experiment at the Italian National Institute for Nuclear Physics’ (INFN’s) Gran Sasso National Laboratories (LNGS) located in the Abruzzo region of central Italy.

    DARWIN Dark Matter experiment. A design study for a next-generation, multi-ton dark matter detector in Europe at The University of Zurich [Universität Zürich](CH).

    PandaX II Dark Matter experiment at Jin-ping Underground Laboratory (CJPL) in Sichuan, China.

    Inside the Axion Dark Matter eXperiment (ADMX) at the University of Washington. Credit: Mark Stone, U. of Washington.

    The University of Western Australia ORGAN Experiment’s main detector. A small copper cylinder called a “resonant cavity” traps photons generated during dark matter conversion. The cylinder is bolted to a “dilution refrigerator” which cools the experiment to very low temperatures.
    __________________________________


    The DOE’s Fermi National Accelerator Laboratory, located just outside Batavia, Illinois, near Chicago, is a United States Department of Energy national laboratory specializing in high-energy particle physics. Since 2007, Fermilab has been operated by the Fermi Research Alliance, a joint venture of the University of Chicago and the Universities Research Association. Fermilab is a part of the Illinois Technology and Research Corridor.

    Fermilab’s Tevatron was a landmark particle accelerator; until the startup in 2008 of the CERN Large Hadron Collider near Geneva, Switzerland, it was the most powerful particle accelerator in the world, accelerating protons and antiprotons to energies of 980 GeV and producing proton-antiproton collisions with energies of up to 1.96 TeV, the first accelerator to reach one “tera-electron-volt” energy. At 3.9 miles (6.3 km) in circumference, it was the world’s fourth-largest particle accelerator. One of its most important achievements was the 1995 discovery of the top quark, announced by research teams using the Tevatron’s CDF and DØ detectors. It was shut down in 2011.

    In addition to high-energy collider physics, Fermilab hosts a series of fixed-target and neutrino experiments, such as MicroBooNE (Micro Booster Neutrino Experiment), NOνA (NuMI Off-Axis νe Appearance) and SeaQuest.

    Completed neutrino experiments include MINOS (Main Injector Neutrino Oscillation Search), MINOS+, MiniBooNE and SciBooNE (SciBar Booster Neutrino Experiment).

    The MiniBooNE detector was a 40-foot (12 m) diameter sphere containing 800 tons of mineral oil lined with 1,520 phototube detectors. An estimated 1 million neutrino events were recorded each year.

    SciBooNE sat in the same neutrino beam as MiniBooNE but had fine-grained tracking capabilities. The NOνA experiment uses, and the MINOS experiment used, Fermilab’s NuMI (Neutrinos at the Main Injector) beam, which is an intense beam of neutrinos that travels 455 miles (732 km) through the Earth to the Soudan Mine in Minnesota and the Ash River, Minnesota, site of the NOνA far detector.

    The ICARUS neutrino experiment was moved from CERN to Fermilab.

    In the public realm, Fermilab is home to a native prairie ecosystem restoration project and hosts many cultural events: public science lectures and symposia, classical and contemporary music concerts, folk dancing and arts galleries. The site is open from dawn to dusk to visitors who present valid photo identification.

    Asteroid 11998 Fermilab is named in honor of the laboratory.

    The DOE’s Fermi National Accelerator Laboratory campus.

    The DOE’s Fermi National Accelerator Laboratory/MINERvA. Photo Reidar Hahn.

    The DOE’s Fermi National Accelerator Laboratory DAMIC | The Fermilab Cosmic Physics Center.

    The DOE’s Fermi National Accelerator Laboratory Muon g-2 studio. As muons race around a ring at the Muon g-2 studio, their spin axes twirl, reflecting the influence of unseen particles.

    The DOE’s Fermi National Accelerator Laboratory Short-Baseline Near Detector under construction.

    The DOE’s Fermi National Accelerator Laboratory Mu2e solenoid.

    The Dark Energy Camera [DECam], built at The DOE’s Fermi National Accelerator Laboratory.

    Weston, Illinois, was a community next to Batavia voted out of existence by its village board in 1966 to provide a site for Fermilab.

    The laboratory was founded in 1969 as the National Accelerator Laboratory; it was renamed in honor of Enrico Fermi in 1974. The laboratory’s first director was Robert Rathbun Wilson, under whom the laboratory opened ahead of time and under budget. Many of the sculptures on the site are of his creation. He is the namesake of the site’s high-rise laboratory building, whose unique shape has become the symbol for Fermilab and which is the center of activity on the campus.

    After Wilson stepped down in 1978 to protest the lack of funding for the lab, Leon M. Lederman took on the job. It was under his guidance that the original accelerator was replaced with the Tevatron, an accelerator capable of colliding protons and antiprotons at a combined energy of 1.96 TeV. Lederman stepped down in 1989. The science education center at the site was named in his honor.

    The later directors include:

    John Peoples, 1989 to 1996
    Michael S. Witherell, July 1999 to June 2005
    Piermaria Oddone, July 2005 to July 2013
    Nigel Lockyer, September 2013 to the present

    Fermilab continues to participate in the work at the Large Hadron Collider (LHC); it serves as a Tier 1 site in the Worldwide LHC Computing Grid and hosts 1000 U.S. scientists who work on the CMS project.


     
  • richardmitnick 12:03 pm on May 27, 2022 Permalink | Reply
    Tags: "Constructor theory", "Maxwell’s demon", "Physicists Rewrite the Fundamental Law That Leads to Disorder", , Hilbert’s Problem, , , , , Quantum information theory, , Quantum resource theories, Superposition, , The informational perspective on the second law is now being recast as a quantum problem., The Second Law of Thermodynamics, The universe began — for reasons not fully understood or agreed on — in a low-entropy state and is heading toward one of ever higher entropy.   

    From “Quanta Magazine”: “Physicists Rewrite the Fundamental Law That Leads to Disorder” 

    From “Quanta Magazine”

    May 26, 2022
    Philip Ball

    Is the rise of entropy merely probabilistic, or can it be straightened out by use of clear quantum axioms? Maggie Chiang for Quanta Magazine.

    The Second Law of Thermodynamics is among the most sacred in all of science, but it has always rested on 19th century arguments about probability. New arguments trace its true source to the flows of quantum information.

    In all of physical law, there’s arguably no principle more sacrosanct than the Second Law of Thermodynamics — the notion that entropy, a measure of disorder, will always stay the same or increase. “If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations,” wrote the British astrophysicist Arthur Eddington in his 1928 book The Nature of the Physical World. “If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.” No violation of this law has ever been observed, nor is any expected.

    But something about the second law troubles physicists. Some are not convinced that we understand it properly or that its foundations are firm. Although it’s called a law, it’s usually regarded as merely probabilistic: It stipulates that the outcome of any process will be the most probable one (which effectively means the outcome is inevitable given the numbers involved).

    Yet physicists don’t just want descriptions of what will probably happen. “We like laws of physics to be exact,” said the physicist Chiara Marletto of the University of Oxford. Can the second law be tightened up into more than just a statement of likelihoods?

    A number of independent groups appear to have done just that. They may have woven the second law out of the fundamental principles of quantum mechanics — which, some suspect, have directionality and irreversibility built into them at the deepest level. According to this view, the second law comes about not because of classical probabilities but because of quantum effects such as entanglement. It arises from the ways in which quantum systems share information, and from cornerstone quantum principles that decree what is allowed to happen and what is not. In this telling, an increase in entropy is not just the most likely outcome of change. It is a logical consequence of the most fundamental resource that we know of — the quantum resource of information.

    Quantum Inevitability

    Thermodynamics was conceived in the early 19th century to describe the flow of heat and the production of work. The need for such a theory was urgently felt as steam power drove the Industrial Revolution, and engineers wanted to make their devices as efficient as possible.

    In the end, thermodynamics wasn’t much help in making better engines and machinery. Instead, it became one of the central pillars of modern physics, providing criteria that govern all processes of change.

    Classical thermodynamics has only a handful of laws, of which the most fundamental are the first and second. The first says that energy is always conserved; the second law says that heat always flows from hot to cold. More commonly this is expressed in terms of entropy, which must increase overall in any process of change. Entropy is loosely equated with disorder, but the Austrian physicist Ludwig Boltzmann formulated it more rigorously as a quantity related to the total number of microstates a system has: how many equivalent ways its particles can be arranged.
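    Boltzmann’s formulation is compactly expressed by the equation engraved on his tombstone, where $k_B$ is Boltzmann’s constant and $W$ counts the microstates consistent with a system’s macroscopic state:

```latex
S = k_B \ln W
```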

    The second law appears to show why change happens in the first place. At the level of individual particles, the classical laws of motion can be reversed in time. But the second law implies that change must happen in a way that increases entropy. This directionality is widely considered to impose an arrow of time. In this view, time seems to flow from past to future because the universe began — for reasons not fully understood or agreed on — in a low-entropy state and is heading toward one of ever higher entropy. The implication is that eventually heat will be spread completely uniformly and there will be no driving force for further change — a depressing prospect that scientists of the mid-19th century called the heat death of the universe.

    Boltzmann’s microscopic description of entropy seems to explain this directionality. Many-particle systems that are more disordered and have higher entropy vastly outnumber ordered, lower-entropy states, so molecular interactions are much more likely to end up producing them. The second law seems then to be just about statistics: It’s a law of large numbers. In this view, there’s no fundamental reason why entropy can’t decrease — why, for example, all the air molecules in your room can’t congregate by chance in one corner. It’s just extremely unlikely.
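    How unlikely is “extremely unlikely”? A quick toy estimate (my own illustration, not from the article): the chance that each of N independent gas molecules happens to sit in one chosen half of a room is (1/2)^N.

```python
import math

def log10_prob_all_in_half(n_molecules: int) -> float:
    """Base-10 log of the probability that all molecules occupy one half."""
    return n_molecules * math.log10(0.5)

# Even for a mere 100 molecules the odds are about 1 in 10^30;
# a real room holds on the order of 10^27 molecules.
print(log10_prob_all_in_half(100))  # ≈ -30.1
```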

    Yet this probabilistic statistical physics leaves some questions hanging. It directs us toward the most probable microstates in a whole ensemble of possible states and forces us to be content with taking averages across that ensemble.

    But the laws of classical physics are deterministic — they allow only a single outcome for any starting point. Where, then, can that hypothetical ensemble of states enter the picture at all, if only one outcome is ever possible?

    David Deutsch, a physicist at Oxford, has for several years been seeking to avoid this dilemma by developing a theory of (as he puts it) “a world in which probability and randomness are totally absent from physical processes.” His project, on which Marletto is now collaborating, is called “Constructor theory”. It aims to establish not just which processes probably can and can’t happen, but which are possible and which are forbidden outright.

    Constructor theory aims to express all of physics in terms of statements about possible and impossible transformations. It echoes the way thermodynamics itself began, in that it considers change in the world as something produced by “machines” (constructors) that work in a cyclic fashion, following a pattern like that of the famous Carnot cycle, proposed in the 19th century to describe how engines perform work. The constructor is rather like a catalyst, facilitating a process and being returned to its original state at the end.

    “Say you have a transformation like building a house out of bricks,” said Marletto. “You can think of a number of different machines that can achieve this, to different accuracies. All of these machines are constructors, working in a cycle” — they return to their original state when the house is built.

    But just because a machine for conducting a certain task might exist, that doesn’t mean it can also undo the task. A machine for building a house might not be capable of dismantling it. This makes the operation of the constructor different from the operation of the dynamical laws of motion describing the movements of the bricks, which are reversible.

    The reason for the irreversibility, said Marletto, is that for most complex tasks, a constructor is geared to a given environment. It requires some specific information from the environment relevant to completing that task. But the reverse task will begin with a different environment, so the same constructor won’t necessarily work. “The machine is specific to the environment it is working on,” she said.

    Recently, Marletto, working with the quantum theorist Vlatko Vedral at Oxford and colleagues in Italy, showed that constructor theory does identify processes that are irreversible in this sense — even though everything happens according to quantum mechanical laws that are themselves perfectly reversible. “We show that there are some transformations for which you can find a constructor for one direction but not the other,” she said.

    The researchers considered a transformation involving the states of quantum bits (qubits), which can exist in one of two states or in a combination, or superposition, of both. In their model, a single qubit B may be transformed from some initial, perfectly known state B1 to a target state B2 when it interacts with other qubits by moving past a row of them one qubit at a time. This interaction entangles the qubits: Their properties become interdependent, so that you can’t fully characterize one of the qubits unless you look at all the others too.

    As the number of qubits in the row gets very large, it becomes possible to bring B into state B2 as accurately as you like, said Marletto. The process of sequential interactions of B with the row of qubits constitutes a constructor-like machine that transforms B1 to B2. In principle you can also undo the process, turning B2 back to B1, by sending B back along the row.

    But what if, having done the transformation once, you try to reuse the array of qubits for the same process with a fresh B? Marletto and colleagues showed that if the number of qubits in the row is not very large and you use the same row repeatedly, the array becomes less and less able to produce the transformation from B1 to B2. But crucially, the theory also predicts that the row becomes even less able to do the reverse transformation from B2 to B1. The researchers have confirmed this prediction experimentally using photons for B and a fiber optic circuit to simulate a row of three qubits.

    “You can approximate the constructor arbitrarily well in one direction but not the other,” Marletto said. There’s an asymmetry to the transformation, just like the one imposed by the second law. This is because the transformation takes the system from a so-called pure quantum state (B1) to a mixed one (B2, which is entangled with the row). A pure state is one for which we know all there is to be known about it. But when two objects are entangled, you can’t fully specify one of them without knowing everything about the other too. The fact is that it’s easier to go from a pure quantum state to a mixed state than vice versa — because the information in the pure state gets spread out by entanglement and is hard to recover. It’s comparable to trying to re-form a droplet of ink once it has dispersed in water, a process in which the irreversibility is imposed by the second law.
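    The pure-to-mixed asymmetry is easy to see in a two-qubit toy model (my simplification, not the paper’s full row-of-qubits setup): entangle B with a single “row” qubit, trace the row qubit out, and check the purity Tr(ρ²) of what remains of B alone.

```python
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())            # full 2-qubit density matrix

# Partial trace over the second qubit -> reduced state of B.
rho_B = np.zeros((2, 2), dtype=complex)
for k in range(2):
    rho_B += rho.reshape(2, 2, 2, 2)[:, k, :, k]

purity = np.trace(rho_B @ rho_B).real
print(purity)  # 0.5: B alone is maximally mixed; its information has leaked
```

A pure state has purity 1; the value 0.5 is the minimum for a qubit, reflecting that nothing about B alone can be predicted with certainty once it is entangled.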

    So here the irreversibility is “just a consequence of the way the system dynamically evolves,” said Marletto. There’s no statistical aspect to it. Irreversibility is not just the most probable outcome but the inevitable one, governed by the quantum interactions of the components. “Our conjecture,” said Marletto, “is that thermodynamic irreversibility might stem from this.”

    Demon in the Machine

    There’s another way of thinking about the second law, though, that was first devised by James Clerk Maxwell, the Scottish scientist who pioneered the statistical view of thermodynamics along with Boltzmann. Without quite realizing it, Maxwell connected the thermodynamic law to the issue of information.

    Maxwell was troubled by the theological implications of a cosmic heat death and of an inexorable rule of change that seemed to undermine free will. So in 1867 he sought a way to “pick a hole” in the second law. In his hypothetical scenario, a microscopic being (later, to his annoyance, called a demon) turns “useless” heat back into a resource for doing work. Maxwell had previously shown that in a gas at thermal equilibrium there is a distribution of molecular energies. Some molecules are “hotter” than others — they are moving faster and have more energy. But they are all mixed at random so there appears to be no way to make use of those differences.

    Enter Maxwell’s demon. It divides the compartment of gas in two, then installs a frictionless trapdoor between them. The demon lets the hot molecules moving about the compartments pass through the trapdoor in one direction but not the other. Eventually the demon has a hot gas on one side and a cooler one on the other, and it can exploit the temperature gradient to drive some machine.
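    The demon’s sorting rule can be mimicked with a toy simulation (entirely illustrative: exponentially distributed molecular energies and a threshold at the mean are my choices). Sorting alone manufactures a temperature gradient out of a uniform gas:

```python
import random

random.seed(1)
energies = [random.expovariate(1.0) for _ in range(10_000)]  # mixed gas
avg = sum(energies) / len(energies)

# The demon's rule: fast molecules to one side, slow to the other.
hot_side = [e for e in energies if e > avg]
cold_side = [e for e in energies if e <= avg]

t_hot = sum(hot_side) / len(hot_side)
t_cold = sum(cold_side) / len(cold_side)
print(t_hot > avg > t_cold)  # True: a usable temperature gradient appears
```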

    The demon has used information about the motions of molecules to apparently undermine the second law. Information is thus a resource that, just like a barrel of oil, can be used to do work. But as this information is hidden from us at the macroscopic scale, we can’t exploit it. It’s this ignorance of the microstates that compels classical thermodynamics to speak of averages and ensembles.

    Almost a century later, physicists proved that Maxwell’s demon doesn’t subvert the second law in the long term, because the information it gathers must be stored somewhere, and any finite memory must eventually be wiped to make room for more. In 1961 the physicist Rolf Landauer showed that this erasure of information can never be accomplished without dissipating some minimal amount of heat, thus raising the entropy of the surroundings. So the second law is only postponed, not broken.
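    Landauer’s bound is concrete enough to evaluate directly (the helper function is my own sketch):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_heat(temperature_kelvin: float, bits: float = 1.0) -> float:
    """Minimum heat (joules) dissipated to erase `bits` of information."""
    return bits * k_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K), erasing one bit costs at least ~2.9e-21 J.
print(landauer_heat(300.0))
```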

    The informational perspective on the second law is now being recast as a quantum problem. That’s partly because of the perception that quantum mechanics is a more fundamental description — Maxwell’s demon treats the gas particles as classical billiard balls, essentially. But it also reflects the burgeoning interest in quantum information theory itself. We can do things with information using quantum principles that we can’t do classically. In particular, entanglement of particles enables information about them to be spread around and manipulated in nonclassical ways.

    Crucially, the quantum informational approach suggests a way of getting rid of the troublesome statistical picture that bedevils the classical view of thermodynamics, where you have to take averages over ensembles of many different microstates. “The true novelty with quantum information came with the understanding that one can replace ensembles with entanglement with the environment,” said Carlo Maria Scandolo of the University of Calgary.

    Taking recourse in an ensemble, he said, reflects the fact that we have only partial information about the state — it could be this microstate or that one, with different probabilities, and so we have to average over a probability distribution. But quantum theory offers another way to generate states of partial information: through entanglement. When a quantum system gets entangled with its environment, about which we can’t know everything, some information about the system itself is inevitably lost: It ends up in a mixed state, where you can’t know everything about it even in principle by focusing on just the system.

    Then you are forced to speak in terms of probabilities not because there are things about the system you don’t know, but because some of that information is fundamentally unknowable. In this way, “probabilities arise naturally from entanglement,” said Scandolo. “The whole idea of getting thermodynamic behavior by considering the role of the environment works only as long as there is entanglement.”

    Those ideas have now been made precise. Working with Giulio Chiribella of the University of Hong Kong, Scandolo has proposed four axioms about quantum information that are required to obtain a “sensible thermodynamics” — that is, one not based on probabilities. The axioms describe constraints on the information in a quantum system that becomes entangled with its environment. In particular, everything that happens to the system plus environment is in principle reversible, just as is implied by the standard mathematical formulation of how a quantum system evolves in time.

    As a consequence of these axioms, Scandolo and Chiribella show, uncorrelated systems always grow more correlated through reversible interactions. Correlations are what connect entangled objects: The properties of one are correlated with those of the other. They are measured by “mutual information,” a quantity that’s related to entropy. So a constraint on how correlations can change is also a constraint on entropy. If the entropy of the system decreases, the entropy of the environment must increase such that the sum of the two entropies can only increase or stay the same, but never decrease. In this way, Scandolo said, their approach derives the existence of entropy from the underlying axioms, rather than postulating it at the outset.
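    The quantities in play, von Neumann entropy and mutual information, are easy to compute for a small example (the Bell-pair case below is my illustration, not from the paper): the joint state is pure, yet each half is maximally mixed, so all the entropy lives in the correlations.

```python
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """Entropy in bits: -Tr(rho log2 rho), ignoring zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_AB = np.outer(bell, bell)
rho_A = rho_AB.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out B
rho_B = rho_AB.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)  # trace out A

# Mutual information I = S_A + S_B - S_AB.
mutual_info = (von_neumann_entropy(rho_A) + von_neumann_entropy(rho_B)
               - von_neumann_entropy(rho_AB))
print(mutual_info)  # 2.0 bits: the maximum possible for a pair of qubits
```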

    Redefining Thermodynamics

    One of the most versatile ways to understand this new quantum version of thermodynamics invokes so-called resource theories — which again speak about which transformations are possible and which are not. “A resource theory is a simple model for any situation in which the actions you can perform and the systems you can access are restricted for some reason,” said the physicist Nicole Yunger Halpern of the National Institutes of Standards and Technology. (Scandolo has incorporated resource theories into his work too.)

    Quantum resource theories adopt the picture of the physical world suggested by quantum information theory, in which there are fundamental limitations on which physical processes are possible. In quantum information theory these limitations are typically expressed as “no-go theorems”: statements that say “You can’t do that!” For example, it is fundamentally impossible to make a copy of an unknown quantum state, an idea called quantum no-cloning.

    Resource theories have a few main ingredients. The operations that are allowed are called free operations. “Once you specify the free operations, you have defined the theory — and then you can start reasoning about which transformations are possible or not, and ask what are the optimal efficiencies with which we can perform these tasks,” said Yunger Halpern. A resource, meanwhile, is something that an agent can access to do something useful — it could be a pile of coal to fire up a furnace and power a steam engine. Or it could be extra memory that will allow a Maxwellian demon to subvert the second law for a little longer.

    Quantum resource theories allow a kind of zooming in on the fine-grained details of the classical second law. We don’t need to think about huge numbers of particles; we can make statements about what is allowed among just a few of them. When we do this, said Yunger Halpern, it becomes clear that the classical second law (final entropy must be equal to or greater than initial entropy) is just a kind of coarse-grained sum of a whole family of inequality relationships. For instance, classically the second law says that you can transform a nonequilibrium state into one that is closer to thermal equilibrium. But “asking which of these states is closer to thermal is not a simple question,” said Yunger Halpern. To answer it, “we have to check a whole bunch of inequalities.”

    In other words, in resource theories there seem to be a whole bunch of mini-second laws. “So there could be some transformations allowed by the conventional second law but forbidden by this more detailed family of inequalities,” said Yunger Halpern. For that reason, she adds, “sometimes I feel like everyone [in this field] has their own second law.”

    The resource-theory approach, said physicist Markus Müller of the University of Vienna, “admits a fully mathematically rigorous derivation, without any conceptual or mathematical loose ends, of the thermodynamic laws and more.” He said that this approach involves “a reconsideration of what one really means by thermodynamics” — it is not so much about the average properties of large ensembles of moving particles, but about a game that an agent plays against nature to conduct a task efficiently with the available resources. In the end, though, it is still about information. The discarding of information — or the inability to keep track of it — is really the reason why the second law holds, Yunger Halpern said.

    Hilbert’s Problem

    All these efforts to rebuild thermodynamics and the second law recall a challenge laid down by the German mathematician David Hilbert. In 1900 he posed 23 outstanding problems in mathematics that he wanted to see solved. Item six in that list was “to treat, by means of axioms, those physical sciences in which already today mathematics plays an important part.” Hilbert was concerned that the physics of his day seemed to rest on rather arbitrary assumptions, and he wanted to see them made rigorous in the same way that mathematicians were attempting to derive fundamental axioms for their own discipline.

    Some physicists today are still working on Hilbert’s sixth problem, attempting in particular to reformulate quantum mechanics and its more abstract version, quantum field theory, using axioms that are simpler and more physically transparent than the traditional ones. But Hilbert evidently had thermodynamics in mind too, referring to aspects of physics that use “the theory of probabilities” as among those ripe for reinvention.

    Whether Hilbert’s sixth problem has yet been cracked for the second law seems to be a matter of taste. “I think Hilbert’s sixth problem is far from being completely solved, and I personally find it a very intriguing and important research direction in the foundations of physics,” said Scandolo. “There are still open problems, but I think they will be solved in the foreseeable future, provided enough time and energy are devoted to them.”

    Maybe, though, the real value of re-deriving the second law lies not in satisfying Hilbert’s ghost but just in deepening our understanding of the law itself. As Einstein said, “A theory is the more impressive the greater the simplicity of its premises.” Yunger Halpern compares the motivation for working on the law to the reason literary scholars still reanalyze the plays and poems of Shakespeare: not because such new analysis is “more correct,” but because works this profound are an endless source of inspiration and insight.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 10:00 pm on April 5, 2021 Permalink | Reply
    Tags: "New computing algorithms expand the boundaries of a quantum future", , New amplification algorithms expand the utility of quantum computers to handle non-Boolean scenarios., Qubits can be in a superposition of 0 and 1 while classical bits can be only one or the other., Scientists developed an algorithm 25 years ago that will perform a series of operations on a superposition to amplify the probabilities of certain individual states and suppress others., Standard techniques are able to assess only Boolean scenarios-ones that can be answered with a yes or no output., Superposition   

    From DOE’s Fermi National Accelerator Laboratory(US): “New computing algorithms expand the boundaries of a quantum future” 

    FNAL Art Image by Angela Gonzales.

    From DOE’s Fermi National Accelerator Laboratory(US), an enduring source of strength for the US contribution to scientific research worldwide.

    April 5, 2021
    Katrina Miller

    Quantum computing promises to harness the strange properties of quantum mechanics in machines that will outperform even the most powerful supercomputers of today. But the extent of their application, it turns out, isn’t entirely clear.

    To fully realize the potential of quantum computing, scientists must start with the basics: developing step-by-step procedures, or algorithms, for quantum computers to perform simple tasks, like the factoring of a number. These simple algorithms can then be used as building blocks for more complicated calculations.

    Prasanth Shyamsundar, a postdoctoral research associate at the Department of Energy’s Fermilab Quantum Institute (US), has done just that. In a preprint paper released in February [Non-Boolean Quantum Amplitude Amplification and Quantum Mean Estimation], he announced two new algorithms that build upon existing work in the field to further diversify the types of problems quantum computers can solve.

    “There are specific tasks that can be done faster using quantum computers, and I’m interested in understanding what those are,” Shyamsundar said. “These new algorithms perform generic tasks, and I am hoping they will inspire people to design even more algorithms around them.”

    Shyamsundar’s quantum algorithms, in particular, are useful when searching for a specific entry in an unsorted collection of data. Consider a toy example: Suppose we have a stack of 100 vinyl records, and we task a computer with finding the one jazz album in the stack.

    Classically, a computer would need to examine each individual record and make a yes-or-no decision about whether it is the album we are searching for, based on a given set of search criteria.

    “You have a query, and the computer gives you an output,” Shyamsundar said. “In this case, the query is: Does this record satisfy my set of criteria? And the output is yes or no.”

    Finding the record in question could take only a few queries if it is near the top of the stack, or closer to 100 queries if the record is near the bottom. On average, a classical computer would locate the correct record with 50 queries, or half the total number in the stack.

    A quantum computer, on the other hand, would locate the jazz album much faster. This is because it has the ability to analyze all of the records at once, using a quantum effect called superposition.

    With this property, the number of queries needed to locate the jazz album is only about 10, the square root of the number of records in the stack. This phenomenon is known as quantum speedup and is a result of the unique way quantum computers store information.
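    The numbers in the example can be checked directly (helper names are mine; the quantum count uses the standard order-√N scaling of Grover-style search):

```python
import math

def classical_expected_queries(n: int) -> float:
    return n / 2  # on average, check half the stack

def grover_queries(n: int) -> int:
    # Grover's algorithm needs on the order of sqrt(N) queries;
    # the optimal count is about (pi/4) * sqrt(N), rounded here.
    return round(math.pi / 4 * math.sqrt(n))

print(classical_expected_queries(100))  # 50.0
print(grover_queries(100))              # 8, i.e. "about 10" as in the text
```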

    The quantum advantage

    Classical computers use units of storage called bits to save and analyze data. A bit can be assigned one of two values: 0 or 1.

    The quantum version of this is called a qubit. Qubits can be either 0 or 1 as well, but unlike their classical counterparts, they can also be a combination of both values at the same time. This is known as superposition, and allows quantum computers to assess multiple records, or states, simultaneously.

    Qubits can be in a superposition of 0 and 1 while classical bits can be only one or the other. Credit: Jerald Pinson.

    Amplifying the probabilities of correct states

    Luckily, scientists developed an algorithm nearly 25 years ago that will perform a series of operations on a superposition to amplify the probabilities of certain individual states and suppress others, depending on a given set of search criteria. That means when it comes time to measure, the superposition will most likely collapse into the state they are searching for.

    But the limitation of this algorithm is that it can be applied only to Boolean situations, or ones that can be queried with a yes or no output, like searching for a jazz album in a stack of several records.
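    The algorithm in question is Grover’s amplitude amplification (1996), and its core loop is short enough to simulate with plain state vectors. A minimal sketch, with the stack size (128 records) and the location of the jazz album (index 42) chosen arbitrarily for illustration:

```python
import numpy as np

N = 128
marked = 42

state = np.full(N, 1 / np.sqrt(N))        # uniform superposition
oracle = np.ones(N); oracle[marked] = -1  # phase-flip the "good" state

n_iters = int(np.pi / 4 * np.sqrt(N))     # ~ (pi/4) sqrt(N) iterations
for _ in range(n_iters):
    state *= oracle                       # oracle: mark by phase flip
    state = 2 * state.mean() - state      # diffusion: invert about the mean

prob = state[marked] ** 2
print(f"{n_iters} iterations, success probability {prob:.3f}")
```

After only 8 iterations the marked record is measured with probability well above 99 percent, versus an expected 64 sequential checks classically.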

    A quantum computer can amplify the probabilities of certain individual records and suppress others, as indicated by the size and color of the disks in the output superposition. Standard techniques are able to assess only Boolean scenarios: ones that can be answered with a yes or no output. Credit: Prasanth Shyamsundar.

    Scenarios with non-Boolean outputs present a challenge. Music genres aren’t precisely defined, so a better approach to the jazz record problem might be to ask the computer to rate the albums by how “jazzy” they are. This could look like assigning each record a score on a scale from 1 to 10.

    New amplification algorithms expand the utility of quantum computers to handle non-Boolean scenarios, allowing for an extended range of values to characterize individual records, such as the scores assigned to each disk in the output superposition above. Credit: Prasanth Shyamsundar.

    Previously, scientists would have to convert non-Boolean problems such as this into ones with Boolean outputs.

    “You’d set a threshold and say any state below this threshold is bad, and any state above this threshold is good,” Shyamsundar said. In our jazz record example, that would be the equivalent of saying anything rated between 1 and 5 isn’t jazz, while anything between 5 and 10 is.
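    That thresholding workaround is simple to write down (the function and sample scores are hypothetical): each 1-to-10 “jazziness” score collapses to the yes/no label the older Boolean algorithm requires.

```python
def boolean_oracle(scores, threshold=5):
    """Mark records scoring above the threshold as 'good' (True)."""
    return [s > threshold for s in scores]

scores = [2, 9, 5, 7, 1]       # hypothetical ratings for five records
print(boolean_oracle(scores))  # [False, True, False, True, False]
```

Note how much information the conversion throws away: a 9 and a 7 become indistinguishable, which is exactly the loss the non-Boolean algorithm avoids.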

    But Shyamsundar has extended this computation such that a Boolean conversion is no longer necessary. He calls this new technique the non-Boolean quantum amplitude amplification algorithm.

    “If a problem requires a yes-or-no answer, the new algorithm is identical to the previous one,” Shyamsundar said. “But this now becomes open to more tasks; there are a lot of problems that can be solved more naturally in terms of a score rather than a yes-or-no output.”

    A second algorithm introduced in the paper, dubbed the quantum mean estimation algorithm, allows scientists to estimate the average rating of all the records. In other words, it can assess how “jazzy” the stack is as a whole.

    Both algorithms do away with having to reduce scenarios into computations with only two types of output, and instead allow for a range of outputs to more accurately characterize information with a quantum speedup over classical computing methods.
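Classically, the quantity the quantum mean estimation algorithm targets is just an average; the quantum version promises a speedup in how the answer is obtained, not a different answer. A toy classical analogue, with hypothetical scores:

```python
from statistics import mean

# Hypothetical jazziness scores for the whole stack of records.
scores = [2.5, 7.0, 5.5, 4.0, 9.0, 1.0]

# The quantum mean estimation algorithm targets this same quantity,
# the average score of the stack, but with a quantum speedup over
# classical sampling.
stack_jazziness = mean(scores)
print(stack_jazziness)
```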

    Procedures like these may seem primitive and abstract, but they build an essential foundation for more complex and useful tasks in the quantum future. Within physics, the newly introduced algorithms may eventually allow scientists to reach target sensitivities faster in certain experiments. Shyamsundar is also planning to leverage these algorithms for use in quantum machine learning.

    And outside the realm of science? The possibilities are yet to be discovered.

    “We’re still in the early days of quantum computing,” Shyamsundar said, noting that curiosity often drives innovation. “These algorithms are going to have an impact on how we use quantum computers in the future.”

    This work is supported by the Department of Energy’s Office of Science Office of High Energy Physics QuantISED program.

    The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Fermi National Accelerator Laboratory(US), located just outside Batavia, Illinois, near Chicago, is a United States Department of Energy national laboratory specializing in high-energy particle physics. Since 2007, Fermilab has been operated by the Fermi Research Alliance, a joint venture of the University of Chicago (US), and the Universities Research Association (URA) (US). Fermilab is a part of the Illinois Technology and Research Corridor.

    Fermilab’s Tevatron was a landmark particle accelerator; until the startup in 2008 of the Large Hadron Collider(CH) near Geneva, Switzerland, it was the most powerful particle accelerator in the world, accelerating protons and antiprotons to energies of up to 980 GeV and producing proton-antiproton collisions with energies of up to 1.96 TeV. It was the first accelerator to reach one “tera-electron-volt” beam energy. At 3.9 miles (6.3 km) in circumference, it was the world’s fourth-largest particle accelerator. One of its most important achievements was the 1995 discovery of the top quark, announced by research teams using the Tevatron’s CDF and DØ detectors. It was shut down in 2011.

    In addition to high-energy collider physics, Fermilab hosts fixed-target and neutrino experiments, such as MicroBooNE (Micro Booster Neutrino Experiment), NOνA (NuMI Off-Axis νe Appearance) and SeaQuest. Completed neutrino experiments include MINOS (Main Injector Neutrino Oscillation Search), MINOS+, MiniBooNE and SciBooNE (SciBar Booster Neutrino Experiment). The MiniBooNE detector was a 40-foot (12 m) diameter sphere containing 800 tons of mineral oil lined with 1,520 phototube detectors. An estimated 1 million neutrino events were recorded each year. SciBooNE sat in the same neutrino beam as MiniBooNE but had fine-grained tracking capabilities. The NOνA experiment uses, and the MINOS experiment used, Fermilab’s NuMI (Neutrinos at the Main Injector) beam, which is an intense beam of neutrinos that travels 455 miles (732 km) through the Earth to the Soudan Mine in Minnesota and the Ash River, Minnesota, site of the NOνA far detector. In 2017, the ICARUS neutrino experiment was moved from CERN to Fermilab.
    In the public realm, Fermilab is home to a native prairie ecosystem restoration project and hosts many cultural events: public science lectures and symposia, classical and contemporary music concerts, folk dancing and arts galleries. The site is open from dawn to dusk to visitors who present valid photo identification.

    Asteroid 11998 Fermilab is named in honor of the laboratory.

    Weston, Illinois, was a community next to Batavia voted out of existence by its village board in 1966 to provide a site for Fermilab.

    The laboratory was founded in 1969 as the National Accelerator Laboratory; it was renamed in honor of Enrico Fermi in 1974. The laboratory’s first director was Robert Rathbun Wilson, under whom the laboratory opened ahead of schedule and under budget. Many of the sculptures on the site are of his creation. He is the namesake of the site’s high-rise laboratory building, whose unique shape has become the symbol for Fermilab and which is the center of activity on the campus.

    After Wilson stepped down in 1978 to protest the lack of funding for the lab, Leon M. Lederman took on the job. It was under his guidance that the original accelerator was replaced with the Tevatron, an accelerator capable of colliding protons and antiprotons at a combined energy of 1.96 TeV. Lederman stepped down in 1989. The science education center at the site was named in his honor.

    The later directors include:

    John Peoples, 1989 to 1996
    Michael S. Witherell, July 1999 to June 2005
    Piermaria Oddone, July 2005 to July 2013
    Nigel Lockyer, September 2013 to the present

    Fermilab continues to participate in the work at the Large Hadron Collider (LHC); it serves as a Tier 1 site in the Worldwide LHC Computing Grid.

    FNAL Icon

     
  • richardmitnick 3:08 pm on January 19, 2021 Permalink | Reply
    Tags: "Rethinking Spin Chemistry from a Quantum Perspective", “Superposition” lets algorithms represent two variables at once which then allows scientists to focus on the relationship between these variables without any need to determine their individual sta, Bayesian inference, , , , Superposition   

    From Osaka City University (大阪市立大学: Ōsaka shiritsu daigaku) (JP): “Rethinking Spin Chemistry from a Quantum Perspective” 

    From Osaka City University (大阪市立大学: Ōsaka shiritsu daigaku) (JP)

    Jan 18, 2021
    James Gracey
    Global Exchange Office
    kokusai@ado.osaka-cu.ac.jp

    Researchers at Osaka City University use quantum superposition states and Bayesian inference to create a quantum algorithm, easily executable on quantum computers, that accurately and directly calculates energy differences between the electronic ground and excited spin states of molecular systems in polynomial time.

    A quantum circuit that maximizes the probability P(0) in the measurement of the parameter J.

    Understanding how the natural world works enables us to mimic it for the benefit of humankind. Think of how much we rely on batteries. At the core is understanding molecular structures and the behavior of electrons within them. Calculating the energy differences between a molecule’s electronic ground and excited spin states helps us understand how to better use that molecule in a variety of chemical, biomedical and industrial applications.

    We have made much progress with closed-shell systems, in which electrons are paired up and stable. Open-shell systems, on the other hand, are less stable, and their underlying electronic behavior is complex and thus more difficult to understand. They have unpaired electrons in their ground state, which cause their energy to vary due to the intrinsic nature of electron spins and make measurements difficult, especially as the molecules increase in size and complexity.

    Although such molecules are abundant in nature, there is a lack of algorithms that can handle this complexity. One hurdle has been what is called the exponential explosion of computational time: using a conventional computer to calculate how the unpaired spins influence the energy of an open-shell molecule would take hundreds of millions of years, time humans do not have.
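The "exponential explosion" comes from the dimension of the quantum state space, which doubles with each added spin. A quick illustration of the scaling:

```python
# The state space of n interacting spin-1/2 electrons has dimension
# 2**n, so classical storage and computation time grow exponentially
# with system size. A molecule with ~100 unpaired-spin orbitals is
# hopeless for exact classical treatment.
def hilbert_dim(n_spins):
    return 2 ** n_spins

for n in (10, 50, 100):
    print(n, hilbert_dim(n))
```

A quantum computer sidesteps this because n qubits natively represent that 2**n-dimensional space.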

    Quantum computers are in development to help reduce this to what is called “polynomial time”. However, the process scientists have been using to calculate the energy differences of open-shell molecules has essentially been the same for both conventional and quantum computers. This hampers the practical use of quantum computing in chemical and industrial applications.

    “Approaches that invoke true quantum algorithms help us treat open-shell systems much more efficiently than by utilizing classical computers”, state Kenji Sugisaki and Takeji Takui from Osaka City University. With their colleagues, they developed a quantum algorithm executable on quantum computers, which can, for the first time, accurately calculate energy differences between the electronic ground and excited spin states of open-shell molecular systems. Their findings were published in the journal Chemical Science on 24 Dec 2020.

    The energy difference between molecular spin states is characterized by the value of the exchange interaction parameter J. Conventional quantum algorithms have been able to accurately calculate energies for closed-shell molecules “but they have not been able to handle systems with a strong multi-configurational character”, states the group. Until now, scientists have assumed that to obtain the parameter J one must first calculate the total energy of each spin state. In open-shell molecules this is difficult because the total energy of each spin state varies greatly as the molecule changes in activity and size. However, “the energy difference itself is not greatly dependent on the system size”, notes the research team. This led them to create an algorithm with calculations that focused on the spin difference, not the individual spin states. Creating such an algorithm required that they let go of assumptions developed from years of using conventional computers and focus on the unique characteristics of quantum computing – namely “quantum superposition states”.

    “Superposition” lets algorithms represent two variables at once, which then allows scientists to focus on the relationship between these variables without any need to determine their individual states first. The research team used something called a broken-symmetry wave function as a superposition of wave functions with different spin states and rewrote it into the Hamiltonian equation for the parameter J. By running this new quantum circuit, the team was able to focus on deviations from their target, and by applying Bayesian inference, a machine learning technique, they used these deviations to determine the exchange interaction parameter J. “Numerical simulations based on this method were performed for the covalent dissociation of molecular hydrogen (H2), the triple bond dissociation of molecular nitrogen (N2), and the ground states of C, O, Si atoms and NH, OH+, CH2, NF and O2 molecules with an error of less than 1 kcal/mol”, adds the research team.
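The Bayesian inference loop can be sketched classically. This toy model is not the authors' BxB circuit: the outcome model P(0 | J) = (1 + cos(J·t))/2 and all numbers are assumptions, used only to show how repeated binary measurements narrow a posterior over J:

```python
import math
import random

# Toy sketch of Bayesian estimation of an exchange parameter J from
# repeated binary measurements. The likelihood model below is an
# ASSUMED stand-in, not the published BxB circuit's actual response.
random.seed(0)

J_TRUE, T = 0.7, 1.0
grid = [i * 0.01 for i in range(1, 314)]           # candidate J values
posterior = [1.0 / len(grid)] * len(grid)          # flat prior over J

def p0(j, t=T):
    """Assumed probability of measuring outcome 0 given parameter j."""
    return (1 + math.cos(j * t)) / 2

for _ in range(200):                               # simulated shots
    outcome0 = random.random() < p0(J_TRUE)
    likes = [p0(j) if outcome0 else 1 - p0(j) for j in grid]
    posterior = [p * l for p, l in zip(posterior, likes)]
    norm = sum(posterior)
    posterior = [p / norm for p in posterior]      # Bayes update

j_est = grid[max(range(len(grid)), key=posterior.__getitem__)]
print(round(j_est, 2))                             # close to J_TRUE
```

Each shot multiplies the prior by the likelihood of the observed outcome, so the posterior concentrates around the true J without ever computing the total energy of either spin state.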

    “We plan on installing our Bayesian eXchange coupling parameter calculator with Broken-symmetry wave functions (BxB) software on near-term quantum computers equipped with noisy (no quantum error correction), intermediate-scale (several hundreds of qubits) quantum devices (NISQ devices), and testing its usefulness for quantum chemical calculations of actual sizable molecular systems.”

    See the full article here.


    Osaka City University (OCU) (大阪市立大学: Ōsaka shiritsu daigaku) (JP), is a public university in Japan. It is located in Sumiyoshi-ku, Osaka.

    OCU’s predecessor was founded in 1880, as Osaka Commercial Training Institute (大阪商業講習所) with donations by local merchants. It became Osaka Commercial School in 1885, then was municipalized in 1889. Osaka City was defeated in a bid to draw the Second National Commercial College (the winner was Kobe City), so the city authorities decided to establish a municipal commercial college without any aid from the national budget.

    In 1901, the school was reorganized to become Osaka City Commercial College (市立大阪高等商業学校), later authorized under Specialized School Order in 1904. The college had grand brick buildings around the Taishō period.

    In 1928, the college became Osaka University of Commerce (大阪商科大学), the first municipal university in Japan. The city mayor, Hajime Seki (関 一, Seki Hajime, 1873–1935) declared the spirit of the municipal university, that it should not simply copy the national universities and that it should become a place for research with a background of urban activities in Osaka. But, contrary to his words, the university was removed to the most rural part of the city by 1935. The first president of the university was a liberalist, so the campus gradually became what was thought to be “a den of the Reds (Marxists)”. During World War II, the Marxists and the socialists in the university were arrested (about 50 to 80 members) soon after the liberal president died. The campus was evacuated and used by the Japanese Navy.

    After the war, the campus was occupied by the U.S. Army (named “Camp Sakai”), and a number of students became anti-American fighters and “worshipers” of the Soviet Union. The campus was returned to the university, partly in 1952, and fully in 1955. In 1949, during the allied occupation, the university was merged (with other two municipal colleges) into Osaka City University, under Japan’s new educational system.

     
  • richardmitnick 10:48 am on January 19, 2021 Permalink | Reply
    Tags: "Transforming quantum computing’s promise into practice" William Oliver, , , Decoherence, , , MIT’s Lincoln Laboratory, , , Superposition   

    From MIT: “Transforming quantum computing’s promise into practice” William Oliver 

    MIT News

    From MIT News

    January 19, 2021
    Daniel Ackerman

    Electrical engineer William Oliver develops technology to enable reliable quantum computing at scale.

    MIT electrical engineer William D. Oliver develops the fundamental technology to enable reliable quantum computers at scale.
    Credit: Adam Glanzman.

    It was music that sparked William Oliver’s lifelong passion for computers.

    Growing up in the Finger Lakes region of New York, he was an avid keyboard player. “But I got into music school on voice,” says Oliver, “because it was a little bit easier.”

    But once in school, first at State University of New York at Fredonia then the University of Rochester, he hardly shied away from a challenge. “I was studying sound recording technology, which led me to digital signal processing,” explains Oliver. “And that led me to computers.” Twenty-five years later, he’s still stuck on them.

    Oliver, a recently tenured associate professor in MIT’s Department of Electrical Engineering and Computer Science, is building a new class of computer — the quantum computer — with the potential to radically improve how we process information and simulate complex systems. Quantum computing is still in its early days, and Oliver aims to help usher the field out of the laboratory and into the real world. “Our mission is to build the fundamental technologies that are necessary to scale up quantum computing,” he says.

    Coast to coast and back again

    Oliver’s first stop at MIT was as a master’s student in the Media Lab with adviser Tod Machover. Their interactive Brain Opera project paired Oliver’s love for both music and computing. Oliver orchestrated users’ voices with a computer-generated “angelic arpeggiation of strings and a chorus.” The project was installed at the Haus der Musik museum in Vienna. “It was a fantastic master’s project. I really loved it,” says Oliver. “But the question was ‘okay, what do I do next?’”

    Eager for a new challenge, Oliver chose to explore more fundamental research. “I found quantum mechanics to be really puzzling and interesting,” says Oliver. So he traveled to Stanford University to earn a PhD studying quantum optics using free electrons. “I feel very fortunate that I could do those experiments, which have almost no practical application, but that allowed me to think really deeply about quantum mechanics,” he says.

    Oliver’s timing was fortunate too. He was delving into quantum mechanics just as the field of quantum computing was emerging. A classical computer, like the one you’re using to read this story, stores information in binary bits, each of which holds a value of 0 or 1. In contrast, a quantum computer stores information in qubits, each of which can hold a 0, 1, or any simultaneous combination of 0 and 1, thanks to a quantum mechanical phenomenon called superposition. That means quantum computers can process information far faster than classical computers, in some cases completing tasks in minutes where a classical computer would take millennia — at least in theory. When Oliver was completing his PhD, quantum computing was a field in its infancy, more idea than reality. But Oliver grasped the potential of quantum computing, so he returned to MIT to help it grow.
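The qubit description above can be made concrete in a few lines. A minimal sketch of an equal superposition and its measurement probabilities:

```python
import math

# A qubit state is a unit vector (alpha, beta) of complex amplitudes:
# |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1.
# An equal superposition puts probability 1/2 on each outcome,
# realized only when the qubit is measured.
alpha = beta = 1 / math.sqrt(2)

prob_0 = abs(alpha) ** 2
prob_1 = abs(beta) ** 2
print(prob_0, prob_1)  # 0.5 each, up to floating point
```

The "far faster in theory" caveat in the text matters: the speedup comes from how algorithms exploit interference among these amplitudes, not from qubits simply "trying both values at once."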

    The qubit quandary

    Quantum computers are frustratingly inconsistent. That’s in part because those qubit superposition states are fragile. In a process called decoherence, qubits can err and lose their quantum information from the slightest disturbance or material defect. In 2003, Oliver took a staff position at MIT’s Lincoln Laboratory to help solve problems like decoherence. His goal, with colleagues Terry Orlando, Leonya Levitov, and Seth Lloyd, was to engineer reliable quantum computing systems that can be scaled up for practical use. “Quantum computing is transitioning from scientific curiosity to technical reality,” says Oliver. “We know that it works at small scale. And we’re now trying to increase the size of the systems so we can do problems that are actually meaningful.”

    Even background levels of radiation can trigger decoherence in mere milliseconds. In a recent Nature paper, Oliver and his colleagues, including professor of physics Joe Formaggio, described this problem and proposed ways to shelter qubits from damaging radiation, like shielding them with lead.

    He is quick to emphasize the role of collaboration in solving these complex challenges. “Engineering these quantum systems into useful, larger scale machines is going to require almost every department at the Institute,” says Oliver. In his own research, he builds qubits from electrical circuits in aluminum that are supercooled to just a smidge warmer than absolute zero. At that temperature, the system loses electrical resistance and can be used as an anharmonic oscillator that stores quantum information. Engineering such an intricate system to reliably process information means “we need to bring in a lot of people with their own talents,” says Oliver.

    “For example, materials scientists will have a lot to say about the materials and the defects on the surfaces,” he adds. “Electrical engineers will have something to say about how to fabricate and control the qubits. Computer scientists and applied mathematicians will have something to say about the algorithms. Chemists and biologists know the hard problems to solve. And so on.” When he first joined Lincoln Laboratory, Oliver says just two Lincoln staff were focused on quantum technologies. That number now exceeds 100.

    In 2015, Oliver founded the Engineering Quantum Systems (EQuS) group to focus specifically on superconducting qubit technology. He is also a Lincoln Laboratory Fellow, director of MIT’s Center for Quantum Engineering, and associate director of the Research Laboratory of Electronics.

    A quantum future

    Oliver envisions a steadily growing role for quantum computing. Already, Google has demonstrated that for a particular task, a 53-qubit quantum computer can far outpace even the world’s largest supercomputer, which features quadrillions of transistors. “That was like the flight at Kitty Hawk,” says Oliver. “It got off the ground.”

    Google quantum computer.

    In the near-term, Oliver thinks quantum and classical computers could work as partners. The classical machine would churn through an algorithm, dispatching specific calculations for the quantum computer to run before its qubits decohere. In the longer term, Oliver says that error-correcting codes could enable quantum computers to function indefinitely, even as some individual components remain faulty. “And that’s when quantum computers will basically be universal,” says Oliver. “They’ll be able to run any quantum algorithm at large scale.” That could enable vastly improved simulations of complex systems in fields like molecular biology, quantum chemistry, and climatology.

    Oliver will continue to push quantum computing toward that reality. “There are real accomplishments that have been happening,” he says. “At the same time, on the theoretical side, there are real problems we could solve if we just had a quantum computer big enough.” While focused on his mission to scale up quantum computing, Oliver hasn’t lost his passion for music. Although, he says he rarely sings these days: “Only in the shower.”

    See the full article here.



    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    USPS “Forever” postage stamps celebrating Innovation at MIT.

    MIT Campus

     
  • richardmitnick 2:02 pm on December 29, 2020 Permalink | Reply
    Tags: "New Views of Quantum Jumps Challenge Core Tenets of Physics", , , By “noncatchable” the researchers mean that the jump back to the ground state will not always be smooth and predictable., , , , Superconducting qubits, Superposition, The most fundamental breakthrough arguably came in 1986 when researchers for the first time experimentally verified that quantum jumps are actual physical events that can be observed and studied., The new study shows that the predictable “catchable” quantum jumps must have a noncatchable counterpart., The scientists behind the work called catchable jumps “islands of predictability in a sea of uncertainty."   

    From Scientific American: “New Views of Quantum Jumps Challenge Core Tenets of Physics” 


    From Scientific American

    December 29, 2020
    Eleni Petrakou

    One of the most basic processes in all of nature—a subatomic particle’s transition between discrete energy states—is surprisingly complex and sometimes predictable, recent work shows.

    Credit: Getty Images.

    Quantum mechanics, the theory that describes the physics of the universe at very small scales, is notorious for defying common sense. Consider, for instance, the way that standard interpretations of the theory suggest change occurs in the quantum turf: shifts from one state to another supposedly happen unpredictably and instantaneously. Put another way, if events in our familiar world unfolded similarly to those within atoms, we would expect to routinely see batter becoming a fully baked cake without passing through any intermediate steps. Everyday experience, of course, tells us this is not the case, but for the less accessible microscopic realm, the true nature of such “quantum jumps” has been a major unsolved problem in physics.

    In recent decades, however, technological advancements have allowed physicists to probe the issue more closely in carefully arranged laboratory settings. The most fundamental breakthrough arguably came in 1986, when researchers for the first time experimentally verified that quantum jumps are actual physical events that can be observed and studied. Ever since, steady technical progress has opened deeper vistas upon the mysterious phenomenon. Notably, an experiment published in 2019 [Nature] overturned the traditional view of quantum jumps by demonstrating that they move predictably and gradually once they start—and can even be stopped midway.

    That experiment, performed at Yale University, used a setup that let the researchers monitor the transitions with minimal intrusion. Each jump took place between two energy values of a superconducting qubit, a tiny circuit built to mimic the properties of atoms. The research team used measurements of “side activity” taking place in the circuit when the system had the lower energy. This is a bit like knowing which show is playing on a television in another room by only listening for certain key words. This indirect probe evaded one of the top concerns in quantum experiments—namely, how to avoid influencing the very system that one is observing. Known as “clicks” (from the sound that old Geiger counters made when detecting radioactivity), these measurements revealed an important property: jumps to the higher energy were always preceded by a halt in the “key words,” a pause in the side activity. This eventually permitted the team to predict the jumps’ unfolding and even to stop them at will.
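The monitoring strategy described above, listening for a pause in the clicks, can be sketched as a toy detector. All timings and the window length below are illustrative assumptions, not the experiment's parameters:

```python
# Toy sketch of Yale-style indirect monitoring: scan a stream of
# "click" timestamps from the qubit's side activity and raise a
# warning when a quiet stretch exceeds some window, the signature
# that preceded a jump to the higher energy state.
def pause_warnings(click_times, window):
    """Return times at which the gap since the last click exceeds `window`."""
    warnings = []
    for prev, nxt in zip(click_times, click_times[1:]):
        if nxt - prev > window:
            warnings.append(prev + window)
    return warnings

clicks = [0.1, 0.2, 0.35, 0.4, 1.9, 2.0]   # illustrative timestamps
print(pause_warnings(clicks, window=0.5))   # flags the silence after t=0.4
```

Because the warning fires before the jump completes, the experimenters could intervene and halt the transition midway.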

    Now a new theoretical study delves deeper into what can be said about the jumps and when. And it finds that this seemingly simple and fundamental phenomenon is actually quite complex.

    CATCH ME IF YOU CAN

    The new study, published in Physical Review Research, models the step-by-step, cradle-to-grave evolution of quantum jumps—from the initial lower-energy state of the system, known as the ground state, then a second one where it has higher energy, called the excited state, and finally the transition back to the ground state. This modeling shows that the predictable, “catchable” quantum jumps must have a noncatchable counterpart, says author Kyrylo Snizhko, a postdoctoral researcher now at Karlsruhe Institute of Technology in Germany, who was formerly at the Weizmann Institute of Science in Israel, where the study was performed.

    Specifically, by “noncatchable” the researchers mean that the jump back to the ground state will not always be smooth and predictable. Instead the study’s results show that such an event’s evolution depends on how “connected” the measuring device is to the system (another peculiarity of the quantum realm, which, in this case, relates to the timescale of the measurements, compared with that of the transitions). The connection can be weak, in which case a quantum jump can also be predictable through the pause in clicks from the qubit’s side activity, in the way used by the Yale experiment.

    The system transitions by passing through a mixture of the excited state and ground state, a quantum phenomenon known as superposition. But sometimes, when the connection exceeds a certain threshold, this superposition will shift toward a specific value of the mixture and tend to stay at that state until it moves to the ground unannounced. In that special case, “this probabilistic quantum jump cannot be predicted and reversed midflight,” explains Parveen Kumar, a postdoctoral researcher at the Weizmann Institute and co-author of the most recent study. In other words, even jumps for which timing was initially predictable would be followed by inherently unpredictable ones.

    But there is yet more nuance when examining the originally catchable jumps. Snizhko says that even these possess an unpredictable element. A catchable quantum jump will always proceed on a “trajectory” through the superposition of the excited and ground states, but there can be no guarantee that the jump will ever finish. “At each point in the trajectory, there is a probability that the jump continues and a probability that it is projected back to the ground state,” Snizhko says. “So the jump may start happening and then abruptly get canceled. The trajectory is totally deterministic—but whether the system will complete the trajectory or not is unpredictable.”
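Snizhko's picture of a deterministic trajectory with a per-step chance of cancellation lends itself to a toy Monte Carlo. The step count and abort probability below are illustrative assumptions, not values from the paper:

```python
import random

# Toy Monte Carlo of a "catchable" jump: the trajectory through the
# superposition is deterministic, but at each step there is some
# chance the system is projected back to the ground state and the
# jump aborts. Parameters here are illustrative assumptions.
random.seed(1)

def attempt_jump(steps=20, p_abort=0.05):
    """Return True if the jump completes, False if canceled midflight."""
    for _ in range(steps):
        if random.random() < p_abort:
            return False        # projected back to the ground state
    return True                 # excited state reached

completed = sum(attempt_jump() for _ in range(10_000))
print(completed / 10_000)       # near (1 - 0.05) ** 20, about 0.36
```

The deterministic part (the trajectory itself) and the unpredictable part (whether any given attempt survives all the way) coexist in the same process, which is exactly the nuance the paper highlights.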

    This behavior appeared in the Yale experiment’s results. The scientists behind that work called such catchable jumps “islands of predictability in a sea of uncertainty.” Ricardo Gutiérrez-Jáuregui, a postdoctoral researcher at Columbia University and one of the authors of the corresponding study, notes that “the beauty of that work was to show that in the absence of clicks, the system followed a predetermined path to reach the excited state in a short but nonzero time. The device, however, still has a chance to ‘click’ as the system transitions through this path, thus interrupting its transition.”

    “QUANTUM PHYSICS IS BROKEN!”

    Zlatko Minev, a researcher at the IBM Thomas J. Watson Research Center and lead author of the earlier Yale study, notes that the new theoretical paper “derives a very nice, simple model and explanation of the quantum jump phenomenon in the context of a qubit as a function of the parameters of the experiment.” Taken together with the experiment at Yale, the results “show that there is more to the story of discreteness, randomness and predictability in quantum mechanics than commonly thought.” Specifically, the surprisingly nuanced behavior of quantum jumps—the way a leap from the ground state to the excited state can be foretold—suggests a degree of predictability inherent to the quantum world that has never before been observed. Some would even consider it forbidden, had it not already been validated by experiment. When Minev first discussed the possibility of predictable quantum jumps with others in his group, a colleague responded by shouting back, “If this is true, then quantum physics is broken!”

    “In the end, our experiment worked, and from it one can infer that quantum jumps are random and discrete,” Minev says. “Yet on a finer timescale, their evolution is coherent and continuous. These two seemingly opposed viewpoints coexist.”

    As to whether such processes can apply to the material world at large—for instance, to atoms outside a quantum lab—Kumar is undecided, in large part because of how carefully specific the study’s conditions were. “It would be interesting to generalize our results,” he says. If the results turn out similar for different measurement setups, then this behavior—events that are in some sense both random and predictable, discrete yet continuous—could reflect more general properties of the quantum world.

    Meanwhile the predictions of the study could get checked soon. According to Serge Rosenblum, a researcher at the Weizmann Institute who did not participate in either study, these effects can be observed with today’s state-of-the-art superconducting quantum systems and are high on the list of experiments for the institute’s new qubits lab. “It was quite amazing to me that a deceptively simple system such as a single qubit can still hide such surprises when we measure it,” he adds.

    For a long time, quantum jumps—the most basic processes underlying everything in nature—were considered nearly impossible to probe. But technological progress is changing that. Kater Murch, an associate professor at Washington University in St. Louis, who did not participate in the two studies, remarks, “I like how the Yale experiment seems to have motivated this theory paper, which is uncovering new aspects of a physics problem that has been studied for decades. In my mind, experiments really help drive the ways that theorists think about things, and this leads to new discoveries.”

    The mystery might not just be going away, though. As Snizhko says, “I do not think that the quantum jumps problem will be resolved completely any time soon; it is too deeply ingrained in quantum theory. But by playing with different measurements and jumps, we might stumble upon something practically useful.”

    See the full article here.



    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 11:51 am on November 24, 2020 Permalink | Reply
    Tags: "GPU Clusters Accelerate Quantum Computer Simulator", A group of researchers at the U.S. Department of Energy’s PNNL have invented a quantum computer simulator called DM-SIM that is 10 times faster than existing methods., , GPUs devote most of their chip area to compute units and have high-throughput memory., New method improves error investigations in deep quantum circuits., , Superposition, The team achieved the increase in speed by harnessing the power of graphical processing units (GPUs).   

    From DOE’s Pacific Northwest National Laboratory: “GPU Clusters Accelerate Quantum Computer Simulator” 

    From DOE’s Pacific Northwest National Laboratory

    November 13, 2020
    Rebekah Orton

    Artist’s rendering of a quantum computer. Credit: Jeffrey London/PNNL.

    New method improves error investigations in deep quantum circuits

    Before quantum computers begin to be deployed, how will we know if they work? The answer: quantum computer simulators. These important tools, now under development, run on the world’s most powerful supercomputing resources and still take days or weeks to complete quantum computing scenarios.

    Now, a group of researchers at the U.S. Department of Energy’s Pacific Northwest National Laboratory (PNNL) has invented a quantum computer simulator, called DM-Sim, that is 10 times faster than existing methods. The feat was detailed in one of only a handful of articles nominated for “Best Paper” at SC20, the annual international conference for high-performance computing [Paper: “Density Matrix Quantum Circuit Simulation via the BSP Machine on Modern GPU Clusters”].

    The team, led by computer scientist Ang Li, achieved the increase in speed by harnessing the power of graphical processing units (GPUs), the lightning-quick processors originally designed for images and videos.

    Simulating qubits: the basic unit of quantum computing

    The basic unit of quantum programming—the quantum bit, or qubit—is strikingly different from its counterpart, the bit, in a classical computer. Unlike bits, the binary units that represent ones and zeros in a classical computer, a quantum computer’s qubits can represent the possibility of both one and zero at the same time.
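    As a toy illustration (not taken from the PNNL paper), a single qubit can be modeled as a two-component complex vector whose squared amplitudes give the probabilities of measuring one or zero:

```python
import numpy as np

# A classical bit is either 0 or 1; a qubit is a unit vector a|0> + b|1>.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# An equal superposition: the qubit holds the possibility of both at once.
psi = (zero + one) / np.sqrt(2)

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5]
```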

    Qubits, the basic unit of quantum computing, can represent the possibility of both one and zero at the same time. Credit: Jeffrey London/PNNL.

    A reliable quantum computer simulator needs to capture the complexity of superposition, but that alone is not enough. Multiple qubits in a quantum computer can also exhibit quantum entanglement: once qubits are entangled, if a single one collapses into a definite one or zero, the rest collapse with it, like a house of cards.
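    That collapse behavior can be sketched in a few lines of numpy (a hypothetical illustration, unrelated to DM-Sim’s internals): for an entangled two-qubit Bell state, sampling joint measurement outcomes shows the qubits always collapsing together:

```python
import numpy as np

# Two-qubit Bell state (|00> + |11>)/sqrt(2), in the basis {00, 01, 10, 11}.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(bell) ** 2  # [0.5, 0, 0, 0.5]

# Sample 1,000 joint measurements: once one qubit collapses to 0 (or 1),
# the other is always found in the same state.
rng = np.random.default_rng(0)
outcomes = rng.choice(["00", "01", "10", "11"], size=1000, p=probs)
print(sorted(set(outcomes.tolist())))  # ['00', '11']; '01' and '10' never occur
```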

    Superposition and entanglement are the reasons quantum computers are more useful than classical computers for certain problems. Researchers need powerful quantum computer simulators that can accurately mimic qubits’ billions of possibilities—and errors.

    “Physical qubits currently aren’t perfect or logical,” said Li. “They are more like nature where everything responds to its environment, so you have to find a way to represent noise and errors to create a more realistic simulator.”

    But realism takes more time to calculate—and Li wanted to see if GPUs could hurry things up.

    Layering virtual quantum circuits onto multiple GPUs

    GPUs have been sold for decades to move images quickly across screens, but using them for general-purpose computation, like scientific simulation, emerged in 2007. Unlike a cache-heavy central processing unit (CPU), GPUs devote most of their chip area to compute units and have high-throughput memory. This makes their computations much faster.

    Li started working with GPUs in 2009 before they were as widely used as they are today. He was partway through his PhD research in high-performance computing by the time researchers began to use GPUs to accelerate deep learning in 2013. He’s seen the value ever since.

    “Many major computational problems will move to GPU-centric computations, and this work is part of that trend,” said Li. “Big applications need GPUs’ faster delivery to expand their expected performance.”

    Connecting multiple graphical processing units (GPUs) amplifies their swift computing power as they simulate qubits. Credit: Jeffrey London/PNNL.

    Quantum circuits aren’t images. But because GPUs rely on a large number of compute units to deliver high performance, Li suspected they could more quickly perform the heavy computations that represent quantum gates—the building blocks of a quantum circuit.

    Creating deeper gates in a quantum computer simulator

    Quantum circuits are made of operations that change a qubit’s state. These operations are called gates. At the beginning of a circuit, each qubit is like an arrow, or vector, pointing in the “0” direction. After the circuit ends and the qubits are measured, they collapse to classical one or zero states. The statistical breakdown of ones and zeros indicates the result of the computation.
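    In state-vector terms, that picture can be sketched as follows (a generic illustration, not the paper’s code): a gate is a unitary matrix applied to the state, and repeating the circuit many times yields the measurement statistics:

```python
import numpy as np

# Each qubit starts as a vector pointing in the "0" direction.
state = np.array([1, 0], dtype=complex)

# A Hadamard gate rotates |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state

# "Run" the one-gate circuit many times: each run collapses to 0 or 1.
rng = np.random.default_rng(1)
shots = rng.choice([0, 1], size=10_000, p=np.abs(state) ** 2)
print(shots.mean())  # close to 0.5: about half the runs collapse to 1
```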

    An accurate quantum computer simulator needs to describe both pure and mixed quantum states within each circuit. That’s why Li and his team used the density matrix, a method of describing the statistical state of a quantum system. Unlike the widely used state vector, a density matrix contains all the information about a particular quantum state, including the effects of noise.
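    The distinction can be made concrete with a small numpy sketch (hypothetical, not the DM-Sim implementation): a pure state’s density matrix is the outer product of its state vector with itself, while a mixed (e.g., decohered) state has no state-vector description at all, and the purity tr(ρ²) tells them apart:

```python
import numpy as np

# Pure state: an equal superposition, as a state vector.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Its density matrix rho = |psi><psi| carries the same information.
rho_pure = np.outer(psi, psi.conj())

# A density matrix can also describe what a state vector cannot:
# a mixed state, e.g. a qubit decohered into a 50/50 classical coin flip.
rho_mixed = np.array([[0.5, 0], [0, 0.5]], dtype=complex)

# Purity tr(rho^2) distinguishes them: 1 for pure states, < 1 for mixed.
print(np.trace(rho_pure @ rho_pure).real)   # ~1.0
print(np.trace(rho_mixed @ rho_mixed).real) # 0.5
```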

    While researchers have used density matrices to represent qubits before, no one before Li’s team had combined an efficient density matrix quantum circuit simulator with a GPU-accelerated high-performance computing cluster. Because of the complexity of operating on the large density matrix, it isn’t easy to manage the massive number of threads and the communication interwoven between cooperating GPUs. And the researchers’ efforts could fail if they didn’t synchronize communication across the GPUs and GPU nodes holding parts of the density matrix.

    But Li and his teammates Omer Subasi, Xiu Yang, and Sriram Krishnamoorthy were up for the challenge. After linking the GPUs, they proposed a new formulation that avoids much of the expensive communication between GPUs. More importantly, it reduces the communication overhead while conserving the natural error expected from noisy, real-world quantum gates.

    Faster and the future

    With the new method, the team ran a density matrix simulation with one million arbitrary gates in only 94 minutes. That run was far deeper and quicker than anything demonstrated before—10 times faster than simpler simulators, which represent quantum states as a state vector.

    The PNNL team applied their GPU-centered method to help investigate how errors occur in quantum circuits, but the approach could be more broadly applicable. Until a perfect quantum computer is available, PNNL’s DM-Sim simulator can be used to help develop quantum algorithms that provide understanding of molecules for medical advances, explain complex chemistry problems, analyze big-data graphs, and perform quantum-based machine learning. In the meantime, PNNL’s DM-Sim quantum computing simulator will help make quantum computers work in practical terms.

    The research was funded by the Quantum Science, Advanced Accelerator (QUASAR) laboratory-directed research and development initiative. QUASAR research contributes to the National Quantum Initiative and is part of PNNL efforts to create the science and algorithms that advance hardware development and prepare for the future of quantum technology. The complete paper and a free download of DM-Sim are available on GitHub.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    DOE’s Pacific Northwest National Laboratory (PNNL) is one of the United States Department of Energy National Laboratories, managed by the Department of Energy’s Office of Science. The main campus of the laboratory is in Richland, Washington.

    PNNL scientists conduct basic and applied research and development to strengthen U.S. scientific foundations for fundamental research and innovation; prevent and counter acts of terrorism through applied research in information analysis, cyber security, and the nonproliferation of weapons of mass destruction; increase the U.S. energy capacity and reduce dependence on imported oil; and reduce the effects of human activity on the environment. PNNL has been operated by Battelle Memorial Institute since 1965.

     
  • richardmitnick 12:31 pm on August 27, 2020 Permalink | Reply
    Tags: "UArizona Scientists to Build What Einstein Wrote off as Science Fiction", A first-of-its-kind campuswide quantum networking testbed will be built at the University of Arizona., , , Quantum systems will provide a level of privacy security and computational clout that is impossible to achieve with today's internet., Superposition, The center will develop a quantum networking applications roadmap., The quantum internet will allow for applications that will never be possible on the internet as we know it., The quantum internet will rely on a global network of quantum processors speaking to one another via "quantum bits"., This is the third National Science Foundation Engineering Research Center led by the University of Arizona., UA Center for Quantum Networks,   

    From University of Arizona: “UArizona Scientists to Build What Einstein Wrote off as Science Fiction” 

    From University of Arizona

    8.26.20
    Media contacts:
    Daniel Stolte
    University Communications
    520-626-4402
    stolte@email.arizona.edu

    Brianna Moreno
    James C. Wyant College of Optical Sciences
    520-621-4842
    bmoreno@optics.arizona.edu

    With $26 million in federal funding, UArizona is charged with developing the internet of the future, ruled by quantum mechanical properties instead of conventional 0s and 1s. On Aug. 26, Arizona Gov. Doug Ducey, UArizona President Robert C. Robbins and others discussed the impact that the Center for Quantum Networks is expected to have on the way the world computes and communicates.

    Training tomorrow’s quantum engineering workforce is just one of the declared goals of the new Center for Quantum Networks, funded by the National Science Foundation to create a socially responsible version of the internet of the future. Credit: Narang Lab/Harvard University.

    Arizona Gov. Doug Ducey today joined University of Arizona President Robert C. Robbins and leading scientists from the new University of Arizona-based Center for Quantum Networks to talk about how the center will help develop the “internet of the future.”

    The National Science Foundation has awarded UArizona a five-year, $26 million grant – with an additional $24 million, five-year option – to lead the Center for Quantum Networks, or CQN, which is a National Science Foundation Engineering Research Center. The award has placed Arizona at the forefront of quantum networking technologies, which are expected to transform areas such as medicine, finance, data security, artificial intelligence, autonomous systems and smart devices, which together are often referred to as “the internet of things.”

    “Arizona continues to lead the nation in innovation. Establishing the Center for Quantum Networks will position the state as a global leader in advancing this technology and developing the workforce of the future,” Gov. Doug Ducey said. “We’re proud of the work the University of Arizona has done to secure this grant and look forward to the scientific achievements that will result from it.”

    The CQN will take center stage in a burgeoning field. Companies like IBM, Microsoft and Google are racing to build reliable quantum computers, and China has invested billions of dollars in quantum technology research. The U.S. has begun a serious push to exceed China’s investment and to “win” the global race to harness quantum technologies.

    “Less than a year ago, a quantum computer for the first time performed certain calculations that are no longer feasible for even the largest conventional supercomputers,” said Saikat Guha, CQN director and principal investigator and associate professor in the UArizona James C. Wyant College of Optical Sciences, who joined Ducey and Robbins for the virtual event. “The quantum internet will allow for applications that will never be possible on the internet as we know it.”

    Unlike the existing internet – in which computers around the globe exchange data encoded in the familiar 0s and 1s – the quantum internet will rely on a global network of quantum processors speaking to one another via “quantum bits,” or qubits.

    Qubits offer dramatic increases in processing capacity over conventional bits because they can exist in not just one state, but two at the same time. Known as superposition, this difficult-to-grasp principle was first popularized by “Schrödinger’s Cat” – the famous thought experiment in which an imaginary cat inside a box is neither dead nor alive until an equally imaginary observer opens the box and checks.

    The key new resource the quantum network enables – by being able to communicate qubits from one point to another – is “entanglement” shared across distant users of the network. Entanglement – another hallmark of quantum mechanics, so strange that even Einstein was reluctant to accept it at first – allows a pair of particles, including qubits, to stay strongly correlated despite being separated by large physical distances. Entanglement enables communication among parties that is impossible to hack.

    One of the center’s goals is to develop technologies that will put the entanglement principle to use in real-world applications – for example, to stitch together far-apart sensors, such as the radio telescopes that glimpsed the first image of a black hole in space, into one giant instrument that is far more capable than the sum of the individual sensors. Similar far-reaching implications are expected in the autonomous vehicles industry and in medicine.

    “Who knows, 50 years from now, your internet service provider may send a technician to your house to install a CQN-patented quantum-enabled router that does everything your current router does, but more,” Guha said. “It lets you hook up your quantum gadgets to what we are beginning to build today – the new internet of the future.”

    A first-of-its-kind campuswide quantum networking testbed will be built at the University of Arizona, connecting laboratories across the UArizona campus, initially spanning the College of Optical Sciences, Department of Electrical and Computer Engineering, Department of Materials Science and Engineering and the BIO5 Institute.

    “The next few years will be very exciting, as we are at a time when the community puts emerging quantum computers, processors, sensors and other gadgets to real use,” Guha said. “We are just beginning to connect small quantum computers, sensors and other gadgets into quantum networks that transmit quantum bits.”

    According to Guha, quantum-enabled sensors will be more sensitive than classical ones, and will dramatically improve technologies such as microscopes used in biomedical research to look for cancer cells, sensors on low-Earth-orbit satellites, and magnetic field sensors used for positioning and navigation.

    Guha says today’s internet is a playground for hackers, due to insecure communication links to inadequately guarded data in the cloud. Quantum systems will provide a level of privacy, security and computational clout that is impossible to achieve with today’s internet.

    “The Center for Quantum Networking stands as an example for the core priorities of our university-wide strategic plan,” said UArizona President Robert C. Robbins. “As a leading international research university bringing the Fourth Industrial Revolution to life, we are deeply committed to (our strategic plan to) advance amazing new information technologies like quantum networking to benefit humankind. And we are equally committed to examining the complex, social, legal, economic and policy questions raised by these new technologies.

    “In addition to bringing researchers together from intellectually and culturally diverse disciplines, the CQN will provide future quantum engineers and social scientists with incredible learning opportunities and the chance to work side by side with the world’s leading experts.”

    The center will bring together scientists, engineers and social scientists working on quantum information science and engineering and its societal impacts. UArizona has teamed up with core partners Harvard University, the Massachusetts Institute of Technology and Yale University to work on the core hardware technologies for quantum networks and create an entrepreneurial ecosystem for quantum network technology transfer.

    In addition to creating a diverse quantum engineer workforce, the center will develop a quantum networking applications roadmap, developed cooperatively with industry partners, to help prioritize CQN’s research investments as new application concepts are developed.

    Jane Bambauer, CQN co-deputy director and professor in the James E. Rogers College of Law, who also spoke about the center, said that “the classical internet changed our relationship to computers and each other.”

    “While we build the technical foundations for the quantum internet, we are also building the foundation for a socially responsible rollout of the new technology,” Bambauer said. “We are embedding policy and social science expertise into our center’s core research activities. We’re also creating effective and inclusive education programs to make sure that the opportunities for jobs and for invention are shared broadly.”

    This is the third National Science Foundation Engineering Research Center led by the University of Arizona. The other two are the ERC for Environmentally Benign Semiconductor Manufacturing, led by the College of Engineering, and the Center for Integrated Access Networks, led by the Wyant College of Optical Sciences. CQN will be bolstered by the Wyant College’s recent endowments – including the largest faculty endowment gift in the history of the University of Arizona – and the planned construction of the new Grand Challenges Research Building, supported by the state of Arizona.

    Additional speakers at today’s event included:

    Dirk Englund, CQN Deputy Director for Engineering Research, MIT Electrical Engineering & Computer Science
    Charlie Tahan, Assistant Director for Quantum Information Science and Director, National Quantum Coordination Office, White House Office of Science and Technology Policy
    Linda Blevins, Deputy Assistant Director of the Engineering Directorate, National Science Foundation
    Kon-Well Wang, Division Director, Division of Engineering Education and Centers, Directorate for Engineering, National Science Foundation


    Center for Quantum Networks Briefing

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The University of Arizona (UA) is a place without limits – where teaching, research, service and innovation merge to improve lives in Arizona and beyond. We aren’t afraid to ask big questions, and we find even better answers.

    In 1885, establishing Arizona’s first university in the middle of the Sonoran Desert was a bold move. But our founders were fearless, and we have never lost that spirit. To this day, we’re revolutionizing the fields of space sciences, optics, biosciences, medicine, arts and humanities, business, technology transfer and many others. Since it was founded, the UA has grown to cover more than 380 acres in central Tucson, a rich breeding ground for discovery.

    U Arizona mirror lab-Where else in the world can you find an astronomical observatory mirror lab under a football stadium?

    University of Arizona’s Biosphere 2, located in the Sonoran desert. An entire ecosystem under a glass dome? Visit our campus, just once, and you’ll quickly understand why the UA is a university unlike any other.

     