Tagged: Physics

  • richardmitnick 2:17 pm on July 14, 2018 Permalink | Reply
    Tags: , , , , , Physics   

    From Brookhaven via Fermilab: “Theorists Publish Highest-Precision Prediction of Muon Magnetic Anomaly”

    From Brookhaven Lab

    via

    FNAL II photo

    FNAL Art Image by Angela Gonzales

    Fermilab, an enduring source of strength for the US contribution to scientific research worldwide.

    7.12.18
    Karen McNulty Walsh
    kmcnulty@bnl.gov
    (631) 344-8350

    Peter Genzer
    genzer@bnl.gov
    (631) 344-3174

    Latest calculation based on how subatomic muons interact with all known particles comes out just in time for comparison with precision measurements at new “Muon g-2” experiment.

    FNAL Muon G-2 studio at FNAL

    Theoretical physicists at the U.S. Department of Energy’s (DOE’s) Brookhaven National Laboratory and their collaborators have just released the most precise prediction of how subatomic particles called muons—heavy cousins of electrons—“wobble” off their path in a powerful magnetic field. The calculations take into account how muons interact with all other known particles through three of nature’s four fundamental forces (the strong nuclear force, the weak nuclear force, and electromagnetism) while reducing the greatest source of uncertainty in the prediction. The results, published in Physical Review Letters as an Editors’ Suggestion, come just in time for the start of a new experiment measuring the wobble now underway at DOE’s Fermi National Accelerator Laboratory (Fermilab).

    A version of this experiment, known as “Muon g-2,” ran at Brookhaven Lab in the late 1990s and early 2000s, producing a series of results indicating a discrepancy between the measurement and the prediction. Though not quite significant enough to declare a discovery, those results hinted that new, yet-to-be discovered particles might be affecting the muons’ behavior. The new experiment at Fermilab, combined with the higher-precision calculations, will provide a more stringent test of the Standard Model, the reigning theory of particle physics. If the discrepancy between experiment and theory still stands, it could point to the existence of new particles.

    “If there’s another particle that pops into existence and interacts with the muon before it interacts with the magnetic field, that could explain the difference between the experimental measurement and our theoretical prediction,” said Christoph Lehner, one of the Brookhaven Lab theorists who led the latest calculations. “That could be a particle we’ve never seen before, one not included in the Standard Model.”

    Finding new particles beyond those already cataloged by the Standard Model has long been a quest for particle physicists. Spotting signs of a new particle affecting the behavior of muons could guide the design of experiments to search for direct evidence of such particles, said Taku Izubuchi, another leader of Brookhaven’s theoretical physics team.

    “It would be a strong hint and would give us some information about what this unknown particle might be—something about what the new physics is, how this particle affects the muon, and what to look for,” Izubuchi said.

    The muon anomaly

    The Muon g-2 experiment measures what happens as muons circulate through a 50-foot-diameter electromagnet storage ring. The muons, which have intrinsic magnetism and spin (sort of like spinning toy tops), start off with their spins aligned with their direction of motion. But as the particles go ’round and ’round the magnet racetrack, they interact with the storage ring’s magnetic field and also with a zoo of virtual particles that pop in and out of existence within the vacuum. This all happens in accordance with the rules of the Standard Model, which describes all the known particles and their interactions, so the mathematical calculations based on that theory can precisely predict how the muons’ alignment should precess, or “wobble” away from their spin-aligned path. Sensors surrounding the magnet measure the precession with extreme precision so the physicists can test whether the theory-generated prediction is correct.
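
    In symbols, the “wobble” is usually quoted through the anomalous magnetic moment and the anomalous precession frequency measured in the ring. As a minimal sketch (ignoring the electric-field and beam-pitch corrections that the experiments treat carefully, and valid at the so-called magic momentum):

        a_\mu \equiv \frac{g-2}{2} \approx 0.00116, \qquad \omega_a = \omega_{\mathrm{spin}} - \omega_{\mathrm{cyclotron}} \approx a_\mu \, \frac{eB}{m_\mu},

    where B is the storage-ring magnetic field and m_\mu the muon mass. Measuring \omega_a and B therefore determines a_\mu, the same quantity the theorists predict from the Standard Model.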

    Both the experiments measuring this quantity and the theoretical predictions have become more and more precise, tracing a journey across the country with input from many famous physicists.

    A race and collaboration for precision

    “There is a race of sorts between experiment and theory,” Lehner said. “Getting a more precise experimental measurement allows you to test more and more details of the theory. And then you also need to control the theory calculation at higher and higher levels to match the precision of the experiment.”

    With lingering hints of a new discovery from the Brookhaven experiment—but also the possibility that the discrepancy would disappear with higher precision measurements—physicists pushed for the opportunity to continue the search using a higher-intensity muon beam at Fermilab. In the summer of 2013, the two labs teamed up to transport Brookhaven’s storage ring via an epic land-and-sea journey from Long Island to Illinois. After tuning up the magnet and making a slew of other adjustments, the team at Fermilab recently started taking new data.

    Meanwhile, the theorists have been refining their calculations to match the precision of the new experiment.

    “There have been many heroic physicists who have spent a huge part of their lives on this problem,” Izubuchi said. “What we are measuring is a tiny deviation from the expected behavior of these particles—like measuring a half-millimeter deviation in the flight distance between New York and Los Angeles! But everything about the fate of the laws of physics depends on that difference. So, it sounds small, but it’s really important. You have to understand everything to explain this deviation,” he said.

    The path to reduced uncertainty

    By “everything” he means how all the known particles of the Standard Model affect muons via nature’s four fundamental forces—gravity, electromagnetism, the strong nuclear force, and the weak nuclear force. Fortunately, the electroweak contributions are well understood, and gravity is thought to play a currently negligible role in the muon’s wobble. So the latest effort—led by the Brookhaven team with contributions from the RBC Collaboration (made up of physicists from the RIKEN BNL Research Center, Brookhaven Lab, and Columbia University) and the UKQCD collaboration—focuses specifically on the combined effects of the strong force (described by a theory called quantum chromodynamics, or QCD) and electromagnetism.

    “This has been the least understood part of the theory, and therefore the greatest source of uncertainty in the overall prediction. Our paper is the most successful attempt to reduce those uncertainties, the last piece at the so-called ‘precision frontier’—the one that improves the overall theory calculation,” Lehner said.

    The mathematical calculations are extremely complex—from laying out all the possible particle interactions and understanding their individual contributions to calculating their combined effects. To tackle the challenge, the physicists used a method known as Lattice QCD, originally developed at Brookhaven Lab, and powerful supercomputers. The largest of these was hosted at the Argonne Leadership Computing Facility, a DOE Office of Science user facility at Argonne National Laboratory, while smaller supercomputers hosted by Brookhaven’s Computational Sciences Initiative (CSI)—including one machine purchased with funds from RIKEN, CSI, and Lehner’s DOE Early Career Research Award funding—were also essential to the final result.

    “One of the reasons for our increased precision was our new methodology, which combined the most precise data from supercomputer simulations with related experimental measurements,” Lehner noted.

    Other groups have also been working on this problem, he said, and the entire community of about 100 theoretical physicists will be discussing all of the results in a series of workshops over the next several months to come to agreement on the value they will use to compare with the Fermilab measurements.

    “We’re really looking forward to Fermilab’s results,” Izubuchi said, echoing the anticipation of all the physicists who have come before him in this quest to understand the secrets of the universe.

    The theoretical work at Brookhaven was funded by the DOE Office of Science, RIKEN, and Lehner’s Early Career Research Award.

    The Muon g-2 experiment at Fermilab is supported by DOE’s Office of Science and the National Science Foundation. The Muon g-2 collaboration has almost 200 scientists and engineers from 34 institutions in seven countries. Learn more about the new Muon g-2 experiment or take a virtual tour.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    BNL Campus

    BNL RHIC Campus

    BNL/RHIC Star Detector

    BNL RHIC PHENIX

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 2:33 pm on July 13, 2018 Permalink | Reply
    Tags: , , , , , , , Physics   

    From Horizon The EU Research and Innovation Magazine: “Plasma accelerators could overcome size limitations of Large Hadron Collider” 


    From Horizon The EU Research and Innovation Magazine

    09 July 2018
    Jon Cartwright

    A plasma cell can help sustain stronger acceleration fields than in conventional accelerators, at only a fraction of their size. Image credit – © DESY, Heiner Müller-Elsner

    Plasma particle accelerators more powerful than existing machines could help probe some of the outstanding mysteries of our universe, as well as make leaps forward in cancer treatment and security scanning – all in a package that’s around a thousandth of the size of current accelerators. All that’s left is for scientists to build one.

    If you know what a particle accelerator is, you probably think first of the Large Hadron Collider (LHC) – that gargantuan ring on the Franco-Swiss border that smashes protons and ions together, exposing the secrets of the subatomic world.

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    Built by the European lab CERN, the LHC accelerates particles to the kinds of speeds found in the earliest moments of the universe. To do so, it needs a very, very big circumference – 27 kilometres.

    Yet the LHC is already finding limits to what it can explore. Physicists want even more powerful accelerators – but building one much bigger than the LHC is hard to contemplate.

    Dr Ralph Assmann, a leading scientist at the German particle physics lab DESY, believes a completely different approach is needed. He thinks accelerators can be powerful, yet up to 1,000 times smaller, if they are based on a strange type of matter known as a plasma – a cloud of negative electrons and positive ions.

    ‘Plasma accelerators provide a path to energies beyond the LHC,’ he said. ‘Particle physicists must take this opportunity very seriously.’

    Swing

    Conventional accelerators work by sending charged particles through oscillating electromagnetic fields. By switching back and forth, these fields kick the particles to an incrementally higher energy with every cycle – a bit like pushing a child on a swing.

    The trouble with this approach is that the individual kicks – which are generated by electrical components – can only be so powerful, or the field itself will break down. High energies therefore demand lots and lots of soft kicks, which is why conventional accelerators get so big.

    Plasmas, however, can sustain much bigger fields. Nearly 40 years ago, physicists discovered that if a laser pulse or a particle beam is sent into a plasma, it is possible to momentarily separate the negative and positive charges, generating a field of some 100 billion volts per metre.

    Any electrons stranded in the wake of this separation are propelled forwards. The effect, like a surfer riding a wave, is known as plasma wakefield acceleration.
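
    To put rough numbers on the size argument, the energy a particle gains is approximately the accelerating gradient times the length over which it is sustained. With round, purely illustrative figures (not the parameters of any specific machine):

        \Delta E \approx G \times L: \qquad 100~\mathrm{GV/m} \times 0.05~\mathrm{m} = 5~\mathrm{GeV}, \qquad 50~\mathrm{MV/m} \times 100~\mathrm{m} = 5~\mathrm{GeV},

    so a plasma stage a few centimetres long can, in principle, match a conventional radio-frequency section roughly a thousand times longer, which is the origin of the size reduction quoted above.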

    In recent years, the energies accessible with plasma wakefield accelerators have risen sharply. Scientists like Dr Assmann want to increase these energies, but also to improve the stability and quality of the electron beams coming out of the accelerator.

    Host of applications

    That would make plasma accelerators suitable for particle physics but also a host of other applications, including cancer treatment, medical diagnostics, security scanners and the study of advanced materials. Conventional accelerators already help with these applications, but their size and cost means that demand currently far outstrips supply.

    Dr Assmann is coordinating a project, EuPRAXIA, to come up with a design for the world’s first plasma wakefield accelerator with an energy of five giga-electronvolts (GeV) that can actually be used for research. That is less than one-thousandth the energy of the LHC but, as Dr Assmann points out, you have to walk before you can run.

    ‘Clearly, high-field accelerators, like plasma accelerators, (are) the logical long-term solution for advancing the energy frontier in particle physics,’ he said. ‘But it will require a realistic and sustained approach.’

    A two-storey design limits the length of the 5 GeV EuPRAXIA plasma accelerator facility, although it could extend to 35-250m depending on what applications are added downstream. Diagram not to scale. Image credit – Horizon

    With 40 labs and universities on board, EuPRAXIA will have to answer key questions, such as whether all the accelerated electrons should come from the plasma, or whether additional electrons should be fed into the machine. The design is expected to be completed towards the end of next year.

    EuPRAXIA is not the only plasma accelerator project in town, however. At CERN, a powerful wakefield accelerator called AWAKE has already been built, but with a twist – it uses a proton beam to drive it.

    CERN AWAKE schematic


    CERN AWAKE

    Bigger impact

    Protons are more than 1,800 times more massive than electrons, which means they have a much bigger impact when it comes to dividing the charges in a plasma. According to Dr Edda Gschwendtner, the CERN project leader of the AWAKE experiment, that means a proton-driven plasma accelerator could accelerate electrons to high energies in just a single stage, rather than multiple stages, as is often proposed.

    AWAKE takes the proton beam from one of CERN’s existing accelerators, and in the last two years has successfully created strong plasma wakefields. This year, the goal is to actually accelerate electrons in that wakefield to energies exceeding 1 GeV.

    The AWAKE experiment uses a proton beam to create a strong plasma wakefield. Image credit – CERN

    In years to come, Dr Gschwendtner wants to boost AWAKE’s output to several tens of GeV. That would be enough to probe certain theoretical proposals of today’s particle physics – dark photons, for instance, which some physicists believe could constitute the dark matter that predominates in the universe.

    Plasma accelerators still have a long way to go before they can out-perform the likes of the LHC. But when conventional accelerators are so big and costly, Dr Gschwendtner believes they could be the only way forward.

    ‘New technologies must be developed,’ she said. ‘Plasma wakefield acceleration is a very promising novel accelerator technique.’

    The research in this article has received EU funding.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 11:47 am on July 11, 2018 Permalink | Reply
    Tags: , , , , MICE experiment, , , Physics   

    From CERN Courier: “Muons cooled for action” 


    From CERN Courier

    9 July 2018
    Manuela Boscolo, INFN-LNF
    Patrick Huber, Virginia Tech
    Kenneth Long, Imperial College London and STFC.

    The recent demonstration of muon ionisation-cooling by the MICE collaboration opens a path to a neutrino factory and muon collider.

    The Muon Ionization Cooling Experiment (MICE), a high-energy physics experiment at the Rutherford Appleton Laboratory

    Fundamental insights into the constituents of matter have been gained by observing what happens when beams of high-energy particles collide. Electron–positron, proton–proton, proton–antiproton and electron–proton colliders have all contributed to the development of today’s understanding, embodied in the Standard Model of particle physics (SM). The Large Hadron Collider (LHC) brings 6.5 TeV proton beams into collision, allowing the Higgs boson and other SM particles to be studied and searches for new physics to be carried out. To reach physics beyond the LHC will require hadronic colliders at higher energies and/or lepton colliders that can deliver substantially increased precision.

    A variety of options are being explored to achieve these goals. For example, the Future Circular Collider study at CERN is investigating a 100 km-circumference proton–proton collider with beam energies of around 50 TeV, the tunnel for which could also host an electron–positron collider (CERN Courier June 2018 p15).

    CERN FCC Future Circular Collider map

    Electron–positron annihilation has the advantage that all of the beam energy is available in the collision, rather than being shared between the constituent quarks and gluons as it is in hadronic collisions. But to reach very high energies requires either a state-of-the-art linear accelerator, such as the proposed Compact Linear Collider or the International Linear Collider, or a circular accelerator with an extremely large bending radius.

    Cern Compact Linear Collider


    CLIC Collider annotated

    ILC schematic, being planned for the Kitakami highland, in the Iwate prefecture of northern Japan

    Muons to the fore

    A colliding-beam facility based on muons has a number of advantages. First, since the muon is a lepton, all of the beam energy is available in the collision. Second, since the muon is roughly 200 times heavier than the electron and thus emits around 10^9 times less synchrotron radiation than an electron beam of the same energy, it is possible to produce multi-TeV collisions in an LHC-sized circular collider. The large muon mass also enhances the direct “s-channel” Higgs-production rate by a factor of around 40,000 compared to that in electron–positron colliders, making it possible to scan the centre-of-mass energy to measure the Higgs-boson line shape directly and to search for closely spaced states.
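
    The factor of 10^9 follows from how strongly synchrotron radiation depends on mass. For a particle of energy E and mass m on a circular orbit of radius \rho, the radiated power scales as

        P \propto \frac{E^4}{m^4 \rho^2},

    so at the same energy and ring size a muon radiates less than an electron by roughly (m_\mu/m_e)^4 \approx 207^4 \approx 1.8 \times 10^9.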


    Stored muon beams could also serve the long-term needs of neutrino physicists (see box 1). In a neutrino factory, beams of electron and muon neutrinos are produced from the decay of muons circulating in a storage ring. It is straightforward to tune the neutrino-beam energy because the neutrinos carry away a substantial fraction of the muon’s energy. This, combined with the excellent knowledge of the beam composition and energy spectrum resulting from the very well-known characteristics of muon decays, makes the neutrino factory the ideal place to make precision measurements of neutrino properties and to look for oscillation phenomena that are outside the standard, three-neutrino-mixing paradigm.

    Given the many benefits of a muon collider or neutrino factory, it is reasonable to ask why one has yet to be built. The answer is that muons are unstable, decaying with a mean lifetime at rest of 2.2 microseconds. This presents two main challenges: first, a high-intensity primary beam must be used to create the muons that will form the beam; and, second, once captured, the muon beam must be accelerated rapidly to high energy so that the effective lifetime of the muon can be extended by the relativistic effect of time dilation.
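
    A quick, purely illustrative number for the time-dilation point: the laboratory-frame lifetime is \tau_{\mathrm{lab}} = \gamma\,\tau_\mu with \gamma = E/(m_\mu c^2) and m_\mu c^2 \approx 105.7~\mathrm{MeV}. A muon accelerated to 50 GeV (an arbitrary example energy, not a design value) has \gamma \approx 473, so

        \tau_{\mathrm{lab}} \approx 473 \times 2.2~\mu\mathrm{s} \approx 1~\mathrm{ms},

    a roughly thousand-fold extension of the time available for acceleration and storage, provided the beam can be brought to such energies quickly.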

    One way to produce beams for a muon collider or neutrino factory is to harness the muons produced from the decay of pions when a high-power (few-MW), multi-GeV proton beam strikes a target such as carbon or mercury. For this approach, new proton accelerators with the required performance are being developed at CERN, Fermilab, J-PARC and at the European Spallation Source.

    ESS European Spallation Source, currently under construction in Lund, Sweden.

    The principle of the mercury target was proved by the MERIT experiment that operated on the Proton Synchrotron at CERN. However, at the point of production, the tertiary muon beam emerging from such schemes occupies a large volume in phase space. To maximise the muon yield, the beam has to be “cooled” – i.e. its phase-space volume reduced – in a short period of time before it is accelerated.


    The proposed solution is called ionisation cooling, which involves passing the beam through a material in which it loses energy via ionisation and then re-accelerating it in the longitudinal direction to replace the lost energy. Proving the principle of this technique is the goal of the Muon Ionization Cooling Experiment (MICE) collaboration, which, following a long period of development, has now reported its first observation of ionisation cooling.
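
    The competition between ionisation energy loss (which cools) and multiple Coulomb scattering (which heats) is usually summarised by the standard approximate rate equation for the normalised transverse emittance \varepsilon_N, quoted here for orientation rather than as the MICE collaboration’s exact analysis:

        \frac{d\varepsilon_N}{ds} \simeq -\frac{\varepsilon_N}{\beta^2 E}\left|\frac{dE}{ds}\right| + \frac{\beta_\perp\,(13.6~\mathrm{MeV})^2}{2\,\beta^3 E\, m_\mu c^2\, X_0},

    where \beta is the muon velocity in units of c, E its energy, \beta_\perp the optical beta function at the absorber and X_0 the absorber’s radiation length. A good absorber therefore needs both a large energy loss per unit length and a large radiation length, which is why, as described later in the article, liquid hydrogen and lithium hydride are the materials of choice.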

    An alternative path to a muon collider called the Low Emittance Muon Accelerator (LEMMA), recently proposed by accelerator physicists at INFN in Italy and the ESRF in France, provides a naturally cooled muon beam with a long lifetime in the laboratory by capturing muon–antimuon pairs created in electron–positron annihilation.

    Cool beginnings

    The benefits of a collider based on stored muon beams were first recognised by Budker and Tikhonin at the end of the 1960s. In 1974, when CERN’s Super Proton Synchrotron (SPS) was being brought into operation, Koshkarev and Globenko showed how muons confined within a racetrack-shaped storage ring could be used to provide intense neutrino beams. The following year, the SPS proton beam was identified as a potential muon source and the basic parameters of the muon beam, storage ring and neutrino beam were defined.

    The Super Proton Synchrotron (SPS), CERN’s second-largest accelerator. (Image: Julien Ordan/CERN)

    It was quickly recognised that the performance of this facility—the first neutrino factory to be proposed – could be enhanced if the muon beam was cooled. In 1978, Budker and Skrinsky identified ionisation cooling as a technique that could produce sufficient cooling in a timeframe short compared to the muon lifetime and, the following year, Neuffer proposed a muon collider that exploited ionisation cooling to increase the luminosity.

    The study of intense, low-emittance muon beams as the basis of a muon collider and/or neutrino factory was re-initiated in the 1990s, first in the US and then in Europe and Japan. Initial studies of muon production and capture, phase-space manipulation, cooling and acceleration were carried out and neutrino- and energy-frontier physics opportunities evaluated. The reduction of the tertiary muon-beam phase space was recognised as a key technological challenge and at the 2001 NuFact workshop the international MICE collaboration was created, comprising 136 physicists and engineers from 40 institutes in Asia, Europe and the US.


    The MICE cooling cell, in common with the cooling channels studied since the seminal work of the 1990s, is designed to operate at a beam momentum of around 200 MeV/c. This choice is a compromise between the size of the ionisation-cooling effect and its dependence on the muon energy, the loss rate of muon-beam intensity through decay, and the ease of acceleration following the cooling channel. The ideal absorber has, at the same time, a large ionisation energy loss per unit length (to maximise ionisation cooling) and a large radiation length (to minimise heating through multiple Coulomb scattering). Liquid hydrogen meets these requirements and is an excellent absorber material; a close runner-up, with the practical advantage of being solid, is lithium hydride. MICE was designed to study the properties of both. The critical challenges faced by the collaboration therefore included: the integration of high-field superconducting magnets operating in a magnetically coupled lattice; high-gradient accelerating cavities capable of operation in a strong magnetic field; and the safe implementation of liquid-hydrogen absorber modules – all solved through more than a decade of R&D.

    In 2003 the MICE collaboration submitted a proposal to mount the experiment (figure 1) on a new beamline at the ISIS proton and muon source at the Science and Technology Facilities Council’s (STFC) Rutherford Appleton Laboratory in the UK. Construction began in 2005 and first beam was delivered on 29 March 2008. The detailed design of the spectrometer solenoids was also carried out at this time and the procurement process was started. During the period from 2008 to 2012, the collaboration carried out detailed studies of the properties of the beam delivered to the experiment and, in parallel, designed and fabricated the focus-coil magnets and a first coupling coil.


    Delays were incurred in addressing issues that arose in the manufacture of the spectrometer solenoids. This, combined with the challenges of integrating the four-cavity linac module with the coupling coil, led, in November 2014, to a reconfiguration of the MICE cooling cell. The simplified experiment required two, single-cavity modules and beam transport was provided by the focus-coil modules. An intense period of construction followed, culminating with the installation of the spectrometer solenoids and the focus-coil module in the summer of 2015. Magnet commissioning progressed well until, a couple of months later, a coil in the downstream solenoid failed during a training quench. The modular design of the apparatus meant the collaboration was able to devise new settings rapidly, but it proved not to be possible to restore the downstream spectrometer magnet to full functionality. This, combined with the additional delays incurred in the recovery of the magnet, eventually led to the cancellation of the installation of the RF cavities in favour of the extended operation of a configuration of the experiment without the cavities.

    It is interesting to reflect, as was done in a recent lessons-learnt exercise convened by the STFC, whether a robust evaluation of alternative options for the cooling-demonstration lattice at the outset of MICE might have identified the simplified lattice as a “less-risky” option and allowed some of the delays in implementing the experiment to be avoided.


    The bulk of the data-taking for MICE was carried out between November 2015 and December 2017, using lithium-hydride and liquid-hydrogen absorbers. The campaign was successful: more than 5 × 10^8 triggers were collected over a range of initial beam momentum and emittance for a variety of configurations of the magnetic channel for each absorber material. The key parameter to measure when demonstrating ionisation cooling is the “amplitude” of each muon – the distance from the beam centre in transverse phase space, reconstructed from its position and momentum. The muon’s amplitude is measured before it enters the absorber and again as it leaves, and the distributions of amplitudes are then examined for evidence of cooling: a net migration of muons from high to low amplitudes. As can be seen (figure 2), the particle density in the core of the MICE beam is increased as a result of the beam’s passage through the absorber, leading to a lower transverse emittance and thereby providing a higher neutrino flux or a larger luminosity.
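
    The “amplitude” used in this analysis is essentially a covariance-weighted distance of each muon from the beam centroid in transverse phase space. The short Python sketch below illustrates one way to compute such a quantity from tracker data; the function name, normalisation and choice of variables are illustrative only and are not the MICE collaboration’s published definition.

        import numpy as np

        def single_particle_amplitudes(phase_space):
            """phase_space: (N, 4) array of (x, px, y, py) for N muons.
            Returns a covariance-weighted distance of each muon from the
            beam centroid (an illustrative definition, not MICE's exact one)."""
            u = phase_space - phase_space.mean(axis=0)   # coordinates relative to the centroid
            cov = np.cov(u, rowvar=False)                # 4x4 beam covariance matrix
            cov_inv = np.linalg.inv(cov)
            emittance = np.linalg.det(cov) ** 0.25       # ~ beam emittance, up to constant factors
            # Mahalanobis-like amplitude: small for muons in the dense beam core
            return emittance * np.einsum('ij,jk,ik->i', u, cov_inv, u)

        # Cooling appears as a net migration toward low amplitude when the same
        # calculation is repeated upstream and downstream of the absorber.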

    The MICE observation of the ionisation-cooling of muon beams is an important breakthrough, achieved through the creativity and tenacity of the collaboration and the continuous support of the funding agencies and host laboratory. The results match expectations, and the next step would be to design an experiment to demonstrate cooling in all six phase-space dimensions.

    Completing the MICE programme

    Having completed its experimental programme, MICE will now focus on the detailed analysis of the factors that determine ionisation-cooling performance over a range of momentum, initial emittance and lattice configurations for both liquid-hydrogen and lithium-hydride absorbers. MICE was operated such that data were recorded one particle at a time. This single-particle technique will allow the collaboration to study the impact of transverse-emittance growth in rapidly varying magnetic fields and to devise mechanisms to mitigate such effects. Furthermore, MICE has taken data to explore a scheme in which a wedge-shaped absorber is used to decrease the beam’s longitudinal emittance while allowing a controlled growth in its transverse emittance. This is required for a proton-based muon collider to reach the highest luminosities.

    With the MICE observation of ionisation cooling, the last of the proof-of-principle demonstrations of the novel technologies that underpin a proton-based neutrino factory or muon collider has now been delivered. The drive to produce lepton–antilepton collisions at centre-of-mass energies in the multi-TeV range can now include consideration of the muon collider, for which two routes are offered: one, for which the R&D is well advanced, that exploits muons produced using a high-power proton beam and which requires ionisation cooling; and one that exploits positron annihilation with electrons at rest to create a high-energy cold muon source. The high muon flux that can be achieved using the proton-based technique has the potential to serve a neutrino-physics programme of unprecedented sensitivity, and the MICE collaboration’s timely results will inform the coming update of the European Strategy for Particle Physics.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    THE FOUR MAJOR PROJECT COLLABORATIONS

    ATLAS
    CERN ATLAS New

    ALICE
    CERN ALICE New

    CMS
    CERN CMS New

    LHCb
    CERN LHCb New II

    LHC

    CERN map

    CERN LHC Grand Tunnel

    CERN LHC particles

     
  • richardmitnick 10:55 am on July 11, 2018 Permalink | Reply
    Tags: , E821 storage-ring experiment at Brookhaven National Laboratory, , , , , Physics   

    From CERN Courier: “Muons accelerated in Japan” 


    From CERN Courier

    9 July 2018


    Muons have been accelerated by a radio-frequency accelerator for the first time, in an experiment performed at the Japan Proton Accelerator Research Complex (J-PARC) in Tokai, Japan. The work paves the way for a compact muon linac that would enable precision measurements of the muon anomalous magnetic moment and the electric dipole moment.

    Japan Proton Accelerator Research Complex J-PARC, located in Tokai village, Ibaraki prefecture, on the east coast of Japan



    Around 15 years ago, the E821 storage-ring experiment at Brookhaven National Laboratory (BNL) reported the most precise measurement of the muon anomalous magnetic moment (g-2).

    E821 storage-ring experiment at Brookhaven National Laboratory (BNL)

    With an impressive precision of 0.54 parts per million (ppm), the measured value differs from the Standard Model prediction by more than three standard deviations. Following a major effort over the past few years, the BNL storage ring has been transported to and upgraded at Fermilab and recently started taking data to improve on the precision of E821.

    FNAL Muon g-2 studio

    In the BNL/Fermilab setup, a beam of protons enters a fixed target to create pions, which decay into muons with aligned spins. The muons are then transferred to the 14 m-diameter storage ring, which uses electrostatic focusing to provide vertical confinement, and their magnetic moments are measured as they precess in a magnetic field.

    The new J-PARC experiment, E34, proposes to measure muon g-2 with an eventual precision of 0.1 ppm by storing ultra-cold muons in a mere 0.66 m-diameter magnet, aiming to reach the BNL precision in a first phase. The muons are produced by laser-ionising muonium atoms (bound states of a positive muon and an electron), which, since they are created at rest, results in a muon beam with very little spread in the transverse direction – thus eliminating the need for electrostatic focusing.

    The ultracold muon beam is stored in a high-precision magnet where the spin-precession of muons is measured by detecting muon decays. This low-emittance technique, which allows a smaller magnet and lower muon energies, enables researchers to circumvent some of the dominant systematic uncertainties in the previous g-2 measurement. To avoid decay losses, the J-PARC approach requires muons to be accelerated via a conventional radio-frequency accelerator.

    In October 2017, a team comprising physicists from Japan, Korea and Russia successfully demonstrated the first acceleration of negative muonium ions, reaching an energy of 90 keV. The experiment was conducted using a radio-frequency quadrupole linac (RFQ) installed at a muon beamline at J-PARC, which is driven by a high-intensity pulsed proton beam. Negative muonium atoms were first accelerated electrostatically and then injected into the RFQ, after which they were guided to a detector through a transport beamline. The accelerated negative muonium atoms were identified from their time of flight: because a particle’s velocity at a given energy is uniquely determined from its mass, its type is identified by measuring the velocity (see figure).
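
    The time-of-flight identification can be illustrated with a back-of-the-envelope calculation: at a fixed kinetic energy, heavier particles move more slowly and arrive later. The Python sketch below uses rounded rest masses and an invented 1 m flight path purely for illustration; it is not the actual J-PARC beamline geometry.

        import math

        MASS_MEV = {"electron": 0.511, "negative muonium ion": 106.7, "proton": 938.3}  # approximate rest masses, MeV/c^2

        def beta(T_mev, m_mev):
            """Relativistic speed v/c for kinetic energy T and rest mass m."""
            gamma = 1.0 + T_mev / m_mev
            return math.sqrt(1.0 - 1.0 / gamma**2)

        T = 0.090      # 90 keV, the energy reached in the J-PARC acceleration test
        L = 1.0        # illustrative 1 m flight path (not the real beamline length)
        c = 2.998e8    # speed of light, m/s

        for name, m in MASS_MEV.items():
            b = beta(T, m)
            print(f"{name:>22s}: v/c = {b:.3f}, time of flight over {L:.0f} m = {L / (b * c) * 1e9:6.1f} ns")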

    The researchers are now planning to further accelerate the beam from the RFQ. In addition to precise measurements in particle physics, the J-PARC result offers new muon-accelerator applications including the construction of a transmission muon microscope for use in materials and life-sciences research, says team member Masashi Otani of KEK laboratory. “Part of the construction of the experiment has started with partial funding, which includes the frontend muon beamline and detector. The experiment can start properly three years after full funding is provided.”

    Muon acceleration is also key to a potential muon collider and neutrino factory, for which it is proposed that the large, transverse emittance of the muon beam can be reduced using ionisation cooling (see Muons cooled for action).

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    THE FOUR MAJOR PROJECT COLLABORATIONS

    ATLAS
    CERN ATLAS New

    ALICE
    CERN ALICE New

    CMS
    CERN CMS New

    LHCb
    CERN LHCb New II

    LHC

    CERN map

    CERN LHC Grand Tunnel

    CERN LHC particles

     
  • richardmitnick 11:38 am on July 10, 2018 Permalink | Reply
    Tags: , , FACCTS-France And Chicago Collaborating in The Sciences, , , Physics   

    From Fermilab: “With FACCTS on its side, Fermilab strengthens trans-Atlantic research” 

    FNAL II photo

    FNAL Art Image by Angela Gonzales

    From Fermilab, an enduring source of strength for the US contribution to scientific research worldwide.

    July 5, 2018
    Ali Sundermier

    In 2001, the University of Chicago — one of the managing institutions for the U.S. Department of Energy’s Fermi National Accelerator Laboratory — created the France Chicago Center, an interdisciplinary organization with the goal of facilitating and promoting fruitful intellectual and artistic exchange between students and researchers in Chicago and their colleagues in France.

    Although the organization was concentrated in the humanities and social sciences at first, in 2008 it began contacting various partners both within and outside the university, including Fermilab, to gather funds that could be made available to teams of scientists from Chicago and France, enabling them to conduct research together. This is how the France And Chicago Collaborating in The Sciences program, or FACCTS, was born.

    The program has been particularly successful at Fermilab, where FACCTS has now supported four collaborations with a total of almost $60,000. Through these collaborations, funded by Fermi Research Alliance LLC, researchers develop technologies that will allow us to do things like detect dwarf galaxies with unprecedented sensitivity and better investigate neutrino oscillations, which could revolutionize our understanding of the universe.

    “We believe that both the university and France are enriched and made more lively because there’s dynamic exchange between scholars, professors and students,” said Daniel Bertsche, the associate director of the center. “We put a lot of effort into getting people and ideas back and forth across the Atlantic.”

    Each year, researchers at Fermilab, the University of Chicago and Argonne National Laboratory can apply for FACCTS awards, which provide seed funding for new project-based collaborations between researchers in Chicago and France. Because of the way the financial structures of the university and national labs are arranged, FACCTS hosts three different pools of applicants for each institution.

    For the Fermilab pool, FACCTS invites any researcher from the lab to submit an application directly to them.

    In the seven years since it was created, the total amount of money available for these awards has risen from $100,000 to $260,000. Individual awards, available for one or two years, range between $5,000 and $25,000.

    “The idea is that there’s really good science happening both in the United States and in France,” Bertsche said, “and often there’s a lot to be gained by collaborative research. But there are a lot of barriers to trans-Atlantic collaborative research because of the way science is funded in both countries. There are restrictions to the funds that make it difficult for researchers to use them for things like trans-Atlantic airfare or housing while they run experiments. We realize we don’t have large sums of money that we’re offering, but we try to augment the usefulness of our funds by really minimizing the number of strings attached.”

    In 2018, Fermilab scientist Dan Broemmelsiek, whose research focus is on superconducting accelerators, was given a FACCTS award to seed work on novel instrumentation for superconducting accelerating structures. The idea is to use noninvasive techniques to understand electron beam parameters in superconducting radio-frequency accelerating cavities.

    A Fermilab guest scientist from the French institution CEA Saclay, Olivier Napoly originally saw an opportunity at Fermilab’s electron linear accelerator to test some of his ideas.

    “Olivier is one of the world’s experts on superconducting radio-frequency cavities,” Broemmelsiek said. “The Fermilab Accelerator Science and Technology facility is a research and development facility, largely unencumbered by experimental priorities the way the other accelerators at Fermilab are. Prototype development and beam-based feasibility studies are what we do. Olivier’s ideas could have extensive impact on the performance of current and near future accelerators.”

    Fermilab’s most recent recipient of a FACCTS Award is Dan Broemmelsiek, who is working to advance instrumentation on superconducting accelerators. Pictured here is a cryomodule for a superconducting electron accelerator. Cryomodules are major units of superconducting accelerators and are typically lined up end to end to form the machine. Photo: Reidar Hahn.

    In 2015, former Fermilab scientist Giulia Brunetti, who is now based in Italy, was awarded a FACCTS grant for a project on liquid-argon detectors, in particular on what is called dual-phase technology. This technology may have implications for the upcoming Fermilab-hosted Deep Underground Neutrino Experiment, a large, international experiment to investigate neutrino oscillations and answer fundamental questions about the nature of matter and the evolution of the universe.

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA

    Although scientists had investigated the dual-phase technology in the past, it had not been tested at the scale needed for such an experiment. Brunetti was able to fund trips to Europe where she could work on larger-scale detectors — an important part of her research that would not have been possible without the award.

    “It was a great opportunity for me to gain some expertise on liquid-argon detectors,” Brunetti said. “I was able to be at CERN for part of the time, and I worked with a collaboration on the assembly of the detector. That experience is fundamental to understand all the details of the detectors and how carefully they need to be built and operated.”

    In 2017, Fermilab scientist Alex Drlica-Wagner, whose interests focus on the fundamental nature of dark matter, was given a FACCTS award to support ongoing work to simulate and test the sensitivity of the upcoming Large Synoptic Survey Telescope to detecting dwarf galaxies.

    LSST telescope, currently under construction on Cerro Pachón, a 2,682-meter-high mountain in the Coquimbo Region of northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    The work is conducted in collaboration with Johann Cohen-Tanugi and Eric Nuss of the University of Montpellier and the Laboratoire Univers et Particules de Montpellier.

    Drlica-Wagner also works on the Fermilab-based Dark Energy Survey, or DES.

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M. Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7,200 feet

    “These funds have enabled two workshops over the past year — one in Montpellier and one in Chicago,” Drlica-Wagner said. “The meetings have been invaluable for establishing the project, defining its scope and introducing new collaborators. Fermilab has an enormous amount of experience with DES data collection and reduction, while our French collaborators have a lot of experience on the LSST simulation toolkit and simulated data analysis. Bringing the two areas of expertise together has been enormously productive for both parties.”

    FACCTS is a valuable tool for scientists who want to conduct trans-Atlantic collaborative research, but for whom the logistical barriers are just too high, Bertsche said.

    “We’re a program that can provide a small amount of funding that can remove those barriers, allow for the initial stages of research to unfold between complementary labs and provide needed resources that can allow these labs to generate preliminary data that can then in turn be used for much larger grants,” Bertsche said. “Not all seeds germinate and grow into oak trees, but the ones that do are pretty impressive.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    FNAL Icon

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.


    FNAL/MINERvA

    FNAL DAMIC

    FNAL Muon g-2 studio

    FNAL Short-Baseline Near Detector under construction

    FNAL Mu2e solenoid

    Dark Energy Camera [DECam], built at FNAL

    FNAL DUNE Argon tank at SURF

    FNAL/MicrobooNE

    FNAL Don Lincoln

    FNAL/MINOS

    FNAL Cryomodule Testing Facility

    FNAL Minos Far Detector

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA

    FNAL/NOvA experiment map

    FNAL NOvA Near Detector

    FNAL ICARUS

    FNAL Holometer

     
  • richardmitnick 9:23 am on July 10, 2018 Permalink | Reply
    Tags: , , , , Higgs boson observed decaying to b quarks – at last!, , , Physics   

    From CERN ATLAS: “Higgs boson observed decaying to b quarks – at last!” 

    CERN ATLAS Higgs Event

    CERN/ATLAS
    From CERN ATLAS

    9th July 2018

    The Brout-Englert-Higgs mechanism solves the apparent theoretical impossibility for the weak vector bosons (W and Z) to have mass. The discovery of the Higgs boson in 2012 via its decays into pairs of photons, Z bosons and W bosons was a triumph of the Standard Model built upon this mechanism. The Higgs field can also be used in an elegant way to provide mass to charged fermions (quarks and leptons) through interactions involving “Yukawa couplings”, with strength proportional to the particle mass. The observation of the Higgs boson decaying into pairs of τ leptons provided the first direct evidence of this type of interaction.

    Six years after its discovery, ATLAS has observed about 30% of the Higgs boson decays predicted in the Standard Model. However, the favoured decay of the Higgs boson into a pair of b quarks (H→bb), which is expected to account for almost 60% of all possible decays, had remained elusive up to now. Observing this decay mode and measuring its rate is a mandatory step to confirm (or not…) the mass generation for fermions via Yukawa interactions, as predicted in the Standard Model.

    Today, at the 2018 International Conference on High Energy Physics (ICHEP) in Seoul, the ATLAS experiment reported a preliminary result establishing the observation of the Higgs boson decaying into pairs of b quarks, furthermore at a rate consistent with the Standard Model prediction. In the community of particle physics (and beyond), for the detection of a process to be qualified as an “observation”, it is necessary to exclude at a level of one in three million the probability that it arises from a fluctuation of the background that could mimic the process in question. When such a probability is at the level of only one in a thousand, the detection is qualified as “evidence”. Evidence of the H→bb decay was first provided at the Tevatron in 2012, and a year ago by the ATLAS and CMS Collaborations, independently.
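
    The “one in three million” and “one in a thousand” thresholds correspond to the familiar 5σ and 3σ criteria through the one-sided tail probability of a Gaussian distribution. A quick check of these numbers, assuming the conventional one-sided definition used in particle physics (a sketch using SciPy):

        from scipy.stats import norm

        for n_sigma in (3, 5):
            p = norm.sf(n_sigma)   # one-sided Gaussian tail probability
            print(f"{n_sigma} sigma -> p = {p:.1e} (about 1 in {1 / p:,.0f})")

        # 3 sigma -> roughly 1 in 700: "evidence"
        # 5 sigma -> roughly 1 in 3.5 million: "observation"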

    FNAL/Tevatron map

    FNAL/Tevatron

    CERN/CMS Detector


    CERN CMS Higgs Event

    Combing through the haystack of b quarks

    Given the abundance of the H→bb decay, and how much rarer decay modes such as H→γγ had already been observed at the time of discovery, why did it take so long to achieve this observation?

    The main reason: the most copious production process for the Higgs boson in proton-proton interactions leads to just a pair of particle jets originating from the fragmentation of b quarks (b-jets). These are almost impossible to distinguish from the overwhelming background of b-quark pairs produced via the strong interaction (quantum chromodynamics or QCD). To overcome this challenge, it was necessary to consider production processes that are less copious, but exhibit features not present in QCD. The most effective of these is the associated production of the Higgs boson with a vector boson, W or Z. The leptonic decays, W→ℓν, Z→ℓℓ and Z→νν (where ℓ stands for an electron or a muon) provide signatures that allow for efficient triggering and powerful QCD background reduction.

    However, the Higgs boson signal remains orders of magnitude smaller than the remaining backgrounds arising from top quark or vector boson production, which lead to similar signatures. For instance, a top quark pair can decay as tt→[(W→ℓν)b][(W→qq)b] with a final state containing an electron or a muon and two b quarks, exactly as the (W→ℓν)(H→bb) signal.

    The main handle to discriminate the signal from such backgrounds is the invariant mass, m_bb, of pairs of b-jets identified by sophisticated “b-tagging” algorithms. An example of such a mass distribution is shown in Figure 1, where the sum of the signal and background components is compared with the data.

    Figure 1: Distribution of m_bb in the (W→ℓν)(H→bb) search channel. The signal is shown in red, the different backgrounds in various colours. The data are shown as points with error bars. (Image: ATLAS Collaboration/CERN)

    When all WH and ZH channels are combined and the backgrounds (apart from WZ and ZZ production) subtracted from the data, the distribution shown in Figure 2 exhibits a clear peak arising from Z boson decays to b-quark pairs, which validates the analysis procedure. The shoulder on the upper side is consistent in shape and rate with the expectation from Higgs boson production.

    Figure 2: Distribution of m_bb from all search channels combined after subtraction of all backgrounds except for WZ and ZZ production. The data (points with error bars) are compared to the expectations from the production of WZ and ZZ (in grey) and of WH and ZH (in red). (Image: ATLAS Collaboration/CERN)


    This is, however, not sufficient to reach the level of detection that can be qualified as observation. To this end, the mass of the b-jet pair is combined with other kinematic variables that show distinct differences between the signal and the various backgrounds, for instance the angular separation between the two b-jets, or the transverse momentum of the associated vector boson. This combination of multiple variables is performed using the technique of boosted decision trees (BDTs). A combination of the BDT outputs from all channels, reordered in terms of signal-to-background ratio, is shown in Figure 3. It can be seen that the signal closely follows the distribution expected from the Standard Model. The BDT outputs are subjected to a sophisticated statistical analysis to extract the “significance” of the signal. This is another way to measure the probability of a fake observation in terms of standard deviations, σ, of a Gaussian distribution. The magic number corresponding to the observation of a signal is 5σ.
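
    The boosted-decision-tree step can be pictured with a small, generic sketch: train a classifier on a few kinematic variables and rank events by its output, so that the most signal-like events populate the high signal-to-background bins of Figure 3. The Python snippet below uses scikit-learn on randomly generated toy arrays purely to illustrate the idea; the variables, numbers and classifier settings are stand-ins, not the ATLAS analysis configuration.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(0)

        # Toy stand-ins for three kinematic variables per event:
        # (dijet mass m_bb, b-jet angular separation, vector-boson transverse momentum)
        background = rng.normal(loc=[90.0, 2.5, 120.0], scale=[40.0, 0.8, 60.0], size=(5000, 3))
        signal = rng.normal(loc=[125.0, 1.2, 180.0], scale=[15.0, 0.4, 70.0], size=(500, 3))

        X = np.vstack([background, signal])
        y = np.concatenate([np.zeros(len(background)), np.ones(len(signal))])

        bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
        bdt.fit(X, y)

        # High-scoring events are "signal-like" and would populate the high S/B bins.
        scores = bdt.predict_proba(X)[:, 1]
        print("mean BDT score, signal vs background:", scores[y == 1].mean(), scores[y == 0].mean())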

    Figure 3: Distribution showing the combination of all BDT outputs reordered in terms of log(S/B), where S and B are the signal and background yields, respectively. The signal is shown in red, and the different backgrounds in various colours. The data are shown as points with error bars. The lower panel shows the “pull”, i.e. the ratio of data minus background to the statistical uncertainty of the background. (Image: ATLAS Collaboration/CERN)

    Observation achieved!

    The analysis of 13 TeV data collected by ATLAS during Run 2 of the LHC in 2015, 2016 and 2017 leads to a significance of 4.9σ – alone almost sufficient to claim observation. This result was combined with those from a similar analysis of Run 1 data and from other searches by ATLAS for the H→bb decay mode, namely where the Higgs boson is produced in association with a top quark pair or via a process known as vector boson fusion (VBF). The significance achieved by this combination is 5.4σ.

    Furthermore, combining the present analysis with others that target Higgs boson decays to pairs of photons and Z bosons measured at 13 TeV provides the observation at 5.3σ of associated VH (V = Z or W) production, in agreement with the Standard Model prediction. All four primary Higgs boson production modes at hadron colliders have now been observed, of which two only this year. In order of discovery: (1) fusion of gluons to a Higgs boson, (2) fusion of weak bosons to a Higgs boson, (3) associated production of a Higgs boson with two top quarks, and (4) associated production of a Higgs boson with a weak boson.

    With these observations, a new era of detailed measurements in the Higgs sector opens up, through which the Standard Model will be further challenged.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    CERN map


    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN


    CERN Courier

    QuantumDiaries


    Quantum Diaries

     
  • richardmitnick 8:42 am on July 10, 2018 Permalink | Reply
    Tags: , , , Combined measurements of Higgs boson couplings reach new level of precision, , , , Physics   

    From CERN ATLAS: “Combined measurements of Higgs boson couplings reach new level of precision” 

    CERN ATLAS Higgs Event

    CERN/ATLAS
    From CERN ATLAS

    9th July 2018

    Figure 1: Measured cross-sections of main Higgs boson production modes at the LHC, namely gluon-gluon fusion (ggF), weak boson fusion (VBF), associated production with a weak vector boson W or Z (WH and ZH), and associated production with top quarks (ttH and tH), normalized to Standard Model predictions. The uncertainty of each measurement (indicated by the error bar) is broken down into statistical (yellow box) and systematic (blue box) parts. The theory uncertainty (grey box) on the Standard Model prediction (vertical red line at unity) is also shown. (Image: ATLAS Collaboration/CERN)

    The Higgs boson, discovered at the LHC in 2012, has a singular role in the Standard Model of particle physics.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    Standard Model of Particle Physics from Symmetry Magazine

    Most notable is the Higgs boson’s affinity to mass, which can be likened to the electric charge for an electric field: the larger the mass of a fundamental particle, the larger the strength of its interaction, or “coupling”, with the Higgs boson. Deviations from these predictions could be a hallmark of new physics in this as-yet little-explored part of the Standard Model.
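
    In symbols, the mass-coupling relation referred to here is the Standard Model Yukawa coupling, with v \approx 246~\mathrm{GeV} the vacuum expectation value of the Higgs field:

        y_f = \sqrt{2}\,\frac{m_f}{v}, \qquad y_t \approx 1.0, \quad y_b \approx 0.02, \quad y_\mu \approx 6\times10^{-4},

    which is why the top quark dominates loop-induced processes while the muon’s coupling has so far only been bounded from above.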

    Higgs boson couplings manifest themselves in the rate of production of the Higgs boson at the LHC, and its decay branching ratios into various final states. These rates have been precisely measured by the ATLAS experiment, using up to 80 fb⁻¹ of data collected at a proton-proton collision energy of 13 TeV from 2015 to 2017. Measurements were performed in all of the main decay channels of the Higgs boson: to pairs of photons, W and Z bosons, bottom quarks, taus, and muons. The overall production rate of the Higgs boson was measured to be in agreement with Standard Model predictions, with an uncertainty of 8%. The uncertainty is reduced from 11% in the previous combined measurements released last year.

    The measurements are broken down into production modes (assuming Standard Model decay branching ratios), as shown in Figure 1. All four main production modes have now been observed at ATLAS with a significance of more than 5 standard deviations: the long-established gluon-gluon fusion mode, the recently observed associated production with a top-quark pair, and the last-remaining weak boson fusion mode, presented today by ATLAS. Together with the observation of production in association with a weak boson and of the H→bb decay in a separate measurement, these results paint a complete picture of Higgs boson production and decay.

    Physicists can use these new results to study the couplings of the Higgs boson to other fundamental particles. As shown in Figure 2, these couplings are in excellent agreement with the Standard Model prediction over a range covering 3 orders of magnitude in mass, from the top quark (the heaviest particle in the Standard Model and thus with the strongest interaction with the Higgs boson) to the much lighter muons (for which only an upper limit of the coupling with the Higgs boson has been obtained so far).
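    The Standard Model expectation behind Figure 2 is simple: the “reduced” coupling of each particle grows linearly with its mass, with slope set by the Higgs vacuum expectation value v ≈ 246 GeV. A small sketch of that expectation follows; the masses are approximate and the reduced-coupling convention is the commonly used one, which may differ in detail from the exact ATLAS parametrisation.

```python
# Sketch of the Standard Model expectation behind Figure 2.  In the SM the
# "reduced coupling" usually plotted against particle mass is m/v for both
# fermions (y_f / sqrt(2) = m_f / v) and weak bosons, so all points fall on a
# straight line of slope 1/v.  Masses below are approximate (in GeV).
V_HIGGS = 246.0   # Higgs vacuum expectation value, GeV

masses_gev = {"muon": 0.106, "tau": 1.78, "bottom": 4.18,
              "W": 80.4, "Z": 91.2, "top": 173.0}

for name, m in masses_gev.items():
    print(f"{name:>6}: m = {m:7.3f} GeV  ->  SM reduced coupling ~ {m / V_HIGGS:.5f}")
# The masses span three orders of magnitude, which is why Figure 2 is drawn
# on a logarithmic mass axis.
```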

    Figure 2: Higgs boson coupling strength to each particle (error bars) as a function of particle mass compared with Standard Model prediction (blue dotted line). (Image: ATLAS Collaboration/CERN)

    The measurements also probe the coupling of the Higgs boson to gluons in the gluon-gluon fusion production process, which proceeds through a loop diagram and is thus particularly sensitive to new physics. In the Standard Model, the loop is mediated mainly by top quarks. Therefore, possible new physics contributions can be tested by comparing the gluon coupling with the direct measurement of the top quark coupling in Higgs boson production in association with top quarks, as shown in Figure 3.

    Figure 3: Ratios of coupling strengths to each particle. By taking ratios, model assumptions (such as on the total width of the Higgs boson) can be significantly reduced. Among all the interesting tests performed, the one comparing the gluon-gluon fusion and Higgs boson production in association with top quarks is represented by λtg in the plot. (Image: ATLAS Collaboration/CERN)

    The excellent agreement with the Standard Model, which is observed throughout, can be used to set stringent limits on new physics models. These are based on possible modifications to Higgs couplings and complement direct searches performed at the LHC.

    Links:

    Combined measurements of Higgs boson production and decay using up to 80 fb−1 of proton-proton collision data at 13 TeV collected with the ATLAS experiment (ATLAS-CONF-2018-031)
    ICHEP2018 presentation by Nicolas Morange: Measurements of Higgs boson properties using a combination of different Higgs decay channels
    ICHEP2018 presentation by Tancredi Carli: Highlights from the ATLAS and ALICE Experiments
    ICHEP2018 presentation by Giacinto Piacquadio (coming Tuesday 9 July)
    See also the full lists of ATLAS Conference Notes and ATLAS Physics Papers.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition




     
  • richardmitnick 8:19 am on July 9, 2018 Permalink | Reply
    Tags: , , Physics, ,   

    From Quanta Magazine: “Physicists Find a Way to See the ‘Grin’ of Quantum Gravity” 

    From Quanta Magazine

    March 6, 2018
    Natalie Wolchover

    Re-released 7.8.18

    A recently proposed experiment would confirm that gravity is a quantum force.

    Two microdiamonds would be used to test the quantum nature of gravity. Olena Shmahalo/Quanta Magazine

    In 1935, when both quantum mechanics and Albert Einstein’s general theory of relativity were young, a little-known Soviet physicist named Matvei Bronstein, just 28 himself, made the first detailed study of the problem of reconciling the two in a quantum theory of gravity. This “possible theory of the world as a whole,” as Bronstein called it, would supplant Einstein’s classical description of gravity, which casts it as curves in the space-time continuum, and rewrite it in the same quantum language as the rest of physics.

    Bronstein figured out how to describe gravity in terms of quantized particles, now called gravitons, but only when the force of gravity is weak — that is (in general relativity), when the space-time fabric is so weakly curved that it can be approximated as flat. When gravity is strong, “the situation is quite different,” he wrote. “Without a deep revision of classical notions, it seems hardly possible to extend the quantum theory of gravity also to this domain.”

    His words were prophetic. Eighty-three years later, physicists are still trying to understand how space-time curvature emerges on macroscopic scales from a more fundamental, presumably quantum picture of gravity; it’s arguably the deepest question in physics.

    Related: “To Solve the Biggest Mystery in Physics, Join Two Kinds of Law,” by Robbert Dijkgraaf (Quanta Magazine, September 7, 2017; art by James O’Brien). Reductionism breaks the world into elementary building blocks. Emergence finds the simple laws that arise out of complexity. These two complementary ways of viewing the universe come together in modern theories of quantum gravity.

    Perhaps, given the chance, the whip-smart Bronstein might have helped to speed things along. Aside from quantum gravity, he contributed to astrophysics and cosmology, semiconductor theory, and quantum electrodynamics, and he also wrote several science books for children, before being caught up in Stalin’s Great Purge and executed in 1938, at the age of 31.

    The search for the full theory of quantum gravity has been stymied by the fact that gravity’s quantum properties never seem to manifest in actual experience. Physicists never get to see how Einstein’s description of the smooth space-time continuum, or Bronstein’s quantum approximation of it when it’s weakly curved, goes wrong.

    The problem is gravity’s extreme weakness. Whereas the quantized particles that convey the strong, weak and electromagnetic forces are so powerful that they tightly bind matter into atoms, and can be studied in tabletop experiments, gravitons are individually so weak that laboratories have no hope of detecting them. To detect a graviton with high probability, a particle detector would have to be so huge and massive that it would collapse into a black hole. This weakness is why it takes an astronomical accumulation of mass to gravitationally influence other massive bodies, and why we only see gravity writ large.

    Not only that, but the universe appears to be governed by a kind of cosmic censorship: Regions of extreme gravity — where space-time curves so sharply that Einstein’s equations malfunction and the true, quantum nature of gravity and space-time must be revealed — always hide behind the horizons of black holes.

    Related: “Where Gravity Is Weak and Naked Singularities Are Verboten,” by Natalie Wolchover (Quanta Magazine; art by Mike Zeng). Recent calculations tie together two conjectures about gravity, potentially revealing new truths about its elusive quantum nature.

    “Even a few years ago it was a generic consensus that, most likely, it’s not even conceivably possible to measure quantization of the gravitational field in any way,” said Igor Pikovski, a theoretical physicist at Harvard University.

    Now, a pair of papers recently published in Physical Review Letters has changed the calculus.

    Spin Entanglement Witness for Quantum Gravity https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.240401
    Gravitationally Induced Entanglement between Two Massive Particles is Sufficient Evidence of Quantum Effects in Gravity https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.240402

    The papers contend that it’s possible to access quantum gravity after all — while learning nothing about it. The papers, written by Sougato Bose at University College London and nine collaborators and by Chiara Marletto and Vlatko Vedral at the University of Oxford, propose a technically challenging, but feasible, tabletop experiment that could confirm that gravity is a quantum force like all the rest, without ever detecting a graviton. Miles Blencowe, a quantum physicist at Dartmouth College who was not involved in the work, said the experiment would detect a sure sign of otherwise invisible quantum gravity — the “grin of the Cheshire cat.”

    A levitating microdiamond (green dot) in Gavin Morley’s lab at the University of Warwick, in front of the lens used to trap the diamond with light. Gavin W Morley

    The proposed experiment will determine whether two objects — Bose’s group plans to use a pair of microdiamonds — can become quantum-mechanically entangled with each other through their mutual gravitational attraction. Entanglement is a quantum phenomenon in which particles become inseparably entwined, sharing a single physical description that specifies their possible combined states. (The coexistence of different possible states, called a “superposition,” is the hallmark of quantum systems.) For example, an entangled pair of particles might exist in a superposition in which there’s a 50 percent chance that the “spin” of particle A points upward and B’s points downward, and a 50 percent chance of the reverse. There’s no telling in advance which outcome you’ll get when you measure the particles’ spin directions, but you can be sure they’ll point opposite ways.
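    The 50/50 anti-correlated spin state described above can be written down in a few lines. The sketch below is a generic illustration of such a state, not a simulation of the proposed experiment: each individual outcome is random, yet the pair is always anti-aligned.

```python
import numpy as np

# A minimal illustration of the entangled spin pair described in the text:
# a 50/50 superposition of (A up, B down) and (A down, B up).
# Basis ordering: |up,up>, |up,down>, |down,up>, |down,down>.
state = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)   # singlet-like state

outcomes = ["up,up", "up,down", "down,up", "down,down"]
for label, amplitude in zip(outcomes, state):
    print(f"P({label}) = {abs(amplitude) ** 2:.2f}")
# -> 0.00, 0.50, 0.50, 0.00: each individual result is a coin flip, but the
#    two spins always point opposite ways.
```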

    The authors argue that the two objects in their proposed experiment can become entangled with each other in this way only if the force that acts between them — in this case, gravity — is a quantum interaction, mediated by gravitons that can maintain quantum superpositions. “If you can do the experiment and you get entanglement, then according to those papers, you have to conclude that gravity is quantized,” Blencowe explained.

    To Entangle a Diamond

    Quantum gravity is so imperceptible that some researchers have questioned whether it even exists. The venerable mathematical physicist Freeman Dyson, 94, has argued since 2001 that the universe might sustain a kind of “dualistic” description, where “the gravitational field described by Einstein’s theory of general relativity is a purely classical field without any quantum behavior,” as he wrote that year in The New York Review of Books, even though all the matter within this smooth space-time continuum is quantized into particles that obey probabilistic rules.

    Dyson, who helped develop quantum electrodynamics (the theory of interactions between matter and light) and is professor emeritus at the Institute for Advanced Study in Princeton, New Jersey, where he overlapped with Einstein, disagrees with the argument that quantum gravity is needed to describe the unreachable interiors of black holes. And he wonders whether detecting the hypothetical graviton might be impossible, even in principle. In that case, he argues, quantum gravity is metaphysical, rather than physics.

    He is not the only skeptic. The renowned British physicist Sir Roger Penrose and, independently, the Hungarian researcher Lajos Diósi have hypothesized that space-time cannot maintain superpositions. They argue that its smooth, solid, fundamentally classical nature prevents it from curving in two different possible ways at once — and that its rigidity is exactly what causes superpositions of quantum systems like electrons and photons to collapse. This “gravitational decoherence,” in their view, gives rise to the single, rock-solid, classical reality experienced at macroscopic scales.

    The ability to detect the “grin” of quantum gravity would seem to refute Dyson’s argument. It would also kill the gravitational decoherence theory, by showing that gravity and space-time do maintain quantum superpositions.

    Bose’s and Marletto’s proposals appeared simultaneously mostly by chance, though experts said they reflect the zeitgeist. Experimental quantum physics labs around the world are putting ever-larger microscopic objects into quantum superpositions and streamlining protocols for testing whether two quantum systems are entangled. The proposed experiment will have to combine these procedures while requiring further improvements in scale and sensitivity; it could take a decade or more to pull it off. “But there are no physical roadblocks,” said Pikovski, who also studies how laboratory experiments might probe gravitational phenomena. “I think it’s challenging, but I don’t think it’s impossible.”

    The plan is laid out in greater detail in the paper by Bose and co-authors — an Ocean’s Eleven cast of experts for different steps of the proposal. In his lab at the University of Warwick, for instance, co-author Gavin Morley is working on step one, attempting to put a microdiamond in a quantum superposition of two locations. To do this, he’ll embed a nitrogen atom in the microdiamond, next to a vacancy in the diamond’s structure, and zap it with a microwave pulse. An electron orbiting the nitrogen-vacancy system both absorbs the light and doesn’t, and the system enters a quantum superposition of two spin directions — up and down — like a spinning top that has some probability of spinning clockwise and some chance of spinning counterclockwise. The microdiamond, laden with this superposed spin, is subjected to a magnetic field, which makes up-spins move left while down-spins go right. The diamond itself therefore splits into a superposition of two trajectories.

    In the full experiment, the researchers must do all this to two diamonds — a blue one and a red one, say — suspended next to each other inside an ultracold vacuum. When the trap holding them is switched off, the two microdiamonds, each in a superposition of two locations, fall vertically through the vacuum. As they fall, the diamonds feel each other’s gravity. But how strong is their gravitational attraction?

    If gravity is a quantum interaction, then the answer is: It depends. Each component of the blue diamond’s superposition will experience a stronger or weaker gravitational attraction to the red diamond, depending on whether the latter is in the branch of its superposition that’s closer or farther away. And the gravity felt by each component of the red diamond’s superposition similarly depends on where the blue diamond is.

    In each case, the different degrees of gravitational attraction affect the evolving components of the diamonds’ superpositions. The two diamonds become interdependent, meaning that their states can only be specified in combination — if this, then that — so that, in the end, the spin directions of their two nitrogen-vacancy systems will be correlated.
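    The size of the effect can be estimated on the back of an envelope: each pair of superposition branches accumulates a relative quantum phase of roughly the Newtonian interaction energy multiplied by the fall time and divided by ħ, and entanglement becomes detectable once branch-dependent phase differences approach order one. The numbers below (mass, separation) are assumptions of the right order of magnitude, not the values from the paper.

```python
# Back-of-envelope sketch (not the authors' detailed calculation): the relative
# phase picked up by one pair of branches is roughly U * t / hbar, with U the
# Newtonian potential energy between the two masses.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34    # reduced Planck constant, J s

m = 1.0e-14         # assumed microdiamond mass in kg (order of magnitude only;
                    # 100 billion carbon atoms is roughly 2e-15 kg)
d = 200e-6          # assumed separation between superposition branches, metres
t = 3.0             # fall time in seconds, as quoted in the article

phase = G * m * m * t / (HBAR * d)
print(f"accumulated relative phase ~ {phase:.2f} rad")   # of order one for these inputs
```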


    After the microdiamonds have fallen side by side for about three seconds — enough time to become entangled by each other’s gravity — they then pass through another magnetic field that brings the branches of each superposition back together. The last step of the experiment is an “entanglement witness” protocol developed by the Dutch physicist Barbara Terhal and others: The blue and red diamonds enter separate devices that measure the spin directions of their nitrogen-vacancy systems. (Measurement causes superpositions to collapse into definite states.) The two outcomes are then compared. By running the whole experiment over and over and comparing many pairs of spin measurements, the researchers can determine whether the spins of the two quantum systems are correlated with each other more often than a known upper bound for objects that aren’t quantum-mechanically entangled. In that case, it would follow that gravity does entangle the diamonds and can sustain superpositions.
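    The logic of an entanglement witness – compare measured spin correlations against the maximum possible for objects that are not entangled – can be illustrated with the textbook CHSH inequality. The witness in the actual proposal is a different construction, so the sketch below is only an analogy: non-entangled spins can never push the CHSH combination S above 2, while the anti-correlated state above reaches 2√2.

```python
import numpy as np

# Illustration of the witness logic using the textbook CHSH inequality (the
# witness in the Bose et al. proposal is a different construction; the point
# here is only the comparison of correlations against a classical bound).
def correlation(state, a, b):
    """<(spin along angle a on qubit 1) x (spin along angle b on qubit 2)>."""
    def spin(theta):   # cos(theta)*sigma_z + sin(theta)*sigma_x
        return np.array([[np.cos(theta), np.sin(theta)],
                         [np.sin(theta), -np.cos(theta)]])
    return float(np.real(state.conj() @ np.kron(spin(a), spin(b)) @ state))

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)

a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = abs(correlation(singlet, a1, b1) - correlation(singlet, a1, b2)
        + correlation(singlet, a2, b1) + correlation(singlet, a2, b2))

print(f"CHSH value S = {S:.3f}")   # ~2.83 = 2*sqrt(2)
print("Any non-entangled pair obeys S <= 2, so correlations this strong witness entanglement.")
```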

    “What’s beautiful about the arguments is that you don’t really need to know what the quantum theory is, specifically,” Blencowe said. “All you have to say is there has to be some quantum aspect to this field that mediates the force between the two particles.”

    Technical challenges abound. The largest object that’s been put in a superposition of two locations before is an 800-atom molecule. Each microdiamond contains more than 100 billion carbon atoms — enough to muster a sufficient gravitational force. Unearthing its quantum-mechanical character will require colder temperatures, a higher vacuum and finer control. “So much of the work is getting this initial superposition up and running,” said Peter Barker, a member of the experimental team based at UCL who is improving methods for laser-cooling and trapping the microdiamonds. If it can be done with one diamond, Bose added, “then two doesn’t make much of a difference.”

    Why Gravity Is Unique

    Quantum gravity researchers do not doubt that gravity is a quantum interaction, capable of inducing entanglement. Certainly, gravity is special in some ways, and there’s much to figure out about the origin of space and time, but quantum mechanics must be involved, they say. “It doesn’t really make much sense to try to have a theory in which the rest of physics is quantum and gravity is classical,” said Daniel Harlow, a quantum gravity researcher at the Massachusetts Institute of Technology. The theoretical arguments against mixed quantum-classical models are strong (though not conclusive).

    On the other hand, theorists have been wrong before, Harlow noted: “So if you can check, why not? If that will shut up these people” — meaning people who question gravity’s quantumness — “that’s great.”

    Dyson wrote in an email, after reading the PRL papers, “The proposed experiment is certainly of great interest and worth performing with real quantum systems.” However, he said the authors’ way of thinking about quantum fields differs from his. “It is not clear to me whether [the experiment] would settle the question whether quantum gravity exists,” he wrote. “The question that I have been asking, whether a single graviton is observable, is a different question and may turn out to have a different answer.”

    In fact, the way Bose, Marletto and their co-authors think about quantized gravity derives from how Bronstein first conceived of it in 1935. (Dyson called Bronstein’s paper “a beautiful piece of work” that he had not seen before.) In particular, Bronstein showed that the weak gravity produced by a small mass can be approximated by Newton’s law of gravity. (This is the force that acts between the microdiamond superpositions.) According to Blencowe, weak quantized-gravity calculations haven’t been developed much, despite being arguably more physically relevant than the physics of black holes or the Big Bang. He hopes the new experimental proposal will spur theorists to find out whether there are any subtle corrections to the Newtonian approximation that future tabletop experiments might be able to probe.

    Leonard Susskind, a prominent quantum gravity and string theorist at Stanford University, saw value in carrying out the proposed experiment because “it provides an observation of gravity in a new range of masses and distances.” But he and other researchers emphasized that microdiamonds cannot reveal anything about the full theory of quantum gravity or space-time. He and his colleagues want to understand what happens at the center of a black hole, and at the moment of the Big Bang.

    Perhaps one clue as to why it is so much harder to quantize gravity than everything else is that other force fields in nature exhibit a feature called “locality”: The quantum particles in one region of the field (photons in the electromagnetic field, for instance) are “independent of the physical entities in some other region of space,” said Mark Van Raamsdonk, a quantum gravity theorist at the University of British Columbia. But “there’s at least a bunch of theoretical evidence that that’s not how gravity works.”

    In the best toy models of quantum gravity (which have space-time geometries that are simpler than those of the real universe), it isn’t possible to assume that the bendy space-time fabric subdivides into independent 3-D pieces, Van Raamsdonk said. Instead, modern theory suggests that the underlying, fundamental constituents of space “are organized more in a 2-D way.” The space-time fabric might be like a hologram, or a video game: “Even though the picture is three-dimensional, the information is stored in some two-dimensional computer chip,” he said. In that case, the 3-D world is illusory in the sense that different parts of it aren’t all that independent. In the video-game analogy, a handful of bits stored in the 2-D chip might encode global features of the game’s universe.

    The distinction matters when you try to construct a quantum theory of gravity. The usual approach to quantizing something is to identify its independent parts — particles, say — and then apply quantum mechanics to them. But if you don’t identify the correct constituents, you get the wrong equations. Directly quantizing 3-D space, as Bronstein did, works to some extent for weak gravity, but the method fails when space-time is highly curved.

    Witnessing the “grin” of quantum gravity would help motivate these abstract lines of reasoning, some experts said. After all, even the most sensible theoretical arguments for the existence of quantum gravity lack the gravitas of experimental facts. When Van Raamsdonk explains his research in a colloquium or conversation, he said, he usually has to start by saying that gravity needs to be reconciled with quantum mechanics because the classical space-time description fails for black holes and the Big Bang, and in thought experiments about particles colliding at unreachably high energies. “But if you could just do this simple experiment and get the result that shows you that the gravitational field was actually in a superposition,” he said, then the reason the classical description falls short would be self-evident: “Because there’s this experiment that suggests gravity is quantum.”

    Correction March 6, 2018: An earlier version of this article referred to Dartmouth University. Despite the fact that Dartmouth has multiple individual schools, including an undergraduate college as well as academic and professional graduate schools, the institution refers to itself as Dartmouth College for historical reasons.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 3:27 pm on July 5, 2018 Permalink | Reply
    Tags: , , , , , , Physics, Quarks observed to interact via minuscule 'weak lightsabers'   

    From CERN ATLAS: “Quarks observed to interact via minuscule ‘weak lightsabers'” 

    From CERN ATLAS

    5th July 2018
    Left: Especially at invariant jet-jet masses mjj > 1000 GeV, the yellow signal of W±W± → W±W± scattering can be clearly seen above the background from other processes. Right: The orange signal of W±Z → W±Z scattering is evident as the white contribution at large values of the score of a multivariate boosted decision tree (BDT). (Image: ATLAS Collaboration/CERN)

    Two among the rarest processes probed so far at the LHC, the scattering between W and Z bosons emitted by quarks in proton-proton collisions, have been established by the ATLAS experiment at CERN.

    W and Z bosons play the same mediating role for the weak nuclear interaction as photons do for electromagnetism. Because light beams of photons from torches or lasers pass through each other unaffected, electromagnetic “lightsabers” will forever remain science fiction. However, beams of W and Z bosons – or “weak light rays” – can scatter off one another.

    One of the key motivations for building the Large Hadron Collider (LHC) at CERN was to study exactly this process, called weak “vector boson scattering” (VBS). One quark in each of two colliding protons has to radiate a W or a Z boson. These extremely short-lived particles are only able to fly a distance of 0.1×10⁻¹⁵ m before transforming into other particles, and their interaction with other particles is limited to a range of 0.002×10⁻¹⁵ m. In other words, these extremely short “weak lightsabers” extend only about 1/10th of a proton’s radius and have to approach each other to within 1/500th of a proton’s radius! Such an extremely improbable coincidence happens only about once in 20,000 billion proton-proton interactions, roughly the number recorded in one day of LHC operation.

    Using 2016 data, ATLAS has now unambiguously observed WZ and WW electroweak production, the dominant part of which is weak vector boson scattering: W±W± → W±W± and W±Z → W±Z. This continues the experiment’s long journey to scrutinize the VBS process: using 8 TeV data from 2012, ATLAS had obtained the first evidence for the W±W± → W±W± process with 18 candidate events. Such a yield would occur with a probability of less than 1 in 3000 as a pure statistical fluctuation. Now, at a higher centre-of-mass energy of 13 TeV, ATLAS has identified 60 W±W± → W±W± events, a yield that would arise from a fluctuation of pure background processes less than once in 200 billion cases. This corresponds to a statistical significance of 6.9 standard deviations (σ) above background. Besides the decay products of the scattered W or Z bosons, the signature of the process is two high-energy particle jets originating from the two quarks that initially radiated the W or Z.

    ATLAS has also combined 2015 and 2016 data to establish the scattering W±Z → W±Z with a statistical significance of 5.6 σ above background. In this channel, the lower-energy data of 2012 had revealed a significance of only 1.9σ, not sufficient to claim any evidence for the process. This time, thanks to a multivariate “BDT” analysis technique implemented in 2016, ATLAS was able to isolate 44 signal candidate events, of which about half have “BDT score” values above 0.4, where little background is present.
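    The idea behind a BDT discriminant can be sketched in a few lines: train a boosted ensemble of decision trees on simulated signal and background events, then cut on its output score. The features and numbers below are invented for illustration and bear no relation to the actual ATLAS inputs or score convention.

```python
# Toy sketch of a boosted-decision-tree (BDT) discriminant.  Everything below
# is invented for illustration; it is not the ATLAS analysis.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000

# Two made-up features, loosely inspired by VBS-style kinematic variables
# (e.g. a dijet invariant mass and a rapidity separation).
background = np.column_stack([rng.exponential(400.0, n), rng.normal(2.0, 1.0, n)])
signal = np.column_stack([rng.exponential(400.0, n) + 600.0, rng.normal(4.0, 1.0, n)])

X = np.vstack([background, signal])
y = np.concatenate([np.zeros(n), np.ones(n)])

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X, y)
scores = bdt.predict_proba(X)[:, 1]      # toy "BDT score" between 0 and 1

cut = 0.9
selected = scores > cut
print(f"events with score > {cut}: {selected.sum()}, signal purity ~ {y[selected].mean():.2f}")
```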

    For this scattering process of vector bosons, three basic Standard Model “vertices” contribute: the interaction via the well-known “triple-boson-coupling” (green) is drastically reduced by the contributions of “quartic-boson-couplings” (red) and the “boson-Higgs-couplings” (orange). Only the latter ensures that the rate of this scattering for large centre-of-mass energies obeys the basic “unitarity” law, that a probability cannot be bigger than 100%. With the discovery of VBS, a new chapter of Standard Model tests has started, allowing ATLAS to scrutinize the so far experimentally inaccessible quartic-boson-couplings and properties of the Higgs boson.

    Related journal articles
    _________________________________________________
    See the full article for further references with links.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition




     
  • richardmitnick 8:40 am on July 5, 2018 Permalink | Reply
    Tags: , , , , , , , James Lovelock, Lynn Margulis, Physics,   

    From Science Alert: “These Scientists Have a Tantalising New Answer to The Mysterious ‘Gaia Puzzle’ “ 


    From Science Alert

    5 JUL 2018
    JAMES DYKE,
    TIM LENTON


    We will likely never know how life on Earth started. Perhaps in a shallow sunlit pool.

    Or in the crushing ocean depths miles beneath the surface near fissures in the Earth’s crust that spewed out hot mineral-rich soup. While there is good evidence for life at least 3.7 billion years ago, we don’t know precisely when it started.

    But these passing aeons have produced something perhaps even more remarkable: life has persisted.

    Despite massive asteroid impacts, cataclysmic volcano activity and extreme climate change, life has managed to not just cling on to our rocky world but to thrive.

    How did this happen? Research we recently published with colleagues in Trends in Ecology and Evolution offers an important part of the answer, providing a new explanation for the Gaia hypothesis.

    Developed by scientist and inventor James Lovelock, and microbiologist Lynn Margulis, the Gaia hypothesis originally proposed that life, through its interactions with the Earth’s crust, oceans, and atmosphere, produced a stabilising effect on conditions on the surface of the planet – in particular the composition of the atmosphere and the climate.

    With such a self-regulating process in place, life has been able to survive under conditions which would have wiped it out on non-regulating planets.

    Lovelock formulated the Gaia hypothesis while working for NASA in the 1960s. He recognised that life has not been a passive passenger on Earth.

    Rather it has profoundly remodelled the planet, creating new rocks such as limestone, affecting the atmosphere by producing oxygen, and driving the cycles of elements such as nitrogen, phosphorus and carbon.

    Human-produced climate change, which is largely a consequence of us burning fossil fuels and so releasing carbon dioxide, is just the latest way life affects the Earth system.

    While it is now accepted that life is a powerful force on the planet, the Gaia hypothesis remains controversial. Despite evidence that surface temperatures have, bar a few notable exceptions, remained within the range required for widespread liquid water, many scientists attribute this simply to good luck.

    If the Earth had descended completely into an ice house or hot house (think Mars or Venus) then life would have become extinct and we would not be here to wonder about how it had persisted for so long.

    This is a form of anthropic selection argument that says there is nothing to explain.

    Clearly, life on Earth has been lucky. In the first instance, the Earth is within the habitable zone – it orbits the sun at a distance that produces surface temperatures required for liquid water.

    There are alternative and perhaps more exotic forms of life in the universe, but life as we know it requires water. Life has also been lucky to avoid very large asteroid impacts.

    A lump of rock significantly larger than the one that led to the demise of the dinosaurs some 66 million years ago could have completely sterilised the Earth.

    But what if life had been able to push down on one side of the scales of fortune? What if life in some sense made its own luck by reducing the impacts of planetary-scale disturbances?

    This leads to the central outstanding issue in the Gaia hypothesis: how is planetary self-regulation meant to work?

    While natural selection is a powerful explanatory mechanism that can account for much of the change we observe in species over time, we have been lacking a theory that could explain how the living and non-living elements of a planet produce self-regulation.

    Consequently the Gaia hypothesis has typically been considered as interesting but speculative – and not grounded in any testable theory.

    Selecting for stability

    We think we finally have an explanation for the Gaia hypothesis. The mechanism is “sequential selection”. In principle it’s very simple.

    As life emerges on a planet it begins to affect environmental conditions, and this can organise into stabilising states which act like a thermostat and tend to persist, or destabilising runaway states such as the snowball Earth events that nearly extinguished the beginnings of complex life more than 600 million years ago.

    If it stabilises then the scene is set for further biological evolution that will in time reconfigure the set of interactions between life and planet. A famous example is the origin of oxygen-producing photosynthesis around 3 billion years ago, in a world previously devoid of oxygen.

    If these newer interactions are stabilising, then the planetary-system continues to self-regulate. But new interactions can also produce disruptions and runaway feedbacks.

    In the case of photosynthesis it led to an abrupt rise in atmospheric oxygen levels in the “Great Oxidation Event” around 2.3 billion years ago.

    This was one of the rare periods in Earth’s history where the change was so pronounced it probably wiped out much of the incumbent biosphere, effectively rebooting the system.

    The chances of life and environment spontaneously organising into self-regulating states may be much higher than you would expect.

    In fact, given sufficient biodiversity, it may be extremely likely. But there is a limit to this stability.

    Push the system too far and it may go beyond a tipping point and rapidly collapse to a new and potentially very different state.
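    A toy simulation can make the “sequential selection” intuition concrete (this is only an illustrative sketch, not the model from the Trends in Ecology and Evolution paper): random life-environment configurations impose either stabilising or runaway feedback on an environmental variable, and although both are drawn equally often, the stabilising ones dominate the planet’s history simply because they persist.

```python
# Toy sketch of "sequential selection" -- illustrative only, not the authors' model.
# Each random "biosphere configuration" applies a feedback to an environmental
# variable E.  If E stays inside a habitable band, the configuration persists;
# if E runs away, the system "reboots" with a new configuration.
import random

random.seed(1)
HABITABLE = (-10.0, 10.0)
MAX_STEPS = 10_000
lifetimes = []

for _ in range(1000):
    feedback = random.uniform(-0.2, 0.2)   # negative -> stabilising, positive -> runaway
    E, steps = random.uniform(-1.0, 1.0), 0
    while HABITABLE[0] < E < HABITABLE[1] and steps < MAX_STEPS:
        E += feedback * E + random.gauss(0.0, 0.5)   # feedback plus environmental noise
        steps += 1
    lifetimes.append((feedback, steps))

stabilising = [s for f, s in lifetimes if f < 0]
runaway = [s for f, s in lifetimes if f >= 0]
print(f"mean persistence with stabilising feedback: {sum(stabilising) / len(stabilising):.0f} steps")
print(f"mean persistence with runaway feedback:     {sum(runaway) / len(runaway):.0f} steps")
# Stabilising and runaway feedbacks are drawn equally often, yet most of the
# "observed" history is spent in stabilising configurations.
```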

    This isn’t a purely theoretical exercise, as we think we may be able to test the theory in a number of different ways. At the smallest scale that would involve experiments with diverse bacterial colonies.

    On a much larger scale it would involve searching for other biospheres around other stars which we could use to estimate the total number of biospheres in the universe – and so not only how likely it is for life to emerge, but also to persist.

    The relevance of our findings to current concerns over climate change has not escaped us. Whatever humans do, life will carry on in one way or another.

    But if we continue to emit greenhouse gases and so change the atmosphere, then we risk producing dangerous and potentially runaway climate change.

    This could eventually stop human civilisation affecting the atmosphere, if only because there will not be any human civilisation left.

    Gaian self-regulation may be very effective. But there is no evidence that it prefers one form of life over another. Countless species have emerged and then disappeared from the Earth over the past 3.7 billion years.

    We have no reason to think that Homo sapiens are any different in that respect.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

     