Tagged: Particle Physics

  • richardmitnick 2:11 pm on September 23, 2020 Permalink | Reply
    Tags: "Berkeley Team Plays Key Role in Analysis of Particle Interactions That Produce Matter From Light", CERN’s ATLAS detector produced W bosons from photons which are particles of light., Particle Physics, Photons are particles of light that carry the electromagnetic force which is the fundamental force associated with magnetism and electricity., W bosons carry the weak force which is associated with the fusion that powers the sun and with nuclear fission that takes place in nuclear power plant reactors.

    From Lawrence Berkeley National Lab: “Berkeley Team Plays Key Role in Analysis of Particle Interactions That Produce Matter From Light” 


    September 23, 2020
    Glenn Roberts Jr.
    (510) 520-0843

    This image shows a reconstruction of a particle event at CERN’s ATLAS detector that produced W bosons from photons, which are particles of light. (Credit: ATLAS collaboration.)

    Researchers at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) played a key role in an analysis of data from the world’s largest particle collider that found proof of rare, high-energy particle interactions in which matter was produced from light.

    Simone Pagan Griso, a Berkeley Lab physicist and Divisional Fellow who coordinated the efforts of the Berkeley Lab team, said his team found about 174 particle interactions that are consistent with the creation of pairs of heavy force-carrying particles called W bosons from the collision of two photons.

    Photons are particles of light that carry the electromagnetic force, the fundamental force associated with magnetism and electricity. W bosons carry the weak force, which is associated with the fusion that powers the sun and with nuclear fission, the reaction that takes place in nuclear power plant reactors.

    From 2015 to 2018, among the approximately 30 trillion proton interactions measured at the ATLAS detector at CERN’s Large Hadron Collider (LHC), only about one per data-taking day produced a W boson pair from the interaction of two photons, Pagan Griso said.
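
    The quoted numbers give a feel for how rare this signal is. A back-of-the-envelope estimate, using only the round figures in the article (174 candidate events, roughly 30 trillion measured proton interactions) and ignoring backgrounds, might look like:

```python
# Order-of-magnitude rarity estimate for photon-induced W-pair production,
# using only the round numbers quoted in the article (illustrative only,
# not an official ATLAS calculation; backgrounds are ignored).
candidates = 174          # gamma-gamma -> WW candidate events reported
interactions = 30e12      # ~30 trillion proton interactions measured, 2015-2018

fraction = candidates / interactions
print(f"about 1 candidate per {1/fraction:.1e} measured interactions")
```

    On these numbers, only about one measured interaction in ~10¹¹ is a candidate event, consistent with the roughly one-per-day rate quoted above.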

    CERN ATLAS detector. (Image: Claudia Marcelloni.)

    The LHC is designed to accelerate and collide protons, which are positively charged particles found in atomic nuclei. The acceleration of bunches of these particles to nearly the speed of light produces strong electromagnetic fields that accompany the proton bunches and act like a field of photons. So when these high-energy photon bunches pass each other very closely at the LHC, their electromagnetic fields may interact, causing what’s known as an “ultra-peripheral collision.”

    Unlike the destructive proton-proton collisions typically studied at the LHC, which generate a swarm of constituent particles, these ultra-peripheral collisions are more like rocks skipping across a surface. The interacting fields produce “quasi-real” photons: field effects that resemble genuine photons in their characteristics but are not actual particles.

    In this latest analysis, the researchers focused on those rare occasions when the quasi-real photons produced pairs of W bosons.

    “It’s around 1,000 times less likely to happen than a quark-initiated W boson pair creation,” which presented a challenge in filtering out these more common types of interactions, Pagan Griso noted.

    “We had to be able to predict how much background we expected relative to a signal, and all of the other interactions that happen nearby,” he said. “This meant a lot of modeling and simulations to understand what different phenomena will look like.” Ultimately, the W boson pairs produced from the photon-photon interactions decayed to an electron and a muon, a particle in the same class as the electron but with a mass about 200 times greater.

    Also participating in Berkeley Lab’s analysis were Aleksandra Dimitrievska, a postdoctoral researcher and Chamberlain Fellow in the Physics Division; William Patrick McCormack, a Ph.D. student at UC Berkeley and a Berkeley Lab researcher; and Maurice Garcia-Sciveres and Juerg Beringer, who are both senior staff scientists at Berkeley Lab.

    Pagan Griso noted that there was some hint for the production of W boson pairs from photon pairs in earlier data-taking at the LHC, though it was far less conclusive than this latest analysis.

    “We wrote this measurement from A to Z,” Pagan Griso said of the Berkeley Lab team involved in the studies. “We literally were involved in the entire spectrum of this analysis.”

    He added, “With even more data expected at the LHC in the future, we can probe this even better.” More data will help pin down the production rate of W boson pairs from photon-photon interactions and the strength of the self-interaction among these four bosons, which provides a stringent test of the Standard Model of particle physics. The team will also try to improve its analysis techniques, he said.

    Pagan Griso and other members of the Berkeley Lab ATLAS Group started the analysis, together with international collaborators, about a year and a half ago, he said. These recent results are preliminary and the study will soon be submitted to a scientific journal.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    LBNL campus

    LBNL Molecular Foundry

    Bringing Science Solutions to the World
    In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

    Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

    Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

    A U.S. Department of Energy National Laboratory Operated by the University of California.

    University of California Seal

  • richardmitnick 1:46 pm on September 23, 2020 Permalink | Reply
    Tags: A chiral twin has been found for every matter and antimatter particle in the Standard Model—with the exception of neutrinos., An object that can coincide with its mirror-image twin in every coordinate such as a dumbbell or a spoon is not chiral., Another broken symmetry: the current predominance of matter over antimatter in our universe., Chirality is of the universe., Chirality was discovered in 1848 by biomedical scientist Louis Pasteur., Every time an elementary particle is detected an intrinsic property called its spin must be in one of two possible states., For a completely unknown reason the weak nuclear force only interacts with left-handed particles., Maybe the neutrino masses come from a special Higgs boson that only talks to neutrinos., Particle Physics, Physicists often talk about three mirror symmetries in nature: charge (which can be positive or negative); time (which can go forward or backward) and parity (which can be right- or left-handed)., Researchers have only ever observed left-handed neutrinos and right-handed antineutrinos., Understanding the difference between right-chiral and left-chiral objects is important for many scientific applications., You will find chirality in things like proteins; spiral galaxies and most elementary particles.

    From Symmetry: “Nature through the looking glass” 


    Oscar Miyamoto Gomez

    Illustration by Sandbox Studio, Chicago.

    Handedness—and the related concept of chirality—are double-sided ways of understanding how matter breaks symmetries.

    Our right and left hands are reflections of one another, but they are not equal. To hide one hand perfectly behind the other, we must face our palms in opposite directions.

    In physics, the concept of handedness (or chirality) works similarly: It is a property of objects that are not dynamically equivalent to their mirror images. An object that can coincide with its mirror-image twin in every coordinate, such as a dumbbell or a spoon, is not chiral.

    Because our hands are chiral, they do not interact with other objects and space in the exact same way. In nature, you will find this property in things like proteins, spiral galaxies and most elementary particles.

    These different-handed object pairs reveal some puzzling asymmetries in the way our universe works. For example, the weak force—the force responsible for nuclear decay—has an effect only on particles that are left-handed. Also, life itself—every plant and creature we know—is built almost exclusively with right-handed sugars and left-handed amino acids.

    “If you have anything with a dual principle, it can be related to chirality,” says Penélope Rodríguez, a postdoctoral researcher at the Physics Institute of the National Autonomous University of Mexico. “This is not exclusive to biology, chemistry or physics. Chirality is of the universe.”

    Reflections of life

    Chirality was discovered in 1848 by biomedical scientist Louis Pasteur. He noticed that right-handed and left-handed crystals formed when racemic acid dried out.

    He separated them, one by one, into two samples, and dissolved them again. Although both were chemically identical, one sample consistently rotated polarized light clockwise, while the other did so counterclockwise.

    Pasteur referred to chirality as “dissymmetry” at the time, and he speculated that this phenomenon—consistently found in organic compounds—was a prerequisite for the handed chemistry of life. He was right.

    In 1904, scientist Lord Kelvin introduced the word “chirality” into chemistry, borrowing it from the Greek kheír, or hand.

    “Chirality is an intrinsic property of nature,” says Riina Aav, a professor at Tallinn University of Technology in Estonia. “Molecules in our bodily receptors are chiral. This means that our organism reacts selectively to the spatial configuration of molecules it interacts with.”

    Understanding the difference between right-chiral and left-chiral objects is important for many scientific applications. Scientists use the property of chirality to produce safer pharmaceuticals, build biocompatible metallic nanomaterials, and send binary messages in quantum computing (a field called spintronics).

    Broken mirrors

    Physicists often talk about three mirror symmetries in nature: charge (which can be positive or negative), time (which can go forward or backward) and parity (which can be right- or left-handed).

    Gravity, electromagnetism and the strong nuclear force are ambidextrous, treating particles equally regardless of their handedness. But, as physicist Chien-Shiung Wu experimentally proved in 1956, the weak nuclear force plays favorites.

    “For a completely unknown reason, the weak nuclear force only interacts with left-handed particles,” says Marco Drewes, a professor at Catholic University of Louvain in Belgium. “Why that might be is one of the big questions in physics.”

    Research groups are exploring the idea that such an asymmetry could have influenced the origin of the preferred handedness in biomolecules observed by Pasteur. “There is a symmetry breaking that gives birth to a molecular arrangement, which eventually evolves until it forms DNA, right-handed sugars and left-handed amino acids,” Rodríguez says.

    From an evolutionary perspective, this would mean that chirality is a useful feature for living organisms, making it easier for proteins and nucleic acids to self-replicate due to the preferred handedness of their constituent biomolecules.

    Missing twins

    Every time an elementary particle is detected, an intrinsic property called its spin must be in one of two possible states. The spin of a right-chiral particle points along the particle’s direction of motion, while the spin of a left-chiral particle points opposite to the particle’s direction of motion.
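
    For massless particles, the handedness described above coincides with helicity, the projection of spin onto the direction of motion (a standard textbook relation, added here for illustration; it is not spelled out in the article):

```latex
h \;=\; \frac{\vec{S}\cdot\vec{p}}{|\vec{p}|},
\qquad
h = +\tfrac{1}{2}\ \text{(right-handed)},
\qquad
h = -\tfrac{1}{2}\ \text{(left-handed)}
```

    For massive particles, helicity depends on the observer’s frame of reference while chirality does not, which is part of why the handedness of massive neutrinos is such a subtle question.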

    A chiral twin has been found for every matter and antimatter particle in the Standard Model—with the exception of neutrinos. Researchers have only ever observed left-handed neutrinos and right-handed antineutrinos. If no right-handed neutrinos exist, the fact that neutrinos have mass could indicate that they function as their own antiparticles. It could also mean that neutrinos get their mass in a different way from the other particles.

    “Maybe the neutrino masses come from a special Higgs boson that only talks to neutrinos,” says André de Gouvêa, a professor at Northwestern University. “There are many other kinds of possible answers, but they all indicate that there are other particles out there.”

    The difference between left- and right-handedness could have influenced another broken symmetry: the current predominance of matter over antimatter in our universe.

    “Right-handed neutrinos could be responsible for the fact that there is matter in the universe at all,” Drewes says. “It could be that they prefer to decay into matter over antimatter.”

    According to de Gouvêa, the main lesson that chirality teaches scientists is that we should always be prepared to be surprised. “The big question is whether asymmetry is a property of our universe, or a property of the laws of nature,” he says. “We should always be willing to admit that our best ideas are wrong; nature does not do what we think is best.”

    See the full article here.



    Symmetry is a joint Fermilab/SLAC publication.

  • richardmitnick 12:54 pm on September 23, 2020 Permalink | Reply
    Tags: "Unraveling Nature’s secrets: vector boson scattering at the LHC", Particle Physics

    From CERN ATLAS: “Unraveling Nature’s secrets: vector boson scattering at the LHC”

    CERN/ATLAS detector

    CERN ATLAS Higgs Event

    CERN ATLAS, another view. (Image: Claudia Marcelloni/ATLAS/CERN.)


    22nd September 2020
    Lucia Di Ciaccio
    Simone Pagan Griso

    Figure 1: “Wandering the immeasurable”, a sculpture designed by Gayle Hermick welcomes the CERN visitors. From the Mesopotamians’ cuneiform script to the mathematical formalism behind the discovery of the Higgs boson, the sculpture narrates the story of how knowledge is passed through the generations and illustrates the aesthetic nature of the mathematics behind physics. (Image: J. Guillaume/CERN.)

    In 2017, the ATLAS and CMS Collaborations announced the detection of a process in high-energy proton–proton collisions that had not been observed before: vector boson scattering. It results in the production of two W particles with the same electric charge, as well as two collimated sprays of particles called “jets” (see Figure 2). The observation of vector boson scattering didn’t receive as much attention from the media as the Higgs discovery in 2012, even though it was an important event for the particle physics community. Another missing piece of the big puzzle had been found – the puzzle that is the mathematical description of the microscopic world (see Figure 1).

    The W+ and W– bosons are unstable particles, which decay (transform) into a lepton and an antilepton or a quark and an antiquark with a mean lifetime of only a few 10⁻²⁵ seconds. They have integer spin (characteristic of bosons) and are carriers of the weak force. Though the weak force is not directly experienced in everyday life, it is nevertheless important as it is responsible for radioactive β decay, which plays a role in the fusion of hydrogen into helium that powers the Sun’s thermonuclear process.
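
    The quoted lifetime follows directly from the uncertainty relation τ = ħ/Γ. A quick check, taking the measured total W decay width Γ_W ≈ 2.085 GeV as input (a Particle Data Group value assumed here; the article quotes only the lifetime):

```python
# Lifetime of the W boson from its total decay width via tau = hbar / Gamma.
# Gamma_W ~ 2.085 GeV is the PDG world-average width (an assumed input,
# not stated in the article).
hbar_GeV_s = 6.582e-25    # reduced Planck constant in GeV*s
gamma_W = 2.085           # total W decay width in GeV

tau_W = hbar_GeV_s / gamma_W
print(f"tau_W = {tau_W:.2e} s")
```

    which indeed comes out at a few 10⁻²⁵ seconds.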

    To appreciate the importance of this discovery, it is instructive to follow the history of how and why the W+ and W– bosons were introduced; it illustrates nicely how the interplay between experimental information, theoretical models and mathematical principles drives progress in physics.

    Figure 2: Simplified view of a proton–proton collision event recorded with the ATLAS detector that was selected as a candidate for vector-boson-scattering production. The insert depicts a schematic view of the candidate physics process. Protons (p) from the LHC beam travel from left to right and from right to left in this view. They collide at approximately the centre of the detector. Within a very short period of time, too short to be resolved, two W bosons are emitted independently by the incoming quarks (q) from each of the LHC proton beams. These W bosons interact and each of the resulting W bosons decays to a muon (μ) and a neutrino (ν), where the neutrinos leave the ATLAS detector undetected. The outgoing quarks undergo a process called hadronization and manifest as a spray of particles called a “jet”. (Image: ATLAS Collaboration/CERN.)

    Enrico Fermi originally formulated a mathematical description of the weak force in 1933 as a “contact interaction” between particles, occurring at a single point without a carrier particle propagating the force. This formulation successfully described the known experimental observations, including the radioactive β decay for which it was developed. However it was soon realised that its predictions at high energy, a regime not yet experimentally accessible at that time, were bound to fail.

    Indeed, Fermi’s theory predicts that the production rate of some processes caused by the weak force – such as the elastic scattering of neutrinos on electrons – increases linearly with the neutrino energy. This continuous growth, however, leads to the violation of a limit derived from the conservation of probability in a scattering process. In other words, predictions become unphysical at a high enough energy. To overcome this problem, physicists modified Fermi’s theory by introducing “by hand” two massive spin-one (“vector”) charged particles propagating the interaction between neutrinos and electrons, dubbed “intermediate vector bosons”. This development came well before the discovery of the W bosons decades later.
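
    In modern notation, the problem with Fermi’s contact interaction can be stated compactly: the predicted cross section grows with the squared centre-of-mass energy s, while unitarity bounds each partial-wave cross section by a quantity that falls as 1/s (a standard textbook estimate, sketched here for illustration; it is not written out in the article):

```latex
\sigma_{\text{Fermi}} \;\sim\; \frac{G_F^{2}\, s}{\pi}
\qquad \text{versus} \qquad
\sigma_{J} \;\lesssim\; \frac{16\pi\,(2J+1)}{s}
```

    Setting the two expressions equal shows that the contact theory must break down at a centre-of-mass energy of several hundred GeV, roughly the scale where the W and Z bosons live, which is why the intermediate vector bosons had to enter the picture.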

    So even if the discovery of the long-awaited W± bosons in 1983 – and, five months later, of a neutral companion, the Z boson – didn’t come as a real surprise to physicists, it was certainly an epochal experimental achievement. Fermi’s theory remains an example of an effective theory valid only at low energy (well below the mass of the force carrier boson) – an approximation of a more general, universally valid theory.

    Along this line, the search for a consistent description of the fundamental forces between the ultimate constituents of matter has led to the Standard Model of particle physics: a mathematical construction based on fundamental principles and experimental observations. The Standard Model provides a coherent, unified picture of three of the four fundamental interactions, namely the electromagnetic, weak and strong force. The fourth force, not included in the Standard Model, is gravity. So far, the Standard Model has been successful at describing a myriad of experimental measurements in the microscopic world. Its success is, by any measure, mind-blowing.

    Standard Model of Particle Physics, Quantum Diaries.

    We do not know why natural phenomena are so well described by mathematical entities and relations but, experimentally, we know that it works. Just as Galileo said four hundred years ago, the big book of Nature is written in a mathematical language[1] – the Standard Model and Einstein’s theory of gravity, for example, are additional chapters of this book.

    Particle physics makes use of a theoretical tool in which all particles are represented mathematically by quantum fields. These entities encode properties like spin and mass of a particle. In the Standard Model, the existence of the electromagnetic, weak and strong force carriers follows from the invariance of the behaviour of quantum fields under a “local gauge transformation”. This is a transformation from one field configuration to another, which can be imagined as a rotation in an abstract mathematical space. The parameters of the transformation may vary from point to point in space-time, and thus the transformation is defined as “local”. Gauge invariance or gauge symmetry is the lack of changes in measurable quantities under gauge transformations, despite the quantum fields (which represent particles) being transformed.
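
    The simplest concrete example is the U(1) gauge transformation of electrodynamics (a textbook illustration, not drawn from the article): the electron field picks up a position-dependent phase while the photon field shifts accordingly, and all measurable quantities stay the same:

```latex
\psi(x) \;\to\; e^{\,i\alpha(x)}\,\psi(x),
\qquad
A_{\mu}(x) \;\to\; A_{\mu}(x) \;-\; \frac{1}{e}\,\partial_{\mu}\alpha(x)
```

    Demanding invariance under this local transformation is precisely what forces the photon field A_μ to exist and fixes how it couples to matter.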

    Gauge invariance holds if spin-one particles are introduced, which interact in a well-defined manner with the elementary constituents of matter such as electrons and quarks (constituents of the proton and neutron, or, more generally, of hadrons). These spin-one particles are interpreted as “the carriers of the interaction” between the matter particles, with the photon the carrier for the electromagnetic force, the W–, W+ and Z bosons for the weak force, and eight gluons for the strong force. These are the (intermediate) vector bosons introduced above.

    In this way, the Standard Model forces (or interactions) emerge in a very elegant manner from one general principle, namely a fundamental symmetry of Nature. Interestingly enough, in the model, the electromagnetic and weak interactions manifest themselves at high energy as different aspects of a single “electroweak” force, while at low energy the weak interaction remains feebler than the electromagnetic interaction. As a consequence, the photon, Z boson and W± bosons are collectively named “electroweak gauge bosons”.

    In the example above, Fermi followed a bottom-up approach: going from an observation to a mathematical description (a contact-interaction theory), which was modified “by hand” with a few additions to obey the general principle of probability conservation (known as “unitarity” in physics). Starting from this premise, the work of many physicists eventually led to a more general theory, one in which the description of the fundamental forces follows the opposite path: predictions are obtained from fundamental principles (such as gauge invariance) in a mathematically and physically coherent framework.[2] The interplay between these two ways of developing knowledge has been common in physics since before Newton’s time, and is still valid today.

    In both cases, a theory is successful not only when it describes the known experimental facts, but also when it has predictive power. The Standard Model possesses both virtues and examples of its predictive power include the discoveries of the Higgs boson and the neutral kind of weak interaction mediated by the Z boson.

    As a matter of fact, the Standard Model tells us (much) more: the quantum fields representing the new spin-one particles will also transform under a local gauge transformation. To ensure that the measurable quantities describing their behaviour do not change (gauge invariance, mentioned above), interactions among the carriers of the weak force must also exist, as well as among the carriers of the strong force. These self-interactions may involve three or four gauge bosons. No self-interaction among photons is possible, except indirectly through virtual processes involving intermediate particles such as electrons, as observed in a dedicated ATLAS measurement.

    The process first observed by the ATLAS and CMS Collaborations in 2017, characterised by the presence of two W bosons with the same electric charge and two jets, is a signature of the occurrence of an electroweak interaction. The dominant part of the process is due to the self-interaction among four weak gauge bosons – another central prediction of the Standard Model finally confirmed by the LHC experiments. This self-interaction manifests as a “vector boson scattering”, where two incoming gauge bosons interact and produce two, potentially different, gauge bosons as final-state particles. The production rate of this electroweak process is very low – lower than that of the Higgs boson – which is why it was observed only recently. And just like the Higgs boson discovery, the observation of this process didn’t come out of the blue.

    At the Large Electron–Positron (LEP) collider, which operated at CERN between 1989 and 2000 in what is today the LHC tunnel, physicists had already observed the self-interaction among three gauge bosons. They measured the production of a pair of gauge bosons of opposite charge, a W+ and a W– boson, in the collisions of beams of electrons and positrons, the antiparticle of the electron. According to the Standard Model, three main processes contribute to this production. They proceed via the exchange of either a photon, neutrino or Z boson between the electron and positron of the initial state and the W pair of the final state (Figure 3).

    CERN Large Electron–Positron Collider.


    Figure 3: Diagrams representing three processes contributing to the e+e- → W+W- production. They illustrate the exchange between the initial e+e- and final W+W- of (from left to right), a neutrino (ν), photon (γ), and a Z boson, respectively. The symbol γ* is often used when a photon mediates the interaction. Following the rules of quantum mechanics, the production rate of a process is computed by the square of the sum of all possible diagrams contributing to it. The diagrams in the sum may have different relative signs, so they may cancel (destructive interference), in the same way that waves can cancel each other if they arrive out of synchronization. In the case discussed here, each diagram is necessary to avoid an unphysical continuous increase of the production rate with the collision energy and to ensure the preservation of the gauge invariance of the theory. (Image: S. Pagan Griso, L. Di Ciaccio/ATLAS Collaboration.)

    The exchange of a photon or a Z boson occurs via the self-interaction of three weak gauge bosons: WWγ and WWZ, respectively. The main point here is that without considering all three processes, the calculated production rate would grow continuously with energy, leading to the already encountered unphysical behaviour. The observation of this process at LEP, with a production rate consistent with the Standard Model prediction, therefore confirmed the existence of a self-interaction among three bosons.

    It is striking that the theory predicts the structure of each underlying process such that, even though each one contributes a term to the calculated production rate that becomes unphysical at high energy, violating unitarity, the unphysical behaviour cancels out when all of the processes are considered together.
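
    This cancellation reflects the quantum-mechanical rule already noted in the caption of Figure 3: the rate involves the square of the sum of the amplitudes, not the sum of their squares, so cross terms between diagrams can interfere destructively:

```latex
\sigma\bigl(e^{+}e^{-} \to W^{+}W^{-}\bigr)
\;\propto\;
\bigl|\,\mathcal{A}_{\nu} + \mathcal{A}_{\gamma} + \mathcal{A}_{Z}\,\bigr|^{2}
```

    Each individual amplitude grows with energy, but the growing pieces cancel in the sum, keeping the total production rate physical.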

    So far, so good – but there’s a catch. The W± and Z bosons observed and identified by experiments as the carriers of the weak interaction are massive, yet gauge invariance is only preserved if the carriers are massless. Should physicists give up the principle of gauge invariance to reconcile the theory with experimental facts?

    A solution to this puzzle was proposed in 1964, postulating the existence of a new spin-zero (“scalar”) field with a slightly more complex mathematical structure. While the basic laws of the forces remain exactly gauge symmetric, in the sense explained above, Nature has randomly chosen (among many possibilities) a particular lowest-energy state of this field, breaking with this choice the gauge symmetry in a limited way, called “spontaneous”.

    The consequences are dramatic. Out of this new field, a new particle emerges – the scalar Higgs boson – and the W± and Z bosons become massive. Physicists now believe that gauge symmetry was not always spontaneously broken. The universe transitioned from an “unbroken phase” with massless gauge bosons to our current “broken phase” during expansion and cool-down, a fraction of a second after the Big Bang.

    The discovery of the Higgs boson in 2012 by the ATLAS and CMS Collaborations is a great success of the Standard Model theory, especially when considering that it was found to have the mass that indirect clues were pointing to.

    CERN CMS Higgs Event May 27, 2012.

    CERN ATLAS Higgs Event, June 12, 2012.

    While the Higgs boson mass is not predicted by theory, the existence of the Higgs boson with a given mass leaves a delicate footprint in natural phenomena such that, if measured very precisely (as was done at LEP and at the Tevatron, the smaller predecessor of the LHC at Fermilab, near Chicago, USA), physicists could derive constraints on its mass. The Higgs boson’s discovery was thus an experimental feat as well as a consecration of the Standard Model. It emphasized the remarkable role of the precision measurements at LEP, even though the energy of that accelerator was not high enough to directly produce the Higgs boson.

    Obviously, the story doesn’t end here. Solid indications exist that the Standard Model is not complete and that it must be encompassed in a more general theory. This possibility is not surprising. As Fermi’s weak interaction theory exemplifies, history has shown that a theory’s validity is related to the energy range (or, equivalently, size of space) accessible by experiments.

    More generally, classical mechanics is appropriate and predictive for the macroscopic world, when the speed of the objects is small with respect to the speed of light. To describe the microscopic world, however, quantum mechanics must be invoked, and the special theory of relativity must be applied to appropriately describe the behaviour of objects moving close to light speed.

    How can physicists find experimental signs that may help to formulate a more general theory than the Standard Model?

    A valuable approach is to directly search collision events for particles not included in the Standard Model. However, this approach is inherently limited: only particles with a mass at or below the collision energy can be directly produced, a consequence of energy conservation together with the equivalence of mass and energy. Alternative avenues, which suffer less from this limitation but are indirect, include performing very precise measurements of fundamental parameters of the Standard Model or measuring rare processes to look for deviations from theoretical predictions. Such measurements are able to explore a higher energy domain, as the LEP Higgs-boson example showed.

    Vector boson scattering is one of these rare processes. It is special because it is closely related to the Higgs mechanism and can shed light on unexplored corners of Nature at the highest energy available in a laboratory. Similar to the LEP vector-boson study described above, vector boson scattering is expected to proceed via several processes, this time including the self-interaction of four gauge bosons as well as the exchange of a Higgs boson (see Figure 4). Without accounting for all of these processes, the calculated scattering rate grows indefinitely with energy, leading to the above-mentioned unphysical behaviour (violation of unitarity).

    It could be argued that this question is already settled, since we know that the Higgs boson exists. The key issue is that the way in which the Higgs boson interacts with the gauge bosons in the Standard Model is exactly what is required to moderate the growth of the scattering rate at high energy; a minimal deviation of the Higgs mechanism from the Standard Model prediction could result in an apparent breakdown of unitarity.

    Vector boson scattering would then occur at a rate different from what is predicted by the Standard Model, and unitarity would have to be recovered by a yet-unknown mechanism. The study of vector boson scattering thus allows physicists to investigate the Higgs mechanism in the highest energy domain accessible, where there may be signs of new physics.

    Figure 4: Diagrams of some of the processes contributing to the W+W+ → W+W+ process. Analogous diagrams contribute to the W-W- → W-W- process. As explained for Figure 3, to compute the production rate the individual contributions are first added and their sum is then squared. The individual contributions may have different relative signs, leading to cancellations. In this case each contribution is necessary to avoid an unphysical continuous increase of the production rate with (the square or fourth power of) the scattering energy. (Image: S. Pagan Griso, L. Di Ciaccio/ATLAS Collaboration.)
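    The sum-then-square rule in the caption can be illustrated with a toy calculation (the amplitude values below are made up purely for illustration, not an ATLAS computation): squaring the sum of amplitudes allows contributions of opposite sign to cancel, which squaring each contribution separately could never produce.

```python
# Toy illustration of quantum-mechanical interference (made-up numbers):
# production rates come from squaring the *sum* of the contributing amplitudes.
a_gauge = 2.0   # hypothetical amplitude of the four-boson self-interaction diagrams
a_higgs = -1.5  # hypothetical Higgs-exchange amplitude, opposite in sign

rate_with_interference = abs(a_gauge + a_higgs) ** 2   # add first, then square
rate_without = abs(a_gauge) ** 2 + abs(a_higgs) ** 2   # squares added: no cancellation

print(rate_with_interference)  # 0.25 -- the two contributions largely cancel
print(rate_without)            # 6.25 -- cancellation is impossible this way
```

    Dropping one contribution here is the analogue of the unphysical growth described in the text: without it, the cancellation that tames the rate at high energy is lost.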

    The LHC is the perfect place to look for rare processes like vector boson scattering, as it collides protons with the highest energy and rate ever reached. Furthermore, the ATLAS and CMS experiments are designed to select and record these rare events.

    As weak gauge bosons are extremely short-lived particles, experiments search for the scattering of vector bosons by looking for the production of two jets and two lepton–antilepton pairs in proton-proton collisions. Imagine this as two gauge bosons being emitted by the quarks from each of the incoming LHC proton beams. These gauge bosons subsequently scatter off each other and the bosons emerging from this interaction promptly decay (see Figure 2). The quarks are subsequently deflected and appear in the detector as jets of particles, typically emitted at a relatively small angle with respect to the beam direction. This is called an “electroweak” process as it is mediated by electroweak gauge bosons.

    The experimental signature of vector boson scattering is therefore characterised by the presence of the decay particles of the two bosons, accompanied by two jets with large angular separation. The W and Z bosons predominantly decay into a quark and antiquark pair. Nevertheless, the search for these rare events preferentially exploits the decays into a lepton and an antilepton, because a competing process, multi-jet production, is mediated by the strong interaction and has an overwhelming rate that obscures much rarer processes.

    Still, the search for vector boson scattering is very challenging. This is not only because the rate of the process is low – accounting for only one in hundreds of trillions of proton–proton interactions – but also because, even when using the leptonic decays, several “background” processes produce the same kinds of particles in the detector, mimicking the signal.

    Due to its high rate, a particularly challenging background process is one in which the jets accompanying the decay products of the gauge bosons arise as a result of the strong-force interaction. The impact of this background with respect to the signal depends on the kind of gauge bosons which scatter. When they are W bosons with the same electric charge, the production rate of the two processes (signal and background) is comparable.

    For this reason, same-charge WW production is considered the golden channel for experimental measurements and was the first target for the ATLAS Collaboration to study vector-boson-scattering processes. ATLAS physicists reported the first strong hints of the process in a 2014 paper [Evidence for Electroweak Production of W±W±jj in pp Collisions at √s = 8 TeV with the ATLAS Detector] – a milestone in the LHC physics programme. However, it took three more years to arrive at an unambiguous observation, passing the five-sigma threshold that particle physicists use to define a discovery, corresponding to a probability of less than one in 3.5 million that the observed signal is due to a mere upward statistical fluctuation of the number of background events. In the years between the first hint and the discovery, the LHC was upgraded to increase its proton–proton collision energy – from 8 TeV to 13 TeV – as well as its collision rate – yielding about six times more collected data. These improvements made the observation of vector boson scattering possible – the era of its study had at last begun.
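    The “one in 3.5 million” quoted for the five-sigma threshold is simply the one-sided tail probability of a Gaussian distribution five standard deviations from its mean; it can be checked with the Python standard library (a sketch of that conversion only, not the ATLAS statistical analysis):

```python
import math

# One-sided p-value of a 5-sigma excess: the probability that pure background
# fluctuates up at least this far, assuming a Gaussian-distributed estimate.
sigma = 5.0
p_value = 0.5 * math.erfc(sigma / math.sqrt(2.0))

print(f"p = {p_value:.3g}")                 # p = 2.87e-07
print(f"roughly 1 in {1.0 / p_value:.2g}")  # roughly 1 in 3.5e+06
```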

    However, not all electroweak bosons are equal. While the observation of two same-charge W bosons has allowed physicists to start testing the interaction of four W bosons (WWWW, Figure 2), the quest to test other self-interactions remained. The Standard Model only allows a specific set of combinations of four-gauge-boson self-interactions: WWWW, WWγγ, WWZγ and WWZZ, forbidding interactions among four neutral bosons.

    Not all of these electroweak interactions are predicted to have the same strength and, because of this, probing them requires identifying processes that are less and less frequent. Similarly to the case of two same-charge W bosons, electroweak processes involving two jets and a WZ pair, a Zγ pair, or a ZZ pair are increasingly rare or have significantly larger backgrounds. Hunting for such processes among the billions of proton–proton collisions recorded by ATLAS requires physicists to look for subtle differences in order to distinguish a signal from very similar background processes occurring at much higher rates.

    Table 1. List of processes presented in the text (first column) that are used to study vector boson scattering: WW with same charge, WZ, ZZ or Zɣ production in association with two jets (j), and photon-induced production of two W bosons. For each process a check indicates the four bosons involved in the self-interaction. Other measurements performed at the LHC also play a role to test these self-interactions but have been omitted in this table for simplicity. (Image: ATLAS Collaboration/CERN.)

    While such a task was commonly regarded as requiring a much larger amount of data than has been collected so far, the LHC experiments used artificial-intelligence algorithms to distinguish between the sought-after signal and the much larger background. Thanks to such innovations, in 2018 and 2019 ATLAS reported the observations of WZ and ZZ electroweak production, and saw a hint of the Zγ process. Suddenly, this brand-new field saw a surge in the number of processes that could be used to probe the self-interaction of gauge bosons.

    The most recent addition is ATLAS’ observation of two W bosons produced by the interaction of two photons, each radiated by the LHC protons. This phenomenon occurs when the accelerated protons skim each other, producing extremely high electromagnetic fields, with photons mediating an electromagnetic interaction between them. Such an interaction is only possible when quantum mechanical effects of electromagnetism are taken into account.

    This is a direct and clean probe of the γγWW gauge-boson interaction. A peculiarity of this process is that the protons participate as a whole and can remain intact after the interaction; this is very different from inelastic interactions, where the quarks, the protons’ constituents, are the main actors (see Figure 2).

    Table 1 [above] summarises the processes that are used to study vector boson scattering at the LHC. It also shows the four bosons involved in the self-interaction. The study of each process provides a different test of the Standard Model, as modifications of the theory can differently alter the strength of the self-interactions.

    Now, ten years after the first high-energy collisions took place in the LHC, the study of vector boson scattering is a very active field – though still in its adolescence, from both the experimental and theoretical points of view. Experimentally, the size of the available signal sample is limited. The upcoming data-taking period (from 2022 to 2024) and the high-luminosity phase of the LHC (starting in 2027) will increase the amount of collected data by more than a factor of two and by an additional factor of ten, respectively. An extensive upgrade of the LHC experiments is also ongoing, which will further improve the detection capabilities for vector-boson-scattering processes.

    In parallel, physicists will continue to improve their analysis methods, relying on more and more advanced artificial-intelligence algorithms to disentangle the rare signal processes from the abundant backgrounds. Physicists are also employing advanced calculation techniques to improve the precision of Standard Model predictions to match the increased measurement precision.

    Furthermore, a bottom-up approach is being introduced which follows in the footsteps of Enrico Fermi. Physicists have developed a theoretical framework that allows new mathematical terms, respecting basic conservation rules and symmetries, to be added “by hand” to the Standard Model, without relying on a specific new physics model. These terms change the predictions in the high-energy regime where new physics could be expected (Figure 5). The simplest form of this approach is called Standard Model Effective Field Theory.

    Figure 5. Distribution of the photon energy in the search for events resulting from the electroweak production of two vector bosons (a Z and a γ) associated with two jets (Zγjj-EW). The black markers represent the data; the filled histograms represent the Standard Model predicted contributions for the signal (in brown) and the many background processes (in other colours). All expected contributions are stacked. The dotted blue line in the upper panel indicates the calculated signal distribution when a new term is added to the Standard Model theory. (Image: ATLAS Collaboration/CERN.)

    Even though we know that an effective theory cannot work at an arbitrary high energy scale, history has shown that, supplemented by measurements, it can provide useful guidance at lower energy. Different production-rate measurements – including those of the Higgs boson, boson self-interactions and the top quark – can be, separately or simultaneously, compared to predictions in the same effective theoretical framework.

    It would be a sensation if more precise measurements indicated that such new terms are necessary to describe the data. It would be a sign of physics beyond the Standard Model and, depending on which kinds of terms are needed, an indication of the direction to take in order to develop a more complete theory. The interplay between experimental observations and models in the quest for a complete theory would continue.

    Ultimately, all ongoing experimental collider and non-collider studies in particle physics will contribute to building knowledge – be they direct searches for new particles, precision measurements exploiting the power of quantum fluctuations, or studies of rare processes. This experimental work is complemented by ever more precise theoretical calculations. In this task, the next generation of powerful particle accelerators now being planned will be an indispensable tool to find new phenomena that would help us understand the remaining mysteries of the microscopic world.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    CERN Courier

    Quantum Diaries

    CERN map

    CERN LHC underground tunnel and tube.

    SixTRack CERN LHC particles.

  • richardmitnick 10:28 am on September 23, 2020 Permalink | Reply
    Tags: "Powerful New Observatory Will Taste Neutrinos’ Flavors", As neutrinos arrive at the detector from the nuclear power plants several kilometers away only about 30 percent of them will remain in their original identity., , Deep Underground Neutrino Experiment (DUNE) in the U.S., Existing evidence shows that two of the flavors are close in mass and that the third one is different., How do the masses of the three known types of neutrinos compare to one another?, Hyper-Kamiokande (Hyper-K) in Japan., JUNO can also catch the so-called geoneutrinos from below Earth’s surface., JUNO Underground Observatory at Kaiping Jiangmen in Southern China., JUNO will detect and study neutrinos from other sources: anywhere between 10 and 1000 of the particles from the sun per day and a sudden influx if a supernova explodes at a certain distance from Earth, JUNO will use two nearby nuclear power plants as neutrino sources., , Once operational JUNO expects to see roughly 60 such signals a day., Particle Physics,   

    From Scientific American: “Powerful New Observatory Will Taste Neutrinos’ Flavors” 

    From Scientific American

    September 22, 2020
    Ling Xin

    Aerial photograph taken on June 23, 2019, shows the construction site of the Jiangmen Underground Neutrino Observatory (JUNO) in southern China’s province of Guangdong. Credit: Liu Dawei Alamy.

    JUNO Underground Observatory, at Kaiping, Jiangmen in Southern China.

    Neutrinos are the oddballs of the subatomic particle family. They are everywhere, pouring in from the sun, deep space, and Earth and zipping through our bodies by the trillions every second. The particles are so tiny that they seldom interact with anything, making them extremely elusive and hard to study. Moreover, though neutrinos come in different types, or flavors, they can switch from one type to another as they travel near the speed of light. These weird behaviors, scientists believe, might point toward insights about the history of the universe and the future of physics.

    After nearly six years of excavation, a gigantic neutrino laboratory is taking shape in the rolling hills of southern China, about 150 kilometers west of Hong Kong. The Jiangmen Underground Neutrino Observatory (JUNO) will be one of the world’s most powerful neutrino experiments, along with the Hyper-Kamiokande (Hyper-K) in Japan and the Deep Underground Neutrino Experiment (DUNE) in the U.S.

    Hyper-Kamiokande, a neutrino physics laboratory to be located underground in the Mozumi Mine of the Kamioka Mining and Smelting Co. near the Kamioka section of the city of Hida in Gifu Prefecture, Japan.

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA.

    SURF DUNE LBNF Caverns at Sanford Lab.

    Using two nearby nuclear power plants as neutrino sources, JUNO will aim to learn more about these particles and answer a fundamental question: How do the masses of the three known types of neutrinos compare to one another? Though researchers know the particles have a small amount of mass, the exact amount is unknown. Existing evidence shows that two of the flavors are close in mass and that the third one is different. But scientists do not know if that third type is heavier or lighter than the others: the former scenario is called the “normal mass ordering,” and the latter is named the “inverted mass ordering.”

    The mass ordering of the neutrino is a key parameter for researchers to determine, says theoretical physicist Joseph Lykken of the Fermi National Accelerator Laboratory in Batavia, Ill.

    “In fact, all kinds of other things depend on the answer to that question,” he adds. For instance, the answer can help scientists better estimate the total mass of neutrinos in the universe and determine how they have influenced the formation of the cosmos and the distribution of galaxies. Even though neutrinos are the lightest of all known matter particles, there are so many of them in space that they must have had a big effect on the way ordinary matter is distributed. Understanding how neutrino masses are ordered could also help explain why the particles have mass at all, which contradicts earlier predictions.

    More than 650 scientists, nearly half of whom are outside China, have been working on JUNO, which was first proposed in 2008. Later this year or in early 2021 researchers will start assembling the experiment’s 13-story-tall spherical detector. Inside, it will be covered by a total of 43,000 light-detecting phototubes and filled with 20,000 metric tons of specially formulated liquid. At 700 meters below the ground, once in a blue moon, an electron antineutrino (the specific type of particle that is produced by a nuclear reactor) will bump into a proton and trigger a reaction in the liquid, which will result in two flashes of light less than a millisecond apart. “This little ‘coincidence’ will count as a reactor neutrino signal,” says particle physicist Juan Pedro Ochoa-Ricoux of the University of California, Irvine, who co-leads one of the two phototube systems for JUNO.

    As neutrinos arrive at the detector from the nuclear power plants several kilometers away, only about 30 percent of them will remain in their original identity. The rest will have switched to other flavors, according to Jun Cao, a deputy spokesperson for JUNO at the Institute of High Energy Physics (IHEP) at the Chinese Academy of Sciences, the project’s leading institution.

    The observatory will be able to measure this percentage with great precision.

    Once operational, JUNO expects to see roughly 60 such signals a day. To have a statistically convincing answer to the mass ordering question, however, scientists need 100,000 signals—which means the experiment must run for years to find it. In the meantime JUNO will detect and study neutrinos from other sources, including anywhere between 10 and 1,000 of the particles from the sun per day and a sudden influx of thousands of them if a supernova explodes at a certain distance from Earth.
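    Those two numbers fix the timescale by simple counting (a back-of-envelope sketch; the 60-signals-per-day and 100,000-signal figures are from the article, and detector downtime is ignored):

```python
# Back-of-envelope run time for JUNO's mass-ordering measurement
# (figures from the article; downtime and efficiency effects ignored).
signals_per_day = 60
signals_needed = 100_000

days = signals_needed / signals_per_day
years = days / 365.25

print(f"about {days:.0f} days, i.e. {years:.1f} years of data-taking")
# about 1667 days, i.e. 4.6 years of data-taking
```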

    JUNO can also catch the so-called geoneutrinos from below Earth’s surface, where radioactive elements such as uranium 238 and thorium 232 go through natural decay. So far studying geoneutrinos is the only effective way to learn how much chemical energy is left down there to drive our planet, says geologist William McDonough of the University of Maryland, who has been involved in the experiment since its early days. “JUNO is a game changer in this regard,” he says. Though all the existing detectors in Japan, Europe and Canada combined can see about 20 events per year, JUNO alone should detect more than 400 geoneutrinos annually.

    Right now the experiment is dealing with a flooding issue that has delayed the construction schedule by two years, says Yifang Wang, a JUNO spokesperson and director of IHEP. Engineers need to pump out 120,000 metric tons of underground water every day, but the water level has dropped significantly. It is not uncommon to run into flooding issues while building underground labs—an issue also experienced by the Sudbury Neutrino Observatory in Ontario.

    Sudbury Neutrino Observatory, no longer operating.

    Wang believes that the problem will be solved before construction is completed.

    JUNO should be up and running by late 2022 or early 2023, Wang says. Toward the end of this decade, it will be joined by DUNE and Hyper-K. Using accelerator-based neutrinos, DUNE will be able to measure the particle’s mass ordering with the greatest precision. It will also study a crucial parameter called CP violation, a measure of how differently neutrinos act from their antimatter counterparts. This measurement could reveal whether the tiny particles are part of the reason the majority of the universe is made of matter. “JUNO’s result on the neutrino mass ordering will help DUNE make the best possible discovery and measurement of CP violation,” Lykken says. JUNO, along with the other neutrino observatories in development, could also reveal something scientists have not predicted. The history of neutrino studies shows that these particles often behave unexpectedly, Lykken says. “I suspect that the combination of these experiments is going to produce surprises,” he adds.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

  • richardmitnick 8:26 am on September 23, 2020 Permalink | Reply
    Tags: "I focus on how electrons behave within solids.", "Think We Already Know Everything About Electrons? Think Again", , , , Emergent phenomena-in which groups of atoms or electrons act in unexpected ways., Groups of electrons collectively behave differently than we would expect from the way each individual electron acts on its own., Particle Physics, Scanning tunneling microscopy (STM), , Songtian Sonia Zhang, Superconductivity is an example of an emergent phenomenon within physics., , ,   

    From Simons Foundation: Women in STEM – Songtian Sonia Zhang, “Think We Already Know Everything About Electrons? Think Again” 

    From Simons Foundation

    Songtian Sonia Zhang. Credit: Rick Soden, Princeton University
    Physicist Songtian Sonia Zhang explores how electrons work within the tiniest objects and finds that they sometimes do unexpected things.

    September 22, 2020
    Marcus Banks

    Songtian Sonia Zhang envisioned a life in finance, until she discovered that learning how electrons work is much more rewarding. A fundamental physicist with a bachelor’s degree from the University of Waterloo in Ontario, Canada, and a doctorate from Princeton University, Zhang has already discovered unexpected behaviors among electrons found in quantum materials like superconductors or magnets. But many mysteries remain about the behavior of these tiny particles. Now beginning a postdoctoral appointment at Columbia University in the physics lab of Dmitri N. Basov, Zhang already has lots of ideas about what she wants to explore next. Our conversation has been edited for clarity.

    You began as a dual major in economics and physics at the University of Waterloo. What prompted the sharper focus on physics instead?

    When I was an undergraduate at Waterloo, I planned to pursue a career in finance, perhaps even on Wall Street. I was interested in physics too, but I never imagined becoming a physicist. That all changed after I completed a physics research project in my third year of college, which happened to overlap with my first job at a financial services firm. This gave me the chance to directly compare finance to physics work — and physics won out handily.

    When I was doing physics, I felt like I was helping to bring new understanding into the world. I know that sounds corny, but it’s true. And it was far more rewarding to me than my work at the financial firm, where I essentially was moving money around. Don’t get me wrong, we need money! But I knew early that physics was for me.

    That sounds clarifying! But the study of physics is broad. How did you narrow your interests?

    During my last semester at Waterloo I researched a special kind of magnet known as ‘spin ice,’ in which the atoms are arranged in a complex lattice pattern. Most magnets have a north and south pole. And if you cut a magnet in two, each new magnet will then also have a north and south pole. But spin ice magnets have such a complex structure that we call them geometrically frustrated. Spin ice magnets can behave like monopoles — that is, a magnet with only one pole instead of the normal two. We still don’t know if monopoles even exist! But the spin ice magnet I studied sure seemed like a monopole, which was fascinating to me. When I first began to study physics, I assumed I would become an astrophysicist. Instead I decided to be more down-to-earth — literally. Today I study the physics of solids, not stars.

    This sounds like quantum physics.

    In many ways, yes. But quantum physics is an extremely broad term that can apply to many things, so in some ways it’s too general. The specific field I work in is condensed matter physics. I focus on how electrons behave within solids. I’m particularly interested in how groups of electrons behave, and especially how their collective behavior cannot be predicted by how each individual in the group acts.

    You’re saying that groups of electrons collectively behave differently than we would expect from the way each individual electron acts on its own?

    Exactly. We call this overall concept ‘emergent phenomena,’ in which groups of atoms or electrons act in unexpected ways. There are many examples of emergent phenomena in nature that go well beyond quantum physics. Think of individual birds migrating together as a cohesive flock, or a school of fish swimming upriver to spawn. Even though each individual bird or fish moves independently, they become entangled in the group and impossible to distinguish from one another.

    Superconductivity is an example of an emergent phenomenon within physics. Regular electrical conductors carry a current known as electricity; this is how we light lightbulbs, for example, by connecting an electricity source to an object that emits light. These kinds of everyday conductors come with inherent inefficiencies — energy is always lost because the electricity faces resistance as it travels. This is why lightbulbs eventually burn out.

    In contrast, a superconductor operating at extremely low temperatures (about −450 °F) can keep conducting electricity forever, because the electricity meets no resistance at all. Nobody could have predicted that superconductors could do this. It had to be discovered through observation, and it’s an example of how we are constantly learning about new types of emergent phenomena. Sometimes this is purely about developing knowledge for its own sake, but oftentimes this work has practical applications.

    We’ll loop back to the practical applications in a moment. First, though, what was your most exciting discovery at Princeton?

    At Princeton I studied kagome magnets. The atoms that comprise these magnets are arranged in a lattice which evokes the famous Japanese basket lattice pattern of the same name.

    Scanning tunneling microscopy (STM) image of magnetic adatoms deposited on topological superconductor candidate PbTaSe2. Inset: a 2D enlarged view. Each magnetic adatom can host a Majorana zero mode acting as a topological qubit which has the potential to be used for robust quantum computation. Credit: Songtian Zhang.

    Like the spin ice magnets I previously mentioned, kagome magnets are geometrically frustrated. In our research, we did various things to these magnets, such as observing them within magnetic fields of various strengths or alternating their temperatures. This was all to see how the electrons behaved in different conditions. In high magnetic fields the kagome magnets started acting like negative magnets — meaning they exerted more energy when moving in the same direction as a magnetic field and not when going ‘against the wind,’ so to speak.

    We published these unexpected results in Nature Physics last year.

    The fact that we discovered something totally unexpected shows the importance of keeping an open mind, of not being locked into any one idea. Instead, we are always finding new questions to ask.

    As you begin your postdoctoral work at Columbia, what do you plan to focus on, at least initially, until you discover new questions?

    I’m interested in learning more about topological insulators: objects that, on the surface, conduct electricity but in their interior act as an insulator. When the material is cut, the new surface, which was previously the insulating bulk, becomes conductive and can now support surface currents. Besides topological insulators there are topological superconductors, which can superconduct currents of electricity. The physics community has made some headway in understanding these superconductors, but there’s a lot more work that needs to be done.

    And how do you hope this knowledge will inform our understanding of the natural and physical world?

    Topological superconductors come from the math concept of topology. A good way to think about topology is the relationship between a doughnut, with a hole in the middle, and a ball, which has no holes. In this comparison, the doughnut and the ball are topologically distinct.

    By comparison, the doughnut would be topologically identical to a ring, which also has a hole in the middle. In this example, the number of holes is a topological property that can’t be destroyed without changing the underlying nature of the object.

    In physics, we’re interested in electronic behaviors that are similarly robust, such as in topological superconductors. There’s great potential for topological superconductors to be used for powerful, reliable and robust quantum computation, which will be a giant leap past the computers we use today. I can’t say exactly how my work will contribute to this, but I do know I’m excited to be on the journey. And I know I will enjoy it more than working on Wall Street.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Mission and Model

    The Simons Foundation’s mission is to advance the frontiers of research in mathematics and the basic sciences.

    Co-founded in New York City by Jim and Marilyn Simons, the foundation exists to support basic — or discovery-driven — scientific research undertaken in the pursuit of understanding the phenomena of our world.

    The Simons Foundation’s support of science takes two forms: We support research by making grants to individual investigators and their projects through academic institutions, and, with the launch of the Flatiron Institute in 2016, we now conduct scientific research in-house, supporting teams of top computational scientists.

  • richardmitnick 12:16 pm on September 19, 2020 Permalink | Reply
    Tags: "Key Partners Mark Launch of Electron-Ion Collider Project", , , , , Particle Physics, The Electron-Ion Collider will be a 3D “microscope” for studying quarks and gluons which are the building blocks of protons; neutrons; and atomic nuclei—in other words all visible matter., Thomas Jefferson National Accelerator Facility   

    From Brookhaven National Laboratory: “Key Partners Mark Launch of Electron-Ion Collider Project” 

    From Brookhaven National Laboratory

    September 18, 2020

    Karen McNulty Walsh
    (631) 344-8350

    Peter Genzer
    (631) 344-3174

    Electron-Ion Collider (EIC) at BNL, inside the tunnel that currently houses the RHIC.

    State-of-the-art facility and partnership among the Department of Energy, New York State, Brookhaven National Laboratory, and Thomas Jefferson National Accelerator Facility will open a new frontier in nuclear physics, a field essential to our understanding of the visible universe with applications in national security, human health, and more.

    U.S. Department of Energy (DOE) Under Secretary for Science Paul Dabbar, leaders from DOE’s Brookhaven National Laboratory (Brookhaven Lab) and Thomas Jefferson National Accelerator Facility (Jefferson Lab), and elected officials from New York State and Virginia today commemorated the start of the Electron-Ion Collider project. The event was an opportunity for in-person and virtual speakers to voice their support for this one-of-a-kind nuclear physics research facility, which will be built at Brookhaven Lab by a worldwide collaboration of physicists over the next decade.

    The 2.4-mile-circumference particle collider will act as a high-precision sub-atomic “microscope” for exploring the innermost three-dimensional structures of protons and larger atomic nuclei. Experiments at the EIC will reveal how those particles’ fundamental building blocks (quarks and gluons) are arranged and how their interactions build up the mass of most of the visible matter in the universe, uncovering the secrets of the strongest force in nature. The journey into this new frontier in nuclear physics will attract the world’s best and brightest scientists, produce scientific and technological advances that extend to medicine and national security, and serve as a hub of innovation, collaboration, and STEM education for decades to come.

    “DOE scientists have been at the forefront of so many discoveries in nuclear physics,” said DOE Under Secretary for Science Paul Dabbar. “Thanks to the support of President Trump’s leadership, Congress, our Office of Science, and the State of New York, we’ve come together to create a one-of-a-kind research facility that is strengthened by collaboration with partners at Jefferson Lab and other national labs and institutions around the world. From the most basic components of matter to the farthest reaches of the cosmos to the next technologies that will drive the economy of the United States and the world, the DOE will continue this mission right here at Brookhaven and at our labs across the country.”

    With a proposed budget in the range of $1.6 billion to $2.6 billion from DOE’s Office of Science and $100 million from New York State, the project will draw on expertise from throughout the DOE complex and at universities and laboratories around the world. Physicists from Brookhaven Lab and Jefferson Lab will play leading roles.

    “As affirmed by the National Academy of Sciences, the EIC will maintain leadership in nuclear physics and accelerator science and technology with impacts on our technological, economic, and national security,” said Brookhaven Lab Director Doon Gibbs. “We are delighted to be partners with Thomas Jefferson National Accelerator Facility in designing, constructing, and operating the EIC. We will build upon strengths at both laboratories, but also reach out to other laboratories both in the U.S. and internationally.”

    “Jefferson Lab is proud to partner with Brookhaven Lab to bring this next-generation research facility to fruition,” said Jefferson Lab Director Stuart Henderson. “The EIC will enable a new era of scientific discovery that promises to answer some of the most fundamental, yet profound questions that we can ask about matter, such as: How does the mass and spin of protons and neutrons arise from their constituent pieces? How does the strongest force in the universe—the force that holds quarks together inside protons and neutrons—give rise to the properties of protons, neutrons and all visible matter? What does a proton or neutron ‘look like’ on the inside? In a sense, the EIC will allow us to complete our century-long adventure of figuring out what atoms are made of and how they work.”

    Replay of Electron-Ion Collider project launch event at Brookhaven Lab, September 18, 2020.

    Elected officials from both New York and Virginia, who provided critical support in moving the EIC project forward, took part in the event at Brookhaven Lab.

    “BNL has the talent, the technology, and the track-record to make the most of this national project,” said U.S. Senator Charles Schumer (NY). “The Lab is used to taking on big projects, critical research, and the most serious questions science can pose. This multi-billion-dollar federal investment on Long Island will guarantee Brookhaven National Lab continues to be a world-class research facility for the next generation.”

    “This cutting-edge project will inject billions of dollars and an extensive number of jobs into our communities all while churning out scores of scientific discoveries that help us understand the world around us, harness the untapped potential of the natural world and, from human health to our national security and beyond, benefit nearly every aspect of our lives,” said Congressman Lee Zeldin (NY). “Brookhaven National Lab has pioneered the future of clean and green energy, medical and cancer research, astrophysics, and far more, all while encouraging and cultivating the bright minds of future generations of researchers. Throughout this process, as co-chair of the National Labs Caucus and the Representative in the House for BNL, I’ve worked closely with Secretary Brouillette, Secretary Perry, Under Secretary Dabbar, and BNL leadership on this effort. It’s amazing to see this project become a reality right here on Long Island.”

    “COVID-19 has shown us how critically important it is to invest in our scientific infrastructure so we’re ready for future crises, and New York is already investing significant resources to make it a hub for scientific innovation and research,” said New York State Governor Andrew M. Cuomo. “The state’s $100 million investment in this project is part and parcel with that commitment, and this project is a win-win both for scientific development and the New York economy.”

    “Innovation is in New York’s DNA leading the way in scientific advancement and discovery,” said New York State Lieutenant Governor Kathy Hochul. “Long Island is driving a big part of our cutting-edge research, including at the Brookhaven National Laboratory. We’re proud that the federal government chose Long Island to house the world’s first electron ion collider, bringing a billion-dollar plus investment to the region. This will create thousands of new jobs, attract the best and the brightest minds, and spur millions in additional investment growing Brookhaven, Long Island and the entire state. We are committed to continuing to advance and support the scientific research and development on Long Island as we build back better and reimagine New York State for the post-pandemic future.”

    “Development of the EIC will help the U.S. maintain its global leadership in nuclear physics and answer outstanding questions about matter and the physical world,” said U.S. Senator Mark R. Warner (VA). “I am proud of the significant role that Jefferson Lab will play in the construction and operation of the EIC and look forward to the groundbreaking scientific discoveries that will occur as a result of this project.”

    “Brookhaven National Laboratory has put Long Island at the forefront of scientific innovation while helping our region create a Research Corridor and spur economic growth,” said Kevin S. Law, President & CEO of the Long Island Association. “The state-of-the-art Electron-Ion Collider will open up new opportunities for us to continue down that path.”

    “Two of the things that drew me to Stony Brook University are the impressive research accomplishments, and powerful partnerships like the ones we celebrate here today at the launch of the Electron-Ion Collider project,” said Stony Brook University President Maurie McInnis.

    “I am especially pleased that Stony Brook, as a partner in Brookhaven Science Associates, has been able to participate and contribute to the advancement of this work over many years. With our long history of leading research in nuclear and high-energy physics, we are proud that several of our faculty contributed to the conceptual design and scientific justification for the EIC, and I know that many are eager to participate in experiments that will be conducted here.”

    EIC science and other benefits

    The Electron-Ion Collider will be a 3D “microscope” for studying quarks and gluons, which are the building blocks of protons, neutrons, and atomic nuclei—in other words, all visible matter. Gluons are the subatomic particles that bind quarks into the more familiar particles that make up matter in today’s world.

    Collisions at the EIC will reveal how quarks and gluons interact to form these larger building blocks via the strong nuclear force. A deeper understanding of the strong force—which is 100 times more powerful than the electromagnetic force that governs today’s electronic technologies—may lead to insights and discoveries that power the technologies of tomorrow.

    The EIC will also give physicists the tool they need to fully explore the origin of proton spin. Proton spin, an intrinsic angular momentum somewhat analogous to the spin of a toy top, is exploited in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI), but scientists still don’t know how this property arises from the proton’s inner building blocks.
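As a concrete aside, the MRI connection comes from the fact that a proton’s spin precesses in a magnetic field at the Larmor frequency, f = γ̄·B, where γ̄ ≈ 42.577 MHz/T is the proton’s gyromagnetic ratio divided by 2π. A quick sketch (the field strengths are illustrative, chosen to match common scanners):

```python
# Proton spin in MRI: precession (Larmor) frequency f = gamma_bar * B.
# gamma_bar is the proton gyromagnetic ratio over 2*pi, ~42.577 MHz per tesla.
GAMMA_BAR_MHZ_PER_T = 42.577

def larmor_frequency_mhz(field_tesla):
    """Proton precession frequency (MHz) in a field of given strength (T)."""
    return GAMMA_BAR_MHZ_PER_T * field_tesla

for b in (1.5, 3.0, 7.0):  # typical clinical and research MRI field strengths
    print(f"{b:>4} T scanner -> {larmor_frequency_mhz(b):7.2f} MHz")
```

A 3 T clinical scanner, for example, tips and reads out proton spins at roughly 128 MHz.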

    The technological advances already under development to make the EIC a reality—e.g., innovative accelerator, particle-tracking, and data-management components—could have widespread impact on new approaches to cancer therapy, solving other “big data” challenges, and improving accelerator facilities for testing batteries, catalysts, and other energy-related materials. In addition, the collider-accelerator infrastructure that powers the EIC will be available to researchers who use particle beams to produce and conduct studies on medical isotopes and to study the effects of simulated space radiation with the aim of protecting future astronauts.

    All findings stemming from research at the EIC will be available through openly published research to scientists from other labs, academia, and industry seeking to learn from that knowledge and expand the limits of technology. The EIC science community is looking forward to sharing its success and discoveries with the nation and the world.

    See the full article here.



    Brookhaven Campus.

    BNL Center for Functional Nanomaterials.



    BNL RHIC Campus.

    BNL/RHIC Star Detector.

    BNL/RHIC Phenix.

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

  • richardmitnick 4:14 pm on September 18, 2020 Permalink | Reply
    Tags: Particle Physics, The comparison tests for tiny differences between matter and antimatter that could with even more computing power and other refinements point to physics phenomena not explained by the Standard Model., Theorists publish improved prediction for the tiny difference in kaon decays observed by experiments., This was an international collaboration of theoretical physicists—including scientists from Brookhaven National Laboratory and the RIKEN-BNL Research Center.

    From Brookhaven National Lab: “New Calculation Refines Comparison of Matter with Antimatter” 

    From Brookhaven National Lab

    September 17, 2020
    Karen McNulty Walsh,
    (631) 344-8350

    Peter Genzer,
    (631) 344-3174

    Theorists publish improved prediction for the tiny difference in kaon decays observed by experiments.

    A new calculation performed using the world’s fastest supercomputers allows scientists to more accurately predict the likelihood of two kaon decay pathways, and compare those predictions with experimental measurements. The comparison tests for tiny differences between matter and antimatter that could, with even more computing power and other refinements, point to physics phenomena not explained by the Standard Model.

    An international collaboration of theoretical physicists—including scientists from the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory (BNL) and the RIKEN-BNL Research Center (RBRC)—has published a new calculation relevant to the search for an explanation of the predominance of matter over antimatter in our universe. The collaboration, known as RBC-UKQCD, also includes scientists from CERN (the European particle physics laboratory), Columbia University, the University of Connecticut, the University of Edinburgh, the Massachusetts Institute of Technology, the University of Regensburg, and the University of Southampton. They describe their result in a paper to be published in the journal Physical Review D, where it has been highlighted as an “editor’s suggestion.”

    Scientists first observed a slight difference in the behavior of matter and antimatter—known as a violation of “CP symmetry”—while studying the decays of subatomic particles called kaons in a Nobel Prize-winning experiment at Brookhaven Lab in 1964. While the Standard Model of particle physics was pieced together soon after that, understanding whether the observed CP violation in kaon decays agreed with the Standard Model has proved elusive due to the complexity of the required calculations.

    Standard Model of Particle Physics from Symmetry Magazine.

    The new calculation gives a more accurate prediction for the likelihood with which kaons decay into a pair of electrically charged pions vs. a pair of neutral pions. Understanding these decays and comparing the prediction with more recent state-of-the-art experimental measurements made at CERN and DOE’s Fermi National Accelerator Laboratory gives scientists a way to test for tiny differences between matter and antimatter, and search for effects that cannot be explained by the Standard Model.

    The new calculation represents a significant improvement over the group’s previous result, published in Physical Review Letters in 2015. Based on the Standard Model, it gives a range of values for what is called “direct CP symmetry violation” in kaon decays that is consistent with the experimentally measured results. That means the observed CP violation is, to the best of our knowledge, explained by the Standard Model. Still, the uncertainty in the prediction must be reduced further, because a sharper calculation could yet reveal sources of matter/antimatter asymmetry lying beyond the current theory’s description of our world.

    “An even more accurate theoretical calculation of the Standard Model may yet lie outside of the experimentally measured range. It is therefore of great importance that we continue our progress, and refine our calculations, so that we can provide an even stronger test of our fundamental understanding,” said Brookhaven Lab theorist Amarjit Soni.

    Matter/antimatter imbalance

    “The need for a difference between matter and antimatter is built into the modern theory of the cosmos,” said Norman Christ of Columbia University. “Our current understanding is that the present universe was created with nearly equal amounts of matter and antimatter. Except for the tiny effects being studied here, matter and antimatter should be identical in every way, beyond conventional choices such as assigning negative charge to one particle and positive charge to its anti-particle. Some difference in how these two types of particles operate must have tipped the balance to favor matter over antimatter,” he said.

    “Any differences in matter and antimatter that have been observed to date are far too weak to explain the predominance of matter found in our current universe,” he continued. “Finding a significant discrepancy between an experimental observation and predictions based on the Standard Model would potentially point the way to new mechanisms of particle interactions that lie beyond our current understanding—and which we hope to find to help to explain this imbalance.”

    Modeling quark interactions

    All of the experiments that show a difference between matter and antimatter involve particles made of quarks, the subatomic building blocks that bind through the strong force to form protons, neutrons, and atomic nuclei—and also less-familiar particles like kaons and pions.

    “Each kaon and pion is made of a quark and an antiquark, surrounded by a cloud of virtual quark-antiquark pairs, and bound together by force carriers called gluons,” explained Christopher Kelly, of Brookhaven National Laboratory.

    The Standard Model-based calculations of how these particles behave must therefore include all the possible interactions of the quarks and gluons, as described by the modern theory of strong interactions, known as quantum chromodynamics (QCD).

    In addition, these bound particles move at close to the speed of light. That means the calculations must also include the principles of relativity and quantum theory, which govern such near-light-speed particle interactions.

    “Because of the huge number of variables involved, these are some of the most complicated calculations in all of physics,” noted Tianle Wang, of Columbia University.

    Computational challenge

    To conquer the challenge, the theorists used a computing approach called lattice QCD, which “places” the particles on a four-dimensional space-time lattice (three spatial dimensions plus time). This box-like lattice allows them to map out all the possible quantum paths for the initial kaon to decay to the final two pions. The result becomes more accurate as the number of lattice points increases. Wang noted that the “Feynman integral” for the calculation reported here involved integrating 67 million variables!
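The real calculation requires supercomputers, but the underlying strategy—discretize Euclidean time onto a lattice, then Monte Carlo sample the quantum paths rather than integrating over every variable directly—can be shown on a toy problem. The following illustrative Python code (not the collaboration’s, and for a single quantum harmonic oscillator rather than QCD, with all parameters chosen for illustration) applies Metropolis sampling on a one-dimensional time lattice:

```python
import math
import random

random.seed(1)

# Toy lattice "path integral": a quantum harmonic oscillator on N time slices.
# Euclidean action S = sum_i [ m*(x[i+1]-x[i])^2/(2a) + a*m*w^2*x[i]^2/2 ],
# with periodic boundary conditions and units where hbar = m = w = 1.
N, a = 64, 0.5
x = [0.0] * N

def action_change(i, new):
    """Change in S when site i is moved from x[i] to new."""
    left, right = x[(i - 1) % N], x[(i + 1) % N]
    def s(v):
        return ((right - v) ** 2 + (v - left) ** 2) / (2 * a) + a * v * v / 2
    return s(new) - s(x[i])

def sweep():
    """One Metropolis sweep: propose a local move at every lattice site."""
    for i in range(N):
        new = x[i] + random.uniform(-1.0, 1.0)
        d_s = action_change(i, new)
        if d_s <= 0 or random.random() < math.exp(-d_s):
            x[i] = new

for _ in range(500):      # thermalize the lattice
    sweep()

samples = []
for _ in range(4000):     # measure <x^2>; continuum value is 0.5
    sweep()
    samples.append(sum(v * v for v in x) / N)

x2 = sum(samples) / len(samples)
print(f"<x^2> ~ {x2:.3f}")   # close to 0.5, up to lattice-spacing effects
```

The lattice QCD case follows the same pattern, except the "sites" hold quark and gluon field values in four dimensions, and the sampling and measurement steps are what consumed the supercomputer time described below.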

    These complex calculations were done by using cutting-edge supercomputers. The first part of the work, generating samples or snapshots of the most likely quark and gluon fields, was performed on supercomputers located in the US, Japan, and the UK. The second and most complex step of extracting the actual kaon decay amplitudes was performed at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science user facility at DOE’s Lawrence Berkeley National Laboratory.

    But using the fastest computers is not enough: even on these machines, the calculations are only feasible with highly optimized computer code that the authors developed specifically for this work.

    “The precision of our results cannot be increased significantly by simply performing more calculations,” Kelly said. “Instead, in order to tighten our test of the Standard Model we must now overcome a number of more fundamental theoretical challenges. Our collaboration has already made significant strides in resolving these issues and coupled with improvements in computational techniques and the power of near-future DOE supercomputers, we expect to achieve much improved results within the next three to five years.”

    The authors of this paper are, in alphabetical order: Ryan Abbott (Columbia), Thomas Blum (UConn), Peter Boyle (BNL & U of Edinburgh), Mattia Bruno (CERN), Norman Christ (Columbia), Daniel Hoying (UConn), Chulwoo Jung (BNL), Christopher Kelly (BNL), Christoph Lehner (BNL & U of Regensburg), Robert Mawhinney (Columbia), David Murphy (MIT), Christopher Sachrajda (U of Southampton), Amarjit Soni (BNL), Masaaki Tomii (UConn), and Tianle Wang (Columbia).

    The majority of the measurements and analysis for this work were performed using the Cori supercomputer at NERSC, with additional contributions from the Hokusai machine at the Advanced Center for Computing and Communication at Japan’s RIKEN Laboratory and the IBM BlueGene/Q (BG/Q) installation at Brookhaven Lab (supported by the RIKEN BNL Research Center and Brookhaven Lab’s prime operating contract from DOE’s Office of Science).

    At NERSC at LBNL

    NERSC Cray Cori II supercomputer, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    NERSC is a DOE Office of Science User Facility.

    Riken HOKUSAI Big-Waterfall supercomputer built on the Fujitsu PRIMEHPC FX100 platform based on the SPARC64 processor.

    Additional supercomputing resources used to develop the lattice configurations included: the BG/Q installation at Brookhaven Lab, the Mira supercomputer at the Argonne Leadership Class Computing Facility (ALCF) at Argonne National Laboratory, Japan’s KEKSC 1540 computer, the UK Science and Technology Facilities Council DiRAC machine at the University of Edinburgh, and the National Center for Supercomputing Applications Blue Waters machine at the University of Illinois (funded by the U.S. National Science Foundation). NERSC and ALCF are DOE Office of Science user facilities. Individual researchers received support from various grants issued by the DOE Office of Science and other sources in the U.S. and abroad.

    BNL BGQ IBM BlueGene/Q (BG/Q) Linux supercomputer.

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility.

    DiRAC BlueGene/Q EPCC at The University of Edinburgh.

    NCSA U Illinois Urbana-Champaign Blue Waters Cray Linux XE/XK hybrid supercomputer at the National Center for Supercomputing Applications.

    See the full article here.



    BNL Campus.

    BNL Center for Functional Nanomaterials.



    BNL RHIC Campus

    BNL/RHIC Star Detector

    BNL Phenix Detector


  • richardmitnick 10:45 am on September 18, 2020 Permalink | Reply
    Tags: "How Mathematical ‘Hocus-Pocus’ Saved Particle Physics", By some measures field theories are the most successful theories in all of science., Particle Physics, Renormalization has become perhaps the single most important advance in theoretical physics in 50 years., Renormalization is “the mathematical version of a microscope.”

    From Quanta Magazine: “How Mathematical ‘Hocus-Pocus’ Saved Particle Physics” 

    From Quanta Magazine

    September 17, 2020
    Charlie Wood

    You don’t have to analyze individual water molecules to understand the behavior of droplets, or droplets to study a wave. This ability to shift focus across various scales is the essence of renormalization. Credit: Samuel Velasco/Quanta Magazine.

    Renormalization has become perhaps the single most important advance in theoretical physics in 50 years.

    Renormalization is a collection of techniques in quantum field theory, the statistical mechanics of fields, and the theory of self-similar geometric structures that are used to treat infinities arising in calculated quantities by altering the values of these quantities to compensate for the effects of their self-interactions. But even if no infinities arose in loop diagrams in quantum field theory, it could be shown that it would be necessary to renormalize the mass and fields appearing in the original Lagrangian.

    In the 1940s, trailblazing physicists stumbled upon the next layer of reality. Particles were out, and fields — expansive, undulating entities that fill space like an ocean — were in. One ripple in a field would be an electron, another a photon, and interactions between them seemed to explain all electromagnetic events.

    There was just one problem: The theory was glued together with hopes and prayers. Only by using a technique dubbed “renormalization,” which involved carefully concealing infinite quantities, could researchers sidestep bogus predictions. The process worked, but even those developing the theory suspected it might be a house of cards resting on a tortured mathematical trick.

    “It is what I would call a dippy process,” Richard Feynman later wrote. “Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent.”

    Justification came decades later from a seemingly unrelated branch of physics. Researchers studying magnetization discovered that renormalization wasn’t about infinities at all. Instead, it spoke to the universe’s separation into kingdoms of independent sizes, a perspective that guides many corners of physics today.

    Renormalization, writes David Tong, a theorist at the University of Cambridge, is “arguably the single most important advance in theoretical physics in the past 50 years.”

    A Tale of Two Charges

    By some measures, field theories are the most successful theories in all of science. The theory of quantum electrodynamics (QED), which forms one pillar of the Standard Model of particle physics, has made theoretical predictions that match up with experimental results to an accuracy of one part in a billion [Science].

    But in the 1930s and 1940s, the theory’s future was far from assured. Approximating the complex behavior of fields often gave nonsensical, infinite answers that made some theorists think field theories might be a dead end.

    Feynman and others sought whole new perspectives — perhaps even one that would return particles to center stage — but came back with a hack instead. The equations of QED made respectable predictions, they found, if patched with the inscrutable procedure of renormalization.

    The exercise goes something like this. When a QED calculation leads to an infinite sum, cut it short. Stuff the part that wants to become infinite into a coefficient — a fixed number — in front of the sum. Replace that coefficient with a finite measurement from the lab. Finally, let the newly tamed sum go back to infinity.

    To some, the prescription felt like a shell game. “This is just not sensible mathematics,” wrote Paul Dirac, a groundbreaking quantum theorist.

    The core of the problem — and a seed of its eventual solution — can be seen in how physicists dealt with the charge of the electron.

    In the scheme above, the electric charge comes from the coefficient — the value that swallows the infinity during the mathematical shuffling. To theorists puzzling over the physical meaning of renormalization, QED hinted that the electron had two charges: a theoretical charge, which was infinite, and the measured charge, which was not. Perhaps the core of the electron held infinite charge. But in practice, quantum field effects (which you might visualize as a virtual cloud of positive particles) cloaked the electron so that experimentalists measured only a modest net charge.

    Two physicists, Murray Gell-Mann and Francis Low, fleshed out this idea in 1954. They connected the two electron charges with one “effective” charge that varied with distance. The closer you get (and the more you penetrate the electron’s positive cloak), the more charge you see.
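In modern language, this distance- (or energy-) dependent charge is the “running” fine-structure constant. A minimal sketch using the standard one-loop formula with only the electron loop included (an illustrative textbook simplification, not the analysis of Gell-Mann and Low’s paper):

```python
import math

# One-loop QED running of the fine-structure constant (electron loop only):
# alpha_eff(Q^2) = alpha / (1 - (alpha / 3*pi) * ln(Q^2 / m_e^2)),  Q >> m_e.
ALPHA_0 = 1 / 137.036   # measured long-distance (low-energy) value
M_E = 0.000511          # electron mass in GeV

def alpha_eff(q_gev):
    """Effective fine-structure constant probed at energy scale Q (GeV)."""
    log = math.log(q_gev ** 2 / M_E ** 2)
    return ALPHA_0 / (1 - ALPHA_0 / (3 * math.pi) * log)

# The closer you probe (the higher the energy Q), the more charge you "see":
for q in (0.01, 1.0, 91.1876):   # 91.2 GeV is the Z-boson mass scale
    print(f"Q = {q:8.4f} GeV -> 1/alpha = {1 / alpha_eff(q):.2f}")
```

With only the electron loop, 1/α falls from about 137 at long distances to about 134.5 at the Z-boson scale; including all the Standard Model’s charged particles brings it to roughly 128, which is what collider experiments measure.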

    Their work was the first to link renormalization with the idea of scale. It hinted that quantum physicists had hit on the right answer to the wrong question. Rather than fretting about infinities, they should have focused on connecting tiny with huge.

    Renormalization is “the mathematical version of a microscope,” said Astrid Eichhorn, a physicist at the University of Southern Denmark who uses renormalization to search for theories of quantum gravity. “And conversely you can start with the microscopic system and zoom out. It’s a combination of a microscope and a telescope.”

    Magnets Save the Day

    A second clue emerged from the world of condensed matter, where physicists were puzzling over how a rough magnet model managed to nail the fine details of certain transformations. The Ising model consisted of little more than a grid of atomic arrows that could each point only up or down, yet it predicted the behaviors of real-life magnets with improbable perfection.

    At low temperatures, most atoms align, magnetizing the material. At high temperatures they grow disordered and the lattice demagnetizes. But at a critical transition point, islands of aligned atoms of all sizes coexist. Crucially, the ways in which certain quantities vary around this “critical point” appeared identical in the Ising model, in real magnets of varying materials, and even in unrelated systems such as a high-pressure transition where water becomes indistinguishable from steam. The discovery of this phenomenon, which theorists called universality, was as bizarre as finding that elephants and egrets move at precisely the same top speed.
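The behavior described above—alignment at low temperature, disorder at high temperature—can be reproduced in a few dozen lines of Metropolis Monte Carlo. This sketch (lattice size, temperatures, and sweep counts are illustrative choices) simulates a small 2D Ising grid:

```python
import math
import random

random.seed(2)
L = 16  # 16x16 grid of up/down "atomic arrows" (spins +1 / -1)

def sweep(spins, T):
    """One Metropolis sweep: attempt L*L single-spin flips at temperature T."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        # Energy change of flipping spin (i, j): dE = 2 * s * (sum of neighbors)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        d_e = 2 * spins[i][j] * nb
        if d_e <= 0 or random.random() < math.exp(-d_e / T):
            spins[i][j] *= -1

def mean_abs_magnetization(T, sweeps=1500):
    spins = [[1] * L for _ in range(L)]
    for _ in range(500):          # thermalize
        sweep(spins, T)
    m = 0.0
    for _ in range(sweeps):
        sweep(spins, T)
        m += abs(sum(map(sum, spins))) / (L * L)
    return m / sweeps

# The exact 2D critical temperature is T_c = 2 / ln(1 + sqrt(2)) ~ 2.269.
m_cold = mean_abs_magnetization(1.5)  # ordered phase: |m| near 1
m_hot = mean_abs_magnetization(5.0)   # disordered phase: |m| near 0
print(f"|m| at T=1.5: {m_cold:.2f}, at T=5.0: {m_hot:.2f}")
```

Running the same code at temperatures near T_c is where the interesting critical behavior—islands of aligned arrows at all sizes—shows up, though much larger lattices are needed to see it cleanly.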

    Physicists don’t usually deal with objects of different sizes at the same time. But the universal behavior around critical points forced them to reckon with all length scales at once.

    Leo Kadanoff, a condensed matter researcher, figured out how to do so in 1966. He developed a “block spin” technique, breaking an Ising grid too complex to tackle head-on into modest blocks with a few arrows per side. He calculated the average orientation of a group of arrows and replaced the whole block with that value. Repeating the process, he smoothed the lattice’s fine details, zooming out to grok the system’s overall behavior.
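Kadanoff’s block-spin step is simple enough to state as code: partition the grid into small blocks, replace each block with its majority orientation, and repeat. A sketch (the majority rule and breaking ties toward +1 are conventional choices, not prescribed by the article):

```python
import random

random.seed(3)

def block_spin(grid, b=2):
    """One Kadanoff coarse-graining step: replace each b x b block of
    +1/-1 spins with the majority sign (ties broken toward +1)."""
    n = len(grid)
    out = []
    for bi in range(0, n, b):
        row = []
        for bj in range(0, n, b):
            total = sum(grid[bi + di][bj + dj]
                        for di in range(b) for dj in range(b))
            row.append(1 if total >= 0 else -1)
        out.append(row)
    return out

# An 8x8 grid of random arrows zooms out to 4x4, then 2x2, then a single spin.
grid = [[random.choice([1, -1]) for _ in range(8)] for _ in range(8)]
while len(grid) > 1:
    grid = block_spin(grid)
    print(len(grid), "x", len(grid), "->", grid)
```

Each application of `block_spin` is one turn of the "zoom out" crank: fine detail is averaged away while the large-scale magnetization pattern survives.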

    Finally, Ken Wilson — a former graduate student of Gell-Mann with feet in the worlds of both particle physics and condensed matter — united the ideas of Gell-Mann and Low with those of Kadanoff. His “renormalization group,” which he first described in 1971 [Physical Review B], justified QED’s tortured calculations and supplied a ladder to climb the scales of universal systems. The work earned Wilson a Nobel Prize and changed physics forever.

    The best way to conceptualize Wilson’s renormalization group, said Paul Fendley, a condensed matter theorist at the University of Oxford, is as a “theory of theories” connecting the microscopic with the macroscopic.

    Consider the magnetic grid. At the microscopic level, it’s easy to write an equation linking two neighboring arrows. But taking that simple formula and extrapolating it to trillions of particles is effectively impossible. You’re thinking at the wrong scale.

    Wilson’s renormalization group describes a transformation from a theory of building blocks into a theory of structures. You start with a theory of small pieces, say the atoms in a billiard ball. Turn Wilson’s mathematical crank, and you get a related theory describing groups of those pieces — perhaps billiard ball molecules. As you keep cranking, you zoom out to increasingly larger groupings — clusters of billiard ball molecules, sectors of billiard balls, and so on. Eventually you’ll be able to calculate something interesting, such as the path of a whole billiard ball.

    This is the magic of the renormalization group: It helps identify which big-picture quantities are useful to measure and which convoluted microscopic details can be ignored. A surfer cares about wave heights, not the jostling of water molecules. Similarly, in subatomic physics, renormalization tells physicists when they can deal with a relatively simple proton as opposed to its tangle of interior quarks.

    Wilson’s renormalization group also suggested that the woes of Feynman and his contemporaries came from trying to understand the electron from infinitely close up. “We don’t expect [theories] to be valid down to arbitrarily small [distance] scales,” said James Fraser, a philosopher of physics at Durham University in the U.K. Mathematically cutting the sums short and shuffling the infinity around, physicists now understand, is the right way to do a calculation when your theory has a built-in minimum grid size. “The cutoff is absorbing our ignorance of what’s going on” at lower levels, said Fraser.

    In other words, QED and the Standard Model simply can’t say what the bare charge of the electron is from zero nanometers away. They are what physicists call “effective” theories. They work best over well-defined distance ranges. Finding out exactly what happens when particles get even cozier is a major goal of high-energy physics.

    From Big to Small

    Today, Feynman’s “dippy process” has become as ubiquitous in physics as calculus, and its mechanics reveal the reasons for some of the discipline’s greatest successes and its current challenges. During renormalization, complicated submicroscopic capers tend to just disappear. They may be real, but they don’t affect the big picture. “Simplicity is a virtue,” Fendley said. “There is a god in this.”

    That mathematical fact captures nature’s tendency to sort itself into essentially independent worlds. When engineers design a skyscraper, they ignore individual molecules in the steel. Chemists analyze molecular bonds but remain blissfully ignorant of quarks and gluons. The separation of phenomena by length, as quantified by the renormalization group, has allowed scientists to move gradually from big to small over the centuries, rather than cracking all scales at once.

    Yet at the same time, renormalization’s hostility to microscopic details works against the efforts of modern physicists who are hungry for signs of the next realm down. The separation of scales suggests they’ll need to dig deep to overcome nature’s fondness for concealing its finer points from curious giants like us.

    “Renormalization helps us simplify the problem,” said Nathan Seiberg, a theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey. But “it also hides what happens at short distances. You can’t have it both ways.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 6:16 pm on September 17, 2020 Permalink | Reply
    Tags: "A new way to search for dark matter reveals hidden materials properties", Particle Physics

    From Chalmers University of Technology SE: “A new way to search for dark matter reveals hidden materials properties” 

    From Chalmers University of Technology SE

    New research from Chalmers and ETH Zürich, Switzerland, suggests a promising way to detect elusive dark matter particles through previously unexplored atomic responses occurring in the detector material. The illustration above is a composite image (optical, x-ray, computed dark-matter) of mass distribution in the bullet cluster of galaxies.
    Image: Chandra X-Ray Observatory, NASA/CXC/M. Weiss/Wikimedia Commons

    NASA/Chandra X-ray Telescope

    New research from Chalmers, together with ETH Zürich, Switzerland, suggests a promising way to detect elusive dark matter particles through previously unexplored atomic responses occurring in the detector material.

    The new calculations enable theorists to make detailed predictions about the nature and strength of interactions between dark matter and electrons, which were not previously possible.

    “Our new research into these atomic responses reveals material properties that have until now remained hidden. They could not be investigated using any of the particles available to us today – only dark matter could reveal them,” says Riccardo Catena, Associate Professor at the Department of Physics at Chalmers.

    For every star, galaxy or dust cloud visible in space, there is five times more material that is invisible – dark matter. Discovering ways to detect these unknown particles, which form such a significant part of the Milky Way, is therefore a top priority in astroparticle physics. In the global search for dark matter, large detectors have been built deep underground to try to catch the particles as they bounce off atomic nuclei.

    So far, these mysterious particles have escaped detection. According to the Chalmers researchers, a possible explanation could be that dark matter particles are lighter than protons and therefore do not cause the nuclei to recoil – imagine a ping-pong ball colliding with a bowling ball. A promising way around this problem could be to shift focus from nuclei to electrons, which are much lighter.

    In their recent paper, the researchers describe how dark matter particles can interact with the electrons in atoms. They suggest that the rate at which dark matter can kick electrons out of atoms depends on four independent atomic responses – three of which were previously unidentified. They have calculated the ways that electrons in argon and xenon atoms, used in today’s largest detectors, should respond to dark matter.

    The results were recently published in the journal Physical Review Research. The work was performed within a new collaboration with condensed-matter physicist Nicola Spaldin and her group at ETH Zürich. The predictions can now be tested in dark matter observatories around the globe.

    “We tried to remove as many access barriers as possible. The paper is published in a fully open access journal and the scientific code to compute the new atomic response functions is open source, for anyone who wants to take a look ‘under the hood’ of our paper,” says Timon Emken, a postdoctoral researcher in the dark matter group at the Department of Physics at Chalmers.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Chalmers University of Technology (Swedish: Chalmers tekniska högskola, often shortened to Chalmers) is a Swedish university located in Gothenburg that focuses on research and education in technology, natural science, architecture, maritime and other management areas.

    The University was founded in 1829 following a donation by William Chalmers, a director of the Swedish East India Company. He donated part of his fortune for the establishment of an “industrial school”. Chalmers was run as a private institution until 1937, when the institute became a state-owned university. In 1994, the school was incorporated as an aktiebolag under the control of the Swedish Government, the faculty and the Student Union. Chalmers is one of only three universities in Sweden which are named after a person, the other two being Karolinska Institutet and Linnaeus University.

  • richardmitnick 3:31 pm on September 17, 2020 Permalink | Reply
    Tags: "Parking the LHC proton train", If something goes wrong we have to be able to get rid of this beam immediately from the accelerator and send it somewhere safe where it can’t do any damage., It’s an exciting multidisciplinary activity in which the boundary of engineering physics is put to the extreme., Particle Physics, Sometimes that’s because the circulating particles have lost too much energy to produce good collisions., The beam dump is a solid graphite cylinder 8 meters long and under a meter in diameter. It’s wrapped in a stainless-steel case filled with nitrogen gas and surrounded by iron and concrete shielding., The beam dumps provide an excellent mixture of physics and engineering challenges within a challenging radiation environment., The proton train must be parked outside the LHC about three times a day.

    From Symmetry: “Parking the LHC proton train” 

    Symmetry Mag
    From Symmetry

    Zack Savitsky

    Particle accelerators like the LHC require intricate beam dump systems to safely dispose of high-energy particles after each run.

    CERN/LHC Map

    SixTRack CERN LHC particles

    The Large Hadron Collider at CERN is the world’s most powerful particle accelerator. It hurtles hundreds of trillions of protons around a circular track at just under the speed of light.

    While each individual proton has the kinetic energy of a flying mosquito, the whole proton beam—a collection of 2500 bunches of particles—has as much energy as a 10-carriage railway train traveling at 200 mph.
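    The comparison checks out on the back of an envelope. The bunch count below is from the article; the protons-per-bunch figure and the 6.5 TeV per-proton energy are assumed nominal LHC Run 2 values, not numbers from the article.

```python
# Back-of-the-envelope check of the beam-energy comparison.
EV_TO_J = 1.602176634e-19      # joules per electronvolt

bunches = 2500                 # from the article
protons_per_bunch = 1.15e11    # assumed nominal bunch population
proton_energy_J = 6.5e12 * EV_TO_J   # 6.5 TeV per proton

beam_energy_J = bunches * protons_per_bunch * proton_energy_J
print(f"per proton: {proton_energy_J:.2e} J")     # ~1e-06 J, a mosquito in flight
print(f"whole beam: {beam_energy_J / 1e6:.0f} MJ")  # ~300 MJ
```

    Roughly 300 megajoules of stored energy is indeed comparable to a multi-hundred-tonne train at high speed, which is why disposing of the beam safely is taken so seriously.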

    Like an underground high-speed train, the energetic protons ride along the LHC’s 17-mile track about 100 meters below the surface of the Earth. At the end of each run, or when there are issues on the track, the proton train needs to be able to stop quickly and carefully.

    “If something goes wrong, we have to be able to get rid of this beam immediately from the accelerator and send it somewhere safe where it can’t do any damage,” says Brennan Goddard, leader of the beam transfer group at CERN.

    The proton train must be parked outside the LHC about three times a day. Sometimes that’s because the circulating particles have lost too much energy to produce good collisions. Other times it’s due to an electrical malfunction in the machine. For either case, scientists and engineers have designed a system that immediately diverts the beam to its own train station: the beam dump.

    “The LHC is often referred to as the most complex machine ever built,” says Alex Krainer, a doctoral student at CERN currently designing beam dumps for future accelerators. “The beam dump needs to be the most reliable system in the whole collider. We couldn’t dream of running the machine without it.”

    But how can scientists divert and park a train that is many miles long, the width of a dime, and contains enough stored energy to melt a ton of copper? Like any modern locomotive, it starts with a signal from a complex warning system.

    The LHC is outfitted with tens of thousands of sensors that are continually monitoring the size, spacing and trajectory of the proton beam. If the beam misbehaves, these sensors send an automated signal that triggers a set of magnets to kick the proton train onto a different track. Once the signal is received, the beam switches paths in under 90 microseconds—within one rotation around the LHC.
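    The "under 90 microseconds—within one rotation" figure follows directly from the ring's geometry, since the protons travel at essentially the speed of light:

```python
# One revolution of the ~26.7 km LHC ring at essentially the speed of light
# takes about 89 microseconds, so a beam abort completing in under
# 90 microseconds fits within a single turn.
C = 299_792_458.0            # speed of light, m/s
CIRCUMFERENCE_M = 26_659.0   # LHC ring circumference (the "17-mile track")

revolution_us = CIRCUMFERENCE_M / C * 1e6
print(f"one turn: {revolution_us:.1f} microseconds")  # ~88.9
```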

    On this new track, the proton train is stripped into its constituent carriages, or bunches, which spread out as they enter the beam dump—diluting the energy density that could otherwise damage it.

    The beam dump is a solid graphite cylinder 8 meters long and under a meter in diameter. It’s wrapped in a stainless-steel case, filled with nitrogen gas, and surrounded by iron and concrete shielding.

    It’s made mostly of low-density graphite “with a sandwich of higher-density materials at the end,” says Marco Calviani, leader of the targets, collimators and dumps group at CERN. “If we used only graphite, you’d still have a lot of uncollided protons passing through. And if you put the higher density material toward the front, the block would melt.”

    While dumping the beam, particle collisions cook the cylinder to over 1000 degrees Celsius and produce some new, harmless particles that pass through the block and quickly decay. Most of the proton bunches slow down while traveling through the layers of graphite and safely park in their own spots, distributing the energy of the proton train across the beam dump.
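    A rough averaged estimate shows why diluting the bunches matters. All of the material values and dimensions below are illustrative assumptions for the sketch, not figures from the article:

```python
import math

# ~300 MJ of beam energy deposited into an 8 m long, ~0.7 m diameter
# graphite cylinder (assumed dimensions and material properties).
beam_energy_J = 3.0e8
length_m, radius_m = 8.0, 0.35
density = 1700.0        # kg/m^3, assumed graphite density
specific_heat = 710.0   # J/(kg*K), graphite near room temperature

mass_kg = density * math.pi * radius_m**2 * length_m
delta_T_avg = beam_energy_J / (mass_kg * specific_heat)
print(f"average temperature rise if spread evenly: ~{delta_T_avg:.0f} K")
```

    Spread perfectly evenly, the beam would warm the block by only tens of kelvin; the peaks above 1000 degrees Celsius arise because the energy deposition concentrates along the beam path, which is exactly what the dilution kickers are there to mitigate.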

    This solves the problem of overburdening the beam dump. But a different problem arises when the rapid heating and cooling from beam collisions cause the dump to physically move.

    “In recent examination, we found that the dump has actually jumped several centimeters from the regular thermal expansion and contraction,” says Simone Gilardoni, group leader of the Sources, Targets and Interactions group at CERN.

    If the dump gets pushed too far one way, it’ll pull on the pipes connected to it. If it shuffles too far the other, it’ll hit an iron wall. There’s also the issue of wear and tear—the present block is 10 years old.

    The beam dump repair team must attend to melting, moving and bruising concerns creatively, since constant high-energy collisions create radioactive elements. Scientists at the lab are using remote-controlled robots to swap out the main absorber with an upgraded spare and implement a detached cradle for the dump, which allows it to swing back and forth to dampen the harsh movement.

    Such care is necessary to keep the experiment safe and functional. As the LHC crew prepares for its high-luminosity upgrade, which will more than double the intensity of the beam, scientists are working to reinforce the already intricate system. They plan to add more magnetic kickers to handle the beam before it hits the dump. US involvement in the HL-LHC upgrade is supported by the US Department of Energy’s Office of Science and the National Science Foundation.

    “The beam dumps provide an excellent mixture of physics and engineering challenges within a challenging radiation environment,” says Calviani. “It’s an exciting multidisciplinary activity in which the boundary of engineering physics is put to the extreme.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.
