Tagged: particlebites

  • richardmitnick 2:52 pm on February 4, 2023 Permalink | Reply
    Tags: "A world with no weak forces", particlebites

    From particlebites: “A world with no weak forces” 


    From particlebites

    2.4.23
    Nirmal Raj

    Gravity, electromagnetism, strong, and weak — these are the beating hearts of the universe, the four fundamental forces. But do we really need the last one for us to exist?

    Harnik, Kribs and Perez went about building a world without weak interactions and showed that, indeed, life as we know it could emerge there. This was a counterexample to a famous anthropic argument by Agrawal, Barr, Donoghue and Seckel for the puzzling tininess of the weak scale, i.e. the electroweak hierarchy problem.

    Figure: Summary of the argument in hep-ph/9707380 that a tiny Higgs mass (in Planck mass units) is necessary for life to develop.

    Let’s ask first: would the Sun be there in a weakless universe? Sunshine is the product of proton fusion, and that’s the strong force. However, the reaction chain is ignited by the weak force!

    Image credit: Eric G. Blackman.

    So would no stars shine in a weakless world? Amazingly, there’s another route to trigger stellar burning: deuteron-proton fusion via the strong force! In our world, gas clouds collapsing into stars do not take this option because deuterons are very rare, with protons outnumbering them by about 50,000 to one. But we need not carry this, er, weakness into our gedanken universe. We can tune the baryon-to-photon ratio — whose origin is unknown — so that we end up with roughly as many deuterons as protons from the primordial synthesis of nuclei. Harnik et al. go on to show that, as in our universe, elements up to iron can be cooked in weakless stars, that they live for billions of years, and may explode in supernovae that disperse heavy elements into the interstellar medium.
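    Schematically (standard textbook reactions, not taken from the paper), the two ignition routes are:

    p + p \to d + e^+ + \nu_e   (weak: one proton must convert into a neutron)
    p + d \to {}^3\text{He} + \gamma   (no weak interaction required)

    In our universe the first, weak step is the bottleneck that lights the Sun; in a weakless universe, a deuteron-rich primordial mix lets stars skip straight to the second reaction.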

    Figure: from hep-ph/0604027.

    A “weakless” universe is arranged by elevating the electroweak scale or the Higgs vacuum expectation value (≈ 246 GeV) to, say, the Planck scale (≈ 10^{19} GeV). To get the desired nucleosynthesis, care must be taken to keep the u, d, s quarks and the electron at their usual masses by tuning the Yukawa couplings, which are technically natural.
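    To make the tuning concrete, here is a minimal numerical sketch (my own illustration, not a calculation from the paper) using the tree-level relation m_f = y_f v / \sqrt{2}: if the vacuum expectation value v is raised to the Planck scale, each Yukawa coupling must shrink by the ratio of the two scales to keep the fermion mass fixed.

    ```python
    import math

    # Tree-level Standard Model relation: m_f = y_f * v / sqrt(2)
    v_sm = 246.0          # GeV, electroweak vacuum expectation value
    v_weakless = 1.0e19   # GeV, roughly the Planck scale chosen for the weakless universe

    masses_gev = {        # approximate fermion masses we want to keep unchanged
        "electron":   0.000511,
        "up quark":   0.0022,
        "down quark": 0.0047,
    }

    for name, m in masses_gev.items():
        y_ours = math.sqrt(2) * m / v_sm            # Yukawa coupling in our universe
        y_weakless = math.sqrt(2) * m / v_weakless  # Yukawa needed in the weakless universe
        print(f"{name:10s}  y(ours) = {y_ours:.2e}   y(weakless) = {y_weakless:.2e}")
    ```

    Couplings this tiny are what "technically natural" refers to: setting a Yukawa to zero restores a symmetry of the theory, so its smallness is stable against quantum corrections.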

    And let’s not forget dark matter. To make stars, one needs galaxy-like structures. And to make those, density perturbations must be gravitationally condensed by a large population of matter. In the weakless world of Harnik et al., hyperons make up some of the dark matter, but you would also need plenty of other dark stuff, such as your favourite non-WIMP.

    If you believe in the string landscape, a weakless world isn’t just a hypothetical. Someone somewhere might be speculating about a habitable universe with a fourth fundamental force, explaining to their bemused colleagues: “It’s kinda like the strong force, only weak…”


    Bibliography

    Viable range of the mass scale of the standard model
    V. Agrawal, S. M. Barr, J. F. Donoghue, D. Seckel, Phys. Rev. D 57 (1998) 5480-5492 [hep-ph/9707380].

    A Universe without weak interactions
    R. Harnik, G. D. Kribs, G. Perez, Phys. Rev. D 74 (2006) 035006 [hep-ph/0604027].

    Further reading

    Gedanken Worlds without Higgs: QCD-Induced Electroweak Symmetry Breaking
    C. Quigg, R. Shrock, Phys.Rev.D 79 (2009) 096002

    The Multiverse and Particle Physics
    J. F. Donoghue, Ann.Rev.Nucl.Part.Sci. 66 (2016)

    The eighteen arbitrary parameters of the standard model in your everyday life
    R. N. Cahn, Rev. Mod. Phys. 68, 951 (1996)

    See the full article here.

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    What is ParticleBites?
    ParticleBites is an online particle physics journal club written by graduate students and postdocs. Each post presents an interesting paper in a brief format that is accessible to undergraduate students in the physical sciences who are interested in active research.

    The papers are accessible on the arXiv preprint server. Most of our posts are based on papers from hep-ph (high energy phenomenology) and hep-ex (high energy experiment).

    Why read ParticleBites?

    Reading a technical paper from an unfamiliar subfield is intimidating. It may not be obvious how the techniques used by the researchers really work or what role the new research plays in answering the bigger questions motivating that field, not to mention the obscure jargon! For most people, it takes years for scientific papers to become meaningful.

    Our goal is to solve this problem, one paper at a time. With each brief ParticleBite, you should not only learn about one interesting piece of current work, but also get a peek at the broader picture of research in particle physics.

    Who writes ParticleBites?

    ParticleBites is written and edited by graduate students and postdocs working in high energy physics. Feel free to contact us if you’re interested in applying to write for ParticleBites.

    ParticleBites was founded in 2013 by Flip Tanedo following the Communicating Science (ComSciCon) 2013 workshop.

    Photo: Flip Tanedo, UCI Chancellor’s ADVANCE postdoctoral scholar in theoretical physics. As of July 2016, I will be an assistant professor of physics at the University of California, Riverside.

    It is now organized and directed by Flip and Julia Gonski, with ongoing guidance from Nathan Sanders.

     
  • richardmitnick 7:04 pm on December 31, 2022 Permalink | Reply
    Tags: "What’s Next for Theoretical Particle Physics?", A future collider will also determine the research programs and careers of many future students and professors., A new horizon has formed with the launching of ultra-precise telescopes and gravitational wave detectors: probing the universe via cosmological data., A section on the intersections of cosmology and particle physics would not be complete without mention of everyone’s favorite mystery: dark matter., As new telescopes and gravitational wave observatories are slated to come online within the next decade expect this prolific field to continue to deliver alluring prospects for physics beyond the SM., As the promise of SUSY is so far unfilled the space of possible models is expansive and as anomalies pop up and disappear in our experiments the field is yearning for a source of new data., , , , Despite attention-grabbing headlines that herald the “death” of particle physics there remains an abundance of questions ripe for exploration., Eight new particles had been confirmed in the past two decades alone., How can we make the case for the importance of fundamental physics research in an increasingly uncertain landscape?, In the past few years interest has grown for a different kind of lepton collider: a muon collider., Investigating why neutrinos have mass — and where that mass comes from — is a central question in particle physics., Making theoretical predictions in the modern era has become incredibly computationally expensive., One looming question has formed an undercurrent through the entirety of the Snowmass process: What’s next after the LHC? ILC? CLIC? FCC? Chinese collider?, , Particle theory is undergoing a paradigm shift., particlebites, , Progress in theory remains more accessible than in experiment — the number of possible models we’ve developed far outpaces the detectors we are able to build to investigate them., Quantum field theory (QFT) is the common language of particle physics., Snapshots: only a few of the myriad subtopics within particle theory. Other notable ones include string theory; quantum information science; lattice gauge theory and effective field theory., Supersymmetry (SUSY) was — as the lore goes — just around the corner., The challenge over the past decade has received a significant helping hand from the advent of machine learning — deep learning in particular., The depth and breadth of particle physics knowledge acquired in the last half-century is remarkable., The Large Hadron Collider (LHC) and other accelerators produce over 100 terabytes of data per day while running., The quark sector especially will benefit from the growing adoption of machine learning in particle physics., The researchers at the Kavli Institute for Theoretical Physics (KITP) at UC Santa Barbara are here to map out the “Theory Frontier” of the Snowmass process., The unequivocal evidence for neutrino mixing was the crown prize of the last few decades of neutrino physics research., This past summer thousands of particle physicists across the frontiers convened in Seattle to share and debate and ponder questions and new directions.   

    From particlebites: “What’s Next for Theoretical Particle Physics?” 


    From particlebites

    12.31.22
    Amara McCune

    2022 saw the pandemic-delayed Snowmass process confront the past, present, and future of particle physics. As the last papers trickle in for the year, we review Snowmass’s major developments and takeaways for particle theory.

    Figure: A team of scientists wanders through the landscape of questions. Generated by DALL·E 2.

    It’s February 2022, and I am in an auditorium next to the beach in sunny Santa Barbara, listening to particle theory experts discuss their specialty. Each talk begins with roughly the same starting point: the Standard Model (SM) is incomplete.

    We know it is incomplete because, while its predictive capability is astonishingly impressive, it does not address a multitude of puzzles. These are the questions most familiar to any reader of popular physics: What is dark matter? What is dark energy? How can gravity be incorporated into the SM, which describes only 3 of the 4 known fundamental forces? How can we understand the origin of the SM’s structure — the values of its parameters, the hierarchy of its scales, and its “unnatural” constants that are calculated to be mysteriously small or far too large to be compatible with observation? 

    This compilation of questions is the reason that I, and all others in the room, are here. In the 80s, the business of particle discovery was booming. Eight new particles had been confirmed in the past two decades alone, cosmology was pivoting toward the recently developed inflationary paradigm, and supersymmetry (SUSY) was — as the lore goes — just around the corner. This flourish of progress and activity in the field had become too extensive for any collaboration or laboratory to address on its own. Meanwhile, links between theoretical developments, experimental proposals, and the flurry of results ballooned. The transition from the solitary 18th century tinkerer to the CERN summer student running an esoteric simulation for a research group was now complete: particle physics, as a field and community, had emerged. 

    It was only natural that the field sought a collective vision, or at minimum a notion of promising directions to pursue. In 1982, the American Physical Society’s Division of Particles and Fields organized Snowmass, a conference of a mere hundred participants that took place in a single room on a mountain in its namesake town of Snowmass, Colorado. Now, too large to be contained by its original location (although enthusiasm for organizing physics meetings at prominent ski locations abounds), Snowmass is both a conference and a multi-year process. 

    The depth and breadth of particle physics knowledge acquired in the last half-century is remarkable, yet a snapshot of the field today appears starkly different. The Higgs boson just celebrated its tenth “discovery birthday”.
    While the completion of the Standard Model (SM) as we know it is no doubt a momentous achievement, no new fundamental particles have been found since, despite overwhelming evidence of the inevitability of new physics. Supersymmetry may still prove to be just around the corner at a next-generation collider…or orders of magnitude beyond our current experimental reach. Despite attention-grabbing headlines that herald the “death” of particle physics, there remains an abundance of questions ripe for exploration. 

    In light of this shift, the field is up against a challenge: how do we reconcile the disappointments of supersymmetry? Moreover, how can we make the case for the importance of fundamental physics research in an increasingly uncertain landscape?

    The researchers are here at the Kavli Institute for Theoretical Physics (KITP) at UC Santa Barbara to map out the “Theory Frontier” of the Snowmass process. The “frontiers” — subsections of the field, each focusing on a different approach to particle physics — have met over the past year to weave their own story of the last decade’s progress, its burgeoning questions, and promising future trajectories. This past summer, thousands of particle physicists across the frontiers convened in Seattle, Washington to share, debate, and ponder questions and new directions. Now, these frontiers are collating their stories into an anthology. 

    Below are a few (theory) focal points in this astoundingly expansive picture.

    Scattering Amplitudes

    Figure: A 4-point amplitude can be constructed from two 3-point amplitudes. By Henriette Elvang.

    Quantum field theory (QFT) is the common language of particle physics. QFT provides a description of a particle system based on two fundamental tools: the Lagrangian and the path integral, which can both be wrapped up in the familiar diagrams of Richard Feynman. This approach, utilizing a visualization of incoming and outgoing scattering or decaying particles, has provided relief to many Ph.D. students over the past few generations due to its comparative ease of use. The diagrams are roughly divided into three parts: propagators (which tell us about the motion of a free particle), vertices (in which three or more particles interact), and loops (in which the trajectories of particles form a closed path). They contain both real, external particles (the incoming and outgoing states), which are known as on-shell, as well as virtual, intermediate particles that cannot be measured, which are known as off-shell. To calculate a scattering amplitude — the probability of one or more particles interacting to form some specified final state — in this paradigm requires summing over all possibilities of what these virtual particles may be. This can prove not only cumbersome, but can also result in redundancies in our calculations.

    Particle theory is undergoing a paradigm shift. If we instead focus on the physical observable itself, the scattering amplitude, we can build more complicated diagrams from simpler ones in a recursive fashion. For example, we can imagine creating a 4-particle amplitude by gluing together two 3-particle amplitudes, as shown above. The process bypasses the intermediate, virtual particles and focuses only on computing on-shell states. This is not only a nice feature, but it can significantly reduce the problem at hand: calculating the scattering amplitude of 8 gluons with the Feynman approach requires computing more than a million Feynman diagrams, whereas the amplitudes method reduces the problem to a mere half-line. 
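    The “half-line” alluded to above is, for the simplest helicity configuration, the Parke-Taylor formula; it is quoted here as a standard illustration rather than anything specific to the Snowmass reports. Up to coupling factors and an overall momentum-conserving delta function, the tree-level amplitude for n gluons in which exactly two (i and j) carry negative helicity is, for a single color ordering,

    A_n^{\text{MHV}}(1^+ \cdots i^- \cdots j^- \cdots n^+) \;\propto\; \frac{\langle i\,j\rangle^4}{\langle 1\,2\rangle \langle 2\,3\rangle \cdots \langle n\,1\rangle},

    written in terms of spinor-helicity brackets: one compact expression standing in for the enormous pile of Feynman diagrams mentioned above.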

    In recent years, this program has seen renewed efforts, not only for its practical simplicity but also for its insights into the underlying concepts that shape particle interactions. The Lagrangian formalism organizes a theory based on the fields it contains and the symmetries those fields obey, with the rule of thumb that any term respecting the theory’s symmetries can be included in the Lagrangian. Further, these terms satisfy several general principles: unitarity (the probabilities of all possible outcomes of a process add to one, and this is preserved under time evolution), causality (an effect originates only from a cause that is contained in the effect’s backward light cone), and locality (observables localized in spacelike-separated regions of spacetime cannot affect one another). These are all reasonable axioms, but they must be explicitly baked into a theory that is represented in the Lagrangian formalism. Scattering amplitudes, in contrast, can reveal these principles without prior assumptions, signaling the unveiling of a more fundamental structure.

    Recent research surrounding amplitudes concerns both diving deeper into this structure, as well as applying the results of the amplitudes program toward precision theory predictions. The past decade has seen a flurry of results from an idea known as bootstrapping, which takes the relationship between physics and mathematics and flips it on its head.

    QFTs are typically built “bottom-up” by including terms in the Lagrangian based on which fields are present and which symmetries they obey. The bootstrapping methodology instead asks what the observable quantities are that result from a theory, and considers which underlying properties they must obey in order to be mathematically consistent. This process of elimination rules out a large swath of possibilities, significantly constraining the system and allowing us to, in some cases, guess our way to the answer. 

    This rich research program has plenty of directions to pursue. We can compute the scattering amplitudes of multi-loop diagrams in order to arrive at extremely precise SM predictions. We can probe their structure in the classical regime with the gravitational waves resulting from inspiraling stellar and black hole binaries. We can apply them to less understood regimes; for example, cosmological scattering amplitudes pose a unique challenge because they proceed in curved, rather than flat, space. Are there curved space analogues to the flat space amplitude structures? If so, what are they? What can we compute with them? Amplitudes are pushing forward our notion of what a QFT is. With them, we may be able to uncover the more fundamental frameworks that must underlie particle physics.

    Computational Advances

    Figure: The overlap between machine learning, deep learning, artificial intelligence, and physics. By Jesse Thaler.

    Making theoretical predictions in the modern era has become incredibly computationally expensive. The Large Hadron Collider (LHC) and other accelerators produce over 100 terabytes of data per day while running, requiring not only intensive data filtering systems, but efficient computational methods to categorize and search for the signatures of particle collisions. Performing calculations in the quark sector — which relies on lattice gauge theory, in which spacetime is broken down into a discrete grid — also requires vast computational resources. And as simulations in astrophysics and cosmology balloon, so too does the supercomputing power needed to handle them. 

    This challenge over the past decade has received a significant helping hand from the advent of machine learning — deep learning in particular. On the collider front, these techniques have been applied to the detection of anomalies — a deviation in the data from the SM “background” that may signal new physics — as well as the analysis of jets. These protocols can be trained on previously analyzed collider data and synthetic data to establish benchmarks and push computational efficiency much further. As the LHC enters its third operational run, it will be particularly focused on precision measurements as the increasing quantity of data allows for higher statistical certainty in our results. The growing list of anomalies — including the W mass measurement and the muon g-2 anomaly — will confront these increased statistics, allowing for possible confirmation or rejection of previous results. Our analyses have also grown more sophisticated; the collimated sprays of quarks and gluons known as jets, which result from collisions of hadrons, have proved to reveal substructure that opens up another avenue for comparison of data with an SM theory prediction. 
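    As a schematic of the anomaly-detection idea (a toy of my own construction, far simpler than the deep-learning pipelines actually used at the LHC): characterize what “background-like” events look like, then flag events that score poorly against that reference.

    ```python
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Toy "events": each row holds a few kinematic features (think jet masses, momenta).
    # Background events cluster around typical Standard-Model-like values...
    background = rng.normal(loc=[80.0, 40.0, 0.0], scale=[10.0, 8.0, 1.0], size=(5000, 3))
    # ...while a handful of hypothetical "signal" events sit in a different region.
    signal = rng.normal(loc=[150.0, 90.0, 3.0], scale=[5.0, 5.0, 0.5], size=(20, 3))

    # Train the detector on background only, then score a mixed sample.
    detector = IsolationForest(random_state=0).fit(background)
    sample = np.vstack([background[:1000], signal])
    scores = detector.score_samples(sample)     # lower score = more anomalous

    # Flag the lowest-scoring few percent of events for further inspection.
    threshold = np.quantile(scores, 0.02)
    flagged = np.where(scores < threshold)[0]
    print(f"Flagged {len(flagged)} of {len(sample)} events")
    print("Flagged indices >= 1000 are injected signal:", flagged[flagged >= 1000])
    ```

    Real analyses use far richer inputs and architectures, but the logic of learning a background reference and hunting for outliers is the same.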

    The quark sector especially will benefit from the growing adoption of machine learning in particle physics. Analytical calculations in this sector are intractable due to strong coupling, so in practice calculations are built upon the use of lattice gauge theory. Increasingly precise calculations are dependent upon making this grid smaller and smaller and including more and more particles. 

    As physics continually benefits from the rapid development of machine learning and artificial intelligence, the field is up against a unique challenge. Machine learning algorithms can often be applied blindly, resulting in misunderstood outputs via a black box. The key in utilizing these techniques effectively is in asking the right questions, understanding what questions we are asking, and translating the physics appropriately to a machine learning context. This has its practical uses — in confidently identifying tasks for which automation is appropriate — but also opens up the possibility to formulate theoretical particle physics in a computational language. As we look toward the future, we can dream of the possibilities that could result from such a language: to what extent can we train a machine to learn physics itself? There is much work to be done before such questions can be answered, but the prospects are exciting nonetheless.

    Cosmological Approaches

    Figure: Shapes of possible non-Gaussian correlations in the distribution of galaxies. By Dan Green.

    As the promise of SUSY is so far unfulfilled, the space of possible models is expansive, and anomalies pop up and disappear in our experiments, the field is yearning for a source of new data. While colliders have fueled significant progress in the past decades, a new horizon has formed with the launching of ultra-precise telescopes and gravitational wave detectors: probing the universe via cosmological data. 

    The use of observations of astrophysical and cosmological sources to tell us about particle physics is not new — we’ve long hunted for supernovae and mapped the cosmic microwave background (CMB) — but nascent theory developments hold incredible potential for discovery. Since 2015, observations of gravitational waves have guided insights into stellar and black hole binaries, with an eye toward a detection of a stochastic gravitational wave background originating from the period of exponential expansion known as inflation, which occurred shortly after the big bang. The observation of black hole binaries in particular can provide valuable insights into the workings of gravity at the smallest of scales, when it enters the realm of quantum mechanics. The possibility of a stochastic gravitational wave background raises the promise of “seeing” the workings of the universe at earlier stages in its history than we’ve ever before been able to access, potentially even to the start of the universe itself. 

    Inflation also lends itself to other applications within particle physics. In accordance with the uncertainty principle, quantum fields at the beginning of the universe undergo tiny, statistical fluctuations. These initial spacetime curvature perturbations beget density perturbations in the distribution of matter, which beget the temperature fluctuations visible in the cosmic microwave background. These fluctuations are observed to be distributed according to a Gaussian normal distribution, as of the latest CMB datasets. But tiny, primordial non-Gaussianities — processes that lead to a correlated pattern of fluctuations in the CMB and other datasets — are predicted for certain particle interactions during inflation. In particular, if particles interacting with the fields responsible for inflation acquire heavy masses during inflation, they could imprint a distinct, oscillating signature within these datasets. This would show up in our observables, such as the large-scale distribution of galaxies shown above, in the form of triangular (or higher-point polygonal) shapes signaling a non-Gaussian correlation. Currently, our probes of these non-Gaussianities are not precise enough to unveil such signatures, but planned and upcoming experiments may establish this new window into the workings of the early universe. 
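    To put the “triangular shapes” into an equation (standard definitions, not tied to any particular experiment): a perfectly Gaussian field is fully characterized by its two-point correlations, so the leading signature of non-Gaussianity is a non-zero three-point function of the primordial perturbation \zeta, the bispectrum,

    \langle \zeta_{\mathbf{k}_1} \zeta_{\mathbf{k}_2} \zeta_{\mathbf{k}_3} \rangle \;=\; (2\pi)^3 \, \delta^3(\mathbf{k}_1 + \mathbf{k}_2 + \mathbf{k}_3) \, B(k_1, k_2, k_3),

    where the delta function forces the three wavevectors to close into a triangle, and the dependence of B on the triangle’s shape encodes which fields were active during inflation.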

    Finally, a section on the intersections of cosmology and particle physics would not be complete without mention of everyone’s favorite mystery: dark matter. A decade ago, the prime candidate for dark matter was the WIMP — the Weakly Interacting Massive Particle. This model was fairly simple, able to account for the 25% dark matter content of the universe we observe today while remaining in harmony with all other known cosmology. However, we’ve now probed a large swath of possible masses and cross-sections for the WIMP and come up short. The field’s focus has shifted to a different candidate for dark matter, the axion, which addresses both the dark matter mystery and a puzzle known as the strong CP problem simultaneously. While experiments to probe the axion parameter space are being built, theorists are tasked with identifying well-motivated regions of this space — that is, possibilities for the mass and other parameters describing the axion that are plausible. The prospects include: theoretical motivation from calculations in string theory, considerations of the Peccei-Quinn symmetry underlying the notion of an axion, as well as various possible modes of production, including extreme astrophysical environments such as neutron stars and black holes. 

    Cosmological data has thus far been an important source of insight not only into the history and evolution of the universe, but also into particle physics at high energy scales. As new telescopes and gravitational wave observatories are slated to come online within the next decade, expect this prolific field to continue to deliver alluring prospects for physics beyond the SM.

    Neutrinos

    Figure: A visual representation of how neutrino oscillation works. From: http://www.hyper-k.org/en/neutrino.html.

    While the previous sections have highlighted new approaches to uncovering physics beyond the SM, there is a particular collection of particles that stand out in the spotlight. In the SM formulation, the three flavors of neutrinos are massless, just like the photon. Yet we know unequivocally from experiment that this is false. Neutrinos display a phenomenon known as neutrino mixing, in which one flavor of neutrino can turn into another flavor of neutrino as it propagates. This implies that at least two of the three neutrino flavors are in fact massive.

    Investigating why neutrinos have mass — and where that mass comes from — is a central question in particle physics. Neutrinos are especially captivating because any observation of a neutrino mass mechanism is guaranteed to be a window to new physics. Further, neutrinos could be of central importance to several puzzles within the SM, including the MicroBooNE anomaly, the question of why there is more matter than antimatter in the observable universe, and the flavor puzzle, among others.

    The latter refers to an overall lack of understanding of the origin of flavor in the SM. Why do quarks come in six flavors, organized into three generations each consisting of one “up-type” quark with a +⅔ charge and one “down-type” quark with a -⅓ charge? Why do leptons come in six flavors, with three pairs of one electron-like particle and one neutrino? What is the origin of the hierarchy of masses for both quarks and leptons? Of the SM’s 19 free parameters — which include particle masses, coupling strengths, and others — 14 are associated with flavor.

    The unequivocal evidence for neutrino mixing was the crowning achievement of the last few decades of neutrino physics research. Modern experiments are charged with detecting more subtle signs of new physics, through measurements of neutrino energy in colliders, ever more precise oscillation data, and the possibility of a heavy neutrino belonging to a fourth generation. 

    Experiment has a clear role to play; the upcoming Deep Underground Neutrino Experiment (DUNE) will produce neutrinos at Fermilab and observe them at Sanford Lab in South Dakota in order to accumulate data on long-baseline neutrino oscillation.

    DUNE and other detectors will also turn their eye toward the sky in observations of neutrinos sourced by supernovae. There is also much room for theorists, both in developing models of neutrino mass generation and in influencing the future of neutrino experiments — short-baseline neutrino oscillation experiments are a key proposal in the quest to address the MicroBooNE anomaly. 

    The field of neutrino physics is only growing. It is likely we’ll learn much more about the SM and beyond through these ghostly, mysterious particles in the coming decades.
     
    Which Collider(s)?

    One looming question has formed an undercurrent through the entirety of the Snowmass process: What’s next after the LHC? In the past decade, proposals have been fleshed out in various stages, with the goal of satisfying some part of the lengthy wish list of questions a future collider would hope to probe. 

    The most well-known possible successor to the LHC is the Future Circular Collider (FCC), which is roughly a plan for a larger LHC, able to reach energies some 30 times that of its modern-day counterpart.

    An FCC that collides hadrons, as the LHC does, would extend the reach of our studies into the Higgs boson, other force-carrying gauge bosons, and dark matter searches. Its higher collision rate would enable studies of rare hadron decays and continue the trek into the realm of flavor physics searches. It would also enable our discovery of gauge bosons of new interactions — if they exist at those energies. This proposal, while captivating, has also met its fair share of skepticism, particularly because there is no singular particle physics goal it would be guaranteed to achieve. When the LHC was built, physicists were nearly certain that the Higgs boson would be found there — and it was. However, physicists were also somewhat confident in the prospect of finding SUSY at the LHC. Could supersymmetric particles be discovered at the FCC? Maybe, or maybe not. 

    A second plan exists for the FCC, in which it collides electrons and positrons instead of hadrons. This targets the electroweak sector of the SM, covering the properties of the Higgs, the W and Z bosons, and the heaviest quark (the top quark). Whereas hadrons are composite particles, and produce particle showers and jets upon collision, leptons are fundamental particles, and so have well-defined initial states. This allows for greater precision in measurements compared to hadron colliders, particularly in questions of the Higgs. Is the Higgs boson the only Higgs-like particle? Is it a composite particle? How does the origin of mass influence other key questions, such as the nature of dark matter? While unable to reach energies as high as a hadron collider’s, an electron-positron collider is appealing due to its precision. This dichotomy epitomizes the choice between these two proposals for the FCC.

    The options go beyond circular colliders. Linear colliders such as the International Linear Collider (ILC) and Compact Linear Collider (CLIC) are also on the table.





    While circular colliders are advantageous for their ability to accelerate particles over long distances and to keep un-collided particles in circulation for other experiments, they come with a particular disadvantage due to their shape. The acceleration of charged particles along a curved path results in synchrotron radiation — electromagnetic radiation that significantly reduces the energy available for each collision. For this reason, a circular accelerator is more suited to the collision of heavy particles — like the protons used in the LHC — than much lighter leptons. The lepton collisions within a linear accelerator would produce Higgs bosons at a high rate, allowing for deeper insight into the multitude of Higgs-related questions.
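    The mass dependence can be made explicit with the standard accelerator-physics scaling, quoted here for orientation: the energy radiated per turn by a particle of energy E and mass m on a ring of bending radius \rho goes as

    \Delta E_{\text{per turn}} \;\propto\; \frac{E^4}{m^4 \, \rho},

    so at fixed energy and radius an electron, roughly 1800 times lighter than a proton, loses about 10^{13} times more energy per turn. This is why circular electron-positron machines run out of steam at high energy while proton rings like the LHC do not.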

    In the past few years interest has grown for a different kind of lepton collider: a muon collider.

    Muons are, like electrons, fundamental particles, and therefore much cleaner in collisions than composite hadrons. They are also much more massive than electrons, which leads to a smaller proportion of energy being lost to synchrotron radiation in comparison to electron-positron colliders. This would allow for both high-precision measurements as well as high energies, making a muon collider an incredibly attractive candidate. The heavier mass of the muon, however, does bring with it a new set of technical challenges, particularly because the muon is not a stable particle and decays within a short timeframe. 

    As a multi-billion dollar project requiring the cooperation of numerous countries, getting a collider funded, constructed, and running is no easy feat. As collider proposals are put forth and debated, there is much at stake — a future collider will also determine the research programs and careers of many future students and professors. With that in mind, considerable care is necessary. Only one thing is certain: there will be something after the LHC.

    Toward the Next Snowmass

    Figure: The path forward in the quest to understand particle physics. By Raman Sundrum.

    The above snapshots are only a few of the myriad subtopics within particle theory; other notable ones include string theory, quantum information science, lattice gauge theory, and effective field theory.

    As the Snowmass process wraps up, the voice of particle theory has played and continues to play an influential role. Overall, progress in theory remains more accessible than in experiment — the number of possible models we’ve developed far outpaces the detectors we are able to build to investigate them. The theoretical physics community both guides the direction and targets of future experiments, and has plenty of room to make progress on the model-building front, including understanding quantum field theories at the deepest level and further uncovering the structures of amplitudes. A decade ago, SUSY at LHC-scales was the prime objective in the hunt for an ultimate theory of physics. Now, new physics could be anywhere and everywhere; Snowmass is crucial to charting our path in an endless valley of questions. I look forward to the trails of the next decade.

    See the full article here.


     
  • richardmitnick 8:54 am on May 5, 2020 Permalink | Reply
    Tags: "Three Birds with One Particle: The Possibilities of Axions", Matter-antimatter asymmetry, particlebites

    From particlebites: “Three Birds with One Particle: The Possibilities of Axions” 


    From particlebites

    May 1, 2020
    Amara McCune

    Title: “Axiogenesis”

    Author: Raymond T. Co and Keisuke Harigaya

    Reference: https://arxiv.org/pdf/1910.02080.pdf

    On the laundry list of problems in particle physics, a rare three-for-one solution could come in the form of a theorized light pseudoscalar particle fittingly named after a detergent: the axion. Frank Wilczek coined this term in reference to its potential to “clean up” the Standard Model once he realized its applicability to multiple unsolved mysteries. Although Axion the dish soap has been somewhat phased out of our everyday consumer life (being now primarily sold in Latin America), axion particles remain a key component of a physicist’s toolbox. While axions get a lot of hype as a promising dark matter candidate, and are now being considered as a solution to matter-antimatter asymmetry, they were originally proposed as a solution for a different Standard Model puzzle: the strong CP problem.

    The strong CP problem refers to a peculiarity of quantum chromodynamics (QCD), our theory of quarks, gluons, and the strong force that binds them: while the theory permits charge-parity (CP) symmetry violation, the ardent experimental search for CP-violating processes in QCD has so far come up empty-handed. What does this mean from a physical standpoint? Consider the neutron electric dipole moment (eDM), which roughly describes the distribution of charge among the three quarks comprising a neutron. Naively, we might expect this arrangement to be a triangular one. However, measurements of the neutron eDM, carried out by tracking changes in neutron spin precession, return a value orders of magnitude smaller than classically expected. In fact, the incredibly small value of this parameter corresponds to a neutron in which the three quarks are found nearly in a line.

    Figure: The classical picture of the neutron (left) looks markedly different from the picture necessitated by CP symmetry (right). The strong CP problem is essentially a question of why our mental image should look like the right picture instead of the left. Source: https://arxiv.org/pdf/1812.02669.pdf

    This would not initially appear to be a problem. In fact, in the context of CP, this makes sense: a simultaneous charge conjugation (exchanging positive charges for negative ones and vice versa) and parity inversion (flipping the sign of spatial directions) when the quark arrangement is linear results in a symmetry. Yet there are a few subtleties that point to the existence of further physics. First, this tiny value requires an adjustment of parameters within the mathematics of QCD, carefully fitting some coefficients to cancel out others in order to arrive at the desired conclusion. Second, we do observe violation of CP symmetry in particle physics processes mediated by the weak interaction, such as kaon decay, which also involves quarks.

    These arguments rest upon the idea of naturalness, a principle that has been invoked successfully several times throughout the development of particle theory as a hint toward the existence of a deeper, more underlying theory. Naturalness (in one of its forms) states that such minuscule values are only allowed if they increase the overall symmetry of the theory, something that cannot be true if weak processes exhibit CP-violation where strong processes do not. This puts the strong CP problem squarely within the realm of “fine-tuning” problems in physics; although there is no known reason for CP symmetry conservation to occur, the theory must be modified to fit this observation. We then seek one of two things: either an observation of CP-violation in QCD or a solution that sets the neutron eDM, and by extension any CP-violating phase within our theory, to zero.

    Figure: This term in the QCD Lagrangian allows for CP symmetry violation. Current measurements place the value of θ at no greater than 10^{-10}. In Peccei-Quinn theory, θ is promoted to a field.
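    The term the caption refers to has a standard textbook form (reproduced here for orientation, since the image itself is not shown):

    \mathcal{L}_\theta \;=\; \theta \, \frac{g_s^2}{32\pi^2} \, G^a_{\mu\nu} \tilde{G}^{a\,\mu\nu},

    where G^a_{\mu\nu} is the gluon field strength and \tilde{G}^{a\,\mu\nu} its dual. A non-zero θ would generate a neutron electric dipole moment, and the experimental bound quoted above corresponds to |θ| ≲ 10^{-10}.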

    When such an expected symmetry violation is nowhere to be found, where is a theoretician to look for such a solution? The most straightforward answer is to turn to a new symmetry. This is exactly what Roberto Peccei and Helen Quinn did in 1977, birthing the Peccei-Quinn symmetry, an extension of QCD which incorporates a CP-violating phase known as the θ term. The main idea behind this theory is to promote θ to a dynamical field, rather than keeping it a constant. Since quantum fields have associated particles, this also yields the particle we dub the axion. Looking back briefly to the neutron eDM picture of the strong CP problem, this means that the angular separation should also be dynamical, and hence be relegated to the minimum energy configuration: the quarks again all in a straight line. In the language of symmetries, the U(1) Peccei-Quinn symmetry is approximately spontaneously broken, giving us a non-zero vacuum expectation value and a nearly-massless Goldstone boson: our axion.

    This is all great, but what does it have to do with dark matter? As it turns out, axions make for an especially intriguing dark matter candidate due to their low mass and potential to be produced in large quantities. For decades, this prowess was overshadowed by the leading WIMP candidate (weakly-interacting massive particles), whose parameter space has been slowly whittled down to the point where physicists are more seriously turning to alternatives. As there are several production mechanisms for axions in early-universe cosmology, and 100% of the dark matter abundance could be explained by axions produced this way, the axion is now stepping into the spotlight.
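    For orientation, and not taken from the paper under discussion: for the QCD axion, the mass and the decay constant f_a are tied together, roughly

    m_a \;\approx\; 5.7\ \mu\text{eV} \times \frac{10^{12}\ \text{GeV}}{f_a},

    so a lighter axion corresponds to a larger decay constant and hence feebler couplings. This is the plane, decay constant versus mass, in which the paper’s results discussed below are presented.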

    This increased focus is causing some theorists to turn to further avenues of physics as possible applications for the axion. In a recent paper, Co and Harigaya examined the connection between this versatile particle and matter-antimatter asymmetry (also called baryon asymmetry). This latter term refers to the simple observation that there appears to be more matter than antimatter in our universe, since we are predominantly composed of matter, yet matter and antimatter also seem to be produced in colliders in equal proportions. In order to explain this asymmetry, without which matter and antimatter would have annihilated and we would not exist, physicists look for any mechanism to trigger an imbalance in these two quantities in the early universe. This theorized process is known as baryogenesis.

    Here’s where the axion might play a part. The θ term, which settles to zero in its possible solution to the strong CP problem, could also have taken on any value from 0 to 360 degrees very early on in the universe. Analyzing the axion field through the lens of quantum gravity conjectures, which hold that there are no exact global symmetries, the initial axion potential cannot be perfectly symmetric [4]. By falling from some initial value through an uneven potential, which the authors describe as a wine bottle potential with a wiggly top, θ would cycle several times through the allowed values before settling at its minimum energy value of zero. This causes the axion field to rotate, an asymmetry which could generate a disproportionality between the amounts of produced matter and antimatter. If the field were to rotate in one direction, we would see more matter than antimatter, while a rotation in the opposite direction would result instead in excess antimatter.

    Figure: The team’s findings are summarized in this plot. Regions in purple, red, and above the orange lines (dependent upon a particular constant X which is proportional to weak scale quantities) signify excluded portions of the parameter space. The remaining white space shows values of the axion decay constant and mass where the currently measured amount of baryon asymmetry could be generated. Source: https://arxiv.org/pdf/1910.02080.pdf

    Introducing a third fundamental mystery into the realm of axions begets the question of whether all three problems (strong CP, dark matter, and matter-antimatter asymmetry) can be solved simultaneously with axions. And, of course, there are nuances that could make alternative solutions to the strong CP problem more favorable or other dark matter candidates more likely. Like most theorized particles, there are several formulations of the axion in the works. It is then necessary to turn our attention to experiment to narrow down the possibilities for how axions could interact with other particles, determine what their mass could be, and answer the all-important question of whether they exist at all. Consequently, there are a plethora of axion-focused experiments up and running, with more on the horizon, that use a variety of methods spanning several subfields of physics. While these results begin to roll in, we can continue to investigate just how many problems we might be able to solve with one adaptable, soapy particle.

    Learn More:

    A comprehensive introduction to the strong CP problem, the axion solution, and other potential solutions: https://arxiv.org/pdf/1812.02669.pdf
    Axions as a dark matter candidate: https://www.symmetrymagazine.org/article/the-other-dark-matter-candidate
    More information on matter-antimatter asymmetry and baryogenesis: https://www.quantumdiaries.org/2015/02/04/where-do-i-come-from/
    The quantum gravity conjectures that axiogenesis builds upon: https://arxiv.org/abs/1810.05338
    An overview of current axion-focused experiments: https://www.annualreviews.org/doi/full/10.1146/annurev-nucl-102014-022120

    See the full article here.


     
  • richardmitnick 6:11 pm on April 23, 2020 Permalink | Reply
    Tags: "LHCb’s Flavor Mystery Deepens", In a press conference last month LHCb unveiled a new measurement of the angular distribution of the rare B0→K*0μ+μ– decay., LHCb is already hard at work on updating this measurement with more recent data from 2017 and 2018., particlebites, Rather just measuring the total rate of this decay this analysis focuses on measuring the angular distribution of the decay products., This measurement is an update to their 2015 result now with twice as much data.   

    From particlebites: “LHCb’s Flavor Mystery Deepens” 


    From particlebites

    Posted on April 23, 2020 by Oz Amram
    LHCb’s Flavor Mystery Deepens

    Title: Measurement of CP-averaged observables in the B0 → K*0 μ+μ− decay
    Authors: LHCb Collaboration

    In the Standard Model, matter is organized into 3 generations: 3 copies of the same family of particles, but with sequentially heavier masses.

    Standard Model of Particle Physics, Quantum Diaries

    Though the Standard Model can successfully describe this structure, it offers no insight into why nature should be this way. Many believe that a more fundamental theory of nature would better explain where this structure comes from. A natural way to look for clues to this deeper origin is to check whether these different ‘flavors’ of particles really behave in exactly the same ways, or if there are subtle differences that may hint at their origin.

    The LHCb experiment is designed to probe these types of questions.

    CERN LHCb chamber, LHC

    And in recent years, they have seen a series of anomalies, tensions between data and Standard Model predictions, that may indicate the presence of new particles which talk to the different generations. In the Standard Model, the different generations can only interact with each other through the W boson, which means that quarks with the same charge can only transition into one another through more complicated processes like those described by ‘penguin diagrams’.

    Figure: The so-called ‘penguin diagrams’ describe how rare decays like bottom quark → strange quark can happen in the Standard Model. The name comes from both their shape and a famous bar bet. Who says physicists don’t have a sense of humor?

    These interactions typically have quite small rates in the Standard Model, meaning that the rate of these processes can be quite sensitive to new particles, even if they are very heavy or interact very weakly with the SM ones. This means that studying these sort of flavor decays is a promising avenue to search for new physics.

    In a press conference last month, LHCb unveiled a new measurement of the angular distribution of the rare B0 → K*0 μ+μ− decay. The interesting part of this process involves a b → s transition (a bottom quark decaying into a strange quark), where a number of anomalies have been seen in recent years.

    Figure: Feynman diagrams of the decay being studied. A B meson (composed of a bottom and a down quark) decays into a kaon (composed of a strange quark and a down quark) and a pair of muons. Because this decay is very rare in the Standard Model (left diagram), it could be a good place to look for the effects of new particles (right diagram). Diagrams taken from here.

    Rather than just measuring the total rate of this decay, this analysis focuses on measuring the angular distribution of the decay products. They also perform this measurement in different bins of q^2, the square of the dimuon pair’s invariant mass. These choices allow the measurement to be less sensitive to uncertainties in the Standard Model prediction due to difficult-to-compute hadronic effects. This also allows the possibility of better characterizing the nature of whatever particle may be causing a deviation.

    The kinematics of the decay are fully described by 3 angles between the final-state particles, together with q^2. Based on knowing the spins and polarizations of each of the particles, they can fully describe the angular distributions in terms of 8 parameters. They also have to account for the angular distribution of background events, and distortions of the true angular distribution that are caused by the detector. Once all such effects are accounted for, they are able to fit the full angular distribution in each q^2 bin to extract the angular coefficients in that bin.
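    To give a flavor of what “fitting the angular distribution to extract coefficients” means in practice, here is a deliberately oversimplified toy (one angle, one coefficient, no backgrounds or detector effects, nothing like the real eight-parameter LHCb fit): generate events from an assumed angular shape, then recover the coefficient with a maximum-likelihood fit.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)

    def pdf(c, alpha):
        """Toy angular distribution in cos(theta), normalized on [-1, 1]."""
        return 3.0 * (1.0 + alpha * c**2) / (2.0 * (3.0 + alpha))

    # Generate toy "events" with a chosen true coefficient via rejection sampling.
    alpha_true = 0.6
    events = []
    while len(events) < 20000:
        c = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, pdf(1.0, alpha_true)) < pdf(c, alpha_true):
            events.append(c)
    events = np.array(events)

    # Unbinned negative log-likelihood, minimized over the coefficient.
    def nll(alpha):
        return -np.sum(np.log(pdf(events, alpha)))

    fit = minimize_scalar(nll, bounds=(-0.9, 5.0), method="bounded")
    print(f"true alpha = {alpha_true}, fitted alpha = {fit.x:.3f}")
    ```

    The real analysis does this simultaneously for several angular observables in each q^2 bin, with backgrounds and detector distortions folded into the fit.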

    This measurement is an update to their 2015 result, now with twice as much data. The previous result saw an intriguing tension with the SM at the level of roughly 3 standard deviations. The new result agrees well with the previous one, and mildly increases the tension to the level of 3.4 standard deviations.

    Figure: LHCb’s measurement of P’5, an observable describing one part of the angular distribution of the decay. The orange boxes show the SM prediction of this value and the red, blue and black points show LHCb’s most recent measurement (a combination of its ‘Run 1’ measurement and the more recent 2016 data). The grey regions are excluded from the measurement because they have large backgrounds from the decays of other mesons.

    This latest result is even more interesting given that LHCb has seen an anomaly in another measurement (the R_K anomaly) involving the same b → s transition. This had led some to speculate that both effects could be caused by a single new particle. The most popular idea is a so-called ‘leptoquark’ that only interacts with some of the flavors.

    LHCb is already hard at work on updating this measurement with more recent data from 2017 and 2018, which should once again double the number of events. Updates to the R_K measurement with new data are also hotly anticipated. The Belle II experiment has also recently started taking data and should be able to perform similar measurements. So we will have to wait and see if this anomaly is just a statistical fluke, or our first window into physics beyond the Standard Model!

    See the full article here.


     
  • richardmitnick 11:25 am on February 13, 2020 Permalink | Reply
    Tags: "Neutrinos: What Do They Know? Do They Know Things?", In 1956 an antineutrino-positron annihilation producing two gamma rays was detected confirming the existence of neutrinos., particlebites, Three distinct flavors of neutrino: one corresponding to each type of lepton: electron; muon; and tau particles.   

    From particlebites: “Neutrinos: What Do They Know? Do They Know Things?” 

    particlebites bloc

    From particlebites

    Posted on February 12, 2020
    Amara McCune

Title: “Upper Bound of Neutrino Masses from Combined Cosmological Observations and Particle Physics Experiments”
    Author: Loureiro et al.

    Neutrinos are almost a lot of things. They are almost massless, a property that goes against the predictions of the Standard Model. Possessing this non-zero mass, they should travel at almost the speed of light, but not quite, in order to be consistent with the principles of special relativity. Yet each measurement of neutrino propagation speed returns a value that is, within experimental error, exactly the speed of light. Only coupled to the weak force, they are almost non-interacting, with 65 billion of them streaming from the sun through each square centimeter of Earth each second, almost undetected.

How do all of these pieces fit together? The story of the neutrino begins in 1930, when Wolfgang Pauli proposed an as-yet-undetected particle emitted during beta decay in order to explain an observed lack of energy and momentum conservation. In 1956, the detection of two gamma rays from a positron-electron annihilation, the signature of an antineutrino captured on a proton, confirmed the existence of neutrinos. Yet with this confirmation came an assortment of growing mysteries. In the decades that followed, a series of experiments found that there are three distinct flavors of neutrino, one corresponding to each type of lepton: electron, muon, and tau particles. Subsequent measurements of propagating neutrinos then revealed a curious fact: these three flavors are anything but distinct. When the flavor of a neutrino is initially measured to be, say, an electron neutrino, a second measurement of flavor after it has traveled some distance could return the answer of muon neutrino. Measure yet again, and you could find yourself a tau neutrino. This process, in which the probability of measuring a neutrino in one of the three flavor states varies as it propagates, is known as neutrino oscillation.

    1
    A representation of neutrino oscillation: three flavors of neutrino form a superposed wave. As a result, a measurement of neutrino flavor as the neutrino propagates switches between the three possible flavors. This mechanism implies that neutrinos are not massless, as previously thought. From: http://www.hyper-k.org/en/neutrino.html.

Neutrino oscillation threw a wrench into the Standard Model’s assumption of massless neutrinos: oscillation implies that the masses of the three neutrino flavors cannot all be equal to each other, and hence cannot all be zero. Specifically, at most one of them is allowed to be zero, with the remaining two non-zero and non-equal.

    Standard Model of Particle Physics, Quantum Diaries

While at first glance an oddity, oscillation arises naturally from the underlying mathematics, and we can arrive at this conclusion via a simple analysis. To think about a neutrino, we consider two kinds of eigenstates (the state a particle is in when it is measured to have a certain observable quantity), one corresponding to flavor and one corresponding to mass. Because neutrinos are created in weak interactions, which conserve flavor, they are initially in a flavor eigenstate. Flavor and mass eigenstates cannot be simultaneously determined, and so each flavor eigenstate is a linear combination of mass eigenstates, and vice versa. Now, consider the case of three flavors of neutrino. As a neutrino propagates, the mass eigenstates in this combination accumulate phase at different rates, because states of different mass travel at different speeds in accordance with special relativity; the superposition therefore changes along the way, and with it the probability of detecting each flavor. If the three masses were all identical, the combination would never change and there would be no oscillation. Since we experimentally observe an oscillation between neutrino flavors, we can conclude that their masses cannot all be the same.
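
To make the link between mass splittings and oscillation concrete, here is a minimal numerical sketch (my own illustration, not from the paper) of the standard two-flavor vacuum oscillation probability, P = sin²(2θ) sin²(1.27 Δm² L / E); the mixing angle and splitting below are rough, assumed values, not fit results.

```python
import numpy as np

def two_flavor_prob(L_km, E_GeV, theta, dm2_eV2):
    """Probability that a neutrino born in one flavor is detected as the other
    after travelling L (km) with energy E (GeV), for mixing angle theta and
    mass-squared splitting dm2 (eV^2). Standard two-flavor vacuum formula."""
    return np.sin(2 * theta) ** 2 * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative numbers only (roughly atmospheric-scale parameters, assumed):
theta = 0.78    # ~45 degrees
dm2 = 2.5e-3    # eV^2
for L in (295, 810, 1300):   # km, typical long-baseline distances
    print(L, two_flavor_prob(L, 2.0, theta, dm2))

# If dm2 were zero (all masses equal), the probability would vanish at every
# distance -- no oscillation without a mass splitting.
```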

    Although this result was unexpected and provides the first known departure from the Standard Model, it is worth noting that it also neatly resolves a few outstanding experimental mysteries, such as the solar neutrino problem. Neutrinos in the sun are produced as electron neutrinos and are likely to interact with unbound electrons as they travel outward, transitioning them into a second mass state which can interact as any of the three flavors. By observing a solar neutrino flux roughly a third of its predicted value, physicists not only provided a potential answer to a previously unexplained phenomenon but also deduced that this second mass state must be larger than the state initially produced. Related flux measurements of neutrinos produced during charged particle interactions in the Earth’s upper atmosphere, which are primarily muon neutrinos, reveal that the third mass state is quite different from the first two mass states. This gives rise to two potential mass hierarchies: the normal (m_1 < m_2 \ll m_3) and inverted (m_3 \ll m_1 < m_2) ordering.

    2
The PMNS matrix parametrizes the transformation between the neutrino mass eigenbasis and its flavor eigenbasis. The left vector represents a neutrino in the flavor basis, while the right represents the same neutrino in the mass basis. When an individual component of the transformation matrix is squared, it gives the probability to measure the specified mass for the corresponding flavor.

However, this oscillation also means that it is difficult to discuss neutrino masses individually, as measuring the sum of neutrino masses is currently easier from a technical standpoint. With current precision in cosmology, we cannot distinguish the three neutrinos at the epoch in which they become free-traveling, although this could change with increased precision. Future experiments in beta decay could also lead to progress in pinpointing individual masses, although current oscillation experiments are only sensitive to mass-squared differences \Delta m_{ij}^2 = m_i^2 - m_j^2. Hence, we frame our models in terms of these mass splittings and the mass sum, which also makes it easier to incorporate cosmological data.

Current models of neutrinos are phenomenological ― not directly derived from theory but consistent with both theoretical principles and experimental data. The mixing between states is mathematically described by the PMNS (Pontecorvo-Maki-Nakagawa-Sakata) matrix, which is parametrized by three mixing angles and a phase related to CP violation. These parameters, as in most phenomenological models, have to be inserted into the theory. There is usually a wide space of parameters in such models and constraining this space requires input from a variety of sources. In the case of neutrinos, both particle physics experiments and cosmological data provide key avenues for exploration into these parameters. In a recent paper, Loureiro et al. used such a strategy, incorporating data from the large scale structure of galaxies and the cosmic microwave background to provide new upper bounds on the sum of neutrino masses.
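
As a concrete illustration of that parametrization (my own sketch, not code from Loureiro et al.), the snippet below builds the PMNS matrix from three mixing angles and a CP phase using the standard factorization into rotations; the numerical angle values are rough placeholders rather than global-fit inputs.

```python
import numpy as np

def pmns(theta12, theta13, theta23, delta):
    """Build the PMNS mixing matrix from three angles and a CP phase,
    using the standard factorization U = R23 * U13(delta) * R12."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    R23 = np.array([[1, 0, 0],
                    [0, c23, s23],
                    [0, -s23, c23]], dtype=complex)
    U13 = np.array([[c13, 0, s13 * np.exp(-1j * delta)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * delta), 0, c13]], dtype=complex)
    R12 = np.array([[c12, s12, 0],
                    [-s12, c12, 0],
                    [0, 0, 1]], dtype=complex)
    return R23 @ U13 @ R12

# Illustrative angle values (order-of-magnitude only, assumed for this sketch):
U = pmns(0.59, 0.15, 0.84, 1.2)
print(np.allclose(U @ U.conj().T, np.eye(3)))  # unitarity check: True
print(np.abs(U) ** 2)   # |U_ai|^2: probability of mass state i for flavor a
```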

    The group investigated two main classes of neutrino mass models: exact models and cosmological approximations. The former concerns models that integrate results from neutrino oscillation experiments and are parametrized by the smallest neutrino mass, while the latter class uses a model scheme in which the neutrino mass sum is related to an effective number of neutrino species N_{\nu} times an effective mass m_{eff} which is equal for each flavor. In exact models, Gaussian priors (an initial best-guess) were used with data sampling from a number of experimental results and error bars, depending on the specifics of the model in question. This includes possibilities such as fixing the mass splittings to their central values or assuming either a normal or inverted mass hierarchy. In cosmological approximations, N_{\nu} was fixed to a specific value depending on the particular cosmological model being studied, with the total mass sum sampled from data.
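
To see what "parametrized by the smallest neutrino mass" means in practice, here is a minimal sketch (my own, with assumed splitting values of roughly the measured size, not the priors actually sampled in the paper) that reconstructs the three masses and their sum from the lightest mass for both orderings.

```python
import numpy as np

def masses_and_sum(m_lightest, dm21_sq, dm3l_sq, ordering="normal"):
    """Neutrino masses (eV) and their sum, from the lightest mass and the two
    mass-squared splittings (eV^2), for a given ordering. For the inverted
    ordering the magnitude of the large splitting is used."""
    if ordering == "normal":
        m1 = m_lightest
        m2 = np.sqrt(m1**2 + dm21_sq)
        m3 = np.sqrt(m1**2 + dm3l_sq)
    else:  # inverted: m3 is the lightest state
        m3 = m_lightest
        m2 = np.sqrt(m3**2 + dm3l_sq)
        m1 = np.sqrt(m2**2 - dm21_sq)
    return m1, m2, m3, m1 + m2 + m3

# Illustrative splittings (approximate sizes only, assumed for this sketch):
dm21, dm3l = 7.5e-5, 2.5e-3   # eV^2
print(masses_and_sum(0.0, dm21, dm3l, "normal"))    # sum ~0.06 eV at the low end
print(masses_and_sum(0.0, dm21, dm3l, "inverted"))  # sum ~0.10 eV at the low end
```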

    3
    The end result of the group’s analysis, which shows the calculated neutrino mass bounds from 7 studied models, where the first 4 models are exact and the last 3 are cosmological approximations. The left column gives the probability distribution for the sum of neutrino masses, while the right column gives the probability distribution for the lightest neutrino in the model (not used in the cosmological approximation scheme). From: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.123.081301

    The group ultimately demonstrated that cosmologically-based models result in upper bounds for the mass sum that are much lower than those generated from physically-motivated exact models, as we can see in the figure above. One of the models studied resulted in an upper bound that is not only different from those determined from neutrino oscillation experiments, but is inconsistent with known lower bounds. This puts us into the exciting territory that neutrinos have pushed us to again and again: a potential finding that goes against what we presently know. The calculated upper bound is also significantly different if the assumption is made that one of the neutrino masses is zero, with the mass sum contained in the remaining two neutrinos, setting the stage for future differentiation between neutrino masses. Although the group did not find any statistically preferable model, they provide a framework for studying neutrinos with a considerable amount of cosmological data, using results of the Planck, BOSS, and SDSS collaborations, among many others. Ultimately, the only way to arrive at a robust answer to the question of neutrino mass is to consider all of these possible sources of information for verification.

    With increased sensitivity in upcoming telescopes and a plethora of intriguing beta decay experiments on the horizon, we should be moving away from studies that update bounds and toward ones which make direct estimations. In these future experiments, previous analysis will prove vital in working toward an understanding of the underlying mass models and put us one step closer to unraveling the enigma of the neutrino. While there are still many open questions concerning their properties ― Why are their masses so small? Is the neutrino its own antiparticle? What governs the mass mechanism? ― studies like these help to grow intuition and prepare for the next phases of discovery. I’m excited to see what unexpected results come next for this almost elusive particle.

    Further Reading:

    A thorough introduction to neutrino oscillation: https://arxiv.org/pdf/1802.05781.pdf
    Details on mass ordering: https://arxiv.org/pdf/1806.11051.pdf
    More information about solar neutrinos (from a fellow ParticleBites writer!): https://particlebites.com/?p=6778
    A summary of current neutrino experiments and their expected results: https://www.symmetrymagazine.org/article/game-changing-neutrino-experiments

    See the full article here .


     
  • richardmitnick 12:24 pm on January 4, 2020 Permalink | Reply
Tags: A new boson of mass 17 MeV/c^2 nicknamed the “X17” particle., , Atomki collaboration in Debrecen Hungary, Atomki nuclear spectrometer, , , PADME experiment at INFN’s Frascati laboratory, , particlebites

    From particlebites: “The Delirium over Helium” 

    particlebites bloc

    From particlebites

    January 4, 2020
    Andre Frankenthal

    Title: New evidence supporting the existence of the hypothetic X17 particle
    Authors: A.J. Krasznahorkay, M. Csatlós, L. Csige, J. Gulyás, M. Koszta, B. Szihalmi, and J. Timár; D.S. Firak, A. Nagy, and N.J. Sas; A. Krasznahorkay

This is an update to the excellent Delirium over Beryllium bite written by Flip Tanedo back in 2016 introducing the Beryllium anomaly (I highly recommend starting there first if you just opened this page). At the time, the Atomki collaboration in Debrecen, Hungary, had just found an unexpected excess in the angular correlation distribution of electron-positron pairs from internal pair conversion in the transition of excited states of Beryllium. According to them, this excess is consistent with a new boson of mass 17 MeV/c^2, nicknamed the “X17” particle. (Note: for reference, 1 GeV/c^2 is roughly the mass of a proton; for simplicity, from now on I’ll omit the “c^2” term by setting c, the speed of light, to 1 and just refer to masses in MeV or GeV. Here’s a nice explanation of this procedure.)

A few weeks ago, the Atomki group released a new set of results that uses an updated spectrometer and measures the same observable (positron-electron angular correlation) but from transitions of Helium excited states instead of Beryllium. Interestingly, they again find a similar excess in this distribution, which could similarly be explained by a boson with mass ~17 MeV. There are still many questions surrounding this result, and lots of skeptical voices, but the replication of this anomaly in a different system (albeit not yet performed by independent teams) certainly raises interesting questions that seem to warrant further investigation by other researchers worldwide.

    Nuclear physics and spectroscopy

The paper reports the production of excited states of Helium nuclei from the bombardment of tritium atoms with protons. To a non-nuclear physicist, this may not be immediately obvious, but nuclei can occupy excited states just as the electrons in an atom can. The entire quantum wavefunction of the nucleus is usually found in the ground state, but can be excited by various mechanisms such as the proton bombardment used in this case. Protons with a specific energy (0.9 MeV) were targeted at tritium atoms to initiate the reaction 3H(p, γ)4He, in nuclear physics notation. The equivalent particle physics notation is p + 3H → He* → He + γ (→ e+ e–), where ‘*’ denotes an excited state.

This particular proton energy serves to excite the newly-produced Helium nuclei into a state with an energy of 20.49 MeV. This energy is sufficiently close to the Jπ = 0– state (i.e. negative parity and quantum number J = 0), which is the second excited state in the ladder of states of Helium. This state has a centroid energy of 21.01 MeV and a wide “sigma” (or decay width) of 0.84 MeV. Note that the energies of the first two excited states of Helium overlap quite a bit, so actually sometimes nuclei will be found in the first excited state instead, which is not phenomenologically interesting in this case.
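
As a quick back-of-the-envelope check of where the 20.49 MeV figure comes from (my own arithmetic, not from the paper): the Q-value of the 3H(p, γ)4He reaction, about 19.8 MeV, plus the center-of-mass share of the 0.9 MeV proton beam energy, lands right at the quoted excitation energy.

```python
# Back-of-the-envelope excitation energy of 4He* from p + 3H at E_p = 0.9 MeV
Q_value = 19.81          # MeV, Q of 3H(p, gamma)4He from nuclear binding energies
E_p = 0.9                # MeV, proton kinetic energy in the lab
m_p, m_T = 1.0, 3.0      # projectile and target masses in atomic mass units (approx.)

E_cm = E_p * m_T / (m_p + m_T)   # only the center-of-mass share excites the nucleus
print(Q_value + E_cm)            # ~20.5 MeV, matching the 20.49 MeV quoted above
```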

    1
    Figure 1. Sketch of the energy distributions for the first two excited quantum states of Helium nuclei. The second excited state (with centroid energy of 21.01 MeV) exhibits an anomaly in the electron-positron angular correlation distribution in transitions to the ground state. Proton bombardment with 0.9 MeV protons yields Helium nuclei at 20.49 MeV, therefore producing both first and second excited states, which are overlapping.

    With this reaction, experimentalists can obtain transitions from the Jπ = 0– excited state back to the ground state with Jπ = 0+. These transitions typically produce a gamma ray (photon) with 21.01 MeV energy, but occasionally the photon will internally convert into an electron-positron pair, which is the experimental signature of interest here. A sketch of the experimental concept is shown below. In particular, the two main observables measured by the researchers are the invariant mass of the electron-positron pair, and the angular separation (or angular correlation) between them, in the lab frame.

    2
    Figure 2. Schematic representation of the production of excited Helium states from proton bombardment, followed by their decay back to the ground state with the emission of an “X” particle. X here can refer to a photon converting into a positron-electron pair, in which case this is an internal pair creation (IPC) event, or to the hypothetical “X17” particle, which is the process of interest in this experiment. Adapted from 1608.03591.

    The measurement

    For this latest measurement, the researchers upgraded the spectrometer apparatus to include 6 arms instead of the previous 5. Below is a picture of the setup with the 6 arms shown and labeled. The arms are at azimuthal positions of 0, 60, 120, 180, 240, and 300 degrees, and oriented perpendicularly to the proton beam.

    4
    Figure 3. The Atomki nuclear spectrometer. This is an upgraded detector from the previous one used to detect the Beryllium anomaly, featuring 6 arms instead of 5. Each arm has both plastic scintillators for measuring electrons’ and positrons’ energies, as well as a silicon strip-based detector to measure their hit impact positions. Image credit: A. Krasznahorkay.

    The arms consist of plastic scintillators to detect the scintillation light produced by the electrons and positrons striking the plastic material. The amount of light collected is proportional to the energy of the particles. In addition, silicon strip detectors are used to measure the hit position of these particles, so that the correlation angle can be determined with better precision.

    With this setup, the experimenters can measure the energy of each particle in the pair and also their incident positions (and, from these, construct the main observables: invariant mass and separation angle). They can also look at the scalar sum of energies of the electron and positron (Etot), and use it to zoom in on regions where they expect more events due to the new “X17” boson: since the second excited state lives around 21.01 MeV, the signal-enriched region is defined as 19.5 MeV < Etot < 22.0 MeV. They can then use the orthogonal region, 5 MeV < Etot < 19 MeV (where signal is not expected to be present), to study background processes that could potentially contaminate the signal region as well.

    The figure below shows the angular separation (or correlation) between electron-positron pairs. The red asterisks are the main data points, and consist of events with Etot in the signal region (19.5 MeV < Etot < 22.0 MeV). We can clearly see the bump occurring around angular separations of 115 degrees. The black asterisks consist of events in the orthogonal region, 5 MeV < Etot < 19 MeV. Clearly there is no bump around 115 degrees here. The researchers then assume that the distribution of background events in the orthogonal region (black asterisks) has the same shape inside the signal region (red asterisks), so they fit the black asterisks to a smooth curve (blue line), and rescale this curve to match the number of events in the signal region in the 40 to 90 degrees sub-range (the first few red asterisks). Finally, the re-scaled blue curve is used in the 90 to 135 degrees sub-range (the last few red asterisks) as the expected distribution.
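
To make the background-estimation step concrete, here is a toy sketch of the procedure as I read it (fit the orthogonal-region shape, rescale it to the low-angle part of the signal region, then compare at high angles); all of the histograms and numbers below are invented for illustration, not the collaboration's data.

```python
import numpy as np

# Toy stand-in for the procedure described above (shapes and counts invented)
angles = np.arange(42.5, 135, 5.0)                 # bin centers in degrees
ortho = 1000.0 * np.exp(-angles / 60.0)            # orthogonal-region counts (toy)
signal_region = 400.0 * np.exp(-angles / 60.0)     # signal-region counts (toy)
signal_region[angles > 105] += 60.0                # an injected "bump" at high angle

# Fit a smooth curve to the orthogonal region (here: exponential, log-linear fit)
slope, intercept = np.polyfit(angles, np.log(ortho), 1)
smooth = np.exp(intercept + slope * angles)

# Rescale the smooth curve to match the signal region in the 40-90 degree sub-range
low = angles < 90
scale = signal_region[low].sum() / smooth[low].sum()
background_prediction = scale * smooth

# The 90-135 degree sub-range is then compared against this prediction;
# an excess over it is what gets interpreted as a possible signal.
print(signal_region[angles > 105] - background_prediction[angles > 105])
```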

    5
    Figure 4. Angular correlation between positrons and electrons emitted in Helium nuclear transitions to the ground state. Red dots are data in the signal region (sum of positron and electron energies between 19.5 and 22 MeV), and black dots are data in the orthogonal region (sum of energies between 5 and 19 MeV). The smooth blue curve is a fit to the orthogonal region data, which is then re-scaled to to be used as background estimation in the signal region. The blue, black, and magenta histograms are Monte Carlo simulations of expected backgrounds. The green curve is a fit to the data with the hypothesis of a new “X17” particle.

    In addition to the data points and fitted curves mentioned above, the figure also reports the researchers’ estimates of the physics processes that cause the observed background. These are the black and magenta histograms, and their sum is the blue histogram. Finally, there is also a green curve on top of the red data, which is the best fit to a signal hypothesis, that is, assuming that a new particle with mass 16.84 ± 0.16 MeV is responsible for the bump in the high-angle region of the angular correlation plot.

    The other main observable, the invariant mass of the electron-positron pair, is shown below.

    6
    Figure 5. Invariant mass distribution of emitted electrons and positrons in the transitions of Helium nuclei to the ground state. Red asterisks are data in the signal region (sum of electron and positron energies between 19.5 and 22 MeV), and black asterisks are data in the orthogonal region (sum of energies between 5 and 19 MeV). The green smooth curve is the best fit to the data assuming the existence of a 17 MeV particle.

    The invariant mass is constructed from the equation

m_{e^+e^-}^2 \;=\; 2 m_e^2 + 2\left(E_{e^+} E_{e^-} - \sqrt{E_{e^+}^2 - m_e^2}\,\sqrt{E_{e^-}^2 - m_e^2}\,\cos\theta\right), \qquad E_{e^\pm} = \tfrac{1}{2}\,(1 \pm y)\,E_{\textrm{tot}},

where all relevant quantities refer to electron and positron observables: Etot is as before the sum of their energies, y is the ratio of their energy difference over their sum (y \equiv (E_{e^+} - E_{e^-})/E_{\textrm{tot}}), θ is the angular separation between them, and m_e is the electron (and positron) mass. This is just one of the standard ways to calculate the invariant mass of two daughter particles in a reaction, when the known quantities are the angular separation between them and their individual energies in the lab frame.
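
For concreteness, here is a minimal numerical sketch (my own, not the collaboration's code) of this standard two-body invariant mass, computed from Etot, y, and θ.

```python
import numpy as np

M_E = 0.511  # electron mass in MeV

def invariant_mass(E_tot, y, theta_deg):
    """Invariant mass (MeV) of an e+e- pair from the summed energy E_tot (MeV),
    the energy asymmetry y = (E+ - E-)/E_tot, and the lab opening angle (deg)."""
    E_plus = 0.5 * (1 + y) * E_tot
    E_minus = 0.5 * (1 - y) * E_tot
    p_plus = np.sqrt(E_plus**2 - M_E**2)
    p_minus = np.sqrt(E_minus**2 - M_E**2)
    cos_t = np.cos(np.radians(theta_deg))
    m2 = 2 * M_E**2 + 2 * (E_plus * E_minus - p_plus * p_minus * cos_t)
    return np.sqrt(m2)

# A symmetric pair sharing ~20.5 MeV at an opening angle near 115 degrees
# reconstructs to roughly 17 MeV -- the region where the anomaly sits:
print(invariant_mass(20.5, 0.0, 115))
```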

    The red asterisks are again the data in the signal region (19.5 MeV < Etot < 22 MeV), and the black asterisks are the data in the orthogonal region (5 MeV < Etot < 19 MeV). The green curve is a new best fit to a signal hypothesis, and in this case the best-fit scenario is a new particle with mass 17.00 ± 0.13 MeV, which is statistically compatible with the fit in the angular correlation plot. The significance of this fit is 7.2 sigma, which means the probability of the background hypothesis (i.e. no new particle) producing such large fluctuations in data is less than 1 in 390,682,215,445! It is remarkable and undeniable that a peak shows up in the data — the only question is whether it really is due to a new particle, or whether perhaps the authors failed to consider all possible backgrounds, or even whether there may have been an unexpected instrumental anomaly of some sort.

    According to the authors, the same particle that could explain the anomaly in the Beryllium case could also explain the anomaly here. I think this claim needs independent validation by the theory community. In any case, it is very interesting that similar excesses show up in two “independent” systems such as the Beryllium and the Helium transitions.

    Some possible theoretical interpretations

    There are a few particle interpretations of this result that can be made compatible with current experimental constraints. Here I’ll just briefly summarize some of the possibilities. For a more in-depth view from a theoretical perspective, check out Flip’s “Delirium over Beryllium” bite.

    The new X17 particle could be the vector gauge boson (or mediator) of a protophobic force, i.e. a force that interacts preferentially with neutrons but not so much with protons. This would certainly be an unusual and new force, but not necessarily impossible. Theorists have to work hard to make this idea work, as you can see here.

    Another possibility is that the X17 is a vector boson with axial couplings to quarks, which could explain, in the case of the original Beryllium anomaly, why the excess appears in only some transitions but not others. There are complete theories proposed with such vector bosons that could fit within current experimental constraints and explain the Beryllium anomaly, but they also include new additional particles in a dark sector to make the whole story work. If this is the case, then there might be new accessible experimental observables to confirm the existence of this dark sector and the vector boson showing up in the nuclear transitions seen by the Atomki group. This model is proposed here.

    However, an important caveat about these explanations is in order: so far, they only apply to the Beryllium anomaly. I believe the theory community needs to validate the authors’ assumption that the same particle could explain this new anomaly in Helium, and that there aren’t any additional experimental constraints associated with the Helium signature. As far as I can tell, this has not been shown yet. In fact, the similar invariant mass is the only evidence so far that this could be due to the same particle. An independent and thorough theoretical confirmation is needed with high-stake claims such as this one.

    Questions and criticisms

In the years since the first Beryllium anomaly result, a few criticisms about the paper and about the experimental team’s history have been laid out. I want to mention some of those to point out that this is still a contentious result.

    First, there is the group’s history of repeated claims of new particle discoveries every so often since the early 2000s. After experimental refutation of these claims by more precise measurements, there isn’t a proper and thorough discussion of why the original excesses were seen in the first place, and why they have subsequently disappeared. Especially for such groundbreaking claims, a consistent history of solid experimental attitude towards one’s own research is very valuable when making future claims.

    Second, others have mentioned that some fit curves seem to pass very close to most data points (n.b. I can’t seem to find the blog post where I originally read this or remember its author – if you know where it is, please let me know so I can give proper credit!). Take a look at the plot below, which shows the observed Etot distribution. In experimental plots, there is usually a statistical fluctuation of data points around the “mean” behavior, which is natural and expected. Below, in contrast, the data points are remarkably close to the fit. This doesn’t in itself mean there is anything wrong here, but it does raise an interesting question of how the plot and the fit were produced. It could be that this is not a fit to some prior expected behavior, but just an “interpolation”. Still, if that’s the case, then it’s not clear (to me, at least) what role the interpolation curve plays.

    8
    Figure 6. Sum of electron and positron energies distribution produced in the decay of Helium nuclei to the ground state. Black dots are data and the red curve is a fit.

Third, there is also the background fit to data in Figure 4 (black asterisks and blue line). As Ethan Siegel has pointed out, you can see how well the background fit matches data, but only in the 40 to 90 degrees sub-range. In the 90 to 135 degrees sub-range, the background fit is actually noticeably poorer. In a less favorable interpretation of the results, this may indicate that whatever effect is causing the anomalous peak in the red asterisks is also causing the less-than-ideal fit in the black asterisks, where no signal due to a new boson is expected. If the excess is caused by some instrumental error instead, you’d expect to see effects in both curves. In any case, the background fit (blue curve) constructed from the black asterisks does not actually model the bump region very well, which weakens the argument for using it throughout all of the data. A more careful analysis of the background is warranted here.

    Fourth, another criticism comes from the simplistic statistical treatment the authors employ on the data. They fit the red asterisks in Figure 4 with the “PDF”:

\mathrm{PDF}(e^+e^-) \;=\; N_{Bg}\cdot \mathrm{PDF}_{Bg}(e^+e^-) \;+\; N_{sig}\cdot \mathrm{PDF}_{sig}(e^+e^-),

    where PDF stands for “Probability Density Function”, and in this case they are combining two PDFs: one derived from data, and one assumed from the signal hypothesis. The two PDFs are then “re-scaled” by the expected number of background events (N_{Bg}) and signal events (N_{sig}), according to Monte Carlo simulations. However, as others have pointed out, when you multiply a PDF by a yield such as N_{Bg}, you no longer have a PDF! A variable that incorporates yields is no longer a probability. This may just sound like a semantics game, but it does actually point to the simplicity of the treatment, and makes one wonder if there could be additional (and perhaps more serious) statistical blunders made in the course of data analysis.
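
For contrast, here is a minimal sketch (my own toy illustration, not the authors' analysis) of how a properly normalized treatment handles the yields: in an extended maximum-likelihood fit, the shape that multiplies each event is a true PDF whose mixture weights sum to one, and the total yield enters through a separate Poisson term. All distributions and numbers below are invented.

```python
import numpy as np
from scipy.stats import norm, expon, poisson
from scipy.optimize import minimize

# Toy data: a falling "background" plus a small "signal" bump (all invented)
data = np.concatenate([expon(scale=30).rvs(900, random_state=1),
                       norm(loc=115, scale=5).rvs(60, random_state=2)])

def neg_ext_log_likelihood(params):
    n_sig, n_bg, peak = params
    if n_sig < 0 or n_bg < 0:
        return np.inf
    n_tot = n_sig + n_bg
    # Shape part: a mixture of two true PDFs with weights that sum to one
    pdf = (n_sig * norm(loc=peak, scale=5).pdf(data) +
           n_bg * expon(scale=30).pdf(data)) / n_tot
    # Yield part: Poisson probability of observing this many events in total
    return -(np.sum(np.log(pdf)) + poisson(mu=n_tot).logpmf(len(data)))

result = minimize(neg_ext_log_likelihood, x0=[50, 1000, 110], method="Nelder-Mead")
print(result.x)  # fitted signal yield, background yield, and bump position
```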

    Fifth, there is also of course the fact that no other experiments have seen this particle so far. This doesn’t mean that it’s not there, but particle physics is in general a field with very few “low-hanging fruits”. Most of the “easy” discoveries have already been made, and so every claim of a new particle must be compatible with dozens of previous experimental and theoretical constraints. It can be a tough business. Another example of this is the DAMA experiment, which has made claims of dark matter detection for almost 2 decades now, but no other experiments were able to provide independent verification (and in fact, several have provided independent refutations) of their claims.

    DAMA LIBRA Dark Matter Experiment, 1.5 km beneath Italy’s Gran Sasso mountain

    Gran Sasso LABORATORI NAZIONALI del GRAN SASSO, located in the Abruzzo region of central Italy

    I’d like to add my own thoughts to the previous list of questions and considerations.

    The authors mention they correct the calibration of the detector efficiency with a small energy-dependent term based on a GEANT3 simulation. The updated version of the GEANT library, GEANT4, has been available for at least 20 years. I haven’t actually seen any results that use GEANT3 code since I’ve started in physics. Is it possible that the authors are missing a rather large effect in their physics expectations by using an older simulation library? I’m not sure, but just like the simplistic PDF treatment and the troubling background fit to the signal region, it doesn’t inspire as much confidence. It would be nice to at least have a more detailed and thorough explanation of what the simulation is actually doing (which maybe already exists but I haven’t been able to find?). This could also be due to a mismatch in the nuclear physics and high-energy physics communities that I’m not aware of, and perhaps nuclear physicists tend to use GEANT3 a lot more than high-energy physicists.

    Also, it’s generally tricky to use Monte Carlo simulation to estimate efficiencies in data. One needs to make sure the experimental apparatus is well understood and be confident that their simulation reproduces all the expected features of the setup, which is often difficult to do in practice, as collider experimentalists know too well. I’d really like to see a more in-depth discussion of this point.

    Finally, a more technical issue: from the paper, it’s not clear to me how the best fit to the data (red asterisks) was actually constructed. The authors claim:

Using the composite PDF described in Equation 1 we first performed a list of fits by fixing the simulated particle mass in the signal PDF to a certain value, and letting RooFit estimate the best values for NSig and NBg. Letting the particle mass lose in the fit, the best fitted mass is calculated for the best fit […]

    When they let loose the particle mass in the fit, do they keep the “NSig” and “NBg” found with a fixed-mass hypothesis? If so, which fixed-mass NSig and which NBg do they use? And if not, what exactly was the purpose of performing the fixed-mass fits originally? I don’t think I fully got the point here.

    Where to go from here

    Despite the many questions surrounding the experimental approach, it’s still an interesting result that deserves further exploration. If it holds up with independent verification from other experiments, it would be an undeniable breakthrough, one that particle physicists have been craving for a long time now.

    And independent verification is key here. Ideally other experiments need to confirm that they also see this new boson before the acceptance of this result grows wider. Many upcoming experiments will be sensitive to a new X17 boson, as the original paper points out. In the next few years, we will actually have the possibility to probe this claim from multiple angles. Dedicated standalone experiments at the LHC such as FASER and CODEX-b will be able to probe highly long-lived signatures coming from the proton-proton interaction point, and so should be sensitive to new particles such as axion-like particles (ALPs).

Another experiment that could have sensitivity to X17, and which has come online this year, is PADME (disclaimer: I am a collaborator on this experiment).

    11
    https://www.researchgate.net/figure/The-PADME-experiment-as-seen-from-above-The-positron-beam-travels-from-left-to-right_fig5_321191649

    PADME stands for Positron Annihilation into Dark Matter Experiment and its main goal is to look for dark photons produced in the annihilation between positrons and electrons.

    You can find more information about PADME here, and I will write a more detailed post about the experiment in the future, but the gist is that PADME is a fixed-target experiment striking a beam of positrons (beam energy: 550 MeV) against a fixed target made of diamond (carbon atoms). The annihilation between positrons in the beam and electrons in the carbon atoms could give rise to a photon and a new dark photon via kinetic mixing. By measuring the incoming positron and the outgoing photon momenta, we can infer the missing mass which is carried away by the (invisible) dark photon.

    If the dark photon is the X17 particle (a big if), PADME might be able to see it as well. Our dark photon mass sensitivity is roughly between 1 and 22 MeV, so a 17 MeV boson would be within our reach. But more interestingly, using the knowledge of where the new particle hypothesis lies, we might actually be able to set our beam energy to produce the X17 in resonance (using a beam energy of roughly 282 MeV). The resonance beam energy increases the number of X17s produced and could give us even higher sensitivity to investigate the claim.
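
As a rough check of that beam-energy figure (my own estimate, not an official PADME number): producing a 17 MeV particle resonantly in positron annihilation on an atomic electron at rest requires the center-of-mass energy to equal the particle's mass, which fixes the beam energy.

```python
# Resonant production e+ e- -> X17 on an atomic electron at rest:
# s = 2*m_e^2 + 2*m_e*E_beam  ~  m_X^2   =>   E_beam ~ m_X^2 / (2*m_e)
m_e = 0.511   # MeV
m_X = 17.0    # MeV, hypothesized X17 mass
E_beam = (m_X**2 - 2 * m_e**2) / (2 * m_e)
print(E_beam)  # ~282 MeV, consistent with the number quoted above
```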

    An important caveat is that PADME can provide independent confirmation of X17, but cannot refute it. If the coupling between the new particle and our ordinary particles is too feeble, PADME might not see evidence for it. This wouldn’t necessarily reject the claim by Atomki, it would just mean that we would need a more sensitive apparatus to detect it. This might be achievable with the next generation of PADME, or with the new experiments mentioned above coming online in a few years.

    Finally, in parallel with the experimental probes of the X17 hypothesis, it’s critical to continue gaining a better theoretical understanding of this anomaly. In particular, an important check is whether the proposed theoretical models that could explain the Beryllium excess also work for the new Helium excess. Furthermore, theorists have to work very hard to make these models compatible with all current experimental constraints, so they can look a bit contrived. Perhaps a thorough exploration of the theory landscape could lead to more models capable of explaining the observed anomalies as well as evading current constraints.

    Conclusions

The recent results from the Atomki group raise the stakes in the search for Physics Beyond the Standard Model. The reported excesses in the angular correlation between electron-positron pairs in two different systems certainly seem intriguing. However, there are still a lot of questions surrounding the experimental methods, and given the nature of the claims made, a crystal-clear understanding of the results and the setup needs to be achieved. Experimental verification by at least one independent group is also required if the X17 hypothesis is to be confirmed. Finally, parallel theoretical investigations that can explain both excesses are highly desirable.

    As Flip mentioned after the first excess was reported, even if this excess turns out to have an explanation other than a new particle, it’s a nice reminder that there could be interesting new physics in the light mass parameter space (e.g. MeV-scale), and a new boson in this range could also account for the dark matter abundance we see leftover from the early universe. But as Carl Sagan once said, extraordinary claims require extraordinary evidence.

    In any case, this new excess gives us a chance to witness the scientific process in action in real time. The next few years should be very interesting, and hopefully will see the independent confirmation of the new X17 particle, or a refutation of the claim and an explanation of the anomalies seen by the Atomki group. So, stay tuned!

    See the full article here .


     
  • richardmitnick 10:17 am on December 29, 2019 Permalink | Reply
    Tags: , , , , , , , particlebites,   

    From particlebites: “Dark Photons in Light Places” 

    particlebites bloc

    From particlebites

    December 29, 2019
    Amara McCune

    Title: “Searching for dark photon dark matter in LIGO O1 data”

    Author: Huai-Ke Guo, Keith Riles, Feng-Wei Yang, & Yue Zhao

    Reference: https://www.nature.com/articles/s42005-019-0255-0

    There is very little we know about dark matter save for its existence.

Fritz Zwicky discovered Dark Matter when observing the movement of the Coma Cluster. Vera Rubin, a woman in STEM denied the Nobel, did most of the work on Dark Matter.

    Fritz Zwicky from http:// palomarskies.blogspot.com

    Coma cluster via NASA/ESA Hubble

    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science)


    Vera Rubin measuring spectra, worked on Dark Matter (Emilio Segre Visual Archives AIP SPL)


    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970. https://home.dtm.ciw.edu

    The LSST, or Large Synoptic Survey Telescope is to be named the Vera C. Rubin Observatory by an act of the U.S. Congress.

    LSST telescope, The Vera Rubin Survey Telescope currently under construction on the El Peñón peak at Cerro Pachón Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    Dark Matter Research

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    Scientists studying the cosmic microwave background [CMB] hope to learn about more than just how the universe grew—it could also offer insight into dark matter, dark energy and the mass of the neutrino.

    CMB per ESA/Planck

Dark matter cosmic web and the large-scale structure it forms. The Millennium Simulation, V. Springel et al.

    Dark Matter Particle Explorer China

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB deep in Sudbury’s Creighton Mine

    LBNL LZ Dark Matter project at SURF, Lead, SD, USA


    Inside the ADMX experiment hall at the University of Washington Credit Mark Stone U. of Washington. Axion Dark Matter Experiment

Its mass(es), its interactions, even the proposition that it consists of particles at all are mostly up to the creativity of the theorist. For those who don’t turn to modified theories of gravity to explain the gravitational effects on galaxy rotation and clustering that suggest a massive concentration of unseen matter in the universe (among other compelling evidence), there are a few more widely accepted explanations for what dark matter might be. These include weakly-interacting massive particles (WIMPs), primordial black holes, or new particles altogether, such as axions or dark photons.

In particle physics, this latter category is what’s known as the “hidden sector,” a hypothetical collection of quantum fields and their corresponding particles that are utilized in theorists’ toolboxes to help explain phenomena such as dark matter. In order to test the validity of the hidden sector, several experimental techniques have been concocted to narrow down the vast parameter space of possibilities. These generally fall into three strategies:

1. Direct detection: Detector experiments look for low-energy recoils of dark matter particle collisions with nuclei, often involving emitted light or phonons.
2. Indirect detection: These searches focus on potential decay products of dark matter particles, which depend on the theory in question.
3. Collider production: As the name implies, colliders seek to produce dark matter in order to study its properties. This is reliant on the other two methods for verification.

    The first detection of gravitational waves from a black hole merger in 2015 ushered in a new era of physics, in which the cosmological range of theory-testing is no longer limited to the electromagnetic spectrum.

    MIT /Caltech Advanced aLigo


    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Caltech/MIT Advanced aLigo detector installation Hanford, WA, USA


    Gravity is talking. Lisa will listen. Dialogos of Eide

    ESA/NASA eLISA space based, the future of gravitational wave research

Bringing LIGO (the Laser Interferometer Gravitational-Wave Observatory) to the table, proposals for the indirect detection of dark matter via gravitational waves began to spring up in the literature, with implications for primordial black hole detection or dark matter ensconced in neutron stars. Yet a new proposal, in a paper by Guo et al. [Communications Physics], suggests that direct dark matter detection with gravitational waves may be possible, specifically in the case of dark photons.

    Dark photons are hidden sector particles in the ultralight regime of dark matter candidates. Theorized as the gauge boson of a U(1) gauge group, meaning the particle is a force-carrier akin to the photon of quantum electrodynamics, dark photons either do not couple or very weakly couple to Standard Model particles in various formulations. Unlike a regular photon, dark photons can acquire a mass via the Higgs mechanism. Since dark photons need to be non-relativistic in order to meet cosmological dark matter constraints, we can model them as a coherently oscillating background field: a plane wave with amplitude determined by dark matter energy density and oscillation frequency determined by mass. In the case that dark photons weakly interact with ordinary matter, this means an oscillating force is imparted. This sets LIGO up as a means of direct detection due to the mirror displacement dark photons could induce in LIGO detectors.

    3
    Figure 1: The experimental setup of the Advanced LIGO interferometer. We can see that light leaves the laser and is reflected between a few power recycling mirrors (PR), split by a beam splitter (BS), and bounced between input and end test masses (ITM and ETM). The entire system is mounted on seismically-isolated platforms to reduce noise as much as possible. Source: https://arxiv.org/pdf/1411.4547.pdf

    LIGO consists of a Michelson interferometer, in which a laser shines upon a beam splitter which in turn creates two perpendicular beams. The light from each beam then hits a mirror, is reflected back, and the two beams combine, producing an interference pattern. In the actual LIGO detectors, the beams are reflected back some 280 times (down a 4 km arm length) and are split to be initially out of phase so that the photodiode detector should not detect any light in the absence of a gravitational wave. A key feature of gravitational waves is their polarization, which stretches spacetime in one direction and compresses it in the perpendicular direction in an alternating fashion. This means that when a gravitational wave passes through the detector, the effective length of one of the interferometer arms is reduced while the other is increased, and the photodiode will detect an interference pattern as a result.

LIGO has been able to reach an incredible sensitivity of one part in 10^{23} in its detectors over a 100 Hz bandwidth, meaning that its instruments can detect mirror displacements up to 1/10,000th the size of a proton. Taking advantage of this number, Guo et al. demonstrated that the differential strain (the ratio of the relative displacement of the mirrors to the interferometer’s arm length, or h = \Delta L/L) is also sensitive to ultralight dark matter via the modeling process described above. The acceleration induced by the dark photon dark matter on the LIGO mirrors is ultimately proportional to the dark electric field and charge-to-mass ratio of the mirrors themselves.
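
As a quick sanity check of those numbers (my own arithmetic, with approximate values): a strain of one part in 10^23 across a 4 km arm corresponds to a mirror displacement of a few times 10^-20 m, a tiny fraction of a proton's size, consistent with the order of magnitude quoted above.

```python
# Strain-to-displacement sanity check (approximate, illustrative values)
strain = 1e-23          # dimensionless differential strain sensitivity
arm_length = 4e3        # m, LIGO arm length
proton_size = 0.8e-15   # m, rough proton charge radius

delta_L = strain * arm_length
print(delta_L)                  # ~4e-20 m of mirror displacement
print(delta_L / proton_size)    # of order 1/10,000th of a proton or below
```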

    Once this signal is approximated, next comes the task of estimating the background. Since the coherence length is of order 10^9 m for a dark photon field oscillating at order 100 Hz, a distance much larger than the separation between the LIGO detectors at Hanford and Livingston (in Washington and Louisiana, respectively), the signals from dark photons at both detectors should be highly correlated. This has the effect of reducing the noise in the overall signal, since the noise in each of the detectors should be statistically independent. The signal-to-noise ratio can then be computed directly using discrete Fourier transforms from segments of data along the total observation time. However, this process of breaking up the data, known as “binning,” means that some signal power is lost and must be corrected for.
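
To see where that order-of-magnitude coherence length comes from, here is a minimal estimate (my own, assuming a typical galactic dark matter velocity of about 10^-3 c): the coherence length is roughly the dark matter de Broglie wavelength, which for a field oscillating at frequency f is about c^2 / (f v).

```python
# Order-of-magnitude coherence length of an ultralight dark photon field
c = 3.0e8        # m/s
f = 100.0        # Hz, oscillation (Compton) frequency of the field
v = 1e-3 * c     # m/s, assumed typical galactic dark matter velocity

coherence_length = c**2 / (f * v)        # ~ de Broglie wavelength h/(m*v)
print(coherence_length)                  # ~3e9 m, of order 10^9 m as quoted

hanford_livingston = 3.0e6               # m, rough separation of the two sites
print(coherence_length / hanford_livingston)   # >> 1, so the signals correlate
```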

    4
Figure 2: The end result of the Guo et al. analysis of dark photon-induced mirror displacement in LIGO. Above we can see a plot of the coupling of dark photons to baryons as a function of the dark photon oscillation frequency. We can see that over further Advanced LIGO runs, up to O4-O5, these limits are expected to improve by several orders of magnitude. Source: https://www.nature.com/articles/s42005-019-0255-0

In applying this analysis to the strain data from the first run of Advanced LIGO, Guo et al. generated a plot which sets new limits for the coupling of dark photons to baryons as a function of the dark photon oscillation frequency. There are a few key subtleties in this analysis, primarily that there are many potential dark photon models which rely on different gauge groups, yet this framework allows for similar analysis of other dark photon models. With plans for future iterations of gravitational wave detectors, further improved sensitivities, and many more data runs, there seems to be great potential to apply LIGO to direct dark matter detection. It’s exciting to see these instruments in action for discoveries that were not in mind when LIGO was first designed, and I’m looking forward to seeing what we can come up with next!

    Learn More:

    An overview of gravitational waves and dark matter: https://www.symmetrymagazine.org/article/what-gravitational-waves-can-say-about-dark-matter
    A summary of dark photon experiments and results: https://physics.aps.org/articles/v7/115
    Details on the hardware of Advanced LIGO: https://arxiv.org/pdf/1411.4547.pdf
A similar analysis done by Pierce et al.: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.121.061102

    See the full article here .


     
  • richardmitnick 4:25 pm on December 4, 2019 Permalink | Reply
    Tags: "Discovering the Top Quark", , , FNAL Tevatron CDF, , , , particlebites,   

    From particlebites: “Discovering the Top Quark” 

    particlebites bloc

    From particlebites

    December 3, 2019
    Adam Green

    This post is about the discovery of the most massive quark in the Standard Model, the Top quark. Below is a “discovery plot” [1] from the Collider Detector at Fermilab collaboration (CDF). Here is the original paper [Physical Review Letters].

    FNAL/Tevatron CDF detector

    FNAL/Tevatron tunnel

    FNAL/Tevatron map

    1
    This plot confirms the existence of the Top quark. Let’s understand how.

For each proton-antiproton collision that passes certain selection conditions, the horizontal axis shows the best estimate of the Top quark mass. These selection conditions encode the particle “fingerprint” of the Top quark. Out of all possible collision events, we only want to look at ones that perhaps came from Top quark decays. This subgroup of events can inform us of a best guess at the mass of the Top quark. This is what is being plotted on the x axis.

    On the vertical axis are the number of these events.

    The dashed distribution is the number of these events originating from the Top quark if the Top quark exists and decays this way. This could very well not be the case.

    The dotted distribution is the background for these events, events that did not come from Top quark decays.

    The solid distribution is the measured data.

To claim a discovery, the background (dotted) plus the signal (dashed) should equal the measured data (solid). We can run simulations for different top quark masses to give us distributions of the signal until we find one that matches the data. The inset at the top right shows that a Top quark mass of 175 GeV best reproduces the measured data.
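
As a cartoon of that "scan over masses" procedure, here is a toy sketch (all numbers invented, not CDF data or code) that compares pseudo-data against signal-plus-background templates for several hypothesized top masses and picks the one with the best Poisson likelihood.

```python
import numpy as np

# Toy binned "reconstructed top mass" analysis; every number here is invented.
bins = np.arange(100, 260, 10)                       # GeV, bin centers
background = 30 * np.exp(-(bins - 100) / 60.0)       # smooth falling background

def signal_template(true_mass, n_events=25, width=20.0):
    """Rough Gaussian reconstructed-mass template for a hypothesized top mass."""
    return n_events * np.exp(-0.5 * ((bins - true_mass) / width) ** 2)

rng = np.random.default_rng(42)
data = rng.poisson(background + signal_template(175.0))   # pseudo-data

def log_likelihood(mass):
    """Poisson log-likelihood of the data given background + signal(mass)."""
    mu = background + signal_template(mass)
    return float(np.sum(data * np.log(mu) - mu))

masses = np.arange(150.0, 201.0, 5.0)
best = max(masses, key=log_likelihood)
print(best)   # the mass hypothesis that best reproduces the pseudo-data
```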

    Taking a step back from the technicalities, the Top quark is special because it is the heaviest of all the fundamental particles. In the Standard Model, particles acquire their mass by interacting with the Higgs. Particles with more mass interact more with the Higgs. The Top mass being so heavy is an indicator that any new physics involving the Higgs may be linked to the Top quark.

    References / Further Reading

[1] – Observation of Top Quark Production in p̄p Collisions with the Collider Detector at Fermilab – This is the “discovery paper” announcing experimental evidence of the Top.

    [2] – Observation of tt(bar)H Production [Physical Review Letters]– Who is to say that the Top and the Higgs even have significant interactions to lowest order? The CMS collaboration finds evidence that they do in fact interact at “tree-level.”

[3] – The Perfect Couple: Higgs and top quark spotted together – This article further describes the interconnection between the Higgs and the Top.

    See the full article here .


     
  • richardmitnick 9:37 pm on May 5, 2018 Permalink | Reply
    Tags: "blinding", , , , , , particlebites,   

    From particlebites: “Going Rogue: The Search for Anything (and Everything) with ATLAS” 

    particlebites bloc

    From particlebites

    May 5, 2018
    Julia Gonski

    Title: “A model-independent general search for new phenomena with the ATLAS detector at √s=13 TeV”

    Author: The ATLAS Collaboration

    Reference: ATLAS-CONF-2017-001

    CERN/ATLAS detector

    When a single experimental collaboration has a few thousand contributors (and even more opinions), there are a lot of rules. These rules dictate everything from how you get authorship rights to how you get chosen to give a conference talk. In fact, this rulebook is so thorough that it could be the topic of a whole other post. But for now, I want to focus on one rule in particular, a rule that has only been around for a few decades in particle physics but is considered one of the most important practices of good science: blinding.

    In brief, blinding is the notion that it’s experimentally compromising for a scientist to look at the data before finalizing the analysis. As much as we like to think of ourselves as perfectly objective observers, the truth is that when we really, really want a particular result (let’s say a SUSY discovery), that desire can bias our work. For instance, imagine you were looking at actual collision data while you were designing a signal region. You might unconsciously craft your selection in such a way as to force an excess of data over the background prediction. To avoid such human influences, particle physics experiments “blind” their analyses while they are under construction, and only look at the data once everything else is in place and validated.
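
    As a toy illustration of the practice (and not any experiment's actual analysis framework), here is a minimal Python sketch in which data events falling in a hypothetical signal region stay hidden until the analysis is declared frozen. The mass window, variable names, and numbers are all invented.

        import numpy as np

        ANALYSIS_FROZEN = False            # flip to True only after the analysis is validated
        SIGNAL_REGION = (300.0, 500.0)     # hypothetical mass window, in GeV

        def in_signal_region(mass):
            lo, hi = SIGNAL_REGION
            return (mass > lo) & (mass < hi)

        def data_for_plotting(masses):
            """Return data events, hiding the signal region while the analysis is blinded."""
            masses = np.asarray(masses)
            if ANALYSIS_FROZEN:
                return masses                               # unblinded: show everything
            return masses[~in_signal_region(masses)]        # blinded: sidebands only

        # While blinded, only sideband events are visible to the analyzers.
        print(data_for_plotting([250.0, 350.0, 450.0, 550.0]))   # -> [250. 550.]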

    1
    Figure 1: “Blind analysis: Hide results to seek the truth”, R. MacCoun & S. Perlmutter for Nature.com

    See the full article here .


     
  • richardmitnick 9:20 am on October 20, 2016 Permalink | Reply
    Tags: Missing transverse energy (MET), particlebites, What Happens When Energy Goes Missing?

    From particlebites: “What Happens When Energy Goes Missing?” 

    particlebites bloc

    particlebites

    October 11, 2016
    Julia Gonski

    Article: Performance of algorithms that reconstruct missing transverse momentum in √s = 8 TeV proton-proton collisions in the ATLAS detector
    Authors: The ATLAS Collaboration
    Reference: arXiv:1609.09324

    CERN/ATLAS detector

    The ATLAS experiment recently released a note detailing the nature and performance of algorithms designed to calculate what is perhaps the most difficult quantity in any LHC event: missing transverse energy.

    2
    Figure 1: LHC momentum conservation.

    1
    Figure 2: ATLAS event display showing MET balancing two jets.

    Missing transverse energy (MET) is so difficult because, by its very nature, it is missing and therefore unobservable in the detector. So where does this missing energy come from, and why do we even need to reconstruct it?

    The LHC accelerates protons towards one another on the same axis, so that they collide head on.

    Therefore, the incoming partons have net momentum along the direction of the beamline, but no net momentum in the transverse direction (see Figure 1). MET is then defined as the negative vector sum of the transverse momenta of all recorded particles. Any nonzero MET indicates a particle that escaped the detector. This escaping particle could be a regular Standard Model neutrino, or something much more exotic, such as the lightest supersymmetric particle or a dark matter candidate.

    Figure 2 shows an event display where the calculated MET balances the visible objects in the detector. In this case, these visible objects are jets, but they could also be muons, photons, electrons, or taus. This constitutes the “hard term” in the MET calculation. Often there are also contributions of energy in the detector that are not associated with a particular physics object but may still be necessary to get an accurate measurement of MET. This momentum is known as the “soft term”.
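
    As a rough sketch of that bookkeeping (and not the ATLAS reconstruction code), here is how the negative vector sum could be computed from a handful of reconstructed objects plus a soft term. The object list and the soft-term values below are invented for illustration.

        import numpy as np

        # Hypothetical reconstructed objects: (pT in GeV, phi in radians). In a real
        # event these would be calibrated jets, electrons, muons, photons, and taus.
        hard_objects = [(120.0, 0.4), (95.0, 3.3), (30.0, 1.9)]

        # Invented soft-term contribution: energy not matched to any physics object.
        soft_px, soft_py = 3.0, -5.0

        # Sum the visible transverse momentum components (hard term plus soft term).
        sum_px = sum(pt * np.cos(phi) for pt, phi in hard_objects) + soft_px
        sum_py = sum(pt * np.sin(phi) for pt, phi in hard_objects) + soft_py

        # MET is the negative of that vector sum.
        met_x, met_y = -sum_px, -sum_py
        met = np.hypot(met_x, met_y)          # magnitude of the missing transverse momentum
        met_phi = np.arctan2(met_y, met_x)    # its direction in the transverse plane
        print(f"MET = {met:.1f} GeV at phi = {met_phi:.2f}")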

    In the course of looking at all the energy in the detector for a given event, inevitably some pileup will sneak in. The pileup could be contributions from additional proton-proton collisions in the same bunch crossing, or from scattering of protons upstream of the interaction point. Either way, the MET reconstruction algorithms have to take this into account. Adding up energy from pileup could lead to more MET than was actually in the collision, which could mean the difference between an observation of dark matter and just another Standard Model event.

    One of the ways to suppress pile up is to use a quantity called jet vertex fraction (JVF), which uses the additional information of tracks associated to jets. If the tracks do not point back to the initial hard scatter, they can be tagged as pileup and not included in the calculation. This is the idea behind the Track Soft Term (TST) algorithm. Another way to remove pileup is to estimate the average energy density in the detector due to pileup using event-by-event measurements, then subtracting this baseline energy. This is used in the Extrapolated Jet Area with Filter, or EJAF algorithm.
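
    Here is a simplified illustration of the JVF idea (the real ATLAS definition and working point differ in the details): for each jet, compute the fraction of its associated track pT that points back to the hard-scatter vertex, and drop jets dominated by pileup tracks. The jets, tracks, and threshold below are invented.

        # Hypothetical jets: each carries a list of (track pT in GeV, from-hard-scatter flag).
        jets = {
            "jet_A": [(20.0, True), (15.0, True), (5.0, False)],
            "jet_B": [(4.0, True), (18.0, False), (12.0, False)],
        }

        JVF_CUT = 0.5   # invented threshold, for illustration only

        def jet_vertex_fraction(tracks):
            """Fraction of the jet's track pT originating from the hard-scatter vertex."""
            total = sum(pt for pt, _ in tracks)
            from_hard_scatter = sum(pt for pt, is_hs in tracks if is_hs)
            return from_hard_scatter / total if total > 0 else 0.0

        # Jets failing the cut are treated as pileup and left out of the MET calculation.
        kept_jets = [name for name, tracks in jets.items()
                     if jet_vertex_fraction(tracks) > JVF_CUT]
        print(kept_jets)   # -> ['jet_A']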

    Once these algorithms are designed, they are tested in two different types of events. One of these is in W to lepton + neutrino decay signatures. These events should all have some amount of real missing energy from the neutrino, so they can easily reveal how well the reconstruction is working. The second group is Z boson to two lepton events. These events should not have any real missing energy (no neutrinos), so with these events, it is possible to see if and how the algorithm reconstructs fake missing energy. Fake MET often comes from miscalibration or mismeasurement of physics objects in the detector. Figures 3 and 4 show the calorimeter soft MET distributions in these two samples; here it is easy to see the shape difference between real and fake missing energy.

    3
    Figure 3: Distribution of the sum of missing energy in the calorimeter soft term (“real MET”) shown in Z to μμ data and Monte Carlo events.

    4
    Figure 4: Distribution of the sum of missing energy in the calorimeter soft term (“fake MET”) shown in W to eν data and Monte Carlo events.

    This note evaluates the performance of these algorithms in 8 TeV proton-proton collision data collected in 2012. Perhaps the most important metric in MET reconstruction performance is the resolution, since this tells you how well you know your MET value. Intuitively, the resolution depends on the detector resolution of the objects that went into the calculation, and, because of pileup, it gets worse as the number of vertices gets larger. The resolution is technically defined as the RMS of the combined distribution of MET in the x and y directions, covering the full transverse plane of the detector. Figure 5 shows the resolution as a function of the number of vertices in Z to μμ data for several reconstruction algorithms. Here you can see that the TST algorithm has a very small dependence on the number of vertices, implying good stability of the resolution with pileup.
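
    Concretely, the resolution in a given pileup bin can be computed as the RMS of the pooled MET x- and y-components. Here is a minimal sketch with invented numbers, just to show the recipe:

        import numpy as np

        # Hypothetical per-event values: number of reconstructed vertices and the
        # x/y components of the reconstructed MET (in GeV). All invented.
        n_vertices = np.array([5, 5, 5, 12, 12, 12])
        met_x      = np.array([ 3.0, -4.0,  6.0,  9.0, -11.0, 13.0])
        met_y      = np.array([-2.0,  5.0, -7.0, 10.0,  12.0, -8.0])

        def resolution(mask):
            """RMS of the combined MET_x and MET_y distribution for the selected events."""
            combined = np.concatenate([met_x[mask], met_y[mask]])
            return np.sqrt(np.mean(combined ** 2))

        # Resolution in two pileup bins: it degrades as the number of vertices grows.
        for npv in (5, 12):
            print(npv, "vertices ->", round(float(resolution(n_vertices == npv)), 1), "GeV")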

    5
    Figure 5: Resolution obtained from the combined distribution of MET(x) and MET(y) for five algorithms as a function of NPV in 0-jet Z to μμ data.

    Another important quantity to measure is the angular resolution, which is important in the reconstruction of kinematic variables such as the transverse mass of the W. It can be measured in W to μν simulation by comparing the direction of the MET, as reconstructed by the algorithm, to the direction of the true MET. The resolution is then defined as the RMS of the distribution of the phi difference between these two vectors. Figure 6 shows the angular resolution of the same five algorithms as a function of the true missing transverse energy. Note the feature between 40 and 60 GeV, where there is a transition region into events with high pT calibrated jets. Again, the TST algorithm has the best angular resolution for this topology across the entire range of true missing energy.
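
    A minimal sketch of that measurement with invented angles; the only subtlety is wrapping the azimuthal difference into (-π, π] before taking the RMS:

        import numpy as np

        # Invented reconstructed and true MET directions (in radians) for a few events.
        phi_reco = np.array([0.10, 2.95, -3.05, 1.20])
        phi_true = np.array([0.00, 3.10,  3.10, 1.00])

        # Wrap the difference so that, e.g., -3.05 and +3.10 count as nearly aligned.
        dphi = np.arctan2(np.sin(phi_reco - phi_true), np.cos(phi_reco - phi_true))

        angular_resolution = np.sqrt(np.mean(dphi ** 2))   # RMS of the delta-phi distribution
        print(round(float(angular_resolution), 3), "rad")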

    6
    Figure 6: Resolution of ΔΦ(reco MET, true MET) for 0 jet W to μν Monte Carlo.

    As the High-Luminosity LHC looms larger and larger, the issue of MET reconstruction will become a hot topic in the ATLAS collaboration. In particular, the HL-LHC will be a very high pileup environment, and many new pileup subtraction studies are underway. Additionally, there is no lack of exciting theories predicting new particles in Run 3 that are invisible to the detector. As long as these hypothetical invisible particles are being discussed, the MET teams will be working hard to catch them, so we can safely expect some innovation in these methods in the next few years.

    See the full article here .


     