Tagged: particlebites

  • richardmitnick 9:37 pm on May 5, 2018
    Tags: "blinding", particlebites

    From particlebites: “Going Rogue: The Search for Anything (and Everything) with ATLAS” 


    From particlebites

    May 5, 2018
    Julia Gonski

    Title: “A model-independent general search for new phenomena with the ATLAS detector at √s=13 TeV”

    Author: The ATLAS Collaboration

    Reference: ATLAS-CONF-2017-001

    CERN/ATLAS detector

    When a single experimental collaboration has a few thousand contributors (and even more opinions), there are a lot of rules. These rules dictate everything from how you get authorship rights to how you get chosen to give a conference talk. In fact, this rulebook is so thorough that it could be the topic of a whole other post. But for now, I want to focus on one rule in particular, a rule that has only been around for a few decades in particle physics but is considered one of the most important practices of good science: blinding.

    In brief, blinding is the notion that it’s experimentally compromising for a scientist to look at the data before finalizing the analysis. As much as we like to think of ourselves as perfectly objective observers, the truth is, when we really really want a particular result (let’s say a SUSY discovery), that desire can bias our work. For instance, imagine you were looking at actual collision data while you were designing a signal region. You might unconsciously craft your selection in such a way to force an excess of data over background prediction. To avoid such human influences, particle physics experiments “blind” their analyses while they are under construction, and only look at the data once everything else is in place and validated.
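    As a toy illustration of what blinding looks like in practice (the dataset, mass window, and cuts below are invented, not from any ATLAS analysis): while the analysis is under construction, events falling in the signal region are simply hidden from the analyzer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "data": reconstructed invariant masses in GeV. The dataset, the
# window, and the cuts are all made up for illustration.
masses = rng.exponential(scale=300.0, size=10_000)

SIGNAL_REGION = (1000.0, 1500.0)  # hypothetical window still under design

def select(masses, blinded=True):
    """Return the events an analyzer is allowed to look at."""
    in_sr = (masses > SIGNAL_REGION[0]) & (masses < SIGNAL_REGION[1])
    if blinded:
        return masses[~in_sr]   # sidebands only while the analysis is built
    return masses               # full dataset, opened only after validation

sidebands = select(masses, blinded=True)
```

    Selections and background estimates are tuned on the sidebands alone; only once everything is frozen and validated is `blinded=False` ever called.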

    Figure 1: "Blind analysis: Hide results to seek the truth", R. MacCoun & S. Perlmutter for Nature.com

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    What is ParticleBites?

    ParticleBites is an online particle physics journal club written by graduate students and postdocs. Each post presents an interesting paper in a brief format that is accessible to undergraduate students in the physical sciences who are interested in active research.

    The papers are accessible on the arXiv preprint server. Most of our posts are based on papers from hep-ph (high energy phenomenology) and hep-ex (high energy experiment).

    Why read ParticleBites?

    Reading a technical paper from an unfamiliar subfield is intimidating. It may not be obvious how the techniques used by the researchers really work or what role the new research plays in answering the bigger questions motivating that field, not to mention the obscure jargon! For most people, it takes years for scientific papers to become meaningful.

    Our goal is to solve this problem, one paper at a time. With each brief ParticleBite, you should not only learn about one interesting piece of current work, but also get a peek at the broader picture of research in particle physics.

    Who writes ParticleBites?

    ParticleBites is written and edited by graduate students and postdocs working in high energy physics. Feel free to contact us if you’re interested in applying to write for ParticleBites.

    ParticleBites was founded in 2013 by Flip Tanedo following the Communicating Science (ComSciCon) 2013 workshop.

    Flip Tanedo, UCI Chancellor's ADVANCE postdoctoral scholar in theoretical physics. As of July 2016, I will be an assistant professor of physics at the University of California, Riverside.

    It is now organized and directed by Flip and Julia Gonski, with ongoing guidance from Nathan Sanders.

     
  • richardmitnick 9:20 am on October 20, 2016
    Tags: Missing transverse energy (MET), particlebites, What Happens When Energy Goes Missing?

    From particlebites: “What Happens When Energy Goes Missing?” 


    particlebites

    October 11, 2016
    Julia Gonski

    Article: Performance of algorithms that reconstruct missing transverse momentum in √s = 8 TeV proton-proton collisions in the ATLAS detector
    Authors: The ATLAS Collaboration
    Reference: arXiv:1609.09324

    CERN/ATLAS detector

    The ATLAS experiment recently released a note detailing the nature and performance of algorithms designed to calculate what is perhaps the most difficult quantity in any LHC event: missing transverse energy.

    Figure 1: LHC momentum conservation.

    Figure 2: ATLAS event display showing MET balancing two jets.

    Missing transverse energy (MET) is so difficult because, by its very nature, it is missing: it is never directly observed in the detector. So where does this missing energy come from, and why do we even need to reconstruct it?

    The LHC accelerates protons towards one another on the same axis, so that they collide head on.


    Therefore, the incoming partons have net momentum along the direction of the beamline, but no net momentum in the transverse direction (see Figure 1). MET is then defined as the negative vector sum of the transverse momenta of all recorded particles. Any nonzero MET indicates a particle that escaped the detector. This escaping particle could be a regular Standard Model neutrino, or something much more exotic, such as the lightest supersymmetric particle or a dark matter candidate.
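    The definition above is easy to sketch in a few lines. This toy example (the object list is invented) sums the visible transverse momentum components of one event and takes MET as minus that vector sum:

```python
import numpy as np

# Toy visible objects in one event: (pT [GeV], phi [rad]); values invented.
objects = [(120.0, 0.3), (80.0, 2.9), (45.0, -1.2)]

# Sum the visible transverse momentum components...
px = sum(pt * np.cos(phi) for pt, phi in objects)
py = sum(pt * np.sin(phi) for pt, phi in objects)

# ...then MET is minus that vector sum, so the event balances in the
# transverse plane by construction.
met_x, met_y = -px, -py
met = float(np.hypot(met_x, met_y))
met_phi = float(np.arctan2(met_y, met_x))
```

    By construction, adding the MET vector back to the visible objects restores transverse momentum balance.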

    Figure 2 shows an event display where the calculated MET balances the visible objects in the detector. In this case, these visible objects are jets, but they could also be muons, photons, electrons, or taus. This constitutes the "hard term" in the MET calculation. Often there are also contributions of energy in the detector that are not associated with a particular physics object, but may still be necessary for an accurate measurement of MET. This momentum is known as the "soft term".

    In the course of looking at all the energy in the detector for a given event, inevitably some pileup will sneak in. The pileup could be contributions from additional proton-proton collisions in the same bunch crossing, or from scattering of protons upstream of the interaction point. Either way, the MET reconstruction algorithms have to take this into account. Adding up energy from pileup could lead to more MET than was actually in the collision, which could mean the difference between an observation of dark matter and just another Standard Model event.

    One of the ways to suppress pileup is to use a quantity called jet vertex fraction (JVF), which uses the additional information of tracks associated with jets. If the tracks do not point back to the initial hard scatter, they can be tagged as pileup and not included in the calculation. This is the idea behind the Track Soft Term (TST) algorithm. Another way to remove pileup is to estimate the average energy density in the detector due to pileup using event-by-event measurements, then subtract this baseline energy. This is used in the Extrapolated Jet Area with Filter, or EJAF, algorithm.
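    The jet-vertex-fraction idea reduces to a simple ratio. Here is a toy version (the convention for trackless jets and the numbers are assumptions, not taken from the ATLAS note):

```python
def jet_vertex_fraction(track_pts, from_hard_scatter):
    """Toy jet vertex fraction: scalar-sum pT of the jet's tracks matched to
    the hard-scatter vertex, divided by the scalar-sum pT of all its tracks.
    (Conventions, e.g. returning -1 for trackless jets, are assumptions.)"""
    total = sum(track_pts)
    if total == 0.0:
        return -1.0
    matched = sum(pt for pt, ok in zip(track_pts, from_hard_scatter) if ok)
    return matched / total

# Two of three tracks point back to the hard scatter:
jvf = jet_vertex_fraction([20.0, 15.0, 5.0], [True, True, False])
# 35/40 = 0.875 -> consistent with a hard-scatter jet rather than pileup
```

    A jet with JVF near 1 is dominated by hard-scatter tracks; one near 0 is likely pileup and can be dropped from the calculation.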

    Once these algorithms are designed, they are tested in two different types of events. One of these is in W to lepton + neutrino decay signatures. These events should all have some amount of real missing energy from the neutrino, so they can easily reveal how well the reconstruction is working. The second group is Z boson to two lepton events. These events should not have any real missing energy (no neutrinos), so with these events, it is possible to see if and how the algorithm reconstructs fake missing energy. Fake MET often comes from miscalibration or mismeasurement of physics objects in the detector. Figures 3 and 4 show the calorimeter soft MET distributions in these two samples; here it is easy to see the shape difference between real and fake missing energy.

    Figure 3: Distribution of the sum of missing energy in the calorimeter soft term ("fake MET") shown in Z to μμ data and Monte Carlo events.

    Figure 4: Distribution of the sum of missing energy in the calorimeter soft term ("real MET") shown in W to eν data and Monte Carlo events.

    This note evaluates the performance of these algorithms in 8 TeV proton-proton collision data collected in 2012. Perhaps the most important metric in MET reconstruction performance is the resolution, since this tells you how well you know your MET value. Intuitively, the resolution depends on the detector resolution of the objects that went into the calculation, and because of pileup, it gets worse as the number of vertices gets larger. The resolution is technically defined as the RMS of the combined distribution of MET in the x and y directions, covering the full transverse plane of the detector. Figure 5 shows the resolution as a function of the number of vertices in Z to μμ data for several reconstruction algorithms. Here you can see that the TST algorithm has a very small dependence on the number of vertices, implying good stability of the resolution with pileup.
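    That RMS definition is straightforward to compute. In this sketch (the 12 GeV spread is an invented number, not from the note), a sample of Z → μμ-like events with no true MET gives a resolution close to the spread we put in:

```python
import numpy as np

def met_resolution(met_x, met_y):
    """RMS of the combined distribution of the MET x- and y-components."""
    combined = np.concatenate([np.asarray(met_x), np.asarray(met_y)])
    return float(np.sqrt(np.mean(combined ** 2)))

# Toy Z->mumu-like sample with no true MET: both components scatter about
# zero with a 12 GeV spread (an invented number).
rng = np.random.default_rng(1)
res = met_resolution(rng.normal(0.0, 12.0, 5000), rng.normal(0.0, 12.0, 5000))
```

    Binning events by the number of reconstructed vertices and repeating this calculation per bin reproduces curves like those in Figure 5.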

    Figure 5: Resolution obtained from the combined distribution of MET(x) and MET(y) for five algorithms as a function of NPV in 0-jet Z to μμ data.

    Another important quantity to measure is the angular resolution, which is important in the reconstruction of kinematic variables such as the transverse mass of the W. It can be measured in W to μν simulation by comparing the direction of the MET, as reconstructed by the algorithm, to the direction of the true MET. The resolution is then defined as the RMS of the distribution of the phi difference between these two vectors. Figure 6 shows the angular resolution of the same five algorithms as a function of the true missing transverse energy. Note the feature between 40 and 60 GeV, where there is a transition region into events with high pT calibrated jets. Again, the TST algorithm has the best angular resolution for this topology across the entire range of true missing energy.
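    The only subtlety in the angular resolution is that the phi difference must be wrapped into a single period before taking the RMS. A minimal sketch:

```python
import numpy as np

def angular_resolution(phi_reco, phi_true):
    """RMS of the phi difference between reconstructed and true MET,
    wrapped into (-pi, pi] so that e.g. +3.1 and -3.1 count as nearly
    aligned rather than a full turn apart."""
    dphi = np.asarray(phi_reco) - np.asarray(phi_true)
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi
    return float(np.sqrt(np.mean(dphi ** 2)))
```

    Without the wrapping step, two nearly aligned vectors on opposite sides of the ±π boundary would contribute a spurious difference of almost 2π.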

    Figure 6: Resolution of ΔΦ(reco MET, true MET) for 0-jet W to μν Monte Carlo.

    As the High Luminosity LHC (HL-LHC) looms larger and larger, the issue of MET reconstruction will become a hot topic in the ATLAS collaboration. In particular, the HL-LHC will be a very high pileup environment, and many new pileup subtraction studies are underway. Additionally, there is no lack of exciting theories predicting new particles in Run 3 that are invisible to the detector. As long as these hypothetical invisible particles are being discussed, the MET teams will be working hard to catch them, so we can safely expect some innovation in these methods in the next few years.

    See the full article here.


     
  • richardmitnick 8:44 am on September 5, 2016
    Tags: 6-dimensional spacetime, particlebites

    From particlebites: “Gravity in the Next Dimension: Micro Black Holes at ATLAS” 


    particlebites

    August 31, 2016
    Savannah Thais

    Article: Search for TeV-scale gravity signatures in high-mass final states with leptons and jets with the ATLAS detector at √s = 13 TeV
    Authors: The ATLAS Collaboration
    Reference: arXiv:1606.02265 [hep-ex]

    CERN/ATLAS detector

    What would gravity look like if we lived in a 6-dimensional space-time? Models of TeV-scale gravity theorize that the fundamental scale of gravity, MD, is much lower than what's measured here in our normal, 4-dimensional space-time. If true, this could explain the large difference between the scale of electroweak interactions (order of 100 GeV) and gravity (order of 10¹⁶ GeV), an important open question in particle physics. There are several theoretical models to describe these extra dimensions, and they all predict interesting new signatures in the form of non-perturbative gravitational states. One of the coolest examples of such a state is the microscopic black hole. Conveniently, this particular signature could be produced and measured at the LHC!

    Sounds cool, but how do you actually look for microscopic black holes with a proton-proton collider? Because we don't have a full theory of quantum gravity (yet), ATLAS researchers made predictions for the production cross-sections of these black holes using semi-classical approximations that are valid when the black hole mass is above MD. This production cross-section is also expected to be dramatically larger when the energy scale of the interactions (pp collisions) surpasses MD. We can't directly detect black holes with ATLAS, but many of the decay channels of these black holes include leptons in the final state, which IS something that can be measured at ATLAS! This particular ATLAS search looked for final states with at least 3 high transverse momentum (pT) jets, at least one of which must be a leptonic (electron or muon) jet (the others can be hadronic or leptonic). The sum of the transverse momenta is used as a discriminating variable, since the signal is expected to appear only at high pT.

    This search used the full 3.2 fb⁻¹ of 13 TeV data collected by ATLAS in 2015 to search for this signal above relevant Standard Model backgrounds (Z+jets, W+jets, and ttbar, all of which produce similar jet final states). The results are shown in Figure 1 (electron and muon channels are presented separately). The backgrounds are shown as colored histograms, the data as black points, and two microscopic black hole models as green and blue lines. There is a slight excess in the 3 TeV region in the electron channel, which corresponds to a p-value of only 1% when tested against the background-only hypothesis. Unfortunately, this isn't enough evidence to indicate new physics yet, but it's an exciting result nonetheless! This analysis was also used to improve exclusion limits on individual extra-dimensional gravity models, as shown in Figure 2. All limits were much stronger than those set in Run 1.
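    To see why a 1% p-value is intriguing but not decisive, it helps to convert it to the Gaussian significance ("sigma") language physicists use. A quick sketch using only the Python standard library:

```python
from statistics import NormalDist

def p_to_sigma(p):
    """Convert a one-sided p-value into a Gaussian significance (z-score)."""
    return NormalDist().inv_cdf(1.0 - p)

# The ~1% p-value quoted above corresponds to roughly 2.3 sigma: an
# intriguing excess, but far below the conventional 5-sigma discovery bar
# (which corresponds to a one-sided p-value of about 2.9e-7).
sigma = p_to_sigma(0.01)
```

    This also ignores the "look-elsewhere effect": scanning many mass bins makes a local 2.3σ fluctuation somewhere quite likely.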

    Figure 1: Momentum distributions in the electron (a) and muon (b) channels.

    Figure 2: Exclusion limits in the Mth, MD plane for models with various numbers of extra dimensions.

    So: no evidence of microscopic black holes or extra-dimensional gravity at the LHC yet, but there is a promising excess and Run 2 has only just begun. Since publication, ATLAS has collected another 10 fb⁻¹ of √s = 13 TeV data that has yet to be analyzed. These results could also be used to constrain other Beyond the Standard Model searches at the TeV scale that have similar high-pT leptonic jet final states, which would give us more information about what can and can't exist outside of the Standard Model. There is certainly more to be learned from this search!

    See the full article here.


     
  • richardmitnick 10:16 am on September 4, 2016
    Tags: particlebites

    From particlebites: “The CMB sheds light on galaxy clusters: Observing the kSZ signal with ACT and BOSS” 


    particlebites

    August 17, 2016 [Just brought to social media by astrobites.]
    Eve Vavagiakis

    Article: Detection of the pairwise kinematic Sunyaev-Zel’dovich effect with BOSS DR11 and the Atacama Cosmology Telescope
    Authors: F. De Bernardis, S. Aiola, E. M. Vavagiakis, M. D. Niemack, N. Battaglia, and the ACT Collaboration
    Reference: arXiv:1607.02139

    Editor’s note: this post is written by one of the students involved in the published result.

    Like X-rays shining through your body can inform you about your health, the cosmic microwave background (CMB) shining through galaxy clusters can tell us about the universe we live in.

    Cosmic Microwave Background, per ESA/Planck

    When light from the CMB is distorted by the high energy electrons present in galaxy clusters, it’s called the Sunyaev-Zel’dovich effect. A new 4.1σ measurement of the kinematic Sunyaev-Zel’dovich (kSZ) signal has been made from the most recent Atacama Cosmology Telescope (ACT) cosmic microwave background (CMB) maps and galaxy data from the Baryon Oscillation Spectroscopic Survey (BOSS).

    Princeton ACT

    With steps forward like this one, the kinematic Sunyaev-Zel’dovich signal could become a probe of cosmology, astrophysics and particle physics alike.

    The Kinematic Sunyaev-Zel’dovich Effect

    It rolls right off the tongue, but what exactly is the kinematic Sunyaev-Zel’dovich signal? Galaxy clusters distort the cosmic microwave background before it reaches Earth, so we can learn about these clusters by looking at these CMB distortions. In our X-ray metaphor, the map of the CMB is the image of the X-ray of your arm, and the galaxy clusters are the bones. Galaxy clusters are the largest gravitationally bound structures we can observe, so they serve as important tools to learn more about our universe. In its essence, the Sunyaev-Zel’dovich effect is inverse-Compton scattering of cosmic microwave background photons off of the gas in these galaxy clusters, whereby the photons gain a “kick” in energy by interacting with the high energy electrons present in the clusters.

    The Sunyaev-Zel’dovich effect can be divided up into two categories: thermal and kinematic. The thermal Sunyaev-Zel’dovich (tSZ) effect is the spectral distortion of the cosmic microwave background in a characteristic manner due to the photons gaining, on average, energy from the hot (~10⁷–10⁸ K) gas of the galaxy clusters. The kinematic (or kinetic) Sunyaev-Zel’dovich (kSZ) effect is a second-order effect—about a factor of 10 smaller than the tSZ effect—that is caused by the motion of galaxy clusters with respect to the cosmic microwave background rest frame. If the CMB photons pass through galaxy clusters that are moving, they are Doppler shifted due to the cluster’s peculiar velocity (the velocity that cannot be explained by Hubble’s law, which states that objects recede from us at a speed proportional to their distance). The kinematic Sunyaev-Zel’dovich effect is the only known way to directly measure the peculiar velocities of objects at cosmological distances, and is thus a valuable source of information for cosmology. It allows us to probe megaparsec and gigaparsec scales – that’s around 30,000 times the diameter of the Milky Way!
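    To put a rough formula to this (a standard schematic expression, quoted from textbook treatments rather than from the paper itself): for a cluster with optical depth τ moving with peculiar velocity v along the line-of-sight direction n̂, the kSZ effect shifts the CMB temperature by

```latex
\frac{\Delta T_{\mathrm{kSZ}}}{T_{\mathrm{CMB}}} \;\simeq\; -\,\tau\,\frac{\mathbf{v}_{\mathrm{pec}}\cdot\hat{n}}{c},
```

    so a cluster receding along the line of sight imprints a small decrement, and one approaching us a small increment. With τ of order a few × 10⁻³ and v of order a few hundred km/s, ΔT/T is of order 10⁻⁶, which is why large cluster samples and sensitive maps are needed.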

    A schematic of the Sunyaev-Zel’dovich effect resulting in higher energy (or blue shifted) photons of the cosmic microwave background (CMB) when viewed through the hot gas present in galaxy clusters. Source: UChicago Astronomy.

    Measuring the kSZ Effect

    To make the measurement of the kinematic Sunyaev-Zel’dovich signal, the Atacama Cosmology Telescope (ACT) collaboration used a combination of cosmic microwave background maps from two years of observations by ACT. The CMB map used for the analysis overlapped with ~68000 galaxy sources from the Large Scale Structure (LSS) DR11 catalog of the Baryon Oscillation Spectroscopic Survey (BOSS). The catalog lists the coordinate positions of galaxies along with some of their properties. The most luminous of these galaxies were assumed to be located at the centers of galaxy clusters, so temperature signals from the CMB map were taken at the coordinates of these galaxy sources in order to extract the Sunyaev-Zel’dovich signal.

    While the smallness of the kSZ signal with respect to the tSZ signal and the noise level in current CMB maps poses an analysis challenge, there exist several approaches to extracting the kSZ signal. To make their measurement, the ACT collaboration employed a pairwise statistic. “Pairwise” refers to the momentum between pairs of galaxy clusters, and “statistic” indicates that a large sample is used to rule out the influence of unwanted effects.

    Here’s the approach: nearby galaxy clusters move towards each other on average, due to gravity. We can’t easily measure the three-dimensional momentum of clusters, but the average pairwise momentum can be estimated by using the line of sight component of the momentum, along with other information such as redshift and angular separations between clusters. The line of sight momentum is directly proportional to the measured kSZ signal: the microwave temperature fluctuation which is measured from the CMB map. We want to know if we’re measuring the kSZ signal when we look in the direction of galaxy clusters in the CMB map. Using the observed CMB temperature to find the line of sight momenta of galaxy clusters, we can estimate the mean pairwise momentum as a function of cluster separation distance, and check to see if we find that nearby galaxies are indeed falling towards each other. If so, we know that we’re observing the kSZ effect in action in the CMB map.
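    The core of the approach can be sketched in code. This toy version collapses the geometry of each pair into a single weight (the real weights, built from redshifts and angular positions, are not reproduced from the paper):

```python
def mean_pairwise_momentum(T, pairs, c):
    """Toy version of the standard pairwise kSZ estimator for one bin of
    cluster separation:
        p_hat = - sum_{i<j} (T_i - T_j) * c_ij / sum_{i<j} c_ij**2
    T[i] is the kSZ temperature at cluster i's position in the CMB map, and
    c_ij is a geometric weight for the pair (collapsed to a plain number
    here for illustration)."""
    num = sum((T[i] - T[j]) * cij for (i, j), cij in zip(pairs, c))
    den = sum(cij * cij for cij in c)
    return -num / den

# Two clusters falling toward each other: the approaching one is blueshifted
# (T > 0), the receding one redshifted (T < 0), giving a negative p_hat.
p_hat = mean_pairwise_momentum([1.0, -1.0], [(0, 1)], [1.0])
```

    Repeating this over many pairs in bins of comoving separation yields the curve shown in the figure below: a nonzero mean pairwise momentum at small separations is the signature of the kSZ effect.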

    For the measurement quoted in their paper, the ACT collaboration finds the average pairwise momentum as a function of galaxy cluster separation, and explores a variety of error determinations and sources of systematic error. The most conservative errors based on simulations give signal-to-noise estimates that vary between 3.6 and 4.1.

    The mean pairwise momentum estimator and best fit model for a selection of 20000 objects from the DR11 Large Scale Structure catalog, plotted as a function of comoving separation. The dashed line is the linear model, and the solid line is the model prediction including nonlinear redshift space corrections. The best fit provides a 4.1σ evidence of the kSZ signal in the ACTPol-ACT CMB map. Source: arXiv:1607.02139.

    The ACT and BOSS results are an improvement on the 2012 ACT detection, and are comparable with results from the South Pole Telescope (SPT) collaboration that use galaxies from the Dark Energy Survey. The ACT and BOSS measurement represents a step forward towards improved extraction of kSZ signals from CMB maps. Future surveys such as Advanced ACTPol, SPT-3G, the Simons Observatory, and next-generation CMB experiments will be able to apply the methods discussed here to improved CMB maps in order to achieve strong detections of the kSZ effect. With new data that will enable better measurements of galaxy cluster peculiar velocities, the pairwise kSZ signal will become a powerful probe of our universe in the years to come.

    Implications and Future Experiments

    One interesting consequence for particle physics will be more stringent constraints on the sum of the neutrino masses from the pairwise kinematic Sunyaev-Zel’dovich effect. Upper bounds on the neutrino mass sum from cosmological measurements of large scale structure and the CMB have the potential to determine the neutrino mass hierarchy, one of the next major unknowns of the Standard Model to be resolved, if the mass hierarchy is indeed a “normal hierarchy” with ν3 being the heaviest mass state. If the upper bound of the neutrino mass sum is measured to be less than 0.1 eV, the inverted hierarchy scenario would be ruled out, due to there being a lower limit on the mass sum of ~0.095 eV for an inverted hierarchy and ~0.056 eV for a normal hierarchy.
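    The quoted lower limits follow directly from the measured oscillation mass splittings. This quick check uses approximate global-fit splittings quoted from memory (not from the article), assuming the lightest neutrino is massless:

```python
import math

# Approximate oscillation mass-squared splittings in eV^2 (typical
# global-fit values, assumed here for illustration):
dm21_sq = 7.5e-5          # "solar" splitting
dm31_sq = 2.5e-3          # "atmospheric" splitting

# Minimal mass sums when the lightest state is massless:
sum_normal = math.sqrt(dm21_sq) + math.sqrt(dm31_sq)              # m1 = 0
sum_inverted = math.sqrt(dm31_sq) + math.sqrt(dm31_sq + dm21_sq)  # m3 = 0

# sum_normal comes out near 0.06 eV and sum_inverted near 0.10 eV, in line
# with the ~0.056 eV and ~0.095 eV lower limits quoted above: an upper
# bound below ~0.1 eV would disfavor the inverted hierarchy.
```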

    Forecasts for kSZ measurements in combination with input from Planck predict possible constraints on the neutrino mass sum with a precision of 0.29 eV, 0.22 eV and 0.096 eV for Stage II (ACTPol + BOSS), Stage III (Advanced ACTPol + BOSS) and Stage IV (next generation CMB experiment + DESI) surveys respectively, with the possibility of much improved constraints with optimal conditions. As cosmic microwave background maps are improved and Sunyaev-Zel’dovich analysis methods are developed, we have a lot to look forward to.

    See the full article here.


     
  • richardmitnick 4:05 pm on August 14, 2016
    Tags: particlebites

    From particlebites: “Jets: More than Riff, Tony, and a rumble” 


    particlebites

    July 26, 2016 [Just today in social media.]
    Reggie Bain

    Ubiquitous in the LHC’s ultra-high energy collisions are collimated sprays of particles called jets. The study of jet physics is a rapidly growing field where experimentalists and theorists work together to unravel the complex geometry of the final state particles at LHC experiments. If you’re totally new to the idea of jets…this bite from July 18th, 2016 by Julia Gonski is a nice experimental introduction to the importance of jets. In this bite, we’ll look at the basic ideas of jet physics from a more theoretical perspective. Let’s address a few basic questions:

    1. What is a jet? Jets are highly collimated collections of particles that are frequently observed in detectors. In visualizations of collisions in the ATLAS detector, one can often identify jets by eye.

    A nicely colored visualization of a multi-jet event in the ATLAS detector. Reason #172 that I’m not an experimentalist…actually sifting out useful information from the detector (or even making a graphic like this) is insanely hard.

    Jets are formed in the final state of a collision when a particle showers off radiation in such a way as to form a focused cone of particles. The most commonly studied jets are formed by quarks and gluons that fragment into hadrons like pions, kaons, and sometimes more exotic particles like the J/ψ, Υ, χc, and many others. This process is often referred to as hadronization.

    2. Why do jets exist? Jets are a fundamental prediction of Quantum Field Theories like Quantum Chromodynamics (QCD). One common process studied in field theory textbooks is electron–positron annihilation into a pair of quarks, e⁺e⁻ → qq̄. In order to calculate the cross-section of this process, it turns out that one has to consider the possibility that additional gluons are produced along with the qq̄ pair. Since no detector has infinite resolution, it’s always possible that there are gluons that go unobserved by your detector. This could be because they are incredibly soft (low energy) or because they travel almost exactly collinear to the q or q̄ itself. In this region of momenta, the cross-section gets very large and the process favors the creation of this extra radiation. Since these gluons carry color/anti-color, they begin to hadronize and decay so as to become stable, colorless states. When the q and q̄ have high momenta, the zoo of particles that are formed from the hadronization all have momenta that are clustered around the direction of the original q or q̄ and form a cone shape in the detector…thus a jet is born! The details of exactly how hadronization works are where theory can get a little hazy. At the energy and distance scales where quarks/gluons start to hadronize, perturbation theory breaks down, making many of our usual calculational tools useless. This, of course, makes the realm of hadronization—often referred to as parton fragmentation in the literature—a hot topic in QCD research.
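    The soft and collinear enhancement described above can be written schematically (a leading-order, textbook-level expression, not taken from this post): the probability for a hard quark to emit a gluon of energy E at small angle θ behaves as

```latex
d\sigma \;\sim\; \sigma_0\,\frac{2\alpha_s C_F}{\pi}\,\frac{dE}{E}\,\frac{d\theta}{\theta},
```

    which blows up as E → 0 (soft) or θ → 0 (collinear). These unresolvable emissions are exactly what populate the cone of hadrons we call a jet, and only suitably inclusive, "infrared-safe" observables built from them are calculable in perturbation theory.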

    3. How do we measure/study jets? Now comes the tricky part. As experimentalists will tell you, actually measuring jets can be a messy business. By taking the signatures of the final state particles in an event (i.e. a collision), one can reconstruct a jet using a jet algorithm. One of the first such jet definitions was introduced by George Sterman and Steven Weinberg in 1977. There they defined a jet using two parameters, θ and E, which restricted the angle and energy of particles that are in or out of a jet. Today, we have a variety of jet algorithms that fall into two categories:

    Cone Algorithms — These algorithms identify stable cones of a given angular size. The cones are defined in such a way that adding or removing one or two nearby particles won’t drastically change the cone’s location and energy.
    Recombination Algorithms — These look pairwise at the 4-momenta of all particles in an event and combine them, according to a certain distance metric (there’s a different one for each algorithm), in such a way as to be left with distinct, well-separated jets.
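    The recombination idea can be illustrated with a deliberately simplified anti-kT clusterer. This is a toy sketch for intuition, not the optimized FastJet implementation: real clustering recombines full four-vectors, whereas this toy merges (pt, y, φ) triples by pt-weighted averaging and ignores φ wraparound in the average.

    ```python
    import math

    def antikt_cluster(particles, R=0.4):
        """Toy anti-kT clustering of (pt, y, phi) pseudojets."""
        def dij(a, b):
            dy = a[1] - b[1]
            dphi = abs(a[2] - b[2])
            if dphi > math.pi:
                dphi = 2.0 * math.pi - dphi
            # anti-kT distance: inverse-pt^2 weighting means hard
            # particles "eat" nearby soft ones first
            return min(a[0] ** -2, b[0] ** -2) * (dy ** 2 + dphi ** 2) / R ** 2

        objs = [list(p) for p in particles]
        jets = []
        while objs:
            # beam distance d_iB = 1/pt^2 for each pseudojet
            beam = min((p[0] ** -2, i) for i, p in enumerate(objs))
            pairs = [(dij(objs[i], objs[j]), i, j)
                     for i in range(len(objs)) for j in range(i + 1, len(objs))]
            pair = min(pairs) if pairs else (float("inf"), -1, -1)
            if beam[0] <= pair[0]:
                # nothing closer than the beam: promote to a final jet
                jets.append(objs.pop(beam[1]))
            else:
                _, i, j = pair
                a, b = objs[i], objs[j]
                pt = a[0] + b[0]
                merged = [pt,
                          (a[0] * a[1] + b[0] * b[1]) / pt,
                          (a[0] * a[2] + b[0] * b[2]) / pt]
                objs = [p for k, p in enumerate(objs) if k not in (i, j)]
                objs.append(merged)
        return jets
    ```

    Feeding it one hard particle with a nearby soft companion plus one isolated particle returns two jets, with the collinear pair merged into a single hard jet, exactly the qualitative behavior described above.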

    2
    Figure 2: From Cacciari and Salam’s original paper on the “Anti-kT” jet algorithm (See arXiv:0802.1189). The picture shows the application of 4 different jet algorithms: the kT, Cambridge/Aachen, Seedless-Infrared-Safe Cone, and anti-kT algorithms to a single set of final state particles in an event. You can see how each algorithm reconstructs a slightly different jet structure. These are among the most commonly used clustering algorithms on the market (the anti-kT being, at least in my experience, the most popular).

    4. Why are jets important? On the frontier of high energy particle physics, CERN leads the world’s charge in the search for new physics. From deepening our understanding of the Higgs to observing never before seen particles, projects like ATLAS,

    3
    An illustration of an interesting type of jet substructure observable called “N-subjettiness” from the original paper by Jesse Thaler and Ken van Tilburg (see arXiv:1011.2268). N-subjettiness aims to study how momenta within a jet are distributed by dividing them up into n sub-jets. The diagram on the left shows an example of 2-subjettiness where a jet contains two sub-jets. The diagram on the right shows a jet with 0 sub-jets.

    CMS, and LHCb promise to uncover interesting physics for years to come. As it turns out, a large amount of Standard Model background to these new physics discoveries comes in the form of jets. Understanding the origin and workings of these jets can thus help us in the search for physics beyond the Standard Model.

    Additionally, a number of interesting questions remain about the Standard Model itself. From studying heavy hadron production and decay in pp and heavy-ion collisions to providing precision measurements of the strong coupling, jet physics has a wide range of applicability and relevance to Standard Model problems. In recent years, the physics of jet substructure, which studies the distribution of particle momenta within a jet, has also seen increased interest. By studying the geometry of jets, a number of clever observables have been developed that can help us understand what particles jets come from and how they are formed. Jet substructure studies will be the subject of many future bites!

    Going forward…With any luck, this should serve as a brief outline to the uninitiated on the basics of jet physics. In a world increasingly filled with bigger, faster, and stronger colliders, jets will continue to play a major role in particle phenomenology. In upcoming bites, I’ll discuss the wealth of new and exciting results coming from jet physics research. We’ll examine questions like:

    How do theoretical physicists tackle problems in jet physics?
    How does the process of hadronization/fragmentation of quarks and gluons really work?
    Can jets be used to answer long outstanding problems in the Standard Model?

    I’ll also bite about how physicists use theoretical smart bombs called “effective field theories” to approach these often nasty theoretical calculations. But more on that later…

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    What is ParticleBites?

    ParticleBites is an online particle physics journal club written by graduate students and postdocs. Each post presents an interesting paper in a brief format that is accessible to undergraduate students in the physical sciences who are interested in active research.

    The papers are accessible on the arXiv preprint server. Most of our posts are based on papers from hep-ph (high energy phenomenology) and hep-ex (high energy experiment).

    Why read ParticleBites?

    Reading a technical paper from an unfamiliar subfield is intimidating. It may not be obvious how the techniques used by the researchers really work or what role the new research plays in answering the bigger questions motivating that field, not to mention the obscure jargon! For most people, it takes years for scientific papers to become meaningful.

    Our goal is to solve this problem, one paper at a time. With each brief ParticleBite, you should not only learn about one interesting piece of current work, but also get a peek at the broader picture of research in particle physics.

    Who writes ParticleBites?

    ParticleBites is written and edited by graduate students and postdocs working in high energy physics. Feel free to contact us if you’re interested in applying to write for ParticleBites.

    ParticleBites was founded in 2013 by Flip Tanedo following the Communicating Science (ComSciCon) 2013 workshop.

    2
    Flip Tanedo UCI Chancellor’s ADVANCE postdoctoral scholar in theoretical physics. As of July 2016, I will be an assistant professor of physics at the University of California, Riverside

    It is now organized and directed by Flip and Julia Gonski, with ongoing guidance from Nathan Sanders.

     
  • richardmitnick 10:21 am on July 14, 2016 Permalink | Reply
    Tags: , , , , , particlebites   

    From particlebites: “The dawn of multi-messenger astronomy: using KamLAND to study gravitational wave events GW150914 and GW151226” 

    particlebites bloc

    particlebites

    July 13, 2016
    Eve Vavagiakis

    Article: Search for electron antineutrinos associated with gravitational wave events GW150914 and GW151226 using KamLAND
    Authors: KamLAND Collaboration
    Reference: arXiv:1606.07155

    After the chirp heard ‘round the world, the search is on for coincident astrophysical particle events to provide insight into the source and nature of the era-defining gravitational wave events detected by the LIGO Scientific Collaboration in late 2015.

    LSC LIGO Scientific Collaboration

    By combining information from gravitational wave (GW) events with the detection of astrophysical neutrinos and electromagnetic signatures such as gamma-ray bursts, physicists and astronomers are poised to draw back the curtain on the dynamics of astrophysical phenomena, and we’re surely in for some surprises.

    The first recorded gravitational wave event, GW150914, was likely a merger of two black holes which took place more than one billion light years from the Earth. The event’s name marks the day it was observed by the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO), September 14th, 2015.

    1
    Two black holes spiral in towards one another and merge to emit a burst of gravitational waves that Advanced LIGO can detect. Source: APS Physics.

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation
    Caltech/MIT Advanced aLigo Hanford, WA, USA installation
    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA
    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    LIGO detections are named “GW” for “gravitational wave,” followed by the observation date in YYMMDD format. The second event, GW151226 (December 26th, 2015) was likely another merger of two black holes, having 8 and 14 times the mass of the sun, taking place 1.4 billion light years away from Earth.

    The following computer simulation from LIGO depicts what the collision of two black holes would look like if we could get close enough to the merger. It was created by solving equations from Albert Einstein’s general theory of relativity using the LIGO data. (Source: LIGO Lab Caltech : MIT).

    A third gravitational wave event candidate, LVT151012, a possible black hole merger which occurred on October 12th, 2015, did not reach the same detection significance as the aforementioned events, but still has a >50% chance of astrophysical origin. LIGO candidates are named differently than detections: the names start with “LVT” for “LIGO-Virgo Trigger,” followed by the observation date in the same YYMMDD format. The different name indicates that the event was not significant enough to be called a gravitational wave.
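    The naming scheme is mechanical enough to decode in a few lines. A small illustrative parser (the function name and return format are my own, not anything LIGO provides; it assumes 21st-century dates, which holds for all LIGO events):

    ```python
    from datetime import date

    def parse_ligo_name(name):
        """Decode an event label like 'GW150914' or 'LVT151012':
        prefix -> event class, remaining digits -> UTC date (YYMMDD)."""
        prefix = "LVT" if name.startswith("LVT") else "GW"
        kind = ("candidate (LIGO-Virgo Trigger)" if prefix == "LVT"
                else "confirmed gravitational wave")
        stamp = name[len(prefix):]
        obs = date(2000 + int(stamp[:2]), int(stamp[2:4]), int(stamp[4:6]))
        return kind, obs
    ```

    So `parse_ligo_name("GW150914")` yields a confirmed detection dated September 14th, 2015, while `parse_ligo_name("LVT151012")` is flagged as a candidate trigger.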

    Observations from other scientific collaborations can search for particles associated with these gravitational waves. The combined information from the gravitational wave and particle detections could identify the origin of these gravitational wave events. For example, some violent astrophysical phenomena emit not only gravitational waves, but also high-energy neutrinos. Conversely, there is currently no known mechanism for the production of either neutrinos or electromagnetic waves in a black hole merger.

    Black holes with rapidly accreting disks can be the origin of gamma-ray bursts and neutrino signals, but these disks are not expected to be present during mergers like the ones detected by LIGO. For this reason, it was surprising when the Fermi Gamma-ray Space Telescope reported a coincident gamma-ray burst occurring 0.4 seconds after the September GW event with a false alarm probability of 1 in 455. Although there is some debate in the community about whether or not this observation is to be believed, the observation motivates a multi-messenger analysis including the hunt for associated astrophysical neutrinos at all energies.

    Could a neutrino experiment like KamLAND find low energy antineutrino events coincident with the GW events, even when higher energy searches by IceCube and ANTARES did not?

    3
    Schematic diagram of the KamLAND detector. Source: hep-ex/0212021v1

    KamLAND, the Kamioka Liquid scintillator Anti-Neutrino Detector, is located under Mt. Ikenoyama, Japan, buried beneath the equivalent of 2,700 meters of water. It consists of an 18 meter diameter stainless steel sphere, the inside of which is covered with photomultiplier tubes, surrounding an EVOH/nylon balloon enclosed by pure mineral oil. Inside the balloon resides 1 kton of highly purified liquid scintillator. Outside the stainless steel sphere is a cylindrical 3.2 kton water-Cherenkov detector that provides shielding and enables cosmic ray muon identification.

    KamLAND is optimized to search for ~MeV neutrinos and antineutrinos. The detection of the gamma-ray burst by the Fermi telescope suggests that the detected black hole merger might have retained its accretion disk, and the spectrum of accretion disk neutrinos around a single black hole is expected to peak around 10 MeV. KamLAND therefore searched for correlations between the LIGO GW events and ~10 MeV electron antineutrino events occurring within a 500 second window of the merger events. Researchers focused on the detection of electron antineutrinos through the inverse beta decay reaction.

    No events were found within the target window of any gravitational wave event, and any adjacent event was consistent with background. KamLAND researchers used this information to determine a monochromatic fluence (time integrated flux) upper limit, as well as an upper limit on source luminosity for each gravitational wave event, which places a bound on the total energy released as low energy neutrinos during the merger events and candidate event. The lack of detected concurrent inverse beta decay events supports the conclusion that GW150914 was a black hole merger, and not another astrophysical event such as a core-collapse supernova.
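    To see where a fluence limit like this comes from: observing zero events with negligible expected background gives the textbook 90% confidence-level Poisson upper limit of about 2.3 expected signal events, which converts to a monochromatic fluence bound once you divide by the number of target protons times the detection cross section. A sketch (the target count and inverse-beta-decay cross section in the usage comment are placeholder magnitudes, not KamLAND's actual numbers):

    ```python
    import math

    def poisson_upper_limit(n_obs=0, cl=0.90):
        """Classical Poisson upper limit on the mean for n_obs observed
        events with zero expected background. For n_obs = 0 this reduces
        to -ln(1 - CL), i.e. about 2.30 at 90% CL."""
        mu = 0.0
        while True:
            # cumulative probability of seeing <= n_obs events given mean mu
            p = sum(math.exp(-mu) * mu ** k / math.factorial(k)
                    for k in range(n_obs + 1))
            if p <= 1.0 - cl:
                return mu
            mu += 1e-4

    def fluence_limit(n_up, n_targets, xsec_cm2):
        """Monochromatic fluence bound F < N_up / (N_targets * sigma),
        in antineutrinos per cm^2."""
        return n_up / (n_targets * xsec_cm2)

    # e.g. fluence_limit(poisson_upper_limit(0), 1e32, 1e-43)
    # with placeholder target/cross-section magnitudes
    ```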

    More information would need to be obtained to explain the gamma ray burst observed by the Fermi telescope, and work to improve future measurements is ongoing. Large uncertainties in the origin region of gamma ray bursts observed by the Fermi telescope will be reduced, and the localization of GW events will be improved, most drastically so by the addition of a third LIGO detector (LIGO India).

    As Advanced LIGO continues its operation, there will likely be many more chances for KamLAND and other neutrino experiments to search for coincidence neutrinos. Multi-messenger astronomy has only just begun to shed light on the nature of black holes, supernovae, mergers, and other exciting astrophysical phenomena — and the future looks bright.

    U Wisconsin ICECUBE neutrino detector at the South Pole
    IceCube neutrino detector interior
    U Wisconsin ICECUBE neutrino detector at the South Pole

    5
    ANTERES

    See the full article here .


     
  • richardmitnick 8:27 pm on June 27, 2016 Permalink | Reply
    Tags: , , , , particlebites   

    From particlebites: “The Fermi LAT Data Depicting Dark Matter Detection” 

    particlebites bloc

    particlebites

    June 27, 2016
    Chris Karwin

    The center of the galaxy is brighter than astrophysicists expected. Could this be the result of the self-annihilation of dark matter? Chris Karwin, a graduate student from the University of California, Irvine presents the Fermi collaboration’s analysis.

    Presenting: Fermi-LAT Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center
    Authors: The Fermi-LAT Collaboration (ParticleBites blogger is a co-author)
    Reference: 1511.02938, Astrophys.J. 819 (2016) no.1, 44

    NASA/Fermi Telescope
    NASA/Fermi

    Introduction

    Like other telescopes, the Fermi Gamma-Ray Space Telescope is a satellite that scans the sky collecting light. Unlike many telescopes, it searches for very high energy light: gamma-rays. The satellite’s main component is the Large Area Telescope (LAT).

    NASA/Fermi LAT
    NASA/Fermi LAT

    When this detector is hit with a high-energy gamma-ray, it measures the energy and the direction in the sky from which it originated. The data provided by the LAT is an all-sky photon counts map:

    1
    All-sky counts map of gamma-rays. The color scale corresponds to the number of detected photons. Image from NASA

    In 2009, researchers noticed that there appeared to be an excess of gamma-rays coming from the galactic center. This excess is found by making a model of the known astrophysical gamma-ray sources and then comparing it to the data.

    What makes the excess so interesting is that its features seem consistent with predictions from models of dark matter annihilation. Dark matter theory and simulations predict:

    The distribution of dark matter in space. The gamma rays coming from dark matter annihilation should follow this distribution, or spatial morphology.
    The particles to which dark matter directly annihilates. This gives a prediction for the expected energy spectrum of the gamma-rays.
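    The first prediction is often encoded in a (generalized) NFW density profile, with the annihilation signal scaling as density squared. A minimal sketch, where the inner slope γ ≈ 1.2 is the value commonly quoted for galactic-center fits, and the normalization and 20 kpc scale radius are conventional choices of this example rather than numbers from the Fermi paper:

    ```python
    def gnfw_density(r_kpc, gamma=1.2, r_s=20.0, rho_s=1.0):
        """Generalized NFW dark matter density profile:
        rho(r) = rho_s / ((r/r_s)^gamma * (1 + r/r_s)^(3 - gamma))."""
        x = r_kpc / r_s
        return rho_s / (x ** gamma * (1.0 + x) ** (3.0 - gamma))

    def annihilation_emissivity(r_kpc, **kw):
        """The annihilation rate scales as density squared, so the
        predicted gamma-ray morphology peaks sharply toward the center."""
        return gnfw_density(r_kpc, **kw) ** 2
    ```

    Squaring the density is what makes the predicted spatial morphology so strongly peaked at the galactic center compared to ordinary astrophysical backgrounds.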

    Although a dark matter interpretation of the excess is a very exciting scenario that would tell us new things about particle physics, there are also other possible astrophysical explanations. For example, many physicists argue that the excess may be due to an unresolved population of milli-second pulsars. Another possible explanation is that it is simply due to the mis-modeling of the background. Regardless of the physical interpretation, the primary objective of the Fermi analysis is to characterize the excess.

    The main systematic uncertainty of the experiment is our limited understanding of the backgrounds: the gamma rays produced by known astrophysical sources. In order to include this uncertainty in the analysis, four different background models are constructed. Although these models are methodically chosen so as to account for our lack of understanding, it should be noted that they do not necessarily span the entire range of possible error. For each of the background models, a gamma-ray excess is found. With the objective of characterizing the excess, additional components are then added to the model. Among the different components tested, it is found that the fit is most improved when dark matter is added. This is an indication that the signal may be coming from dark matter annihilation.
    Analysis

    This analysis is interested in the gamma rays coming from the galactic center. However, when looking towards the galactic center the telescope detects all of the gamma-rays coming from both the foreground and the background. The main challenge is to accurately model the gamma-rays coming from known astrophysical sources.

    2
    Schematic of the experiment. We are interested in gamma-rays coming from the galactic center, represented by the red circle. However, the LAT detects all of the gamma-rays coming from the foreground and background, represented by the blue region. The main challenge is to accurately model the gamma-rays coming from known astrophysical sources. Image adapted from Universe Today.

    An overview of the analysis chain is as follows. The model of the observed region comes from performing a likelihood fit of the parameters for the known astrophysical sources. A likelihood fit is a statistical procedure that calculates the probability of observing the data given a set of parameters. In general there are two types of sources:

    1. Point sources such as known pulsars
    2. Diffuse sources due to the interaction of cosmic rays with the interstellar gas and radiation field

    Parameters for these two types of sources are fit at the same time. One of the main uncertainties in the background is the cosmic ray source distribution. This is the number of cosmic ray sources as a function of distance from the center of the galaxy. It is believed that cosmic rays come from supernovae. However, the source distribution of supernova remnants is not well determined. Therefore, other tracers must be used. In this context a tracer refers to a measurement that can be made to infer the distribution of supernova remnants. This analysis uses both the distribution of OB stars and the distribution of pulsars as tracers. The former refers to OB associations, which are regions of O-type and B-type stars. These hot massive stars are progenitors of supernovae. In contrast to these progenitors, the distribution of pulsars is also used since pulsars are the end state of supernovae. These two extremes serve to encompass the uncertainty in the cosmic ray source distribution, although, as mentioned earlier, this uncertainty is by no means bracketing. Two of the four background model variants come from these distributions.
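    The likelihood fit described above amounts to maximizing a Poisson likelihood over the template normalizations. A stripped-down two-template version (a real analysis fits many components with gradient-based minimizers over binned sky maps, not this brute-force grid):

    ```python
    import numpy as np

    def neg_log_like(norms, templates, counts):
        """Poisson negative log-likelihood for a model built as a linear
        combination of spatial templates (diffuse, point sources, ...)."""
        mu = np.maximum(np.tensordot(norms, templates, axes=1), 1e-12)
        return float(np.sum(mu - counts * np.log(mu)))

    def fit_norms(templates, counts, grid=np.linspace(0.1, 3.0, 146)):
        """Brute-force maximum-likelihood fit of two normalizations."""
        best = None
        for a in grid:
            for b in grid:
                nll = neg_log_like(np.array([a, b]), templates, counts)
                if best is None or nll < best[0]:
                    best = (nll, a, b)
        return best[1], best[2]
    ```

    When the observed counts equal the model expectation, the fit recovers the true normalizations, since the Poisson likelihood is maximized exactly where the model matches the data.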

    3
    An overview of the analysis chain. In general there are two types of sources: point sources and diffuse source. The diffuse sources are due to the interaction of cosmic rays with interstellar gas and radiation fields. Spectral parameters for the diffuse sources are fit concurrently with the point sources using a likelihood fit. The question mark represents the possibility of an additional component possibly missing from the model, such as dark matter.

    The information pertaining to the cosmic rays, gas, and radiation fields is input into a propagation code called GALPROP. This produces an all-sky gamma-ray intensity map for each of the physical processes that produce gamma-rays: neutral pions produced when cosmic-ray protons interact with the interstellar gas, which quickly decay into gamma-rays; cosmic-ray electrons up-scattering low-energy photons of the radiation field via inverse Compton scattering; and cosmic-ray electrons interacting with the gas to produce gamma-rays via bremsstrahlung.

    4
    Residual map for one of the background models. Image from 1511.02938

    The maps of all the processes are then tuned to the data. In general, tuning is a procedure by which the background models are optimized for the particular data set being used. This is done using a likelihood analysis. There are two different tuning procedures used for this analysis. One tunes the normalization of the maps, and the other tunes both the normalization and the extra degrees of freedom related to the gas emission interior to the solar circle. These two tuning procedures, performed for the two cosmic ray source models, make up the four different background models.

    Point source models are then determined for each background model, and the spectral parameters for both diffuse sources and point sources are simultaneously fit using a likelihood analysis.

    Results and Conclusion

    6
    Best fit dark matter spectra for the four different background models. Image: 1511.02938

    In the plot of the best fit dark matter spectra for the four background models, the hatching of each curve corresponds to the statistical uncertainty of the fit. The systematic uncertainty can be interpreted as the region enclosed by the four curves. Results from other analyses of the galactic center are overlaid on the plot. This result shows that the galactic center analysis performed by the Fermi collaboration allows a broad range of possible dark matter spectra.

    The Fermi analysis has shown that, within systematic uncertainties, a gamma-ray excess coming from the galactic center is detected. In order to try to explain this excess, additional components were added to the model. Among the additional components tested, it was found that the fit is most improved with the addition of a dark matter component. However, this does not establish that a dark matter signal has been detected. There is still a good chance that the excess is due to something else, such as an unresolved population of millisecond pulsars or mis-modeling of the background. Further work must be done to better understand the background and better characterize the excess. Nevertheless, it remains an exciting prospect that the gamma-ray excess could be a signal of dark matter.

    See the full article here .


     
  • richardmitnick 5:16 pm on June 26, 2016 Permalink | Reply
    Tags: , , particlebites,   

    From particle bites: “Can’t Stop Won’t Stop: The Continuing Search for SUSY” 

    particlebites bloc

    particlebites

    June 19, 2016
    Julia Gonski

    Presenting:

    Title: “Search for top squarks in final states with one isolated lepton, jets, and missing transverse momentum in √s = 13 TeV pp collisions with the ATLAS detector”
    Author: The ATLAS Collaboration
    Publication: Submitted 13 June 2016, arXiv:1606.03903

    Things at the LHC are going great. Run II of the Large Hadron Collider is well underway, delivering higher energies and more luminosity than ever before. ATLAS and CMS also have an exciting lead to chase down: the diphoton excess that was first announced in December 2015. So what do lots of new data and a mysterious new excess have in common? They mean that we might finally get a hint of the elusive theory that keeps refusing our invitations to show up: supersymmetry.

    Standard model of Supersymmetry DESY
    Standard model of Supersymmetry DESY

    1
    Figure 1: Feynman diagram of stop decay from proton-proton collisions.

    People like supersymmetry because it fixes a host of things in the Standard Model. But most notably, it generates an extra Feynman diagram that cancels the quadratic divergence of the Higgs mass due to the top quark contribution. This extra diagram comes from the stop quark. So a natural SUSY solution would have a light stop mass, ideally somewhere close to the top mass of 175 GeV. This expected low mass due to “naturalness” makes the stop a great place to start looking for SUSY. But according to the newest results from the ATLAS Collaboration, we’re not going to be so lucky.
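    The top/stop cancellation can be written schematically (a standard textbook expression, not taken from the ATLAS paper): with cutoff Λ and top Yukawa coupling y_t,

    ```latex
    \delta m_H^2\big|_{\text{top}} \simeq -\frac{3 y_t^2}{8\pi^2}\,\Lambda^2,
    \qquad
    \delta m_H^2\big|_{\tilde{t}} \simeq +\frac{3 y_t^2}{8\pi^2}\,\Lambda^2
    + \mathcal{O}\!\left(m_{\tilde{t}}^2 \ln\frac{\Lambda}{m_{\tilde{t}}}\right).
    ```

    With unbroken SUSY the Λ² pieces cancel exactly; with soft breaking the leftover correction grows only logarithmically with the stop mass, which is why naturalness prefers a stop not far above the top mass.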

    Using the full 2015 dataset (about 3.2 fb-1), ATLAS conducted a search for pair-produced stops, each decaying to a top quark and a neutralino, in this case playing the role of the lightest supersymmetric particle. The top then decays as tops do, to a W boson and a b quark. The W usually can do what it wants, but in this case the group chose to select for one W decaying leptonically and one decaying to jets (leptons are easier to reconstruct, but have a lower branching ratio from the W, so it’s a trade-off). This whole process is shown in Figure 1. So that gives a lepton from one W, jets from the other, and missing energy from the neutrino for a complete final state.
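    That final state maps directly onto a simple event filter. A toy version of the one-lepton selection (the cut values and event format here are illustrative only, not the thresholds or data model ATLAS uses):

    ```python
    def select_stop_candidates(events, met_cut=200.0, n_jets_min=4):
        """Toy one-lepton stop-search selection: exactly one isolated
        lepton (from one W), several jets (the other W decays
        hadronically, plus the two b quarks), and large missing
        transverse momentum from the neutrino and neutralinos."""
        selected = []
        for ev in events:
            if len(ev["leptons"]) != 1:
                continue  # demand exactly one isolated lepton
            if len(ev["jets"]) < n_jets_min:
                continue  # hadronic W + b-jets give several jets
            if ev["met"] < met_cut:
                continue  # invisible neutralinos carry away momentum
            selected.append(ev)
        return selected
    ```

    Real signal regions layer many more observables (transverse mass, b-tagging, angular cuts) on top of this skeleton, but the logic is the same: keep events whose visible content matches the targeted decay chain.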

    2
    Figure 2: Transverse mass distribution in one of the signal regions.

    The paper does report an excess in the data, with a significance around 2.3 sigma. In Figure 2, you can see this excess overlaid with all the known background predictions, and two possible signal models for various gluino and stop masses. This signal in the 700-800 GeV mass range is right around the current limit for the stop, so it’s not entirely inconsistent. While these sorts of excesses come and go a lot in particle physics, it’s certainly an exciting reason to keep looking.

    Figure 3 shows our status with the stop and neutralino, using 8 TeV data. All the shaded regions here are mass points for the stop and neutralino that physicists have excluded at 95% confidence. So where do we go from here? You can see a sliver of white space on this plot that hasn’t been excluded yet; that part is tough to probe because the mass splitting is so small that the neutralino emerges almost at rest, making it very hard to notice. It would be great to check out that parameter space, and there’s an effort underway to do just that. But at the end of the day, only more time (and more data) can tell.

    (P.S. This paper also reports a gluino search—too much to cover in one post, but check it out if you’re interested!)

    3
    Figure 3: Limit curves for stop and neutralino masses, with 8 TeV ATLAS dataset.

    See the full article here .


     