Tagged: particlebites

  • richardmitnick 11:25 am on February 13, 2020 Permalink | Reply
    Tags: "Neutrinos: What Do They Know? Do They Know Things?", In 1956 an antineutrino-positron annihilation producing two gamma rays was detected confirming the existence of neutrinos., particlebites, Three distinct flavors of neutrino: one corresponding to each type of lepton: electron; muon; and tau particles.   

    From particlebites: “Neutrinos: What Do They Know? Do They Know Things?” 


    From particlebites

    Posted on February 12, 2020
    Amara McCune

    Title: “Upper Bound of Neutrino Masses from Combined Cosmological Observations and Particle Physics Experiments”
    Author: Loureiro et al.

    Neutrinos are almost a lot of things. They are almost massless, a property that goes against the predictions of the Standard Model. Possessing this non-zero mass, they should travel at almost the speed of light, but not quite, in order to be consistent with the principles of special relativity. Yet each measurement of neutrino propagation speed returns a value that is, within experimental error, exactly the speed of light. Only coupled to the weak force, they are almost non-interacting, with 65 billion of them streaming from the sun through each square centimeter of Earth each second, almost undetected.

    How do all of these pieces fit together? The story of the neutrino begins in 1930, when Wolfgang Pauli proposed an as-yet undetected particle emitted during beta decay in order to explain an apparent violation of energy and momentum conservation. In 1956, antineutrinos were detected directly via inverse beta decay: an antineutrino striking a proton produces a neutron and a positron, and the positron’s subsequent annihilation with an electron yields two gamma rays. This confirmed the existence of neutrinos, yet with the confirmation came an assortment of growing mysteries. In the decades that followed, a series of experiments found that there are three distinct flavors of neutrino, one corresponding to each type of lepton: electron, muon, and tau. Subsequent measurements of propagating neutrinos then revealed a curious fact: these three flavors are anything but distinct. When the flavor of a neutrino is initially measured to be, say, an electron neutrino, a second measurement of flavor after it has traveled some distance could return the answer of muon neutrino. Measure yet again, and you could find yourself a tau neutrino. This process, in which the probability of measuring a neutrino in one of the three flavor states varies as it propagates, is known as neutrino oscillation.

    A representation of neutrino oscillation: three flavors of neutrino form a superposed wave. As a result, a measurement of neutrino flavor as the neutrino propagates switches between the three possible flavors. This mechanism implies that neutrinos are not massless, as previously thought. From: http://www.hyper-k.org/en/neutrino.html.

    Neutrino oscillation threw a wrench into the Standard Model on the question of mass: oscillation implies that the masses of the three neutrino states cannot all be equal to each other, and hence cannot all be zero. Specifically, at most one of them can be zero, with the remaining two non-zero and unequal.

    Standard Model of Particle Physics, Quantum Diaries

    While at first glance an oddity, oscillation arises naturally from the underlying mathematics, and we can arrive at this conclusion via a simple analysis. A neutrino can be described in two bases of eigenstates (the states a particle occupies when it is measured to have a definite value of some observable): one corresponding to flavor and one corresponding to mass. Because neutrinos are created in weak interactions, which conserve flavor, they are initially in a flavor eigenstate. Flavor and mass cannot be simultaneously determined, so each flavor eigenstate is a linear combination of mass eigenstates, and vice versa. Now consider the case of three flavors of neutrino. If the three mass eigenstates all had the same mass, each component of the superposition would accumulate an identical quantum phase as the neutrino propagates, and the flavor content would never change. Because the masses differ, the components evolve with different phases (equivalently, in accordance with special relativity, they travel at slightly different speeds), so the superposition, and hence the probability of measuring each flavor, varies along the way. Since we experimentally observe oscillation between neutrino flavors, we can conclude that the neutrino masses cannot all be the same.
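
    To make this concrete, the standard two-flavor simplification (a textbook result, not derived in the original post) gives the probability that a neutrino produced as flavor α is later detected as flavor β:

    P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right)

    where θ is the mixing angle between the two bases, L is the distance traveled, E is the neutrino energy, and \Delta m^2 is the mass-squared difference (in natural units). If \Delta m^2 = 0, the oscillating term vanishes identically, which is exactly the argument above.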

    Although this result was unexpected and provides the first known departure from the Standard Model, it is worth noting that it also neatly resolves a few outstanding experimental mysteries, such as the solar neutrino problem. Neutrinos in the sun are produced as electron neutrinos, and as they travel outward they are likely to interact with the dense background of electrons, emerging in a second mass state that can interact as any of the three flavors. By observing a flux of solar electron neutrinos roughly a third of the predicted value, physicists not only found a potential answer to a previously unexplained phenomenon but also deduced that this second mass state must be heavier than the state initially produced. Related flux measurements of neutrinos produced during charged particle interactions in the Earth’s upper atmosphere, which are primarily muon neutrinos, reveal that the third mass state is quite different in mass from the first two. This gives rise to two potential mass hierarchies: the normal (m_1 < m_2 \ll m_3) and inverted (m_3 \ll m_1 < m_2) orderings.

    The PMNS matrix parametrizes the transformation between the neutrino mass eigenbasis and its flavor eigenbasis. The left vector represents a neutrino in the flavor basis, while the right represents the same neutrino in the mass basis. When an individual component of the transformation matrix is squared, it gives the probability of measuring the specified mass for the corresponding flavor.
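
    Written out in standard notation, the relation the caption describes is

    \begin{pmatrix} \nu_e \\ \nu_\mu \\ \nu_\tau \end{pmatrix} = \begin{pmatrix} U_{e1} & U_{e2} & U_{e3} \\ U_{\mu 1} & U_{\mu 2} & U_{\mu 3} \\ U_{\tau 1} & U_{\tau 2} & U_{\tau 3} \end{pmatrix} \begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{pmatrix}

    where |U_{\alpha i}|^2 gives the probability of finding mass state i in flavor state α.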

    However, this oscillation also means that it is difficult to discuss neutrino masses individually; from a technical standpoint, it is currently easier to measure the sum of the neutrino masses. With current precision in cosmology, we cannot distinguish the three neutrinos at the epoch in which they become free-streaming, although this could change with increased precision. Future experiments in beta decay could also lead to progress in pinpointing individual masses, while current oscillation experiments are sensitive only to mass-squared differences \Delta m_{ij}^2 = m_i^2 - m_j^2. Hence, we frame our models in terms of these mass splittings and the mass sum, which also makes it easier to incorporate cosmological data.

    Current models of neutrinos are phenomenological ― not directly derived from theory, but consistent with both theoretical principles and experimental data. The mixing between states is mathematically described by the PMNS (Pontecorvo-Maki-Nakagawa-Sakata) matrix, which is parametrized by three mixing angles and a phase related to CP violation. These parameters, as in most phenomenological models, must be inserted into the theory by hand. Such models usually come with a wide parameter space, and constraining it requires input from a variety of sources. In the case of neutrinos, both particle physics experiments and cosmological data provide key avenues for exploring these parameters. In a recent paper, Loureiro et al. used such a strategy, incorporating data from the large scale structure of galaxies and the cosmic microwave background to place new upper bounds on the sum of neutrino masses.

    The group investigated two main classes of neutrino mass models: exact models and cosmological approximations. The former integrates results from neutrino oscillation experiments and is parametrized by the smallest neutrino mass, while the latter uses a scheme in which the neutrino mass sum is related to an effective number of neutrino species N_{\nu} times an effective mass m_{eff} that is equal for each flavor. In the exact models, Gaussian priors (an initial best guess) were used, with data sampled from a number of experimental results and their error bars, depending on the specifics of the model in question. This includes possibilities such as fixing the mass splittings to their central values or assuming either a normal or inverted mass hierarchy. In the cosmological approximations, N_{\nu} was fixed to a specific value depending on the particular cosmological model being studied, with the total mass sum sampled from data.
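
    A minimal numerical sketch of the two parametrizations (my own illustration; the splitting values below are the commonly quoted oscillation results, not necessarily the paper’s exact inputs):

```python
import numpy as np

# Commonly quoted mass-squared splittings (eV^2); illustrative values only.
DM21_SQ = 7.4e-5   # "solar" splitting
DM31_SQ = 2.5e-3   # "atmospheric" splitting (normal ordering)

def mass_sum_exact_normal(m_lightest):
    """'Exact' model class: the mass sum is fixed by the lightest mass
    plus the measured splittings (normal ordering: m1 < m2 << m3)."""
    m1 = m_lightest
    m2 = np.sqrt(m1**2 + DM21_SQ)
    m3 = np.sqrt(m1**2 + DM31_SQ)
    return m1 + m2 + m3

def mass_sum_cosmo_approx(m_eff, n_nu=3.046):
    """'Cosmological approximation' class: N_nu effective species,
    each carrying the same effective mass m_eff."""
    return n_nu * m_eff

# The minimal mass sum allowed by oscillations in the normal ordering:
print(mass_sum_exact_normal(0.0))  # ~0.059 eV
```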

    The end result of the group’s analysis, which shows the calculated neutrino mass bounds from the 7 studied models, where the first 4 models are exact and the last 3 are cosmological approximations. The left column gives the probability distribution for the sum of neutrino masses, while the right column gives the probability distribution for the lightest neutrino in the model (not used in the cosmological approximation scheme). From: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.123.081301

    The group ultimately demonstrated that cosmologically-based models result in upper bounds for the mass sum that are much lower than those generated from physically-motivated exact models, as we can see in the figure above. One of the models studied resulted in an upper bound that is not only different from those determined from neutrino oscillation experiments, but is inconsistent with known lower bounds. This puts us into the exciting territory that neutrinos have pushed us to again and again: a potential finding that goes against what we presently know. The calculated upper bound is also significantly different if the assumption is made that one of the neutrino masses is zero, with the mass sum contained in the remaining two neutrinos, setting the stage for future differentiation between neutrino masses. Although the group did not find any statistically preferable model, they provide a framework for studying neutrinos with a considerable amount of cosmological data, using results of the Planck, BOSS, and SDSS collaborations, among many others. Ultimately, the only way to arrive at a robust answer to the question of neutrino mass is to consider all of these possible sources of information for verification.

    With increased sensitivity in upcoming telescopes and a plethora of intriguing beta decay experiments on the horizon, we should be moving away from studies that merely update bounds and toward ones that make direct estimates. For these future experiments, analyses like this one will prove vital in working toward an understanding of the underlying mass models, putting us one step closer to unraveling the enigma of the neutrino. While there are still many open questions concerning their properties ― Why are their masses so small? Is the neutrino its own antiparticle? What governs the mass mechanism? ― studies like these help to build intuition and prepare for the next phases of discovery. I’m excited to see what unexpected results come next for this almost elusive particle.

    Further Reading:

    A thorough introduction to neutrino oscillation: https://arxiv.org/pdf/1802.05781.pdf
    Details on mass ordering: https://arxiv.org/pdf/1806.11051.pdf
    More information about solar neutrinos (from a fellow ParticleBites writer!): https://particlebites.com/?p=6778
    A summary of current neutrino experiments and their expected results: https://www.symmetrymagazine.org/article/game-changing-neutrino-experiments

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    What is ParticleBites?
    ParticleBites is an online particle physics journal club written by graduate students and postdocs. Each post presents an interesting paper in a brief format that is accessible to undergraduate students in the physical sciences who are interested in active research.

    The papers are accessible on the arXiv preprint server. Most of our posts are based on papers from hep-ph (high energy phenomenology) and hep-ex (high energy experiment).

    Why read ParticleBites?

    Reading a technical paper from an unfamiliar subfield is intimidating. It may not be obvious how the techniques used by the researchers really work or what role the new research plays in answering the bigger questions motivating that field, not to mention the obscure jargon! For most people, it takes years for scientific papers to become meaningful.

    Our goal is to solve this problem, one paper at a time. With each brief ParticleBite, you should not only learn about one interesting piece of current work, but also get a peek at the broader picture of research in particle physics.

    Who writes ParticleBites?

    ParticleBites is written and edited by graduate students and postdocs working in high energy physics. Feel free to contact us if you’re interested in applying to write for ParticleBites.

    ParticleBites was founded in 2013 by Flip Tanedo following the Communicating Science (ComSciCon) 2013 workshop.

    Flip Tanedo, UCI Chancellor’s ADVANCE postdoctoral scholar in theoretical physics. As of July 2016, I will be an assistant professor of physics at the University of California, Riverside.

    It is now organized and directed by Flip and Julia Gonski, with ongoing guidance from Nathan Sanders.

     
  • richardmitnick 12:24 pm on January 4, 2020 Permalink | Reply
    Tags: A new boson of mass 17 MeV/c^2 nicknamed the “X17” particle., Atomki collaboration in Debrecen Hungary, Atomki nuclear spectrometer, PADME experiment, particlebites

    From particlebites: “The Delirium over Helium” 


    From particlebites

    January 4, 2020
    Andre Frankenthal

    Title: New evidence supporting the existence of the hypothetic X17 particle
    Authors: A.J. Krasznahorkay, M. Csatlós, L. Csige, J. Gulyás, M. Koszta, B. Szihalmi, and J. Timár; D.S. Firak, A. Nagy, and N.J. Sas; A. Krasznahorkay

    This is an update to the excellent Delirium over Beryllium bite written by Flip Tanedo back in 2016 introducing the Beryllium anomaly (I highly recommend starting there first if you just opened this page). At the time, the Atomki collaboration in Debrecen, Hungary, had just found an unexpected excess in the angular correlation distribution of electron-positron pairs from internal pair conversion in transitions of excited states of Beryllium. According to them, this excess is consistent with a new boson of mass 17 MeV/c^2, nicknamed the “X17” particle. (Note: for reference, 1 GeV/c^2 is roughly the mass of a proton; for simplicity, from now on I’ll omit the “c^2” term by setting c, the speed of light, to 1, and just refer to masses in MeV or GeV. Here’s a nice explanation of this procedure.)

    A few weeks ago, the Atomki group released a new set of results that uses an updated spectrometer and measures the same observable (positron-electron angular correlation), but from transitions of Helium excited states instead of Beryllium. Interestingly, they again find a similar excess in this distribution, which could likewise be explained by a boson with mass ~17 MeV. There are still many questions surrounding this result, and plenty of skeptical voices, but the replication of this anomaly in a different system (albeit not yet by independent teams) certainly raises interesting questions that seem to warrant further investigation by other researchers worldwide.

    Nuclear physics and spectroscopy

    The paper reports the production of excited states of Helium nuclei from the bombardment of tritium atoms with protons. To a non-nuclear physicist this may not be immediately obvious, but nuclei can occupy excited states just as the electrons in atoms can. The entire quantum wavefunction of the nucleus is usually found in the ground state, but it can be excited by various mechanisms, such as the proton bombardment used in this case. Protons with a specific energy (0.9 MeV) were fired at tritium atoms to initiate the reaction 3H(p, γ)4He, in nuclear physics notation. The equivalent particle physics notation is p + 3H → He* → He + γ (→ e+ e−), where ‘*’ denotes an excited state.

    This particular proton energy serves to excite the newly-produced Helium nuclei into a state with an energy of 20.49 MeV. This energy is sufficiently close to the Jπ = 0− state (i.e. negative parity and quantum number J = 0), which is the second excited state in the ladder of states of Helium. This state has a centroid energy of 21.01 MeV and a wide “sigma” (or decay width) of 0.84 MeV. Note that the energies of the first two excited states of Helium overlap quite a bit, so sometimes nuclei will actually be found in the first excited state instead, which is not phenomenologically interesting in this case.
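
    As a rough consistency check of these numbers (my own arithmetic, not from the paper): the Q-value of p + 3H → 4He is the difference in nuclear binding energies, about 28.30 − 8.48 ≈ 19.81 MeV, and a 0.9 MeV proton on a stationary tritium target contributes a center-of-mass kinetic energy of roughly E_p \cdot m_t/(m_p + m_t) \approx (3/4)(0.9) \approx 0.68 MeV, so the excitation energy is

    E^* \approx 19.81 + 0.68 \approx 20.49 \textrm{ MeV},

    matching the quoted value.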

    Figure 1. Sketch of the energy distributions for the first two excited quantum states of Helium nuclei. The second excited state (with centroid energy of 21.01 MeV) exhibits an anomaly in the electron-positron angular correlation distribution in transitions to the ground state. Proton bombardment with 0.9 MeV protons yields Helium nuclei at 20.49 MeV, therefore producing both first and second excited states, which are overlapping.

    With this reaction, experimentalists can obtain transitions from the Jπ = 0– excited state back to the ground state with Jπ = 0+. These transitions typically produce a gamma ray (photon) with 21.01 MeV energy, but occasionally the photon will internally convert into an electron-positron pair, which is the experimental signature of interest here. A sketch of the experimental concept is shown below. In particular, the two main observables measured by the researchers are the invariant mass of the electron-positron pair, and the angular separation (or angular correlation) between them, in the lab frame.

    Figure 2. Schematic representation of the production of excited Helium states from proton bombardment, followed by their decay back to the ground state with the emission of an “X” particle. X here can refer to a photon converting into a positron-electron pair, in which case this is an internal pair creation (IPC) event, or to the hypothetical “X17” particle, which is the process of interest in this experiment. Adapted from 1608.03591.

    The measurement

    For this latest measurement, the researchers upgraded the spectrometer apparatus to include 6 arms instead of the previous 5. Below is a picture of the setup with the 6 arms shown and labeled. The arms are at azimuthal positions of 0, 60, 120, 180, 240, and 300 degrees, and oriented perpendicularly to the proton beam.

    Figure 3. The Atomki nuclear spectrometer. This is an upgraded detector from the previous one used to detect the Beryllium anomaly, featuring 6 arms instead of 5. Each arm has both plastic scintillators for measuring electrons’ and positrons’ energies, as well as a silicon strip-based detector to measure their hit impact positions. Image credit: A. Krasznahorkay.

    The arms consist of plastic scintillators to detect the scintillation light produced by the electrons and positrons striking the plastic material. The amount of light collected is proportional to the energy of the particles. In addition, silicon strip detectors are used to measure the hit position of these particles, so that the correlation angle can be determined with better precision.

    With this setup, the experimenters can measure the energy of each particle in the pair and also their incident positions (and, from these, construct the main observables: invariant mass and separation angle). They can also look at the scalar sum of energies of the electron and positron (Etot), and use it to zoom in on regions where they expect more events due to the new “X17” boson: since the second excited state lives around 21.01 MeV, the signal-enriched region is defined as 19.5 MeV < Etot < 22.0 MeV. They can then use the orthogonal region, 5 MeV < Etot < 19 MeV (where signal is not expected to be present), to study background processes that could potentially contaminate the signal region as well.

    The figure below shows the angular separation (or correlation) between electron-positron pairs. The red asterisks are the main data points, and consist of events with Etot in the signal region (19.5 MeV < Etot < 22.0 MeV). We can clearly see the bump occurring around angular separations of 115 degrees. The black asterisks consist of events in the orthogonal region, 5 MeV < Etot < 19 MeV. Clearly there is no bump around 115 degrees here. The researchers then assume that the distribution of background events in the orthogonal region (black asterisks) has the same shape inside the signal region (red asterisks), so they fit the black asterisks to a smooth curve (blue line), and rescale this curve to match the number of events in the signal region in the 40 to 90 degrees sub-range (the first few red asterisks). Finally, the re-scaled blue curve is used in the 90 to 135 degrees sub-range (the last few red asterisks) as the expected distribution.

    Figure 4. Angular correlation between positrons and electrons emitted in Helium nuclear transitions to the ground state. Red dots are data in the signal region (sum of positron and electron energies between 19.5 and 22 MeV), and black dots are data in the orthogonal region (sum of energies between 5 and 19 MeV). The smooth blue curve is a fit to the orthogonal region data, which is then re-scaled to be used as the background estimate in the signal region. The blue, black, and magenta histograms are Monte Carlo simulations of expected backgrounds. The green curve is a fit to the data under the hypothesis of a new “X17” particle.

    In addition to the data points and fitted curves mentioned above, the figure also reports the researchers’ estimates of the physics processes that cause the observed background. These are the black and magenta histograms, and their sum is the blue histogram. Finally, there is also a green curve on top of the red data, which is the best fit to a signal hypothesis, that is, assuming that a new particle with mass 16.84 ± 0.16 MeV is responsible for the bump in the high-angle region of the angular correlation plot.

    The other main observable, the invariant mass of the electron-positron pair, is shown below.

    Figure 5. Invariant mass distribution of emitted electrons and positrons in the transitions of Helium nuclei to the ground state. Red asterisks are data in the signal region (sum of electron and positron energies between 19.5 and 22 MeV), and black asterisks are data in the orthogonal region (sum of energies between 5 and 19 MeV). The green smooth curve is the best fit to the data assuming the existence of a 17 MeV particle.

    The invariant mass is constructed from the equation

    m_{e^+e^-}^2 = 2m_e^2 + 2\left(E_{e^+}E_{e^-} - p_{e^+}\,p_{e^-}\cos\theta\right), \qquad E_{e^\pm} = \frac{E_{\textrm{tot}}(1 \pm y)}{2}, \quad p_{e^\pm} = \sqrt{E_{e^\pm}^2 - m_e^2}

    where all relevant quantities refer to electron and positron observables: Etot is, as before, the sum of their energies, y is the ratio of their energy difference over their sum (y \equiv (E_{e^+} - E_{e^-})/E_{\textrm{tot}}), θ is the angular separation between them, and m_e is the electron and positron mass. This is just one of the standard ways to calculate the invariant mass of two daughter particles in a reaction, when the known quantities are the angular separation between them and their individual energies in the lab frame.
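
    As a sanity check, here is a small stand-alone implementation of that formula (my own illustration; the function and variable names are not from the paper):

```python
import numpy as np

def pair_invariant_mass(e_plus, e_minus, theta, m_e=0.511):
    """Invariant mass (MeV) of an e+e- pair, given the lab-frame energies
    (MeV) of the positron and electron and their opening angle (radians).
    Exact relativistic two-body formula."""
    p_plus = np.sqrt(e_plus**2 - m_e**2)    # momentum magnitudes
    p_minus = np.sqrt(e_minus**2 - m_e**2)
    m_sq = 2 * m_e**2 + 2 * (e_plus * e_minus - p_plus * p_minus * np.cos(theta))
    return np.sqrt(m_sq)

# A symmetric pair (y = 0) with Etot = 21 MeV and a 115-degree opening angle:
print(pair_invariant_mass(10.5, 10.5, np.radians(115.0)))  # ~17.7 MeV
```

    This shows why the bump near 115 degrees in the angular correlation corresponds to an invariant mass of roughly 17 MeV.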

    The red asterisks are again the data in the signal region (19.5 MeV < Etot < 22 MeV), and the black asterisks are the data in the orthogonal region (5 MeV < Etot < 19 MeV). The green curve is a new best fit to a signal hypothesis, and in this case the best-fit scenario is a new particle with mass 17.00 ± 0.13 MeV, which is statistically compatible with the fit in the angular correlation plot. The significance of this fit is 7.2 sigma, which means the probability of the background hypothesis (i.e. no new particle) producing such large fluctuations in data is less than 1 in 390,682,215,445! It is remarkable and undeniable that a peak shows up in the data — the only question is whether it really is due to a new particle, or whether perhaps the authors failed to consider all possible backgrounds, or even whether there may have been an unexpected instrumental anomaly of some sort.

    According to the authors, the same particle that could explain the anomaly in the Beryllium case could also explain the anomaly here. I think this claim needs independent validation by the theory community. In any case, it is very interesting that similar excesses show up in two “independent” systems such as the Beryllium and the Helium transitions.

    Some possible theoretical interpretations

    There are a few particle interpretations of this result that can be made compatible with current experimental constraints. Here I’ll just briefly summarize some of the possibilities. For a more in-depth view from a theoretical perspective, check out Flip’s “Delirium over Beryllium” bite.

    The new X17 particle could be the vector gauge boson (or mediator) of a protophobic force, i.e. a force that interacts preferentially with neutrons but not so much with protons. This would certainly be an unusual and new force, but not necessarily impossible. Theorists have to work hard to make this idea work, as you can see here.

    Another possibility is that the X17 is a vector boson with axial couplings to quarks, which could explain, in the case of the original Beryllium anomaly, why the excess appears in only some transitions but not others. There are complete theories proposed with such vector bosons that could fit within current experimental constraints and explain the Beryllium anomaly, but they also include new additional particles in a dark sector to make the whole story work. If this is the case, then there might be new accessible experimental observables to confirm the existence of this dark sector and the vector boson showing up in the nuclear transitions seen by the Atomki group. This model is proposed here.

    However, an important caveat about these explanations is in order: so far, they only apply to the Beryllium anomaly. I believe the theory community needs to validate the authors’ assumption that the same particle could explain this new anomaly in Helium, and that there aren’t any additional experimental constraints associated with the Helium signature. As far as I can tell, this has not been shown yet. In fact, the similar invariant mass is the only evidence so far that this could be due to the same particle. An independent and thorough theoretical confirmation is needed with high-stake claims such as this one.

    Questions and criticisms

    In the years since the first Beryllium anomaly result, a few criticisms about the paper and about the experimental team’s history have been laid out. I want to mention some of those to point out that this is still a contentious result.

    First, there is the group’s history of repeated claims of new particle discoveries every few years since the early 2000s. After these claims were refuted by more precise measurements, there was never a proper, thorough discussion of why the original excesses appeared in the first place and why they subsequently disappeared. Especially for groundbreaking claims like these, a consistent track record of critically scrutinizing one’s own results is very valuable when making new ones.

    Second, others have mentioned that some fit curves seem to pass very close to most data points (n.b. I can’t seem to find the blog post where I originally read this or remember its author – if you know where it is, please let me know so I can give proper credit!). Take a look at the plot below, which shows the observed Etot distribution. In experimental plots, there is usually a statistical fluctuation of data points around the “mean” behavior, which is natural and expected. Below, in contrast, the data points are remarkably close to the fit. This doesn’t in itself mean there is anything wrong here, but it does raise an interesting question of how the plot and the fit were produced. It could be that this is not a fit to some prior expected behavior, but just an “interpolation”. Still, if that’s the case, then it’s not clear (to me, at least) what role the interpolation curve plays.

    Figure 6. Sum of electron and positron energies distribution produced in the decay of Helium nuclei to the ground state. Black dots are data and the red curve is a fit.

    Third, there is also the background fit to data in Figure 4 (black asterisks and blue line). As Ethan Siegel has pointed out, the background fit matches the data well, but only in the 40 to 90 degree sub-range. In the 90 to 135 degree sub-range, the background fit is noticeably poorer. In a less favorable interpretation of the results, this may indicate that whatever effect is causing the anomalous peak in the red asterisks is also causing the less-than-ideal fit in the black asterisks, where no signal from a new boson is expected; if the excess were caused by some instrumental error, for example, you’d expect to see effects in both curves. In any case, the background fit (blue curve) constructed from the black asterisks does not actually model the bump region very well, which weakens the argument for using it throughout all of the data. A more careful analysis of the background is warranted here.

    Fourth, another criticism comes from the simplistic statistical treatment the authors employ on the data. They fit the red asterisks in Figure 4 with the “PDF”:

    \textrm{PDF}(e^+e^-) = N_{Bg} \cdot \textrm{PDF}(\textrm{background}) + N_{sig} \cdot \textrm{PDF}(\textrm{signal})

    where PDF stands for “Probability Density Function”, and in this case they are combining two PDFs: one derived from data, and one assumed from the signal hypothesis. The two PDFs are then “re-scaled” by the expected number of background events (N_{Bg}) and signal events (N_{sig}), according to Monte Carlo simulations. However, as others have pointed out, when you multiply a PDF by a yield such as N_{Bg}, you no longer have a PDF! A variable that incorporates yields is no longer a probability. This may just sound like a semantics game, but it does actually point to the simplicity of the treatment, and makes one wonder if there could be additional (and perhaps more serious) statistical blunders made in the course of data analysis.
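
    For reference, the textbook way to make this well-defined (standard statistics, not something from the paper) is either to normalize the mixture into a true density,

    f(x) = \frac{N_{sig}\, \textrm{PDF}_{sig}(x) + N_{Bg}\, \textrm{PDF}_{Bg}(x)}{N_{sig} + N_{Bg}},

    or to use an extended maximum likelihood, in which the total yield N_{sig} + N_{Bg} enters through a separate Poisson term. Either way, the quantity being fit is mathematically well-defined, which the bare “rescaled PDF” above is not.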

    Fifth, there is also of course the fact that no other experiments have seen this particle so far. This doesn’t mean that it’s not there, but particle physics is in general a field with very few “low-hanging fruits”. Most of the “easy” discoveries have already been made, and so every claim of a new particle must be compatible with dozens of previous experimental and theoretical constraints. It can be a tough business. Another example of this is the DAMA experiment, which has made claims of dark matter detection for almost 2 decades now, but no other experiments were able to provide independent verification (and in fact, several have provided independent refutations) of their claims.

    DAMA LIBRA Dark Matter Experiment, 1.5 km beneath Italy’s Gran Sasso mountain

    Gran Sasso LABORATORI NAZIONALI del GRAN SASSO, located in the Abruzzo region of central Italy

    I’d like to add my own thoughts to the previous list of questions and considerations.

    The authors mention they correct the calibration of the detector efficiency with a small energy-dependent term based on a GEANT3 simulation. The updated version of the GEANT library, GEANT4, has been available for at least 20 years. I haven’t actually seen any results that use GEANT3 code since I’ve started in physics. Is it possible that the authors are missing a rather large effect in their physics expectations by using an older simulation library? I’m not sure, but just like the simplistic PDF treatment and the troubling background fit to the signal region, it doesn’t inspire as much confidence. It would be nice to at least have a more detailed and thorough explanation of what the simulation is actually doing (which maybe already exists but I haven’t been able to find?). This could also be due to a mismatch in the nuclear physics and high-energy physics communities that I’m not aware of, and perhaps nuclear physicists tend to use GEANT3 a lot more than high-energy physicists.

    Also, it’s generally tricky to use Monte Carlo simulation to estimate efficiencies in data. One needs to make sure the experimental apparatus is well understood and be confident that their simulation reproduces all the expected features of the setup, which is often difficult to do in practice, as collider experimentalists know too well. I’d really like to see a more in-depth discussion of this point.

    Finally, a more technical issue: from the paper, it’s not clear to me how the best fit to the data (red asterisks) was actually constructed. The authors claim:

    Using the composite PDF described in Equation 1 we first performed a list of fits by fixing the simulated particle mass in the signal PDF to a certain value, and letting RooFit estimate the best values for NSig and NBg. Letting the particle mass lose in the fit, the best fitted mass is calculated for the best fit […]

    When they let loose the particle mass in the fit, do they keep the “NSig” and “NBg” found with a fixed-mass hypothesis? If so, which fixed-mass NSig and which NBg do they use? And if not, what exactly was the purpose of performing the fixed-mass fits originally? I don’t think I fully got the point here.

    Where to go from here

    Despite the many questions surrounding the experimental approach, it’s still an interesting result that deserves further exploration. If it holds up with independent verification from other experiments, it would be an undeniable breakthrough, one that particle physicists have been craving for a long time now.

    And independent verification is key here. Ideally, other experiments need to confirm that they also see this new boson before acceptance of this result grows wider. Many upcoming experiments will be sensitive to a new X17 boson, as the original paper points out. In the next few years, we will actually have the possibility to probe this claim from multiple angles. Dedicated standalone experiments at the LHC, such as FASER and CODEX-b, will be able to probe long-lived particles produced at the proton-proton interaction point, and so should be sensitive to new states such as axion-like particles (ALPs).

    Another experiment that could have sensitivity to the X17, and which came online this year, is PADME (disclaimer: I am a collaborator on this experiment).

    The PADME experiment as seen from above; the positron beam travels from left to right. From: https://www.researchgate.net/figure/The-PADME-experiment-as-seen-from-above-The-positron-beam-travels-from-left-to-right_fig5_321191649

    PADME stands for Positron Annihilation into Dark Matter Experiment and its main goal is to look for dark photons produced in the annihilation between positrons and electrons.

    You can find more information about PADME here, and I will write a more detailed post about the experiment in the future, but the gist is that PADME is a fixed-target experiment striking a beam of positrons (beam energy: 550 MeV) against a fixed target made of diamond (carbon atoms). The annihilation between positrons in the beam and electrons in the carbon atoms could give rise to a photon and a new dark photon via kinetic mixing. By measuring the incoming positron and the outgoing photon momenta, we can infer the missing mass which is carried away by the (invisible) dark photon.
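
    In equation form (standard fixed-target kinematics; the four-momentum labels here are mine), the missing mass is

    M_{miss}^2 = \left(P_{e^+} + P_{e^-} - P_\gamma\right)^2,

    where P_{e^+} is the four-momentum of the incoming beam positron, P_{e^-} that of the target electron (approximately at rest), and P_\gamma that of the detected photon.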

    If the dark photon is the X17 particle (a big if), PADME might be able to see it as well. Our dark photon mass sensitivity is roughly between 1 and 22 MeV, so a 17 MeV boson would be within our reach. But more interestingly, using the knowledge of where the new particle hypothesis lies, we might actually be able to set our beam energy to produce the X17 in resonance (using a beam energy of roughly 282 MeV). The resonance beam energy increases the number of X17s produced and could give us even higher sensitivity to investigate the claim.
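
    That beam energy is easy to verify (my own arithmetic): for a positron beam striking electrons at rest, the squared center-of-mass energy is s = 2m_e E_{beam} + 2m_e^2, so producing a boson of mass m_X on resonance requires

    E_{beam} \approx \frac{m_X^2}{2 m_e} \approx \frac{(17 \textrm{ MeV})^2}{2 \times 0.511 \textrm{ MeV}} \approx 283 \textrm{ MeV},

    consistent with the quoted value.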

    An important caveat is that PADME can provide independent confirmation of X17, but cannot refute it. If the coupling between the new particle and our ordinary particles is too feeble, PADME might not see evidence for it. This wouldn’t necessarily reject the claim by Atomki, it would just mean that we would need a more sensitive apparatus to detect it. This might be achievable with the next generation of PADME, or with the new experiments mentioned above coming online in a few years.

    Finally, in parallel with the experimental probes of the X17 hypothesis, it’s critical to continue gaining a better theoretical understanding of this anomaly. In particular, an important check is whether the proposed theoretical models that could explain the Beryllium excess also work for the new Helium excess. Furthermore, theorists have to work very hard to make these models compatible with all current experimental constraints, so they can look a bit contrived. Perhaps a thorough exploration of the theory landscape could lead to more models capable of explaining the observed anomalies as well as evading current constraints.

    Conclusions

    The recent results from the Atomki group raise the stakes in the search for physics beyond the Standard Model. The reported excesses in the angular correlation between electron-positron pairs in two different systems certainly seem intriguing. However, there are still a lot of questions surrounding the experimental methods, and given the nature of the claims made, a crystal-clear understanding of the results and the setup needs to be achieved. Experimental verification by at least one independent group is also required if the X17 hypothesis is to be confirmed. Finally, parallel theoretical investigations that can explain both excesses are highly desirable.

    As Flip mentioned after the first excess was reported, even if this excess turns out to have an explanation other than a new particle, it’s a nice reminder that there could be interesting new physics in the light mass parameter space (e.g. MeV-scale), and a new boson in this range could also account for the dark matter abundance we see leftover from the early universe. But as Carl Sagan once said, extraordinary claims require extraordinary evidence.

    In any case, this new excess gives us a chance to witness the scientific process in action in real time. The next few years should be very interesting, and hopefully will see the independent confirmation of the new X17 particle, or a refutation of the claim and an explanation of the anomalies seen by the Atomki group. So, stay tuned!

    See the full article here.


     
  • richardmitnick 10:17 am on December 29, 2019 Permalink | Reply
    Tags: particlebites

    From particlebites: “Dark Photons in Light Places” 


    From particlebites

    December 29, 2019
    Amara McCune

    Title: “Searching for dark photon dark matter in LIGO O1 data”

    Author: Huai-Ke Guo, Keith Riles, Feng-Wei Yang, & Yue Zhao

    Reference: https://www.nature.com/articles/s42005-019-0255-0

    There is very little we know about dark matter save for its existence.

    [Images accompanying the post: Fritz Zwicky, who first inferred dark matter from the motions of the Coma Cluster (Coma cluster via NASA/ESA Hubble); Vera Rubin, whose galaxy rotation measurements provided key evidence for dark matter, at the Lowell Observatory in 1965 and with the DTM image tube spectrograph at the Kitt Peak 84-inch telescope in 1970; the LSST, to be named the Vera C. Rubin Observatory by an act of the U.S. Congress, under construction on El Peñón peak at Cerro Pachón, Chile; and dark matter research imagery: the SDSS/2dF universe map, the CMB (ESA/Planck), the Millennium Simulation’s dark matter cosmic web, the Dark Matter Particle Explorer (China), the DEAP-3600 detector at SNOLAB, the LBNL LZ experiment at SURF, and the ADMX axion experiment at the University of Washington.]
    Its mass(es), its interactions, even the proposition that it consists of particles at all are mostly up to the creativity of the theorist. For those who don’t turn to modified theories of gravity to explain the gravitational effects on galaxy rotation and clustering that suggest a massive concentration of unseen matter in the universe (among other compelling evidence), there are a few more widely accepted explanations for what dark matter might be. These include weakly-interacting massive particles (WIMPS), primordial black holes, or new particles altogether, such as axions or dark photons.

    In particle physics, this latter category is what’s known as the “hidden sector,” a hypothetical collection of quantum fields and their corresponding particles that are utilized in theorists’ toolboxes to help explain phenomena such as dark matter. In order to test the validity of the hidden sector, several experimental techniques have been concocted to narrow down the vast parameter space of possibilities, which generally consist of three strategies:

    1. Direct detection: Detector experiments look for low-energy recoils from dark matter particles colliding with nuclei, often involving emitted light or phonons.
    2. Indirect detection: These searches focus on potential decay products of dark matter particles, which depend on the theory in question.
    3. Collider production: As the name implies, colliders seek to produce dark matter in order to study its properties. This approach relies on the other two methods for verification.

    The first detection of gravitational waves from a black hole merger in 2015 ushered in a new era of physics, in which the cosmological range of theory-testing is no longer limited to the electromagnetic spectrum.

    [Images: the Caltech/MIT Advanced LIGO detector installations at Livingston, LA and Hanford, WA; the Virgo gravitational wave interferometer near Pisa, Italy; and ESA/NASA’s eLISA, a future space-based gravitational wave observatory.]

    Bringing LIGO (the Laser Interferometer Gravitational-Wave Observatory) to the table, proposals for the indirect detection of dark matter via gravitational waves began to spring up in the literature, with implications for primordial black hole detection or dark matter ensconced in neutron stars. Yet a new proposal, in a paper by Guo et al. [Communications Physics], suggests that direct dark matter detection with gravitational waves may be possible, specifically in the case of dark photons.

    Dark photons are hidden sector particles in the ultralight regime of dark matter candidates. Theorized as the gauge boson of a U(1) gauge group, meaning the particle is a force-carrier akin to the photon of quantum electrodynamics, dark photons either do not couple or very weakly couple to Standard Model particles in various formulations. Unlike a regular photon, dark photons can acquire a mass via the Higgs mechanism. Since dark photons need to be non-relativistic in order to meet cosmological dark matter constraints, we can model them as a coherently oscillating background field: a plane wave with amplitude determined by dark matter energy density and oscillation frequency determined by mass. In the case that dark photons weakly interact with ordinary matter, this means an oscillating force is imparted. This sets LIGO up as a means of direct detection due to the mirror displacement dark photons could induce in LIGO detectors.
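
    Schematically (the standard treatment of ultralight bosonic dark matter, not spelled out in the post), the local field oscillates as A(t) \approx A_0 \cos(m_A t + \phi), with the amplitude set by the local dark matter energy density, \rho_{DM} \approx \tfrac{1}{2} m_A^2 A_0^2, and the oscillation frequency set by the mass, f = m_A c^2 / 2\pi\hbar. A field oscillating near 100 Hz therefore corresponds to a mass of order 10^{-13} eV.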

    Figure 1: The experimental setup of the Advanced LIGO interferometer. We can see that light leaves the laser and is reflected between a few power recycling mirrors (PR), split by a beam splitter (BS), and bounced between input and end test masses (ITM and ETM). The entire system is mounted on seismically-isolated platforms to reduce noise as much as possible. Source: https://arxiv.org/pdf/1411.4547.pdf

    LIGO consists of a Michelson interferometer, in which a laser shines upon a beam splitter which in turn creates two perpendicular beams. The light from each beam then hits a mirror, is reflected back, and the two beams combine, producing an interference pattern. In the actual LIGO detectors, the beams are reflected back some 280 times (down a 4 km arm length) and are split to be initially out of phase so that the photodiode detector should not detect any light in the absence of a gravitational wave. A key feature of gravitational waves is their polarization, which stretches spacetime in one direction and compresses it in the perpendicular direction in an alternating fashion. This means that when a gravitational wave passes through the detector, the effective length of one of the interferometer arms is reduced while the other is increased, and the photodiode will detect an interference pattern as a result.

    LIGO has been able to reach an incredible sensitivity of one part in 10^{23} in its detectors over a 100 Hz bandwidth, meaning that its instruments can detect mirror displacements as small as 1/10,000th the size of a proton. Taking advantage of this number, Guo et al. demonstrated that the differential strain (the ratio of the relative displacement of the mirrors to the interferometer’s arm length, or h = \Delta L/L) is also sensitive to ultralight dark matter via the modeling process described above. The acceleration induced by dark photon dark matter on the LIGO mirrors is ultimately proportional to the dark electric field and the charge-to-mass ratio of the mirrors themselves.

    Once this signal is approximated, next comes the task of estimating the background. Since the coherence length is of order 10^9 m for a dark photon field oscillating at order 100 Hz, a distance much larger than the separation between the LIGO detectors at Hanford and Livingston (in Washington and Louisiana, respectively), the signals from dark photons at both detectors should be highly correlated. This has the effect of reducing the noise in the overall signal, since the noise in each of the detectors should be statistically independent. The signal-to-noise ratio can then be computed directly using discrete Fourier transforms from segments of data along the total observation time. However, this process of breaking up the data, known as “binning,” means that some signal power is lost and must be corrected for.
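
    A toy version of that correlated-detector analysis might look like the following (my own sketch, with invented function names; the published analysis also handles windowing, detector geometry, and the binning correction mentioned above):

```python
import numpy as np

def coherence_spectrum(h1, h2, seg_len):
    """Average the cross-spectrum of two detectors' strain series over
    equal-length data segments. A signal common to both detectors adds
    coherently across segments, while independent instrumental noise
    averages away; a persistent monochromatic signal appears as a peak
    at its oscillation frequency."""
    n_seg = len(h1) // seg_len
    cross = np.zeros(seg_len // 2 + 1, dtype=complex)
    p1 = np.zeros(seg_len // 2 + 1)
    p2 = np.zeros(seg_len // 2 + 1)
    for k in range(n_seg):
        s = slice(k * seg_len, (k + 1) * seg_len)
        f1 = np.fft.rfft(h1[s])
        f2 = np.fft.rfft(h2[s])
        cross += f1 * np.conj(f2)
        p1 += np.abs(f1) ** 2
        p2 += np.abs(f2) ** 2
    return np.abs(cross) / np.sqrt(p1 * p2)  # normalized coherence, in [0, 1]
```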

    Figure 2: The end result of the Guo et al. analysis of dark photon-induced mirror displacement in LIGO. Above we can see a plot of the coupling of dark photons to baryons as a function of the dark photon oscillation frequency. Over further Advanced LIGO runs, up to O4-O5, these limits are expected to improve by several orders of magnitude. Source: https://www.nature.com/articles/s42005-019-0255-0

    In applying this analysis to the strain data from the first run of Advanced LIGO, Guo et al. produced a plot that sets new limits on the coupling of dark photons to baryons as a function of the dark photon oscillation frequency. There are a few key subtleties in this analysis, primarily that there are many potential dark photon models relying on different gauge groups, yet this framework allows for similar analyses of other dark photon models. With plans for future iterations of gravitational wave detectors, further improved sensitivities, and many more data runs, there seems to be great potential to apply LIGO to direct dark matter detection. It’s exciting to see these instruments in action for discoveries that were not in mind when LIGO was first designed, and I’m looking forward to seeing what we can come up with next!

    Learn More:

    An overview of gravitational waves and dark matter: https://www.symmetrymagazine.org/article/what-gravitational-waves-can-say-about-dark-matter
    A summary of dark photon experiments and results: https://physics.aps.org/articles/v7/115
    Details on the hardware of Advanced LIGO: https://arxiv.org/pdf/1411.4547.pdf
    A similar analysis done by Pierce et al.: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.121.061102

    See the full article here.


     
  • richardmitnick 4:25 pm on December 4, 2019 Permalink | Reply
    Tags: "Discovering the Top Quark", , , FNAL Tevatron CDF, , , , particlebites,   

    From particlebites: “Discovering the Top Quark” 


    From particlebites

    December 3, 2019
    Adam Green

    This post is about the discovery of the most massive quark in the Standard Model, the Top quark. Below is a “discovery plot” [1] from the Collider Detector at Fermilab collaboration (CDF). Here is the original paper [Physical Review Letters].

    [Images: the CDF detector at the FNAL Tevatron; the Tevatron tunnel; a map of the Tevatron complex.]

    This plot confirms the existence of the Top quark. Let’s understand how.

    For each proton-antiproton collision event that passes certain selection conditions, the horizontal axis shows the best estimate of the Top quark mass. These selection conditions encode the particle “fingerprint” of the Top quark. Out of all possible collision events, we only want to look at the ones that might have come from Top quark decays. This subgroup of events gives us a best guess at the mass of the Top quark, and this is what is plotted on the x axis.

    On the vertical axis is the number of these events.

    The dashed distribution is the number of these events originating from the Top quark if the Top quark exists and decays this way. This could very well not be the case.

    The dotted distribution is the background for these events, events that did not come from Top quark decays.

    The solid distribution is the measured data.

    To claim a discovery, the background (dotted) plus the signal (dashed) should equal the measured data (solid). We can run simulations for different Top quark masses to give us signal distributions until we find one that matches the data. The inset at the top right shows that a Top quark mass of 175 GeV best reproduces the measured data, as sketched in the code below.
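
    In code, that template-matching logic might look like this (a toy chi-square comparison of my own; the actual CDF analysis used a full likelihood fit):

```python
import numpy as np

def best_fit_mass(data, background, signal_templates):
    """Return the mass hypothesis whose (background + signal) template
    best matches the observed counts. `signal_templates` maps a mass
    hypothesis (GeV) to expected signal counts per reconstructed-mass bin;
    `data` and `background` are arrays of the same length."""
    def chi2(expected):
        return np.sum((data - expected) ** 2 / np.maximum(expected, 1e-9))
    return min(signal_templates, key=lambda m: chi2(background + signal_templates[m]))
```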

    Taking a step back from the technicalities, the Top quark is special because it is the heaviest of all the fundamental particles. In the Standard Model, particles acquire their mass by interacting with the Higgs field, and particles with more mass interact more strongly with the Higgs. The Top quark being so heavy is an indicator that any new physics involving the Higgs may be linked to the Top quark.

    References / Further Reading

    [1] – Observation of Top Quark Production in p̄p Collisions with the Collider Detector at Fermilab – This is the “discovery paper” announcing experimental evidence of the Top.

    [2] – Observation of tt̄H Production [Physical Review Letters] – Who is to say that the Top and the Higgs even have significant interactions to lowest order? The CMS collaboration finds evidence that they do in fact interact at “tree-level.”

    [3] – The Perfect Couple: Higgs and top quark spotted together – This article further describes the interconnection between the Higgs and the Top.

    See the full article here.

    five-ways-keep-your-child-safe-school-shootings

    Please help promote STEM in your local schools.

    Stem Education Coalition

    What is ParticleBites?

    ParticleBites is an online particle physics journal club written by graduate students and postdocs. Each post presents an interesting paper in a brief format that is accessible to undergraduate students in the physical sciences who are interested in active research.

    The papers are accessible on the arXiv preprint server. Most of our posts are based on papers from hep-ph (high energy phenomenology) and hep-ex (high energy experiment).

    Why read ParticleBites?

    Reading a technical paper from an unfamiliar subfield is intimidating. It may not be obvious how the techniques used by the researchers really work or what role the new research plays in answering the bigger questions motivating that field, not to mention the obscure jargon! For most people, it takes years for scientific papers to become meaningful.

    Our goal is to solve this problem, one paper at a time. With each brief ParticleBite, you should not only learn about one interesting piece of current work, but also get a peek at the broader picture of research in particle physics.

    Who writes ParticleBites?

    ParticleBites is written and edited by graduate students and postdocs working in high energy physics. Feel free to contact us if you’re interested in applying to write for ParticleBites.

    ParticleBites was founded in 2013 by Flip Tanedo following the Communicating Science (ComSciCon) 2013 workshop.

    Flip Tanedo UCI Chancellor’s ADVANCE postdoctoral scholar in theoretical physics. As of July 2016, I will be an assistant professor of physics at the University of California, Riverside

    It is now organized and directed by Flip and Julia Gonski, with ongoing guidance from Nathan Sanders.

     
  • richardmitnick 9:37 pm on May 5, 2018 Permalink | Reply
    Tags: "blinding", , , , , , particlebites,   

    From particlebites: “Going Rogue: The Search for Anything (and Everything) with ATLAS” 

    particlebites bloc

    From particlebites

    May 5, 2018
    Julia Gonski

    Title: “A model-independent general search for new phenomena with the ATLAS detector at √s=13 TeV”

    Author: The ATLAS Collaboration

    Reference: ATLAS-PHYS-CONF-2017-001

    CERN/ATLAS detector

    When a single experimental collaboration has a few thousand contributors (and even more opinions), there are a lot of rules. These rules dictate everything from how you get authorship rights to how you get chosen to give a conference talk. In fact, this rulebook is so thorough that it could be the topic of a whole other post. But for now, I want to focus on one rule in particular, a rule that has only been around for a few decades in particle physics but is considered one of the most important practices of good science: blinding.

    In brief, blinding is the notion that it’s experimentally compromising for a scientist to look at the data before finalizing the analysis. As much as we like to think of ourselves as perfectly objective observers, the truth is, when we really really want a particular result (let’s say a SUSY discovery), that desire can bias our work. For instance, imagine you were looking at actual collision data while you were designing a signal region. You might unconsciously craft your selection in such a way to force an excess of data over background prediction. To avoid such human influences, particle physics experiments “blind” their analyses while they are under construction, and only look at the data once everything else is in place and validated.
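
    As a cartoon of how this works in practice, here is a minimal sketch (not ATLAS code): events falling in the signal region are hidden from the analyzer until an explicit unblinding flag is flipped, so selections are tuned on sidebands and simulation only. The region boundaries below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    events = rng.uniform(0, 1000, size=10_000)   # toy invariant-mass values, GeV

    SIGNAL_REGION = (300.0, 350.0)   # hypothetical window under study
    UNBLINDED = False                # flipped only after the analysis is frozen

    def visible_events(masses):
        """Return the events an analyzer may look at while blinded."""
        if UNBLINDED:
            return masses
        lo, hi = SIGNAL_REGION
        return masses[(masses < lo) | (masses > hi)]   # sidebands only

    print(len(visible_events(events)), "of", len(events), "events visible")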

    Figure 1: “Blind analysis: Hide results to seek the truth”, R. MacCoun & S. Perlmutter for Nature.com

    See the full article here.


     
  • richardmitnick 9:20 am on October 20, 2016 Permalink | Reply
    Tags: Missing transverse energy (MET), particlebites, What Happens When Energy Goes Missing?

    From particlebites: “What Happens When Energy Goes Missing?” 

    particlebites bloc

    particlebites

    October 11, 2016
    Julia Gonski

    Article: Performance of algorithms that reconstruct missing transverse momentum in √s = 8 TeV proton-proton collisions in the ATLAS detector
    Authors: The ATLAS Collaboration
    Reference: arXiv:1609.09324

    CERN/ATLAS detector

    The ATLAS experiment recently released a note detailing the nature and performance of algorithms designed to calculate what is perhaps the most difficult quantity in any LHC event: missing transverse energy.

    Figure 1: LHC momentum conservation.

    Figure 2: ATLAS event display showing MET balancing two jets.

    Missing transverse energy (MET) is so difficult to calculate because, by its very nature, it is missing and thus unobservable in the detector. So where does this missing energy come from, and why do we even need to reconstruct it?

    The LHC accelerates protons towards one another on the same axis, so that they collide head on.

    CERN/LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    Therefore, the incoming partons have net momentum along the direction of the beamline, but no net momentum in the transverse direction (see Figure 1). MET is then defined as the negative vectorial sum (in the transverse plane) of the momenta of all recorded particles. Any nonzero MET indicates a particle that escaped the detector. This escaping particle could be a regular Standard Model neutrino, or something much more exotic, such as the lightest supersymmetric particle or a dark matter candidate.

    Figure 2 shows an event display where the calculated MET balances the visible objects in the detector. In this case, these visible objects are jets, but they could also be muons, photons, electrons, or taus. This constitutes the “hard term” in the MET calculation. Often there are also contributions of energy in the detector that are not associated with a particular physics object but may still be necessary for an accurate measurement of MET. This momentum is known as the “soft term”.
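
    In code, the definition itself is simple. Here is a minimal sketch with toy inputs (not the ATLAS reconstruction): MET is minus the vector sum, in the transverse plane, of the hard objects plus the soft deposits.

    import numpy as np

    # Toy reconstructed transverse momenta: (px, py) in GeV.
    hard_objects = np.array([[ 80.0,  10.0],    # jet
                             [-30.0,  45.0],    # muon
                             [-20.0, -40.0]])   # jet
    soft_deposits = np.array([[ 2.0, -1.0],     # energy flow not associated
                              [-3.0,  0.5]])    # with any physics object

    # MET = minus the vector sum of everything recorded in the transverse plane.
    met_vec = -(hard_objects.sum(axis=0) + soft_deposits.sum(axis=0))
    met = np.hypot(*met_vec)
    print(f"MET = {met:.1f} GeV, phi = {np.arctan2(met_vec[1], met_vec[0]):.2f}")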

    In the course of looking at all the energy in the detector for a given event, inevitably some pileup will sneak in. The pileup could be contributions from additional proton-proton collisions in the same bunch crossing, or from scattering of protons upstream of the interaction point. Either way, the MET reconstruction algorithms have to take this into account. Adding up energy from pileup could lead to more MET than was actually in the collision, which could mean the difference between an observation of dark matter and just another Standard Model event.

    One of the ways to suppress pileup is to use a quantity called jet vertex fraction (JVF), which uses the additional information of tracks associated with jets. If the tracks do not point back to the initial hard scatter, they can be tagged as pileup and not included in the calculation. This is the idea behind the Track Soft Term (TST) algorithm. Another way to remove pileup is to estimate the average energy density in the detector due to pileup using event-by-event measurements, then subtract this baseline energy. This is used in the Extrapolated Jet Area with Filter, or EJAF, algorithm.
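
    Here is a minimal sketch of the jet vertex fraction idea with toy numbers (the ATLAS definition has more machinery behind it): for each jet, compare the pT carried by tracks pointing back to the hard-scatter vertex with the pT of all the jet's tracks.

    def jet_vertex_fraction(track_pts, track_from_pv):
        """Fraction of a jet's track pT that points back to the primary vertex.

        track_pts     : track pT values (GeV) associated with the jet
        track_from_pv : parallel list of booleans, True if the track comes
                        from the hard-scatter (primary) vertex
        """
        total = sum(track_pts)
        if total == 0:
            return -1.0   # no tracks: JVF undefined
        pv_sum = sum(pt for pt, from_pv in zip(track_pts, track_from_pv) if from_pv)
        return pv_sum / total

    # A jet whose tracks mostly come from pileup vertices gets a low JVF
    # and can be excluded from the MET calculation.
    print(jet_vertex_fraction([20.0, 15.0, 5.0], [True, False, False]))   # 0.5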

    Once these algorithms are designed, they are tested in two different types of events. The first is W to lepton + neutrino decay signatures. These events should all have some amount of real missing energy from the neutrino, so they can easily reveal how well the reconstruction is working. The second group is Z boson to two lepton events. These events should not have any real missing energy (no neutrinos), so with these events it is possible to see if and how the algorithm reconstructs fake missing energy. Fake MET often comes from miscalibration or mismeasurement of physics objects in the detector. Figures 3 and 4 show the calorimeter soft MET distributions in these two samples; here it is easy to see the shape difference between real and fake missing energy.

    Figure 3: Distribution of the sum of missing energy in the calorimeter soft term (“fake MET”) shown in Z to μμ data and Monte Carlo events.

    Figure 4: Distribution of the sum of missing energy in the calorimeter soft term (“real MET”) shown in W to eν data and Monte Carlo events.

    This note evaluates the performance of these algorithms in 8 TeV proton-proton collision data collected in 2012. Perhaps the most important metric in MET reconstruction performance is the resolution, since this tells you how well you know your MET value. Intuitively, the resolution depends on the detector resolution of the objects that went into the calculation, and because of pileup, it gets worse as the number of vertices gets larger. The resolution is technically defined as the RMS of the combined distribution of MET in the x and y directions, covering the full transverse plane of the detector. Figure 5 shows the resolution as a function of the number of vertices in Z to μμ data for several reconstruction algorithms. Here you can see that the TST algorithm has a very small dependence on the number of vertices, implying good stability of the resolution with pileup.
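
    Here is a minimal sketch of how such a resolution curve can be built, with toy inputs: bin events by the number of reconstructed vertices and take the RMS of the combined MET(x) and MET(y) distribution in each bin.

    import numpy as np

    def met_resolution_vs_npv(met_x, met_y, npv, npv_bins):
        """RMS of the combined (MET_x, MET_y) distribution per vertex bin."""
        resolutions = []
        for lo, hi in zip(npv_bins[:-1], npv_bins[1:]):
            sel = (npv >= lo) & (npv < hi)
            combined = np.concatenate([met_x[sel], met_y[sel]])
            resolutions.append(np.sqrt(np.mean(combined ** 2)))
        return np.array(resolutions)

    # Toy data in which the resolution degrades with pileup.
    rng = np.random.default_rng(1)
    npv = rng.integers(1, 30, size=50_000)
    width = 8.0 + 0.6 * npv                    # GeV, grows with vertex count
    met_x = rng.normal(0.0, width)
    met_y = rng.normal(0.0, width)
    print(met_resolution_vs_npv(met_x, met_y, npv, np.array([1, 10, 20, 30])))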

    Figure 5: Resolution obtained from the combined distribution of MET(x) and MET(y) for five algorithms as a function of NPV in 0-jet Z to μμ data.

    Another important quantity to measure is the angular resolution, which is important in the reconstruction of kinematic variables such as the transverse mass of the W. It can be measured in W to μν simulation by comparing the direction of the MET, as reconstructed by the algorithm, to the direction of the true MET. The resolution is then defined as the RMS of the distribution of the phi difference between these two vectors. Figure 6 shows the angular resolution of the same five algorithms as a function of the true missing transverse energy. Note the feature between 40 and 60 GeV, where there is a transition region into events with high pT calibrated jets. Again, the TST algorithm has the best angular resolution for this topology across the entire range of true missing energy.
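
    One subtlety worth making explicit: the Δφ between the reconstructed and true MET directions has to be wrapped into (−π, π] before taking the RMS, or events near ±π would look maximally mismeasured. A minimal sketch:

    import numpy as np

    def angular_resolution(phi_reco, phi_true):
        """RMS of delta-phi, wrapped into (-pi, pi]."""
        dphi = (phi_reco - phi_true + np.pi) % (2 * np.pi) - np.pi
        return np.sqrt(np.mean(dphi ** 2))

    # Naive subtraction would call these ~6.2 rad apart; wrapped, ~0.08 rad.
    print(angular_resolution(np.array([3.10]), np.array([-3.10])))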

    Figure 6: Resolution of ΔΦ(reco MET, true MET) for 0 jet W to μν Monte Carlo.

    As the High Luminosity LHC (HL-LHC) looms larger and larger, the issue of MET reconstruction will become a hot topic in the ATLAS collaboration. In particular, the HL-LHC will be a very high-pileup environment, and many new pileup subtraction studies are underway. Additionally, there is no lack of exciting theories predicting new particles in Run 3 that are invisible to the detector. As long as these hypothetical invisible particles are being discussed, the MET teams will be working hard to catch them, so we can safely expect some innovation in these methods in the next few years.

    See the full article here.


     
  • richardmitnick 8:44 am on September 5, 2016 Permalink | Reply
    Tags: 6 dimensional spacetime, particlebites

    From particlebites: “Gravity in the Next Dimension: Micro Black Holes at ATLAS” 

    particlebites bloc

    particlebites

    August 31, 2016
    Savannah Thais

    Article: Search for TeV-scale gravity signatures in high-mass final states with leptons and jets with the ATLAS detector at sqrt(s)=13 TeV
    Authors: The ATLAS Collaboration
    Reference: arXiv:1606.02265 [hep-ex]

    CERN/ATLAS detector

    What would gravity look like if we lived in a 6-dimensional space-time? Models of TeV-scale gravity theorize that the fundamental scale of gravity, MD, is much lower than what’s measured here in our normal, 4-dimensional space-time. If true, this could explain the large difference between the scale of electroweak interactions (order of 100 GeV) and gravity (order of 10^16 GeV), an important open question in particle physics. There are several theoretical models to describe these extra dimensions, and they all predict interesting new signatures in the form of non-perturbative gravitational states. One of the coolest examples of such a state is microscopic black holes. Conveniently, this particular signature could be produced and measured at the LHC!

    Sounds cool, but how do you actually look for microscopic black holes with a proton-proton collider? Because we don’t have a full theory of quantum gravity (yet), ATLAS researchers made predictions for the production cross-sections of these black holes using semi-classical approximations that are valid when the black hole mass is above MD. This production cross-section is also expected to be dramatically larger when the energy scale of the interactions (pp collisions) surpasses MD. We can’t directly detect black holes with ATLAS, but many of the decay channels of these black holes include leptons in the final state, which IS something that can be measured at ATLAS! This particular ATLAS search looked for final states with at least 3 high transverse momentum (pt) objects, at least one of which must be a lepton (electron or muon); the others can be hadronic jets or leptons. The sum of the transverse momenta is used as a discriminating variable, since the signal is expected to appear only at high pt.

    This search used the full 3.2 fb⁻¹ of 13 TeV data collected by ATLAS in 2015 to search for this signal above the relevant Standard Model backgrounds (Z+jets, W+jets, and ttbar, all of which produce similar jet final states). The results are shown in Figure 1 (electron and muon channels are presented separately). The backgrounds are shown as colored histograms, the data as black points, and two microscopic black hole models as green and blue lines. There is a slight excess in the 3 TeV region in the electron channel, which corresponds to a p-value of only 1% when tested against the background-only hypothesis. Unfortunately, this isn’t enough evidence to indicate new physics yet, but it’s an exciting result nonetheless! This analysis was also used to improve exclusion limits on individual extra-dimensional gravity models, as shown in Figure 2. All limits were much stronger than those set in Run 1.
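
    For a sense of scale, a one-sided p-value of 1% corresponds to roughly a 2.3σ excess, well short of the 5σ discovery convention. A quick check of that conversion (just the Gaussian arithmetic, not the statistical model ATLAS actually used):

    from scipy.stats import norm

    p_value = 0.01
    significance = norm.isf(p_value)    # one-sided p-value -> Gaussian sigmas
    print(f"{significance:.2f} sigma")  # ~2.33: interesting, not a discovery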

    Figure 1: momentum distributions in the electron (a) and muon (b) channels

    Figure 2: Exclusion limits in the Mth, MD plane for models with various numbers of extra dimensions

    So: no evidence of microscopic black holes or extra-dimensional gravity at the LHC yet, but there is a promising excess and Run 2 has only just begun. Since publication, ATLAS has collected another 10 fb⁻¹ of √s = 13 TeV data that has yet to be analyzed. These results could also be used to constrain other Beyond the Standard Model searches at the TeV scale that have similar high-pt final states with leptons and jets, which would give us more information about what can and can’t exist outside of the Standard Model. There is certainly more to be learned from this search!

    See the full article here.


     
  • richardmitnick 10:16 am on September 4, 2016 Permalink | Reply
    Tags: particlebites

    From particlebites: “The CMB sheds light on galaxy clusters: Observing the kSZ signal with ACT and BOSS” 

    particlebites bloc

    particlebites

    August 17, 2016 [Just brought to social media by astrobites.]
    Eve Vavagiakis

    Article: Detection of the pairwise kinematic Sunyaev-Zel’dovich effect with BOSS DR11 and the Atacama Cosmology Telescope
    Authors: F. De Bernardis, S. Aiola, E. M. Vavagiakis, M. D. Niemack, N. Battaglia, and the ACT Collaboration
    Reference: arXiv:1607.02139

    Editor’s note: this post is written by one of the students involved in the published result.

    Like X-rays shining through your body can inform you about your health, the cosmic microwave background (CMB) shining through galaxy clusters can tell us about the universe we live in.

    Cosmic Microwave Background per ESA/Planck

    When light from the CMB is distorted by the high energy electrons present in galaxy clusters, it’s called the Sunyaev-Zel’dovich effect. A new 4.1σ measurement of the kinematic Sunyaev-Zel’dovich (kSZ) signal has been made from the most recent Atacama Cosmology Telescope (ACT) cosmic microwave background (CMB) maps and galaxy data from the Baryon Oscillation Spectroscopic Survey (BOSS).

    Princeton ACT

    With steps forward like this one, the kinematic Sunyaev-Zel’dovich signal could become a probe of cosmology, astrophysics and particle physics alike.

    The Kinematic Sunyaev-Zel’dovich Effect

    It rolls right off the tongue, but what exactly is the kinematic Sunyaev-Zel’dovich signal? Galaxy clusters distort the cosmic microwave background before it reaches Earth, so we can learn about these clusters by looking at these CMB distortions. In our X-ray metaphor, the map of the CMB is the image of the X-ray of your arm, and the galaxy clusters are the bones. Galaxy clusters are the largest gravitationally bound structures we can observe, so they serve as important tools to learn more about our universe. In its essence, the Sunyaev-Zel’dovich effect is inverse-Compton scattering of cosmic microwave background photons off of the gas in these galaxy clusters, whereby the photons gain a “kick” in energy by interacting with the high energy electrons present in the clusters.

    The Sunyaev-Zel’dovich effect can be divided up into two categories: thermal and kinematic. The thermal Sunyaev-Zel’dovich (tSZ) effect is the spectral distortion of the cosmic microwave background in a characteristic manner due to the photons gaining, on average, energy from the hot (~10^7–10^8 K) gas of the galaxy clusters. The kinematic (or kinetic) Sunyaev-Zel’dovich (kSZ) effect is a second-order effect—about a factor of 10 smaller than the tSZ effect—that is caused by the motion of galaxy clusters with respect to the cosmic microwave background rest frame. If the CMB photons pass through galaxy clusters that are moving, they are Doppler shifted due to the cluster’s peculiar velocity (the velocity that cannot be explained by Hubble’s law, which states that objects recede from us at a speed proportional to their distance). The kinematic Sunyaev-Zel’dovich effect is the only known way to directly measure the peculiar velocities of objects at cosmological distances, and is thus a valuable source of information for cosmology. It allows us to probe megaparsec and gigaparsec scales – that’s around 30,000 times the diameter of the Milky Way!
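
    To get a feel for the size of the effect, the magnitude of the kSZ temperature shift is roughly ΔT/T ≈ (v/c)τ, where v is the line-of-sight peculiar velocity and τ is the cluster’s optical depth to Thomson scattering. A back-of-the-envelope estimate with typical assumed values (these numbers are illustrative, not from the paper):

    T_CMB = 2.725    # K, mean CMB temperature
    v_los = 300e3    # m/s, typical cluster peculiar velocity (assumed)
    c     = 3.0e8    # m/s, speed of light
    tau   = 5e-3     # optical depth to Thomson scattering (assumed)

    delta_T = T_CMB * (v_los / c) * tau
    print(f"kSZ shift ~ {delta_T * 1e6:.0f} microkelvin")   # ~14 microkelvin,
    # tiny compared to the ~100 microkelvin primary CMB anisotropies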

    A schematic of the Sunyaev-Zel’dovich effect resulting in higher energy (or blue shifted) photons of the cosmic microwave background (CMB) when viewed through the hot gas present in galaxy clusters. Source: UChicago Astronomy.

    Measuring the kSZ Effect

    To make the measurement of the kinematic Sunyaev-Zel’dovich signal, the Atacama Cosmology Telescope (ACT) collaboration used a combination of cosmic microwave background maps from two years of observations by ACT. The CMB map used for the analysis overlapped with ~68000 galaxy sources from the Large Scale Structure (LSS) DR11 catalog of the Baryon Oscillation Spectroscopic Survey (BOSS). The catalog lists the coordinate positions of galaxies along with some of their properties. The most luminous of these galaxies were assumed to be located at the centers of galaxy clusters, so temperature signals from the CMB map were taken at the coordinates of these galaxy sources in order to extract the Sunyaev-Zel’dovich signal.

    While the smallness of the kSZ signal with respect to the tSZ signal and the noise level in current CMB maps poses an analysis challenge, there exist several approaches to extracting the kSZ signal. To make their measurement, the ACT collaboration employed a pairwise statistic. “Pairwise” refers to the momentum between pairs of galaxy clusters, and “statistic” indicates that a large sample is used to rule out the influence of unwanted effects.

    Here’s the approach: nearby galaxy clusters move towards each other on average, due to gravity. We can’t easily measure the three-dimensional momentum of clusters, but the average pairwise momentum can be estimated by using the line of sight component of the momentum, along with other information such as redshift and angular separations between clusters. The line of sight momentum is directly proportional to the measured kSZ signal: the microwave temperature fluctuation which is measured from the CMB map. We want to know if we’re measuring the kSZ signal when we look in the direction of galaxy clusters in the CMB map. Using the observed CMB temperature to find the line of sight momenta of galaxy clusters, we can estimate the mean pairwise momentum as a function of cluster separation distance, and check to see if we find that nearby galaxies are indeed falling towards each other. If so, we know that we’re observing the kSZ effect in action in the CMB map.
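
    Here is a minimal sketch of a pairwise estimator in this spirit, heavily simplified from the published analysis (which includes redshift-dependent weights and careful covariance estimation): for each cluster pair, weight the difference of line-of-sight temperatures by a geometric projection factor and accumulate the result in bins of comoving separation.

    import numpy as np

    def pairwise_estimator(pos, temp, r_bins):
        """Simplified mean pairwise kSZ estimator.

        pos    : (N, 3) comoving cluster positions (Mpc)
        temp   : (N,) kSZ temperature at each cluster position (microkelvin)
        r_bins : edges of comoving-separation bins (Mpc)
        """
        nbins = len(r_bins) - 1
        num, den = np.zeros(nbins), np.zeros(nbins)
        for i in range(len(temp)):
            for j in range(i + 1, len(temp)):
                sep = pos[i] - pos[j]
                r = np.linalg.norm(sep)
                k = np.searchsorted(r_bins, r) - 1
                if k < 0 or k >= nbins:
                    continue
                # Geometric weight: projection of the two lines of sight
                # onto the pair separation axis.
                c_ij = sep @ (pos[i] / np.linalg.norm(pos[i])
                              + pos[j] / np.linalg.norm(pos[j])) / (2 * r)
                num[k] += (temp[i] - temp[j]) * c_ij
                den[k] += c_ij ** 2
        # Sign chosen so that mutual infall gives a positive signal.
        return -num / np.maximum(den, 1e-12)

    # Toy usage with random positions and temperatures:
    rng = np.random.default_rng(2)
    pos = rng.uniform(500, 1500, size=(200, 3))
    temp = rng.normal(0.0, 10.0, size=200)
    print(pairwise_estimator(pos, temp, np.array([0.0, 100.0, 200.0, 300.0])))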

    For the measurement quoted in their paper, the ACT collaboration finds the average pairwise momentum as a function of galaxy cluster separation, and explores a variety of error determinations and sources of systematic error. The most conservative errors based on simulations give signal-to-noise estimates that vary between 3.6 and 4.1.

    The mean pairwise momentum estimator and best fit model for a selection of 20000 objects from the DR11 Large Scale Structure catalog, plotted as a function of comoving separation. The dashed line is the linear model, and the solid line is the model prediction including nonlinear redshift space corrections. The best fit provides a 4.1σ evidence of the kSZ signal in the ACTPol-ACT CMB map. Source: arXiv:1607.02139.

    The ACT and BOSS results are an improvement on the 2012 ACT detection, and are comparable with results from the South Pole Telescope (SPT) collaboration that use galaxies from the Dark Energy Survey. The ACT and BOSS measurement represents a step forward towards improved extraction of kSZ signals from CMB maps. Future surveys such as Advanced ACTPol, SPT-3G, the Simons Observatory, and next-generation CMB experiments will be able to apply the methods discussed here to improved CMB maps in order to achieve strong detections of the kSZ effect. With new data that will enable better measurements of galaxy cluster peculiar velocities, the pairwise kSZ signal will become a powerful probe of our universe in the years to come.

    Implications and Future Experiments

    One interesting consequence for particle physics will be more stringent constraints on the sum of the neutrino masses from the pairwise kinematic Sunyaev-Zel’dovich effect. Upper bounds on the neutrino mass sum from cosmological measurements of large scale structure and the CMB have the potential to determine the neutrino mass hierarchy, one of the next major unknowns of the Standard Model to be resolved, if the mass hierarchy is indeed a “normal hierarchy” with ν3 being the heaviest mass state. If the upper bound of the neutrino mass sum is measured to be less than 0.1 eV, the inverted hierarchy scenario would be ruled out, due to there being a lower limit on the mass sum of ~0.095 eV for an inverted hierarchy and ~0.056 eV for a normal hierarchy.
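
    Those lower limits follow directly from the measured mass-squared splittings. A quick check with rounded oscillation values (Δm²_21 ≈ 7.5×10⁻⁵ eV², |Δm²_31| ≈ 2.5×10⁻³ eV²) reproduces the numbers quoted above:

    import math

    dm21_sq = 7.5e-5   # eV^2, solar splitting (rounded)
    dm31_sq = 2.5e-3   # eV^2, atmospheric splitting (rounded)

    # Normal hierarchy: lightest state m1 = 0.
    normal = 0.0 + math.sqrt(dm21_sq) + math.sqrt(dm31_sq)

    # Inverted hierarchy: lightest state m3 = 0, so m1 ~ m2 ~ sqrt(dm31_sq).
    inverted = math.sqrt(dm31_sq - dm21_sq) + math.sqrt(dm31_sq) + 0.0

    print(f"minimum mass sum, normal hierarchy:   {normal:.3f} eV")    # ~0.059 eV
    print(f"minimum mass sum, inverted hierarchy: {inverted:.3f} eV")  # ~0.099 eV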

    Forecasts for kSZ measurements in combination with input from Planck predict possible constraints on the neutrino mass sum with a precision of 0.29 eV, 0.22 eV and 0.096 eV for Stage II (ACTPol + BOSS), Stage III (Advanced ACTPol + BOSS) and Stage IV (next generation CMB experiment + DESI) surveys respectively, with the possibility of much improved constraints with optimal conditions. As cosmic microwave background maps are improved and Sunyaev-Zel’dovich analysis methods are developed, we have a lot to look forward to.

    See the full article here.


     
  • richardmitnick 4:05 pm on August 14, 2016 Permalink | Reply
    Tags: particlebites

    From particlebites: “Jets: More than Riff, Tony, and a rumble” 

    particlebites bloc

    particlebites

    July 26, 2016 [Just today in social media.]
    Reggie Bain

    Ubiquitous in the LHC’s ultra-high energy collisions are collimated sprays of particles called jets. The study of jet physics is a rapidly growing field where experimentalists and theorists work together to unravel the complex geometry of the final state particles at LHC experiments. If you’re totally new to the idea of jets…this bite from July 18th, 2016 by Julia Gonski is a nice experimental introduction to the importance of jets. In this bite, we’ll look at the basic ideas of jet physics from a more theoretical perspective. Let’s address a few basic questions:

    1. What is a jet? Jets are highly collimated collections of particles that are frequently observed in detectors. In visualizations of collisions in the ATLAS detector, one can often identify jets by eye.

    A nicely colored visualization of a multi-jet event in the ATLAS detector. Reason #172 that I’m not an experimentalist…actually sifting out useful information from the detector (or even making a graphic like this) is insanely hard.

    Jets are formed in the final state of a collision when a particle showers off radiation in such a way as to form a focused cone of particles. The most commonly studied jets are formed by quarks and gluons that fragment into hadrons like pions, kaons, and sometimes more exotic particles like the J/ψ, Υ, χ_c, and many others. This process is often referred to as hadronization.

    2. Why do jets exist? Jets are a fundamental prediction of Quantum Field Theories like Quantum Chromodynamics (QCD). One common process studied in field theory textbooks is electron–positron annihilation into a pair of quarks, e+e− → qq̄. In order to calculate the cross-section of this process, it turns out that one has to consider the possibility that additional gluons are produced along with the qq̄. Since no detector has infinite resolution, it’s always possible that there are gluons that go unobserved by your detector. This could be because they are incredibly soft (low energy) or because they travel almost exactly collinear to the q or q̄ itself. In this region of momenta, the cross-section gets very large and the process favors the creation of this extra radiation. Since these gluons carry color/anti-color, they begin to hadronize and decay so as to become stable, colorless states. When the q and q̄ have high momenta, the zoo of particles formed from the hadronization all have momenta clustered around the direction of the original q and q̄, and form a cone shape in the detector…thus a jet is born! The details of exactly how hadronization works are where theory can get a little hazy. At the energy and distance scales where quarks/gluons start to hadronize, perturbation theory breaks down, making many of our usual calculational tools useless. This, of course, makes the realm of hadronization—often referred to as parton fragmentation in the literature—a hot topic in QCD research.

    3. How do we measure/study jets? Now comes the tricky part. As experimentalists will tell you, actually measuring jets can be a messy business. By taking the signatures of the final state particles in an event (i.e. a collision), one can reconstruct a jet using a jet algorithm. One of the first such jet definitions was introduced by George Sterman and Steven Weinberg in 1977. They defined a jet using two parameters, θ and E, which restrict the angle and energy of particles that are in or out of a jet. Today, we have a variety of jet algorithms that fall into two categories:

    Cone Algorithms — These algorithms identify stable cones of a given angular size. The cones are defined in such a way that if one or two nearby particles are added to or removed from the jet cone, it won’t drastically change the cone’s location and energy.
    Recombination Algorithms — These look pairwise at the 4-momenta of all particles in an event and combine them, according to a certain distance metric (there’s a different one for each algorithm), in such a way as to be left with distinct, well-separated jets. (A minimal sketch of one such distance metric follows below.)
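
    As promised, here is a minimal sketch of one such distance metric, the anti-kT measure (see arXiv:0802.1189); the full clustering loop that repeatedly merges the closest pair is omitted.

    import math

    def antikt_distances(pt_i, y_i, phi_i, pt_j, y_j, phi_j, R=0.4):
        """Anti-kT pair distance d_ij and beam distance d_iB.

        At each step, a recombination algorithm finds the smallest of all
        d_ij and d_iB: if some d_ij wins, particles i and j are merged;
        if d_iB wins, particle i is declared a final jet.
        """
        dphi = (phi_i - phi_j + math.pi) % (2 * math.pi) - math.pi
        dR_sq = (y_i - y_j) ** 2 + dphi ** 2
        d_ij = min(pt_i ** -2, pt_j ** -2) * dR_sq / R ** 2
        d_iB = pt_i ** -2
        return d_ij, d_iB

    # Hard particles have tiny 1/pt^2, so anti-kT clusters soft radiation
    # around the hardest particles first, giving nicely cone-like jets.
    print(antikt_distances(100.0, 0.0, 0.0, 5.0, 0.1, 0.1))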

    Figure 2: From Cacciari and Salam’s original paper on the “Anti-kT” jet algorithm (See arXiv:0802.1189). The picture shows the application of 4 different jet algorithms: the kT, Cambridge/Aachen, Seedless-Infrared-Safe Cone, and anti-kT algorithms to a single set of final state particles in an event. You can see how each algorithm reconstructs a slightly different jet structure. These are among the most commonly used clustering algorithms on the market (the anti-kT being, at least in my experience, the most popular)

    4. Why are jets important? On the frontier of high energy particle physics, CERN leads the world’s charge in the search for new physics. From deepening our understanding of the Higgs to observing never before seen particles, projects like ATLAS,

    An illustration of an interesting type of jet substructure observable called “N-subjettiness” from the original paper by Jesse Thaler and Ken van Tilburg (see arXiv:1011.2268). N-subjettiness aims to study how momenta within a jet are distributed by dividing them up into n sub-jets. The diagram on the left shows an example of 2-subjettiness where a jet contains two sub-jets. The diagram on the right shows a jet with 0 sub-jets.

    CMS, and LHCb promise to uncover interesting physics for years to come. As it turns out, a large amount of Standard Model background to these new physics discoveries comes in the form of jets. Understanding the origin and workings of these jets can thus help us in the search for physics beyond the Standard Model.

    Additionally, there are a number of interesting questions that remain about the Standard Model itself. From studying heavy hadron production and decay in pp and heavy-ion collisions to providing precision measurements of the strong coupling, jet physics has a wide range of applicability and relevance to Standard Model problems. In recent years, the physics of jet substructure, which studies the distributions of particle momenta within a jet, has also seen increased interest. By studying the geometry of jets, a number of clever observables have been developed that can help us understand what particles they come from and how they are formed. Jet substructure studies will be the subject of many future bites!

    Going forward…With any luck, this should serve as a brief outline to the uninitiated on the basics of jet physics. In a world increasingly filled with bigger, faster, and stronger colliders, jets will continue to play a major role in particle phenomenology. In upcoming bites, I’ll discuss the wealth of new and exciting results coming from jet physics research. We’ll examine questions like:

    How do theoretical physicists tackle problems in jet physics?
    How does the process of hadronization/fragmentation of quarks and gluons really work?
    Can jets be used to answer long outstanding problems in the Standard Model?

    I’ll also bite about how physicists use theoretical smart bombs called “effective field theories” to approach these often nasty theoretical calculations. But more on that later…

    See the full article here.


     
  • richardmitnick 10:21 am on July 14, 2016 Permalink | Reply
    Tags: particlebites

    From particlebites: “The dawn of multi-messenger astronomy: using KamLAND to study gravitational wave events GW150914 and GW151226” 

    particlebites bloc

    particlebites

    July 13, 2016
    Eve Vavagiakis

    Article: Search for electron antineutrinos associated with gravitational wave events GW150914 and GW151226 using KamLAND
    Authors: KamLAND Collaboration
    Reference: arXiv:1606.07155

    After the chirp heard ‘round the world, the search is on for coincident astrophysical particle events to provide insight into the source and nature of the era-defining gravitational wave events detected by the LIGO Scientific Collaboration in late 2015.

    LSC LIGO Scientific Collaboration

    By combining information from gravitational wave (GW) events with the detection of astrophysical neutrinos and electromagnetic signatures such as gamma-ray bursts, physicists and astronomers are poised to draw back the curtain on the dynamics of astrophysical phenomena, and we’re surely in for some surprises.

    The first recorded gravitational wave event, GW150914, was likely a merger of two black holes which took place more than one billion light years from the Earth. The event’s name marks the day it was observed by the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO), September 14th, 2015.

    Two black holes spiral in towards one another and merge to emit a burst of gravitational waves that Advanced LIGO can detect. Source: APS Physics.

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation
    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    LIGO detections are named “GW” for “gravitational wave,” followed by the observation date in YYMMDD format. The second event, GW151226 (December 26th, 2015) was likely another merger of two black holes, having 8 and 14 times the mass of the sun, taking place 1.4 billion light years away from Earth.

    The following computer simulation from LIGO depicts what the collision of two black holes would look like if we could get close enough to the merger. It was created by solving equations from Albert Einstein’s general theory of relativity using the LIGO data. (Source: LIGO Lab Caltech : MIT).

    A third gravitational wave event candidate, LVT151012, a possible black hole merger which occurred on October 12th, 2015, did not reach the same detection significance as the aforementioned events, but still has a >50% chance of astrophysical origin. LIGO candidates are named differently than detections. The names start with “LVT” for “LIGO-Virgo Trigger,” but are followed by the observation date in the same YYMMDD format. The different name indicates that the event was not significant enough to be called a gravitational wave.

    Observations from other scientific collaborations can search for particles associated with these gravitational waves. The combined information from the gravitational wave and particle detections could identify the origin of these gravitational wave events. For example, some violent astrophysical phenomena emit not only gravitational waves, but also high-energy neutrinos. Conversely, there is currently no known mechanism for the production of either neutrinos or electromagnetic waves in a black hole merger.

    Black holes with rapidly accreting disks can be the origin of gamma-ray bursts and neutrino signals, but these disks are not expected to be present during mergers like the ones detected by LIGO. For this reason, it was surprising when the Fermi Gamma-ray Space Telescope reported a coincident gamma-ray burst occurring 0.4 seconds after the September GW event with a false alarm probability of 1 in 455. Although there is some debate in the community about whether or not this observation is to be believed, the observation motivates a multi-messenger analysis including the hunt for associated astrophysical neutrinos at all energies.

    Could a neutrino experiment like KamLAND find low energy antineutrino events coincident with the GW events, even when higher energy searches by IceCube and ANTARES did not?

    Schematic diagram of the KamLAND detector. Source: hep-ex/0212021v1

    KamLAND, the Kamioka Liquid scintillator Anti-Neutrino Detector, is located under Mt. Ikenoyama, Japan, buried beneath the equivalent of 2,700 meters of water. It consists of an 18 meter diameter stainless steel sphere, the inside of which is covered with photomultiplier tubes, surrounding an EVOH/nylon balloon enclosed by pure mineral oil. Inside the balloon resides 1 kton of highly purified liquid scintillator. Outside the stainless steel sphere is a cylindrical 3.2 kton water-Cherenkov detector that provides shielding and enables cosmic ray muon identification.

    KamLAND is optimized to search for ~MeV neutrinos and antineutrinos. The detection of the gamma ray burst by the Fermi telescope suggests that the detected black hole merger might have retained its accretion disk, and the spectrum of accretion disk neutrinos around a single black hole is expected to peak around 10 MeV, so KamLAND searched for correlations between the LIGO GW events and ~10 MeV electron antineutrino events occurring within a 500 second window of the merger events. Researchers focused on the detection of electron antineutrinos through the inverse beta decay reaction.
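
    The coincidence logic itself is straightforward. Here is a minimal sketch with toy numbers (not KamLAND data or code), selecting inverse-beta-decay-like candidates in an illustrative energy window within ±500 seconds of a gravitational wave time:

    def coincident_candidates(event_times, event_energies, gw_time,
                              half_window=500.0, e_min=7.5, e_max=30.0):
        """Return candidate events within +/- half_window seconds of a GW trigger.

        event_times    : candidate detection times (s, same clock as gw_time)
        event_energies : prompt energies (MeV)
        The energy window and half-width here are illustrative choices.
        """
        return [(t, e) for t, e in zip(event_times, event_energies)
                if abs(t - gw_time) <= half_window and e_min <= e <= e_max]

    # Toy example: one event in time but below the energy window,
    # one in both windows -> a single coincident candidate.
    times    = [1000.0, 1200.0, 9000.0]
    energies = [3.0, 12.0, 15.0]
    print(coincident_candidates(times, energies, gw_time=1100.0))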

    No events were found within the target window of any gravitational wave event, and any adjacent event was consistent with background. KamLAND researchers used this information to determine a monochromatic fluence (time integrated flux) upper limit, as well as an upper limit on source luminosity for each gravitational wave event, which places a bound on the total energy released as low energy neutrinos during the merger events and candidate event. The lack of detected concurrent inverse beta decay events supports the conclusion that GW150914 was a black hole merger, and not another astrophysical event such as a core-collapse supernova.

    More information would need to be obtained to explain the gamma ray burst observed by the Fermi telescope, and work to improve future measurements is ongoing. Large uncertainties in the origin region of gamma ray bursts observed by the Fermi telescope will be reduced, and the localization of GW events will be improved, most drastically so by the addition of a third LIGO detector (LIGO India).

    As Advanced LIGO continues its operation, there will likely be many more chances for KamLAND and other neutrino experiments to search for coincidence neutrinos. Multi-messenger astronomy has only just begun to shed light on the nature of black holes, supernovae, mergers, and other exciting astrophysical phenomena — and the future looks bright.

    U Wisconsin ICECUBE neutrino detector at the South Pole
    IceCube neutrino detector interior

    ANTARES

    See the full article here.


     