Tagged: CERN Courier

  • richardmitnick 4:55 pm on May 19, 2017
    Tags: Belle II rolls in, CERN Courier, KEK laboratory in Japan

    From CERN Courier: “Belle II rolls in” 

    CERN Courier

    May 19, 2017

    The Belle II detector in place

    On 11 April, the Belle II detector at the KEK laboratory in Japan was successfully “rolled-in” to the collision point of the upgraded SuperKEKB accelerator, marking an important milestone for the international B-physics community. The Belle II experiment is an international collaboration hosted by KEK in Tsukuba, Japan, with related physics goals to those of the LHCb experiment at CERN but in the pristine environment of electron–positron collisions. It will analyse copious quantities of B mesons to study CP violation and signs of physics beyond the Standard Model (CERN Courier September 2016 p32).

    “Roll-in” involves moving the entire 8 m-tall, 1400 tonne Belle II detector system from its assembly area to the beam-collision point 13 m away. The detector is now integrated with SuperKEKB, and all of its subdetectors, except the innermost vertex detector, are in place. The next step is to install the complex focusing magnets around the Belle II interaction point. SuperKEKB achieved its first turns in February, with operation of the main rings scheduled for early spring and phase-II “physics” operation by the end of 2018.

    Compared to the previous Belle experiment, and thanks to major upgrades made to the former KEKB collider, Belle II will allow much larger data samples to be collected with much improved precision. “After six years of gruelling work with many unexpected twists and turns, it was a moving and gratifying experience for everyone on the team to watch the Belle II detector move to the interaction point,” says Belle II spokesperson Tom Browder. “Flavour physics is now the focus of much attention and interest in the community and Belle II will play a critical role in the years to come.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    THE FOUR MAJOR PROJECT COLLABORATIONS

    ATLAS
    CERN ATLAS New

    ALICE
    CERN ALICE New

    CMS
    CERN CMS New

    LHCb
    CERN LHCb New II

    LHC

    CERN LHC Map
    CERN LHC Grand Tunnel

    CERN LHC particles

     
  • richardmitnick 8:28 pm on May 1, 2017
    Tags: CERN Courier

    From CERN Courier: “How dark matter became a particle”

    CERN Courier

    Apr 13, 2017
    Gianfranco Bertone, University of Amsterdam
    Dan Hooper, Fermi National Accelerator Laboratory and the University of Chicago.

    It took decades for dark matter to enter the lexicon of particle physics. Today, explaining the nature and abundance of dark matter is one of the most pressing problems in the field.

    EAGLE-project simulation

    Astronomers have long contemplated the possibility that there may be forms of matter in the universe that are imperceptible, either because they are too far away, too dim or intrinsically invisible. Lord Kelvin was perhaps the first, in 1904, to attempt a dynamical estimate of the amount of dark matter in the universe. His argument was simple yet powerful: if stars in the Milky Way can be described as a gas of particles acting under the influence of gravity, one can establish a relationship between the size of the system and the velocity dispersion of the stars. Henri Poincaré was impressed by Kelvin’s results, and in 1906 he argued that since the velocity dispersion predicted in Kelvin’s estimate is of the same order of magnitude as that observed, “there is no dark matter, or at least not so much as there is of shining matter”.

    The Swiss–US astronomer Fritz Zwicky is arguably the most famous and widely cited pioneer in the field of dark matter. In 1933, he studied the redshifts of various galaxy clusters and noticed a large scatter in the apparent velocities of eight galaxies within the Coma Cluster. Zwicky applied the so-called virial theorem – which establishes a relationship between the kinetic and potential energies of a system of particles – to estimate the cluster’s mass. In contrast to what would be expected from a structure of this scale – a velocity dispersion of around 80 km/s – the observed average velocity dispersion along the line of sight was approximately 1000 km/s. From this comparison, Zwicky concluded: “If this would be confirmed, we would get the surprising result that dark matter is present in much greater amount than luminous matter.”
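    Zwicky's virial argument can be sketched numerically. The snippet below is an illustrative order-of-magnitude estimate, not his original calculation: the prefactor depends on the assumed mass profile, the Coma-like inputs are rough, and the function name `virial_mass` is chosen here for illustration.

```python
# Order-of-magnitude virial mass estimate in the spirit of Zwicky's argument.
# The prefactor (here 5) depends on the assumed mass profile; all numbers
# below are illustrative.
G = 4.30e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def virial_mass(sigma_los_km_s, radius_kpc):
    """Cluster mass from the virial theorem, M ~ 5 * sigma^2 * R / G."""
    return 5.0 * sigma_los_km_s**2 * radius_kpc / G

# Coma-like inputs: line-of-sight dispersion ~1000 km/s, radius ~1 Mpc
print(f"{virial_mass(1000.0, 1000.0):.1e} Msun")  # ~1e15 Msun
```

    A dispersion of ~1000 km/s rather than the expected ~80 km/s implies a mass some two orders of magnitude above the luminous estimate, which is the heart of Zwicky's conclusion.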

    In the 1950s and 1960s, most astronomers did not ask whether the universe had a significant abundance of invisible or missing mass. Although observations from this era would later be seen as evidence for dark matter, back then there was no consensus that the observations required much, or even any, such hidden material, and certainly there was not yet any sense of crisis in the field. It was in 1970 that the first explicit statements began to appear arguing that additional mass was needed in the outer parts of some galaxies, based on comparisons between predicted and measured rotation curves. The appendix of a seminal paper published by Ken Freeman in 1970, prompted by discussions with radio-astronomer Mort Roberts, concluded that: “If [the data] are correct, then there must be in these galaxies additional matter which is undetected, either optically or at 21 cm. Its mass must be at least as large as the mass of the detected galaxy, and its distribution must be quite different from the exponential distribution which holds for the optical galaxy.” (Figure 1 below.)

    Several other lines of evidence began to appear that supported the same conclusion. In 1974, two influential papers (by Jaan Einasto, Ants Kaasik and Enn Saar, and by Jerry Ostriker, Jim Peebles and Amos Yahil) argued that a common solution existed for the mass discrepancies observed in clusters and in galaxies, and made the strong claim that the mass of galaxies had been until then underestimated by a factor of about 10.

    Kelvin, Rubin, Bosma

    By the end of the decade, opinion among many cosmologists and astronomers had crystallised: dark matter was indeed abundant in the universe. Although the same conclusion was reached by many groups of scientists from different subcultures and disciplines, individuals found different lines of evidence compelling during this period. Some astronomers were largely persuaded by new and more reliable measurements of rotation curves, such as those by Albert Bosma, Vera Rubin and others. Others were swayed by observations of galaxy clusters, arguments pertaining to the stability of disc galaxies, or even cosmological considerations. Despite disagreements regarding the strengths and weaknesses of these various observations and arguments, a consensus nonetheless began to emerge by the end of the 1970s in favour of dark matter's existence.

    Enter the particle physicists

    From our contemporary perspective, it can be easy to imagine that scientists in the 1970s had in mind halos of weakly interacting particles when they thought about dark matter. In reality, they did not. Instead, most astronomers had much less exotic ideas in the form of comparatively low-luminosity versions of otherwise ordinary stars and gas. Over time, however, an increasing number of particle physicists became aware of and interested in the problem of dark matter. This transformation was not just driven by new scientific results, but also by sociological changes in science that had been taking place for some time.

    Half a century ago, cosmology was widely viewed as something of a fringe science, with little predictive power or testability. Particle physicists and astrophysicists did not often study or pursue research in each other’s fields, and it was not obvious what their respective communities might have to offer one another. More than any other problem in science, it was dark matter that brought particle physicists and astronomers together.

    As astrophysical alternatives were gradually ruled out one by one, the view that dark matter likely consists of one or more as-yet-undiscovered species of subatomic particle came to be held almost universally among particle physicists and astrophysicists alike.

    Perhaps unsurprisingly, the first widely studied particle dark-matter candidates were neutrinos. Unlike all other known particle species, neutrinos are stable and do not experience electromagnetic or strong interactions – which are essential characteristics for almost any viable dark-matter candidate. The earliest discussion of the role of neutrinos in cosmology appeared in a 1966 paper by Soviet physicists Gershtein and Zeldovich, and several years later the topic began to appear in the West, beginning in 1972 with a paper by Ram Cowsik and J McClelland. Despite the very interesting and important results of these and other papers, it is notable that most of them did not address or even acknowledge the possibility that neutrinos could account for the missing mass that had been observed by astronomers on galactic and cluster scales. One exception was the 1977 paper by Lee and [Steven] Weinberg, whose final sentence reads: “Of course, if a stable heavy neutral lepton were discovered with a mass of order 1–15 GeV, the gravitational field of these heavy neutrinos would provide a plausible mechanism for closing the universe.”

    While this is still a long way from acknowledging the dynamical evidence for dark matter, it was an indication that physicists were beginning to realise that weakly interacting particles could be very abundant in our universe, and may have had an observable impact on its evolution. In 1980, the possibility that neutrinos might make up the dark matter received a considerable boost when a group studying tritium beta decay reported that they had measured the mass of the electron antineutrino to be approximately 30 eV – similar to the value needed for neutrinos to account for the majority of dark matter. Although this “discovery” was eventually refuted, it motivated many particle physicists to consider the cosmological implications of their research.


    Although we know today that dark matter in the form of Standard Model neutrinos would be unable to account for the observed large-scale structure of the universe, neutrinos provided an important template for the class of hypothetical species that would later be known as weakly interacting massive particles (WIMPs). Astrophysicists and particle physicists alike began to experiment with a variety of other, more viable, dark-matter candidates.

    Cold dark-matter paradigm

    The idea of neutrino dark matter was killed off in the mid-1980s with the arrival of numerical simulations. These could predict how large numbers of dark-matter particles would evolve under the force of gravity in an expanding universe, and therefore allow astronomers to assess the impact of dark matter on the formation of large-scale structure. In fact, by comparing the results of these simulations with those of galaxy surveys, it was soon realised that no relativistic particle could account for dark matter. Instead, the paradigm of cold dark matter – i.e. made of particles that were non-relativistic at the epoch of structure formation – was well on its way to becoming firmly established.

    Meanwhile, in 1982, Jim Peebles pointed out that the observed characteristics of the cosmic microwave background (CMB) also seemed to require the existence of dark matter.

    CMB per ESA/Planck

    If only baryons existed, one could explain the observed degree of large-scale structure only if the universe had started in a fairly anisotropic or “clumpy” state. But by this time, the available data already set an upper limit on CMB anisotropies at the level of 10^–4 – too meagre to account for the universe's structure. Peebles argued that this problem would be relieved if the universe were instead dominated by massive weakly interacting particles whose density fluctuations begin to grow prior to the decoupling of matter and radiation, during which the CMB was born. This paper, among others, received enormous attention within the scientific community and helped establish cold dark matter as the leading paradigm to describe the structure and evolution of the universe at all scales.

    Solutions beyond the Standard Model

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    Neutrinos might be the only known particles that are stable, electrically neutral and not strongly interacting, but the imagination of particle physicists did not remain confined to the Standard Model for long. Instead, papers started to appear that openly contemplated many speculative and yet undiscovered particles that might account for dark matter. In particular, particle physicists began to find new candidates for dark matter within the framework of a newly proposed space–time symmetry called supersymmetry.

    Standard model of supersymmetry (DESY)

    The cosmological implications of supersymmetry were discussed as early as the late 1970s. In his 1977 paper on the cosmological constraints on the masses of neutrinos, Piet Hut wrote that the dark-matter argument was not limited to neutrinos or even to weakly interacting particles. The abstract of his paper mentions another possibility, in the context of the supersymmetric partner of the graviton, the spin-3/2 gravitino: “Similar, but much more severe, restrictions follow for particles that interact only gravitationally. This seems of importance with respect to supersymmetric theories,” wrote Hut.

    In their 1982 paper, Heinz Pagels and Joel Primack also considered the cosmological implications of gravitinos. But unlike Hut’s paper, or the other preceding papers that had discussed neutrinos as a cosmological relic, Pagels and Primack were keenly aware of the dark-matter problem and explicitly proposed that gravitinos could provide the solution by making up the missing mass. In many ways, their paper reads like a modern manuscript on supersymmetric dark matter, motivating supersymmetry by its various attractive features and then discussing both the missing mass in galaxies and the role that dark matter could play in the formation of large-scale structure. Around the same time, supersymmetry was being further developed into its more modern form, leading to the introduction of R-parity and constructions such as the minimal supersymmetric standard model (MSSM). Such supersymmetric models included not only the gravitino as a dark-matter candidate, but also neutralinos – electrically neutral mixtures of the superpartners of the photon, Z and Higgs bosons.

    Over the past 35 years, neutralinos have remained the single most studied candidate for dark matter and have been the subject of many thousands of scientific publications. Papers discussing the cosmological implications of stable neutralinos began to appear in 1983. In the first two of these, Weinberg and Haim Goldberg independently discussed the case of a photino (a neutralino whose composition is dominated by the superpartner of the photon) and derived a lower bound of 1.8 GeV on its mass by requiring that the density of such particles does not overclose the universe. A few months later, a longer paper by John Ellis and colleagues considered a wider range of neutralinos as cosmological relics. In Goldberg's paper there is no mention of the phrase “dark matter” or of any missing-mass problem, and Ellis et al. took a largely similar approach, requiring only that the cosmological abundance of neutralinos not be so large as to overly slow or reverse the universe's expansion. Although most of the papers on stable cosmological relics written around this time did not yet fully embrace the need to solve the dark-matter problem, occasional sentences could be found that reflected the gradual emergence of a new perspective.

    The Bullet Cluster

    During the years that followed, an increasing number of particle physicists would further motivate proposals for physics beyond the Standard Model by showing that their theories could account for the universe’s dark matter. In 1983, for instance, John Preskill, Mark Wise and Frank Wilczek showed that the axion, originally proposed to solve the strong CP problem in quantum chromodynamics, could account for all of the dark matter in the universe. In 1993, Scott Dodelson and Lawrence Widrow proposed a scenario in which an additional, sterile neutrino species that did not experience electroweak interactions could be produced in the early universe and realistically make up the dark matter. Both the axion and the sterile neutrino are still considered as well-motivated dark-matter candidates, and are actively searched for with a variety of particle and astroparticle experiments.

    The triumph of particle dark matter

    In the early 1980s there was still nothing resembling a consensus about whether dark matter was made of particles at all, with other possibilities including planets, brown dwarfs, red dwarfs, white dwarfs, neutron stars and black holes. Kim Griest would later coin the term “MACHOs” – short for massive astrophysical compact halo objects – to denote this class of dark-matter candidates, in response to the alternative of WIMPs. There is a consensus today, based on searches using gravitational microlensing surveys and determinations of the cosmic baryon density based on measurements of the primordial light-element abundances and the CMB, that MACHOs do not constitute a large fraction of the dark matter.

    Gravitational microlensing, S. Liebes, Physical Review B, 133 (1964): 835

    An alternative to particle dark matter is to assume that there is no dark matter in the first place, and that instead our theory of gravity needs to be modified. This simple idea, which was put forward in 1982 by Mordehai Milgrom, is known as modified Newtonian dynamics (MOND) and has far-reaching consequences. At the heart of MOND is the suggestion that the force due to gravity does not obey Newton's second law, F = ma. If instead gravity scaled as F = ma^2/a_0 in the limit of very low accelerations (a << a_0 ~ 1.2 × 10^–10 m/s^2), then it would be possible to account for the observed motions of stars and gas within galaxies without postulating the presence of any dark matter.
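    In the deep-MOND limit the circular velocity becomes independent of radius: setting v^2/r equal to the effective acceleration sqrt(g_N a_0), with g_N = GM/r^2, gives v^4 = G M a_0, i.e. an asymptotically flat rotation curve. A minimal numerical sketch (the function name and the Milky-Way-like input mass are illustrative assumptions, not values from the article):

```python
A0 = 1.2e-10   # MOND acceleration scale, m/s^2
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def circular_velocity_mond(mass_kg, radius_m):
    """Deep-MOND circular velocity: v^2/r = sqrt(g_N * a0) with
    g_N = G*M/r^2 gives v^4 = G*M*a0 -- the radius drops out,
    yielding an asymptotically flat rotation curve."""
    return (G * mass_kg * A0) ** 0.25  # independent of radius_m in this limit

# A Milky-Way-like baryonic mass (~6e10 solar masses) gives ~180 km/s,
# roughly the observed flat rotation speed, with no dark matter assumed
MSUN = 1.989e30
print(f"{circular_velocity_mond(6e10 * MSUN, 8 * 3.086e19) / 1e3:.0f} km/s")
```

    The radius independence is the key point: Newtonian gravity with visible mass alone predicts v falling as 1/sqrt(r), whereas observed rotation curves stay flat.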

    In 2006, a group of astronomers including Douglas Clowe transformed the debate between dark matter and MOND with the publication of an article entitled A direct empirical proof of the existence of dark matter. In this paper, the authors described observations of a pair of merging clusters collectively known as the Bullet Cluster. As a result of the clusters' recent collision, the distribution of stars and galaxies is spatially separated from the hot X-ray-emitting gas (which constitutes the majority of the baryonic mass in this system). A comparison of the weak-lensing and X-ray maps of the Bullet Cluster clearly reveals that the mass in this system does not trace the distribution of baryons. Another source of gravitational potential, such as that provided by dark matter, must instead dominate the mass of this system.

    Following these observations of the Bullet Cluster and similar systems, many researchers expected that this would effectively bring the MOND hypothesis to an end. This did not happen, although the Bullet Cluster and other increasingly precise cosmological measurements on the scale of galaxy clusters, as well as the observed properties of the CMB, have been difficult to reconcile with all proposed versions of MOND. It is currently unclear whether other theories of modified gravity, in some yet-unknown form, might be compatible with these observations. Until we have a conclusive detection of dark-matter particles, however, the possibility that dark matter is a manifestation of a new theory of gravity remains open.

    Today, the idea that most of the mass in the universe is made up of cold and non-baryonic particles is not only the leading paradigm, but is widely accepted among astrophysicists and particle physicists alike. Although dark matter's particle nature continues to elude us, a rich and active experimental programme is striving to detect and characterise dark matter's non-gravitational interactions, ultimately allowing us to learn the identity of this mysterious substance. It has been more than a century since the first pioneering attempts to measure the amount of dark matter in the universe. Perhaps it will not be too many more years before we come to understand what that matter is.

    See the full article here.

  • richardmitnick 4:55 pm on February 16, 2017
    Tags: CERN Courier, Dark Matter Physics

    From CERN Courier: “Funding injection for SNOLAB” 

    CERN Courier

    DEAP-3600 detector at SNOLAB

    The SNOLAB laboratory in Ontario, Canada, has received a grant of $28.6m to help secure its next three years of operations. The facility is one of 17 research facilities to receive support through Canada’s Major Science Initiative (MSI) fund, which exists to secure state-of-the-art national research facilities.

    SNOLAB, which is located in a mine 2 km beneath the surface, specialises in neutrino and dark-matter physics and claims to be the deepest cleanroom facility in the world. Current experiments located there include: PICO and DEAP-3600, which search for dark matter using bubble-chamber and liquid-argon technology, respectively; EXO, which aims to measure the mass and nature of the neutrino; HALO, designed to detect supernovae; and a new neutrino experiment, SNO+, based on the existing SNO detector.

    EXO (U. MD)

    The new funds will be used to employ the 96-strong SNOLAB staff and support the operations and maintenance of the lab’s facilities.

    See the full article here.

  • richardmitnick 2:18 pm on January 14, 2017
    Tags: CERN Courier, India to become associate Member State

    From CERN Courier: “India to become associate Member State” 

    CERN Courier

    Jan 13, 2017
    No writer credit

    Signing the agreement.

    On 21 November, CERN signed an agreement with Sekhar Basu, chairman of the Atomic Energy Commission (AEC) and secretary of the Department of Atomic Energy (DAE) of the government of India, to admit India as an associate Member State.

    India has been a partner of CERN for more than 50 years, during which it has made substantial contributions to the construction of the LHC and to the ALICE and CMS experiments, as well as Tier-2 centres for the Worldwide LHC Computing Grid. A co-operation agreement was signed in 1991, but India’s relationship with CERN goes back much further, with Indian institutes having provided components for the LEP collider and one of its four detectors, L3, in addition to the WA93 and WA89 detectors.

    CERN LEP Collider

    The success of the DAE–CERN partnership regarding the LHC has also led to co-operation on novel accelerator technologies through DAE’s participation in CERN’s Linac4, SPL and CTF3 projects. India also participates in the COMPASS, ISOLDE and nTOF experiments at CERN.

    In recognition of these substantial contributions, India was granted observer status at CERN Council in 2002. When it enters into force, associate membership will allow India to take part in CERN Council meetings and its committees, and will make Indian scientists eligible for staff appointments. “Becoming associate member of CERN will enhance participation of young scientists and engineers in various CERN projects and bring back knowledge for deployment in the domestic programmes,” says Basu. “It will also provide opportunities to Indian industries to participate directly in CERN projects.”

    See the full article here.

  • richardmitnick 2:07 pm on January 14, 2017
    Tags: CERN Courier, Slovenia to become associate Member State in pre-stage to membership

    From CERN Courier: “Slovenia to become associate Member State in pre-stage to membership” 

    CERN Courier

    Jan 13, 2017
    No writer credit

    Slovenia gains associate membership

    CERN Council has voted unanimously to admit the Republic of Slovenia to associate membership in the pre-stage to CERN membership. Slovenia’s membership will facilitate, strengthen and broaden the participation and activities of Slovenian scientists, said Slovenian minister Maja Makovec Brenčič, and give Slovenian industry full access to CERN procurement orders. “Slovenia is also aware of the CERN offerings in the areas of education and public outreach, and we are therefore looking forward to become eligible for participation in CERN’s fellows, associate and student programmes.”

    Slovenian physicists have participated in the LHC’s ATLAS experiment for the past 20 years, focusing on silicon tracking, protection devices and computing at the Slovenian Tier-2 data centre. However, Slovenian physicists contributed to CERN long before Slovenia became an independent state in 1991, participating in an experiment at LEAR and the DELPHI experiment at LEP. In 1991, CERN and the Executive Council of the Assembly of the Republic of Slovenia signed a co-operation agreement, and in 2009 Slovenia applied to become a Member State.

    Following internal approval procedures, Slovenia will join Cyprus and Serbia as an associate Member State in the pre-stage to membership. At the earliest two years thereafter, Council will decide on the admission of Slovenia to full membership. “It is a great pleasure to welcome Slovenia into our ever-growing CERN family as an associate Member State in the pre-stage to membership,” says CERN Director-General Fabiola Gianotti.

    See the full article here.

  • richardmitnick 1:49 pm on January 14, 2017
    Tags: CERN Courier

    From CERN Courier: “AWAKE makes waves” 

    CERN Courier

    Jan 13, 2017
    No writer credit

    Proton-bunch comparison

    In early December, the AWAKE collaboration made an important step towards a pioneering accelerator technology that would reduce the size and cost of particle accelerators.

    CERN AWAKE schematic

    Having commissioned the facility with first beam in November, the team has now installed a plasma cell and observed a strong modulation of high-energy proton bunches as they pass through it. This signals the generation of very strong electric fields that could be used to accelerate electrons to high energies over short distances.

    AWAKE (Advanced Proton Driven Plasma Wakefield Acceleration Experiment) is the first facility to investigate the use of plasma wakefields driven by proton beams. The experiment involves injecting a “drive” bunch of protons from CERN’s Super Proton Synchrotron (SPS) into a 10 m-long tube containing a plasma. The bunch then splits into a series of smaller bunches via a process called self-modulation, generating a strong wakefield as they move through the plasma.

    CERN Super Proton Synchrotron

    “Although plasma-wakefield technology has been explored for many years, AWAKE is the first experiment to use protons as a driver – which, given the high energy of the SPS, can drive wakefields over much longer distances compared with electron- or laser-based schemes,” says AWAKE spokesperson Allen Caldwell of the Max Planck Institute for Physics in Munich.

    While it has long been known that plasmas may provide an alternative to traditional accelerating methods based on RF cavities, turning this concept into a practical device is a major challenge. The next step for the AWAKE collaboration is to inject a second beam of electrons, the “witness” beam, which is accelerated by the wakefield just as a surfer accelerates by riding a wave. “To have observed indications for the first time of proton-bunch self-modulation, after just a few days of tests, is an excellent achievement. It’s down to a very motivated and dedicated team,” says Edda Gschwendtner, CERN AWAKE project leader.

    See the full article here.

  • richardmitnick 1:33 pm on January 14, 2017
    Tags: CERN Courier

    From CERN Courier: “Run 2 promises a harvest of beauty for LHCb” 

    CERN Courier

    New measurement

    The first b-physics analysis using data from LHC Run 2, which began in 2015 with proton–proton collisions at an energy of 13 TeV, shows great promise for the physics programme of LHCb. During 2015 and 2016, the experiment collected a data sample corresponding to an integrated luminosity of about 2 fb^–1. Although this value is smaller than the total integrated luminosity collected in the three years of Run 1 (3 fb^–1), the significant increase of the LHC energy in Run 2 has almost doubled the production cross-section of beauty particles, and the experiment has improved the performance of its trigger system and particle-identification capabilities. Once these gains are taken into account, LHCb has already more than doubled the statistics of beauty particles on tape with respect to Run 1.
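    The yield accounting described above is simple arithmetic: signal statistics scale with integrated luminosity, production cross-section and selection efficiency. A back-of-envelope sketch follows; the 1.5× trigger/particle-identification gain is an assumed illustrative figure, not an official LHCb number, and `expected_yield_ratio` is a name chosen here.

```python
def expected_yield_ratio(lumi_ratio, xsec_ratio, eff_ratio):
    """Relative signal yield, since N is proportional to
    L_int * cross-section * selection efficiency."""
    return lumi_ratio * xsec_ratio * eff_ratio

# Run 2 vs Run 1: 2 fb^-1 vs 3 fb^-1 of luminosity, cross-section ~doubled,
# and an assumed ~1.5x gain from trigger and particle-ID improvements
ratio = expected_yield_ratio(2.0 / 3.0, 2.0, 1.5)
print(f"{ratio:.1f}")  # -> 2.0, consistent with "more than doubled"
```

    The per-unit-luminosity gain (cross-section times efficiency, roughly a factor of three here) is the quantity the collaboration cross-checked with real data.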

    The new analysis is based on 1 fb^–1 of available data, aiming to measure the angle γ of the CKM unitarity triangle using B– → D0K*– decays. While B– → D0K– decays have been extensively studied in the past, this is the first time the B– → D0K*– mode has been investigated. The analysis, first presented at CKM2016 (see Triangulating in Mumbai in Faces & Places), allows the LHCb collaboration to cross-check expectations for the increase of signal yields in Run 2 using real data. A significant increase, roughly corresponding to a factor of three, is observed per unit of integrated luminosity. This demonstrates that the experiment has benefitted from the increase in b-production cross-section, but also that the trigger of the detector performs better than in Run 1. Although the statistical uncertainty on γ from this measurement alone is still large, the sensitivity will be improved by the addition of more data, as well as by the use of other D-meson decay modes. This bodes well for future measurements of γ to be performed in this and other decay modes with the full Run 2 data set.

    Measurements of the angle γ are of great importance because it is the least well-known angle of the unitarity triangle. The latest combination from direct measurements with charged and neutral B-meson decays and a variety of D-meson final states, all performed with Run 1 data, yielded a central value of 72±7 degrees. LHCb’s ultimate aim, following detector upgrades relevant for LHC Run 3, is to determine γ with a precision below 1°, providing a powerful test of the Standard Model.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    THE FOUR MAJOR PROJECT COLLABORATIONS

    ATLAS
    CERN ATLAS New

    ALICE
    CERN ALICE New

    CMS
    CERN CMS New

    LHCb
    CERN LHCb New II

    LHC

    CERN LHC Map
    CERN LHC Grand Tunnel

    CERN LHC particles

     
  • richardmitnick 9:24 am on October 17, 2016 Permalink | Reply
    Tags: , , CERN Courier, Electron–positron collider, , ,   

    From CERN Courier: “CLIC steps up to the TeV challenge” 

    CERN Courier

    Oct 14, 2016
    Philipp Roloff
    Daniel Schulte
    CERN

    [Image: The CTF3 test facility at CERN]

    [Image: Compact Linear Collider layout for the nominal 3 TeV version]

    One of CERN’s main options for a flagship accelerator in the post-LHC era is an electron–positron collider at the high-energy frontier. The Compact Linear Collider (CLIC) is a multi-TeV high-luminosity linear collider that has been under development since 1985 and currently involves 75 institutes around the world. Being linear, such a machine does not suffer energy losses from synchrotron radiation, which rise steeply with beam energy in circular machines. Another option for CERN is a very high-energy circular proton–proton collider, which is currently being considered as the core of the Future Circular Collider (FCC) programme. So far, CLIC R&D has principally focused on collider technology able to reach collision energies in the multi-TeV range. Based on this technology, a conceptual design report (CDR) including a feasibility study for a 3 TeV collider was completed in 2012.

    With the discovery of the Higgs boson in July of that year, and the fact that the particle turned out to be relatively light with a mass of 125 GeV, it became evident that there is a compelling physics case for operating CLIC at a lower centre-of-mass energy. The optimal collision energy is 380 GeV because it simultaneously allows physicists to study two Higgs-production processes in addition to top-quark pair production. Therefore, to fully exploit CLIC’s scientific potential, the collider is foreseen to be constructed in several stages corresponding to different centre-of-mass energies: the first at 380 GeV would be followed by stages at 1.5 and 3 TeV, allowing powerful searches for phenomena beyond the Standard Model (SM).

    While a fully optimised collider at 3 TeV was described in the CDR in 2012, the lower-energy stages were not presented at the same level of detail. In August this year, however, the CLIC and CLICdp (CLIC detector and physics study) collaborations published an updated baseline-staging scenario that places emphasis on an optimised first-energy stage compatible with an extension to high energies. The performance, cost and power consumption of the CLIC accelerator as a function of the centre-of-mass energy were addressed, building on experience from technology R&D and system tests. The resulting first-energy stage is based on already demonstrated performances of CLIC’s novel acceleration technology and will be significantly cheaper than the initial CDR design.

    CLIC physics

    An electron–positron collider provides unique opportunities to make precision measurements of the two heaviest particles in the SM: the Higgs boson (125 GeV) and the top quark (173 GeV). Deviations in the way the Higgs couples to the fermions, the electroweak bosons and itself are predicted in many extensions of the SM, such as supersymmetry or composite Higgs models. Different scenarios lead to specific patterns of deviations, which means that precision measurements of the Higgs couplings can potentially discriminate between different new-physics scenarios. The same is true of the couplings of the top quark to the Z boson and photon. CLIC would offer such measurements as the first step of its physics programme, and full simulations of realistic CLIC detector concepts have been used to evaluate the expected precision and to guide the choice of collision energy.

    The principal Higgs-production channel, Higgsstrahlung (e+e– → ZH), requires a centre-of-mass energy at least equal to the sum of the Higgs- and Z-boson masses, plus a few tens of GeV. For an electron–positron collider such as CLIC, Higgsstrahlung has a maximum cross-section at a centre-of-mass energy of around 240 GeV and decreases as a function of energy. Because the colliding electrons and positrons are elementary particles with a precisely known energy, Higgsstrahlung events can be identified by detecting the Z boson alone as it recoils against the Higgs boson. This can be done without looking at the decay of the Higgs boson, so the measurement is completely independent of possible unknown Higgs decays. This is a unique capability of a lepton collider and the reason why the first energy stage of CLIC is so important. The most powerful method with which to measure the Higgsstrahlung cross-section in this way is based on events where a Z boson decays into hadrons, and the best precision is expected at centre-of-mass energies around 350 GeV. (At lower energies it is more difficult to separate signal and background events, while at higher energies the measurement is limited by the smaller signal cross-section and worse recoil-mass resolution.)
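    The recoil-mass technique described above can be sketched numerically. The snippet below is an illustrative two-body kinematics example (not analysis code from any experiment): in e+e− → ZH at a known centre-of-mass energy, measuring only the Z-boson energy recovers the mass of whatever recoils against it, without observing the Higgs decay.

```python
import math

def recoil_mass(sqrt_s, e_z, m_z=91.19):
    """Mass (GeV) recoiling against a Z of energy e_z at centre-of-mass energy sqrt_s."""
    s = sqrt_s ** 2
    m2 = s + m_z ** 2 - 2.0 * sqrt_s * e_z  # (p_beam - p_Z)^2 in the lab frame
    return math.sqrt(m2)

# For a 125 GeV Higgs at sqrt(s) = 350 GeV, two-body kinematics fixes the Z energy:
sqrt_s, m_h, m_z = 350.0, 125.0, 91.19
e_z = (sqrt_s**2 + m_z**2 - m_h**2) / (2.0 * sqrt_s)

# Feeding only sqrt_s and the measured Z energy back in recovers the Higgs mass:
print(round(recoil_mass(sqrt_s, e_z), 3))
```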

    The other main Higgs-production channel is WW fusion (e+e− → Hνeν̄e). In contrast to Higgsstrahlung, the cross-section for this process rises quickly with centre-of-mass energy. By measuring the rates for the same Higgs decay, such as H → bb̄, in both Higgsstrahlung and WW-fusion events, researchers can significantly improve their knowledge of the Higgs decay width – which is a challenging measurement at hadron colliders such as the LHC. A centre-of-mass energy of 380 GeV at the first CLIC energy stage is ideal for achieving a sizable contribution of WW-fusion events.

    So far, the energy of electron–positron colliders has not been high enough to allow direct measurements of the top quark. At the first CLIC energy stage, however, properties of the top quark can be obtained via pair-production events (e+e− → tt̄). A small fraction of the collider’s running time would be used to scan the top pair-production cross-section in the threshold region around 350 GeV. This would allow the top-quark mass to be extracted in a theoretically well-defined scheme, which is not possible at hadron colliders. The value of the top-quark mass has an important impact on the stability of the electroweak vacuum at very high energies.

    [Image: Centre-of-mass energy dependencies]

    With current knowledge, the achievable precision on the top-quark mass is expected to be of the order of 50 MeV, including systematic and theoretical uncertainties. This is about an order of magnitude better than the precision expected at the High-Luminosity LHC (HL-LHC).

    The couplings of the top quark to the Z boson and photon can be probed using the top-production cross-sections and “forward–backward” asymmetries for the different electron-beam polarisation configurations available at CLIC. These observables lead to expected precisions on the couplings that are substantially better than those achievable at the HL-LHC. Deviations of these couplings from their SM expectations are predicted in many new-physics scenarios, such as composite-Higgs or extra-dimension models. It was recently shown, using detailed detector simulations, that although higher energies are preferred, this measurement is already feasible at an energy of 380 GeV, provided the theoretical uncertainties improve in the coming years. The expected precisions depend on the ability to reconstruct tt̄ events correctly, which is more challenging at 380 GeV than at higher energies because both top quarks decay almost isotropically.

    Combining all available knowledge therefore led to the choice of 380 GeV for the first-energy stage of the CLIC programme in the new staging baseline. Not only is this close to the optimal value for Higgs physics around 350 GeV but it would also enable substantial measurements of the top quark. An integrated luminosity of 500 fb–1 is required for the Higgs and top-physics programmes, which could take roughly five years. The top cross-section threshold scan, meanwhile, would be feasible with 100 fb–1 collected at several energy points near the production threshold.
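    The 500 fb–1 target and the rough five-year estimate above imply an average instantaneous luminosity that can be checked quickly. The snippet below is a hedged order-of-magnitude sketch: the 1.2 × 10⁷ seconds of effective running per year is a typical accelerator "operational year" assumption, not a figure from the article.

```python
# Rough consistency check of the first-stage luminosity goal:
# 500 fb-1 of integrated luminosity in about five years of running.

int_lumi_cm2 = 500e39      # 500 fb-1 expressed in cm^-2 (1 fb-1 = 1e39 cm^-2)
seconds = 5 * 1.2e7        # assumed effective running time over five years

avg_inst_lumi = int_lumi_cm2 / seconds
print(f"{avg_inst_lumi:.1e}")  # a few times 10^33 cm^-2 s^-1, of order 10^34
```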

    Stepping up

    After the initial phase of CLIC operation at 380 GeV, the aim is to operate CLIC above 1 TeV at the earliest possible time. In the current baseline, two stages at 1.5 TeV and 3 TeV are planned, although the exact energies of these stages can be revised as new input from the LHC and HL-LHC becomes available. Searches for beyond-the-SM phenomena are the main goal of high-energy CLIC operation. Furthermore, additional unique measurements of Higgs and top properties are possible, including studies of double Higgs production to extract the Higgs self-coupling. This is crucial to probe the Higgs potential experimentally and its measurement is extremely challenging in hadron collisions, even at the HL-LHC. In addition, the full data sample with three million Higgs events would lead to very tight constraints on the Higgs couplings to vector bosons and fermions. In contrast to hadron colliders, all events can be used for physics and there are no QCD backgrounds.

    [Image: CLIC footprints in the vicinity of CERN]

    Two fundamentally different approaches are possible to search for phenomena beyond the SM. The first is to search directly for the production of new particles, which in electron–positron collisions can take place almost up to the kinematic limit. Due to the clean experimental conditions and low backgrounds compared to hadron colliders, CLIC is particularly well suited for measuring new and existing weakly interacting states. Because the beam energies are tunable, it is also possible to study the production thresholds of new particles in detail. Searches for dark-matter candidates, meanwhile, can be performed using single-photon events with missing energy. Because lepton colliders probe the coupling of dark-matter particles to leptons, searches at CLIC are complementary to those at hadron colliders, which are sensitive to the couplings to quarks and gluons.

    The second analysis approach at CLIC, which is sensitive to even higher mass scales, is to search for unexpected signals in precision measurements of SM observables. For example, measurements of two-fermion processes provide discovery potential for Z′ bosons with masses up to tens of TeV. Another important example is the search for additional resonances or anomalous couplings in vector-boson scattering. For both indirect and direct searches, the discovery reach improves significantly with increasing centre-of-mass energy. If new phenomena are found, beam polarisation might help to constrain the underlying theory through observables such as polarisation asymmetries.

    The CLIC concept

    CLIC will collide beams of electrons and positrons at a single interaction point, with the main beams generated in a central facility that would fit on the CERN site. To increase the brilliance of the beams, the particles are “cooled” (slowed down and reaccelerated continuously) in damping rings before they are sent to the two high-gradient main linacs, which face each other. Here, the beams are accelerated to the full collision energy in a single pass, and a magnetic telescope consisting of quadrupoles and other multipoles focuses the beams to nanometre sizes at the collision point inside the detector. Two additional complexes produce high-current (100 A) electron beams to drive the main linacs – this novel two-beam acceleration technique is unique to CLIC.

    [Image: Reconstructed particles]

    The CLIC accelerator R&D is focused on several core challenges. First, strong accelerating fields are required in the main linac to limit its length and cost. Outstanding beam quality is also essential to achieve a high rate of physics events in the detectors. In addition, the power consumption of the CLIC accelerator complex has to be limited to about 500 MW for the highest-energy stage, so a high efficiency in generating RF power and transferring it into the beams is mandatory. CLIC will use high-frequency (X-band) normal-conducting copper accelerating structures to achieve accelerating gradients at the level of 100 MV/m. A centre-of-mass energy of 3 TeV can be reached with a collider of about 50 km length, while 380 GeV for CLIC’s first stage would require a site length of 11 km, slightly larger than the diameter of the LHC. The accelerator is operated with 50 RF pulses per second, each 244 ns long. During each pulse, a train of 312 bunches separated by just 0.5 ns is accelerated. To generate the accelerating field, each CLIC main-linac accelerating structure needs to be fed with an RF power of 60 MW. With a total of 140,000 structures in the 3 TeV collider, this adds up to a peak power of more than 8 TW.
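    The peak-power figure quoted above follows directly from the numbers in the text, and can be verified with a line of arithmetic:

```python
# Bookkeeping check of the peak RF power for the 3 TeV stage:
# 140,000 accelerating structures, each fed with 60 MW during the 244 ns pulse.

n_structures = 140_000
power_per_structure = 60e6       # W, peak RF power per structure

total_peak_power = n_structures * power_per_structure
print(total_peak_power / 1e12)   # 8.4 TW peak, i.e. "more than 8 TW" as stated
```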

    Because it is not possible to generate this peak power at reasonable cost with conventional klystrons (even for the short pulse length of 244 ns), a novel power-production scheme has been developed for CLIC. The idea is to operate a drive beam with a current of 100 A that runs parallel to the main beam through power-extraction and transfer structures. In these structures, the beam induces electric fields, thereby losing energy and generating RF power that is transferred to the main-linac accelerating structures. The drive beam is produced as a long (146 μs) train of bunches with a current of 4 A, accelerated to an energy of about 2.4 GeV and then sent into a delay loop and combiner-ring complex, where sets of 24 consecutive sub-pulses are used to form 25 trains of 244 ns length with a current of about 100 A. Each of these bunch trains is then used to power one of the 25 drive-beam sectors, which means that the initial 146 μs-long pulse is effectively compressed in time by a factor of 600, and its power is therefore increased by the same factor.
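    The combination arithmetic described above can be checked directly from the quoted figures; the sketch below simply replays it with the numbers from the text.

```python
# Drive-beam recombination arithmetic: a 146 us, 4 A pulse is folded into
# 25 trains of 244 ns, multiplying current (and hence peak power) by the
# compression factor. Figures taken from the text.

initial_length = 146e-6         # s, initial drive-beam pulse
train_length = 244e-9           # s, length of each combined train
sub_pulses_per_train = 24       # consecutive sub-pulses merged into one train
initial_current = 4.0           # A

compression = initial_length / train_length
combined_current = initial_current * sub_pulses_per_train

print(round(compression))       # ~600, the quoted compression factor
print(combined_current)         # 96 A, i.e. "about 100 A"
```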

    [Image: CLIC-design energy stages]

    To demonstrate this novel scheme, a test facility (CTF3) was constructed at CERN from 2001 onwards, reusing the LEP pre-injector building and components as well as adding many new ones. The facility now consists of a drive-beam accelerator, the delay loop and one combiner ring. CTF3 can produce a drive-beam pulse of about 30 A and accelerate the main beam with a gradient of up to 145 MV/m. A large range of components, feedback systems and operational procedures needed to be developed to make the facility a success, and by the end of 2016 it will have finished its mission. Further beam tests at SLAC, KEK and various light sources remain important. The CALIFES electron-beam facility at CERN, which is currently being evaluated for operation from 2017, can provide a testing ground for high-gradient structures and main-beam studies. More prototypes for CLIC’s main-beam and drive-beam components are being developed and characterised in dedicated test facilities at CERN and collaborating institutes. The resulting progress in X-band acceleration technology has also generated significant interest in the free-electron laser (FEL) community, where it may allow for more compact facilities.

    To achieve the required luminosities (6 × 10³⁴ cm⁻² s⁻¹ at 3 TeV), nanometre beam sizes are required at CLIC’s interaction point. This is several hundred times smaller than at the SLC, which operated at SLAC in the 1990s and was the first and only operational linear collider, and therefore requires novel hardware and sophisticated beam-based alignment algorithms. A precision pre-alignment system has been developed and tested that can achieve an alignment accuracy in the range of 10 μm, while beam-based tuning algorithms have been successfully tested at SLAC and other facilities. These algorithms use beams of different energies to diagnose and correct the offsets of the beam-position monitors, reducing the effective misalignments to a fraction of a micron. Because ground motion from natural and technical sources can cause the beam-guiding quadrupole magnets to move, knocking the beams out of focus, the magnets will be stabilised with an active feedback system that has been developed by a collaboration of several institutes and has already been demonstrated experimentally.

    [Image: Integrated luminosity]

    CLIC’s physics potential has been illustrated through the simulation and reconstruction of benchmark physics processes in two dedicated detector concepts. These are based on the SiD and ILD detector concepts developed for the International Linear Collider (ILC), an alternative machine currently under consideration for construction in Japan, and have been adapted to the experimental environment at the higher-energy CLIC. Because the high centre-of-mass energies and CLIC’s accelerator technology lead to relatively high beam-induced background levels for a lepton collider, the CLIC detector design and the event-reconstruction techniques are both optimised to suppress the influence of these backgrounds. A main driver for the ILC and CLIC detector concepts is the required jet-energy resolution. To achieve the required precision, the CLIC detector concepts are based on fine-grained electromagnetic and hadronic calorimeters optimised for particle-flow analysis techniques. A new study is almost complete, which defines a single optimised CLIC detector for use in future CLIC physics benchmark studies. The work by CLICdp was crucial for the new staging baseline (especially for the choice of 380 GeV) because the physics potential as a function of energy can only be estimated with the required accuracy using detailed simulations of realistic detector concepts.

    The new staged design

    To optimise the CLIC accelerator, a systematic design approach has been developed and used to explore a large range of configurations for the RF structures of the main linac. For each structure design, the luminosity performance, power consumption and total cost of the CLIC complex are calculated. For the first stage, different accelerating structures operated at a somewhat lower accelerating gradient of 72 MV/m will be used to reach the luminosity goal at a cost and power consumption similar to earlier projects at CERN – while also not inflating the cost of the higher-energy stages. The design should also be flexible enough to take advantage of projected improvements in RF technology during the construction and operation of the first stage.

    When upgrading to higher energies, the structures optimised for 380 GeV will be moved to the beginning of the new linear accelerator and the remaining space filled with structures optimised for 3 TeV operation. The RF pulse length of 244 ns is kept the same at all stages to avoid major modifications to the drive-beam generation scheme. Data taking at the three energy stages is expected to last for a period of seven, five and six years, respectively. The stages are interrupted by two upgrade periods each lasting two years, which means that the overall three-stage CLIC programme will last for 22 years from the start of operation. The duration of each stage is derived from integrated luminosity targets of 500 fb–1 at 380 GeV, 1.5 ab–1 at 1.5 TeV and 3 ab–1 at 3 TeV.

    An intense R&D programme is yielding other important improvements. For instance, the CLIC study recently proposed a novel design for klystrons that can increase the efficiency significantly. To reduce the power consumption further, permanent magnets are also being developed that are tunable enough to be able to replace the normal conducting magnets. The goal is to develop a detailed design of both the accelerator and detector in time for the update of the European Strategy for Particle Physics towards the end of the decade.

    Mature option

    With the discovery of the Higgs boson, a great physics case exists for CLIC at a centre-of-mass energy of 380 GeV. Hence particular emphasis is being placed on the first stage of the accelerator, for which the focus is on reducing costs and power consumption. The new accelerating structure design will be improved and more statistics on the structure performance will be obtained. The detector design will continue to be optimised, driven by the requirements of the physics programme. Technology demonstrators for the most challenging detector elements, including the vertex detector and main tracker, are being developed in parallel.

    Common studies with the ILC, which is currently being considered for implementation in Japan, are also important, both for accelerator and detector elements, in particular for the initial stage of CLIC. Both the accelerator and detector parameters and designs, in particular for the second- and third-energy stages, will evolve according to new LHC results and studies as they emerge.

    CLIC is the only mature option for a future multi-TeV high-luminosity linear electron–positron collider. The two-beam technology has been demonstrated in large-scale tests and no showstoppers have been identified. CLIC is therefore an attractive option for CERN after the LHC. Once the European particle-physics community decides to move forward, a technical design will be developed and construction could begin in 2025.

  • richardmitnick 5:18 pm on October 14, 2016 Permalink | Reply
    Tags: , CERN Courier, , , ,   

    From CERN Courier: “First physics at HIE-ISOLDE begins” 

    CERN Courier

    Oct 14, 2016

    [Image: HIE-ISOLDE cryomodules]

    In early September, the first physics experiment using radioactive beams from the newly upgraded ISOLDE facility got under way: a study of tin, a special element because it has two doubly magic isotopes (100Sn and 132Sn). ISOLDE is CERN’s long-running nuclear-research facility, which for the past 50 years has enabled many different studies of the properties of atomic nuclei. The upgrade means the machine can now reach an energy of 5.5 MeV per nucleon, making ISOLDE the only Isotope Separator On-Line (ISOL) facility in the world capable of investigating heavy and super-heavy radioactive nuclei.

    HIE-ISOLDE (High Intensity Energy-ISOLDE) is a major upgrade of the ISOLDE facility that will increase the energy, intensity and quality of the beams delivered to scientific users.


    “Our success is the result of eight years of development and manufacturing,” explains HIE-ISOLDE project-leader Yacine Kadi. “The community around ISOLDE has grown a lot recently, as more scientists are attracted by the possibilities that new higher energies bring. It’s an energy domain that’s not explored much, since no other facility in the world can deliver pure beams at these energies.”

    The first run of the facility took place in October last year, but because the machine only had one cryomodule, it operated at an energy of 4.3 MeV per nucleon. Now, with the second cryomodule in place, the machine is capable of reaching up to 5.5 MeV per nucleon and therefore can investigate the structure of heavier isotopes. The rest of 2016 will be a busy time for HIE-ISOLDE, with scheduled experiments studying nuclei over a wide range of mass numbers – from 9Li to 142Xe. When two additional cryomodules are installed in 2017 and 2018, the facility will operate at 10 MeV per nucleon and be capable of investigating nuclei of all masses.
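    The energies above are quoted per nucleon, so the total kinetic energy of a beam scales with its mass number A. A quick illustration for the heaviest beam mentioned in the schedule, 142Xe:

```python
# Convert the quoted per-nucleon energies into total beam kinetic energy.
# The mass number 142 is for 142Xe, the heaviest beam mentioned in the text.

def total_kinetic_energy(mev_per_nucleon, mass_number):
    """Total kinetic energy in MeV for a beam of given energy per nucleon."""
    return mev_per_nucleon * mass_number

print(total_kinetic_energy(4.3, 142))   # one cryomodule (2015 run): ~611 MeV
print(total_kinetic_energy(5.5, 142))   # two cryomodules (now): 781 MeV
print(total_kinetic_energy(10.0, 142))  # four cryomodules (2018): 1420 MeV
```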

    HIE-ISOLDE will run until mid-November, and all but one of the seven different experiments planned during this time will use the Miniball detection station.

  • richardmitnick 5:11 pm on October 14, 2016 Permalink | Reply
    Tags: , CERN Courier, , , ,   

    From CERN Courier: “All systems go for the High-Luminosity LHC” 

    CERN Courier

    Oct 14, 2016

    [Image: Model quadrupole magnet]

    On 19 September, the European Investment Bank (EIB) signed a 250 million Swiss francs (€230 million) credit facility with CERN in order to finance the High-Luminosity Large Hadron Collider (HL-LHC) project.


    The finance contract follows recent approval from CERN Council, and will allow CERN to carry out the work necessary for the HL-LHC within a constant CERN budget.

    The HL-LHC is expected to produce data from 2026 onwards, with the overall goal of increasing the integrated luminosity recorded by the LHC by a factor of 10. Following approval of the HL-LHC as a priority project in the European Strategy Report for Particle Physics, this major upgrade is now gathering speed together with companion upgrade programmes for the LHC injectors and detectors. Engineers are currently putting the finishing touches to a full working model of an HL-LHC quadrupole, which will eventually be installed in the insertion regions close to the ATLAS and CMS experiments to focus the HL-LHC beam. Built in partnership with Fermilab, the magnets are based on an innovative niobium–tin superconductor (Nb3Sn) that can produce higher magnetic fields than the niobium–titanium magnets used in the LHC.

    The contract signed between CERN and EIB falls under the InnovFin Large Projects facility, which is part of the new generation of financial instruments developed and supported under the European Union’s Horizon 2020 scheme. It’s the second EIB financing for CERN, following a loan of €300 million in 2002 for the LHC. “This loan under Horizon 2020, the EU’s research-funding programme, will help keep CERN and Europe at the forefront of particle-physics research,” says the European commissioner for research, science and innovation, Carlos Moedas. “It’s an example of how EU funding helps extend frontiers of human knowledge.”
