Tagged: Superconducting super collider

  • richardmitnick 12:54 pm on July 13, 2021 Permalink | Reply
    Tags: "Plasma Particle Accelerators Could Find New Physics", , Accelerators come in two shapes: circular (synchrotron) or linear (linac)., At the start of the 20th century scientists had little knowledge of the building blocks that form our physical world., , By the end of the century they had discovered not just all the elements that are the basis of all observed matter but a slew of even more fundamental particles that make up our cosmos., CERN CLIC collider, CERN is proposing a 100-kilometer-circumference electron-positron and proton-proton collider called the Future Circular Collider., , , , , International Linear Collider (ILC), , , , Plasma is often called the fourth state of matter., , , Superconducting super collider   

    From Scientific American (US) : “Plasma Particle Accelerators Could Find New Physics” 

    From Scientific American (US)

    July 2021
    Chandrashekhar Joshi

    Credit: Peter and Maria Hoey.

    At the start of the 20th century scientists had little knowledge of the building blocks that form our physical world. By the end of the century they had discovered not just all the elements that are the basis of all observed matter but a slew of even more fundamental particles that make up our cosmos, our planet and ourselves. The tool responsible for this revolution was the particle accelerator.

    The pinnacle achievement of particle accelerators came in 2012, when the Large Hadron Collider (LHC) uncovered the long-sought Higgs boson particle.

    The LHC is a 27-kilometer accelerating ring at CERN near Geneva that collides two beams of protons, each carrying seven trillion electron volts (7 TeV) of energy.

    It is the biggest, most complex and arguably the most expensive scientific device ever built. The Higgs boson was the latest piece in the reigning theory of particle physics called the Standard Model. Yet in the almost 10 years since that discovery, no additional particles have emerged from this machine or any other accelerator.

    Have we found all the particles there are to find? Doubtful. The Standard Model of particle physics does not account for dark matter—particles that are plentiful yet invisible in the universe. A popular extension of the Standard Model called supersymmetry predicts many more particles out there than the ones we know about.

    And physicists have other profound unanswered questions such as: Are there extra dimensions of space? And why is there a great matter-antimatter imbalance in the observable universe? To solve these riddles, we will likely need a particle collider more powerful than those we have today.

    Many scientists support a plan to build the International Linear Collider (ILC), a straight-line accelerator that would produce collision energies of 250 billion electron volts (250 GeV).

    Though not as powerful as the LHC, the ILC would collide electrons with their antimatter counterparts, positrons—both fundamental particles that are expected to produce much cleaner data than the proton-proton collisions in the LHC. Unfortunately, the design of the ILC calls for a facility about 20 kilometers long and is expected to cost more than $10 billion—a price so high that no country has so far committed to host it.

    In the meantime, there are plans to upgrade the energy of the LHC to 27 TeV in the existing tunnel by increasing the strength of the superconducting magnets used to bend the protons. Beyond that, CERN is proposing a 100-kilometer-circumference electron-positron and proton-proton collider called the Future Circular Collider.

    Such a machine could reach the unprecedented energy of 100 TeV in proton-proton collisions. Yet the cost of this project will likely match or surpass the ILC. Even if it is built, work on it cannot begin until the LHC stops operation after 2035.

    But these gargantuan and costly machines are not the only options. Since the 1980s physicists have been developing alternative concepts for colliders. Among them is one known as a plasma-based accelerator, which shows great promise for delivering a TeV-scale collider that may be more compact and much cheaper than machines based on the present technology.

    The Particle Zoo

    The story of particle accelerators began in 1897 at the Cavendish physics laboratory at the University of Cambridge (UK).

    There J. J. Thomson created the earliest version of a particle accelerator using a tabletop cathode-ray tube like the ones used in most television sets before flat screens. He discovered a negatively charged particle—the electron.

    Soon physicists identified the other two atomic ingredients—protons and neutrons—using radioactive particles as projectiles to bombard atoms. And in the 1930s came the first circular particle accelerator—a palm-size device invented by Ernest Lawrence called the cyclotron, which could accelerate protons to energies of about 80 kiloelectron volts (keV).

    Ernest Lawrence’s First Cyclotron, 1930 Stock Photo – Alamy.

    Thereafter accelerator technology evolved rapidly, and scientists were able to increase the energy of accelerated charged particles to probe the atomic nucleus. These advances led to the discovery of a zoo of hundreds of subnuclear particles, launching the era of accelerator-based high-energy physics. As the energy of accelerator beams rapidly increased in the final quarter of the past century, the zoo particles were shown to be built from just 17 fundamental particles predicted by the Standard Model [above]. All of these, except the Higgs boson, had been discovered in accelerator experiments by the late 1990s. The Higgs’s eventual appearance [above] at the LHC made the Standard Model the crowning achievement of modern particle physics.

    Aside from being some of the most successful instruments of scientific discovery in history, accelerators have found a multitude of applications in medicine and in our daily lives. They are used in CT scanners, for x-rays of bones and for radiotherapy of malignant tumors. They are vital in food sterilization and for generating radioactive isotopes for myriad medical tests and treatments. They are the basis of x-ray free-electron lasers, which are being used by thousands of scientists and engineers to do cutting-edge research in physical, life and biological sciences.

    Scientist tests a prototype plasma accelerator at the Facility for Advanced Accelerator Experimental Tests (FACET) at the DOE’s SLAC National Accelerator Laboratory (US) in California. Credit: Brad Plummer and SLAC National Accelerator Laboratory.

    Accelerator Basics

    Accelerators come in two shapes: circular (synchrotron) or linear (linac). All are powered by radio waves or microwaves that can accelerate particles to near light speed. At the LHC, for instance, two proton beams running in opposite directions repeatedly pass through sections of so-called radio-frequency cavities spaced along the ring.

    Radio waves inside these cavities create electric fields that oscillate between positive and negative to ensure that the positively charged protons always feel a pull forward. This pull speeds up the protons and transfers energy to them. Once the particles have gained enough energy, magnetic lenses focus the proton beams to several very precise collision points along the ring. When they crash, they produce extremely high energy densities, leading to the birth of new, higher-mass particles.

    When charged particles are bent in a circle, however, they emit “synchrotron radiation.” For any given radius of the ring, this energy loss is far less for heavier particles such as protons, which is why the LHC is a proton collider. But for electrons the loss is too great, particularly as their energy increases, so future accelerators that aim to collide electrons and positrons must either be linear colliders or have very large radii that minimize the curvature and thus the radiation the electrons emit.
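
    To get a rough feel for why the particle's mass matters so much, here is a back-of-envelope sketch (my own, not from the article) using the standard classical formula for synchrotron loss per turn; the LHC-like bending radius of about 2.8 km and the 7 TeV beam energy are the only assumed inputs.

    ```python
    # Back-of-envelope: synchrotron energy loss per turn, U0 = e^2 * gamma^4 / (3 * eps0 * rho),
    # the classical formula for an ultrarelativistic particle on a ring of bending radius rho.
    # Assumed numbers: LHC-like bending radius ~2.8 km, beam energy 7 TeV.

    E_CHARGE = 1.602e-19      # elementary charge, C
    EPS0 = 8.854e-12          # vacuum permittivity, F/m
    M_E = 0.511e6             # electron rest energy, eV
    M_P = 938.3e6             # proton rest energy, eV

    def loss_per_turn_eV(beam_energy_eV, rest_energy_eV, bend_radius_m):
        """Energy radiated per turn (in eV) by one ultrarelativistic particle."""
        gamma = beam_energy_eV / rest_energy_eV
        u0_joules = E_CHARGE**2 * gamma**4 / (3.0 * EPS0 * bend_radius_m)
        return u0_joules / E_CHARGE

    rho = 2800.0              # assumed LHC-like dipole bending radius, m
    E_beam = 7e12             # 7 TeV per beam

    print(f"proton   loses ~{loss_per_turn_eV(E_beam, M_P, rho)/1e3:.1f} keV per turn")
    print(f"electron would lose ~{loss_per_turn_eV(E_beam, M_E, rho)/1e15:.0f} PeV per turn")
    # The proton's loss (a few keV) is negligible next to 7 TeV; an electron at the same
    # energy and radius would radiate far more than its own energy every turn, which is
    # why TeV-scale electron machines must be linear or have enormous radii.
    ```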

    The size of an accelerator complex for a given beam energy ultimately depends on how much radio-frequency power can be pumped into the accelerating structure before the structure suffers electrical breakdown. Traditional accelerators have used copper to build this accelerating structure, and the breakdown threshold has meant that the maximum energy that can be added per meter is between 20 million and 50 million electron volts (MeV). Accelerator scientists have experimented with new types of accelerating structures that work at higher frequencies, thereby increasing the electrical breakdown threshold. They have also been working on improving the strength of the accelerating fields within superconducting cavities that are now routinely used in both synchrotrons and linacs. These advances are important and will almost certainly be implemented before any paradigm-changing concepts disrupt the highly successful conventional accelerator technologies.
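
    To put those gradient numbers in perspective, here is a minimal sketch of my own arithmetic, using only the 20 to 50 MeV-per-meter range quoted above, showing how much accelerating structure a conventional linac needs for a given beam energy.

    ```python
    # How long must a linac be for a given beam energy, assuming the conventional
    # copper-structure gradients of roughly 20-50 MeV per meter quoted in the article?
    # Purely illustrative arithmetic: length = energy / gradient.

    def linac_length_m(beam_energy_GeV, gradient_MeV_per_m):
        return beam_energy_GeV * 1e3 / gradient_MeV_per_m

    for energy_GeV in (125, 250, 500):
        low = linac_length_m(energy_GeV, 50)   # optimistic conventional gradient
        high = linac_length_m(energy_GeV, 20)  # conservative conventional gradient
        print(f"{energy_GeV:4d} GeV beam: {low/1e3:.1f}-{high/1e3:.1f} km of accelerating structure")

    # A gradient ~1,000 times higher (as claimed for plasma wakes later in the article)
    # would shrink these kilometers to meters, which is the whole appeal of the scheme.
    ```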

    Eventually other strategies may be necessary. In 1982 the U.S. Department of Energy’s program on high-energy physics started a modest initiative to investigate entirely new ways to accelerate charged particles. This program generated many ideas; three among them look particularly promising.

    The first is called two-beam acceleration. This scheme uses a relatively cheap but very high-charge electron pulse to create high-frequency radiation in a cavity and then transfers this radiation to a second cavity to accelerate a secondary electron pulse. This concept is being tested at CERN on a machine called the Compact Linear Collider (CLIC).

    Another idea is to collide muons, which are much heavier cousins to electrons. Their larger mass means they can be accelerated in a circle without losing as much energy to synchrotron radiation as electrons do. The downside is that muons are unstable particles, with a lifetime of two millionths of a second. They are produced during the decay of particles called pions, which themselves must be produced by colliding an intense proton beam with a special target. No one has ever built a muon accelerator, but there are die-hard proponents of the idea among accelerator scientists.

    Finally, there is plasma-based acceleration. The notion originated in the 1970s with John M. Dawson of the University of California-Los Angeles (US), who proposed using a plasma wake produced by an intense laser pulse or a bunch of electrons to accelerate a second bunch of particles 1,000 or even 10,000 times faster than conventional accelerators can. This concept came to be known as the plasma wakefield accelerator.


    It generated a lot of excitement by raising the prospect of miniaturizing these gigantic machines, much like the integrated circuit miniaturized electronics starting in the 1960s.

    The Fourth State of Matter

    Most people are familiar with three states of matter: solid, liquid and gas. Plasma is often called the fourth state of matter. Though relatively uncommon in our everyday experience, it is the most common state of matter in our universe. By some estimates more than 99 percent of all visible matter in the cosmos is in the plasma state—stars, for instance, are made of plasma. A plasma is basically an ionized gas with equal densities of electrons and ions. Scientists can easily form plasma in laboratories by passing electricity through a gas as in a common fluorescent tube.

    A plasma wakefield accelerator takes advantage of the kind of wake you can find trailing a motorboat or a jet plane. As a boat moves forward, it displaces water, which moves out behind the boat to form a wake. Similarly, a tightly focused but ultraintense laser pulse moving through a plasma at the speed of light can generate a relativistic wake (that is, a wake also propagating nearly at light speed) by exerting radiation pressure and displacing the plasma electrons out of its way. If, instead of a laser pulse, a high-energy, high-current electron bunch is sent through the plasma, the negative charge of these electrons can expel all the plasma electrons, which feel a repulsive force. The heavier plasma ions, which are positively charged, remain stationary. After the pulse passes by, the expelled electrons are attracted back toward the ions by the force between their negative and positive charges. The electrons move so quickly they overshoot the ions and then again feel a backward pull, setting up an oscillating wake. Because of the separation of the plasma electrons from the plasma ions, there is an electric field inside this wake.

    If a second “trailing” electron bunch follows the first “drive” pulse, the electrons in this trailing bunch can gain energy from the wake much in the same way an electron bunch is accelerated by the radio-frequency wave in a conventional accelerator. If there are enough electrons in the trailing bunch, they can absorb sufficient energy from the wake so as to dampen the electric field. Now all the electrons in the trailing bunch see a constant accelerating field and gain energy at the same rate, thereby reducing the energy spread of the beam.

    The main advantage of a plasma accelerator over other schemes is that electric fields in a plasma wake can easily be 1,000 times stronger than those in traditional radio-frequency cavities. Plus, a very significant fraction of the energy that the driver beam transfers to the wake can be extracted by the trailing bunch. These effects make a plasma wakefield-based collider potentially both more compact and cheaper than conventional colliders.
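
    A rough sense of where that factor of 1,000 comes from can be had from the standard cold-plasma formulas. This sketch is mine, not the article's; the plasma density of 10^17 electrons per cubic centimeter is an assumed, typical experimental value.

    ```python
    # Cold-plasma estimates: plasma frequency w_p = sqrt(n e^2 / (eps0 m_e)),
    # characteristic accelerating ("wave-breaking") field E0 = m_e c w_p / e,
    # and plasma wavelength lambda_p = 2 pi c / w_p.
    # Assumed density: 1e17 electrons/cm^3, a typical value in wakefield experiments.

    import math

    E_CHARGE = 1.602e-19     # C
    EPS0 = 8.854e-12         # F/m
    M_E_KG = 9.109e-31       # kg
    C = 2.998e8              # m/s

    n_e = 1e17 * 1e6         # electrons per m^3 (1e17 per cm^3)

    omega_p = math.sqrt(n_e * E_CHARGE**2 / (EPS0 * M_E_KG))   # rad/s
    E0 = M_E_KG * C * omega_p / E_CHARGE                       # V/m
    lambda_p = 2 * math.pi * C / omega_p                       # m

    print(f"accelerating field ~{E0/1e9:.0f} GV/m")             # ~30 GV/m
    print(f"plasma wavelength  ~{lambda_p*1e6:.0f} micron")      # ~100-micron wake structure
    print(f"vs. conventional 20-50 MV/m: ~{E0/(50e6):.0f}x to {E0/(20e6):.0f}x stronger")
    ```

    The sub-millimeter plasma wavelength is also what sets the tiny wake dimensions, and hence the alignment tolerances, discussed below.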

    The Future of Plasma

    Both laser- and electron-driven plasma wakefield accelerators have made tremendous progress in the past two decades. My own team at U.C.L.A. has carried out prototype experiments with SLAC National Accelerator Laboratory physicists at their Facility for Advanced Accelerator Experimental Tests (FACET) in Menlo Park, Calif.

    We injected both drive and trailing electron bunches with an initial energy of 20 GeV and found that the trailing electrons gained up to 9 GeV after traveling through a 1.3-meter-long plasma. We also achieved a gain of 4 GeV in a positron bunch using just a one-meter-long plasma in a proof-of-concept experiment. Several other labs around the world have used laser-driven wakes to produce multi-GeV energy gains in electron bunches.
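
    Spelling out the gradients those results imply (my own arithmetic from the figures in the paragraph above, with the conventional 20 to 50 MeV-per-meter range quoted earlier used for comparison):

    ```python
    # Effective accelerating gradients implied by the experimental results quoted above,
    # compared with the 20-50 MeV/m typical of conventional copper structures.

    results = {
        "electron bunch (FACET)": (9.0, 1.3),   # (energy gain in GeV, plasma length in m)
        "positron bunch (FACET)": (4.0, 1.0),
    }

    for label, (gain_GeV, length_m) in results.items():
        gradient_GeV_per_m = gain_GeV / length_m
        print(f"{label}: ~{gradient_GeV_per_m:.1f} GeV/m "
              f"(roughly {gradient_GeV_per_m*1e3/50:.0f}-{gradient_GeV_per_m*1e3/20:.0f}x "
              f"a conventional gradient)")
    ```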

    Plasma accelerator scientists’ ultimate goal is to realize a linear accelerator that collides tightly focused electron and positron, or electron and electron, beams with a total energy exceeding 1 TeV. To accomplish this feat, we would likely need to connect around 50 individual plasma accelerator stages in series, with each stage adding an energy of 10 GeV.

    Yet aligning and synchronizing the drive and the trailing beams through so many plasma accelerator stages to collide with the desired accuracy presents a huge challenge. The typical radius of the wake is less than one millimeter, and scientists must inject the trailing electron bunch with submicron accuracy. They must synchronize timing between the drive pulse and the trailing beam to less than a hundredth of a trillionth of one second. Any misalignment would lead to a degradation of the beam quality and a loss of energy as well as charge caused by oscillation of the electrons about the plasma wake axis. This loss shows up in the form of hard x-ray emission, known as betatron emission, and places a finite limit on how much energy we can obtain from a plasma accelerator.
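
    To make those tolerances concrete, here is the arithmetic spelled out (a sketch of my own using only the figures in the paragraph above):

    ```python
    # Unpack the tolerances quoted above: "a hundredth of a trillionth of a second"
    # of timing jitter and submicron transverse alignment, repeated over ~50 stages.

    C = 2.998e8                        # speed of light, m/s

    timing_tolerance_s = 0.01e-12      # a hundredth of a trillionth of a second = 10 fs
    slip_m = C * timing_tolerance_s    # how far light (or the wake) travels in that time

    print(f"timing tolerance: {timing_tolerance_s*1e15:.0f} femtoseconds")
    print(f"light travels only {slip_m*1e6:.0f} microns in that window")
    print("transverse alignment: well under 1 micron, inside a wake less than 1 mm across,")
    print("and every one of ~50 plasma stages must hit both tolerances at once.")
    ```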

    Other technical hurdles also stand in the way of immediately turning this idea into a collider. For instance, the primary figure of merit for a particle collider is the luminosity—basically a measure of how many particles you can squeeze through a given space in a given time. The luminosity multiplied by the cross section—the chances that two particles will collide—tells you how many collisions of a particular kind per second you are likely to observe at a given energy. The desired luminosity for a 1-TeV electron-positron linear collider is 10^34 cm^–2s^–1. Achieving this luminosity would require the colliding beams to have an average power of 20 megawatts each—10^10 particles per bunch at a repetition rate of 10 kilohertz and a beam size at the collision point of a few tens of billionths of a meter. To illustrate how difficult this is, let us focus on the average power requirement. Even if you could transfer energy from the drive beam to the accelerating beam with 50 percent efficiency, 20 megawatts of power would be left behind in the two thin plasma columns. Ideally we could partially recover this power, but it is far from a straightforward task.
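
    As an order-of-magnitude sanity check on those parameters (a sketch I have added, not the author's calculation; the 10-nanometer spot size and the 0.5 TeV-per-beam split are assumptions, and the simple geometric formula below ignores beam-beam enhancement and other real-design effects):

    ```python
    # Order-of-magnitude check of the collider parameters quoted above:
    # geometric luminosity L = f * N^2 / (4 pi * sigma_x * sigma_y)
    # and average beam power P = f * N * E_beam.
    # Assumptions (not specified exactly in the article): sigma_x = sigma_y = 10 nm,
    # 0.5 TeV per beam (1 TeV total), and one bunch per pulse.

    import math

    N = 1e10              # particles per bunch
    f_rep = 1e4           # repetition rate, Hz (10 kHz)
    sigma = 10e-9 * 100   # assumed 10 nm spot size, converted to cm
    E_beam_eV = 0.5e12    # assumed 0.5 TeV per beam
    E_CHARGE = 1.602e-19  # joules per eV

    luminosity = f_rep * N**2 / (4 * math.pi * sigma**2)   # cm^-2 s^-1
    beam_power_W = f_rep * N * E_beam_eV * E_CHARGE        # watts per beam

    print(f"geometric luminosity ~{luminosity:.1e} cm^-2 s^-1 (target: 1e34)")
    print(f"average beam power   ~{beam_power_W/1e6:.0f} MW per beam")
    # Both land within an order of magnitude of the figures in the text; real designs
    # trade these numbers against one another, and all of that beam power ultimately
    # has to be supplied to, and removed from, the thin plasma columns.
    ```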

    And although scientists have made substantial progress on the technology needed for the electron arm of a plasma-based linear collider, positron acceleration is still in its infancy. A decade of concerted basic science research will most likely be needed to bring positrons to the same point we have reached with electrons. Alternatively, we could collide electrons with electrons or even with protons, where one or both electron arms are based on a plasma wakefield accelerator. Another concept that scientists are exploring at CERN is modulating a many-centimeters-long proton bunch by sending it through a plasma column and using the accompanying plasma wake to accelerate an electron bunch.

    The future for plasma-based accelerators is uncertain but exciting. It seems possible that within a decade we could build 10-GeV plasma accelerators on a large tabletop for various scientific and commercial applications using existing laser and electron beam facilities. But this achievement would still put us a long way from realizing a plasma-based linear collider for new physics discoveries. Even though we have made spectacular experimental progress in plasma accelerator research, the beam parameters achieved to date are not yet what we would need for just the electron arm of a future electron-positron collider that operates at the energy frontier. Yet with the prospects for the International Linear Collider and the Future Circular Collider uncertain, our best bet may be to persist with perfecting an exotic technology that offers size and cost savings. Developing plasma technology is a scientific and engineering grand challenge for this century, and it offers researchers wonderful opportunities for taking risks, being creative, solving fascinating problems—and the tantalizing possibility of discovering new fundamental pieces of nature.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    Scientific American (US), the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

     
  • richardmitnick 3:51 pm on February 5, 2019 Permalink | Reply
    Tags: Particle Physics Is Doing Just Fine, Superconducting super collider

    From slate.com: “Particle Physics Is Doing Just Fine” 

    SLATE

    From slate.com

    Jan 31, 2019
    Chanda Prescod-Weinstein
    Tim M.P. Tait


    CERN/ALICE Detector

    Research is a search through the unknown. If you knew the answer, there would be no need to do the research, and until you do the research, you don’t know the answer. Science is a complex social phenomenon, but certainly its history includes repeated episodes of people having ideas, trying experiments to test those ideas, and using the results to inform the next round of ideas. When an experimental result indicates that one particular idea is not correct, this is neither a failure of the experiment nor of the original idea itself; it’s an advancement of our understanding of the world around us.

    Recently, particle physics has become the target of a strange line of scientific criticism. Articles like Sabine Hossenfelder’s New York Times op-ed questioning the “uncertain future” of particle physics and Vox’s “The $22 Billion Gamble: Why Some Physicists Aren’t Excited About Building a Bigger Particle Collider” raise the specter of failed scientists. To read these articles, you’d think that unless particle physics comes home with a golden ticket in the form of a new particle, it shouldn’t come home at all. Or at least, it shouldn’t get a new shot at exploring the universe’s subatomic terrain. But the proposal that particle physicists are essentially setting money on fire comes with an insidious underlying message: that science is about the glory of discovery, rather than the joy of learning about the world. Finding out that there are no particles where we had hoped tells us about the distance between human imagination and the real world. It can operate as a motivation to expand our vision of what the real world is like at scales that are totally unintuitive. Not finding something is just as informative as finding something.

    That’s not to say resources should be infinite or to suggest that community consensus isn’t important. To the contrary, the particle physics community, like the astronomy and planetary science communities, takes the conversation about what our priorities should be so seriously that we have it every half decade or so. Right now, the European particle physics community is in the middle of a “strategy update,” and plans are underway for the U.S. particle physics community to hold the next of its “Snowmass community studies,” which take place approximately every five years. These events are opportunities to take stock of recent developments and to devise a strategy to maximize scientific progress in the field. In fact, we’d wager that they’re exactly what Hossenfelder is asking for when she suggests “it’s time for particle physicists to step back and reflect on the state of the field.”

    One of the interesting questions that both of these studies will confront is whether or not the field should prioritize construction of a new high-energy particle accelerator. In past decades, many resources have been directed toward the construction and operation of the Large Hadron Collider, a gigantic device whose tunnel spans two countries and whose budget is in the billions of dollars. Given funding constraints, it is entirely appropriate to ask whether it makes sense to prioritize a future particle accelerator at this moment in history. A new collider is likely to have a price tag measured in tens of billions of dollars and would represent a large investment—though not large compared with the scale of other areas of government spending, and the collider looks even less expensive when spread out over decades and shared by many nations.

    The LHC was designed to reach energies of 14 trillion electron volts, about seven times more than its predecessor, the Tevatron at Fermilab in Chicagoland.

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    FNAL/Tevatron map

    FNAL/Tevatron

    There was very strong motivation to explore collisions at these energies; up until the LHC began operations, our understanding of the Standard Model of particle physics, the leading theory describing subatomic particles and their interactions, contained a gaping hole.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    The theory could only consistently describe the massive fundamental particles that are observed in our experiments if one included the Higgs boson—a particle that had yet to be observed.

    Self-consistency demanded that either the Higgs or something else providing masses would appear at the energies studied by the LHC. There were a host of competing theories, and only experimental data could hope to judge which one was realized in nature.

    So we tried it. And because the LHC allowed us to actually observe the Higgs, we now know that the picture in which masses arise from the Higgs is either correct or very close to being correct.

    Peter Higgs

    CERN CMS Higgs Event

    CERN ATLAS Higgs Event

    The LHC discovered a particle whose interactions with the known particles match the predictions to within about 10 percent. This represents a triumph in our understanding of the fundamental building blocks of nature, one that would have been impossible without both 1) the theoretical projections that defined the characteristics that the Higgs must have to play its role and 2) the experimental design of the accelerator and particle detectors and the analysis of the data that they collected. In order to learn nature’s secrets, theory and experiment must come together.

    [I.E., you must do the math.]

    Some people have labeled the LHC a failure because even though it confirmed the Standard Model’s vision for how particles get their masses, it did not offer any concrete hint of any further new particles besides the Higgs. We understand the disappointment. Given the exciting new possibilities opened up by exploring energy levels we’ve never been privy to here on earth, this feeling is easy to relate to. But it is also selling the accomplishments short and fails to appreciate how research works. Theorists come up with fantastical ideas about what could be. Most of them are wrong, because the laws of physics are unchanging and universal. Experimentalists are taking on the task of actually popping open the hood and looking at what’s underneath it all. Sometimes, they may not find anything new.

    A curious species, we are left to ask more questions. Why did we find this and not that? What should we look for next? What a strange and fascinating universe we live in, and how wonderful to have the opportunity to learn about it.

    It cannot be ignored that if the U.S. had built the Superconducting Super Collider, a particle accelerator complex that was under construction in the vicinity of Waxahachie, Texas, the Higgs would have been found in the U.S., and leadership in high-energy physics would not have been ceded to Europe.

    Tracing the path of the particle accelerators and tunnels planned for the Superconducting Supercollider Project. You can see the main ring circling Waxahachie.

    The Superconducting Super Collider’s planned ring circumference was 87.1 kilometers (54.1 mi), with an energy of 20 TeV per proton, and it was set to be the world’s largest and most energetic collider. It would have greatly surpassed the current record held by the Large Hadron Collider, which has a ring circumference of 27 km (17 mi) and an energy of 6.5 TeV per proton (13 TeV per collision). The project’s director was Roy Schwitters, a physicist at the University of Texas at Austin. Dr. Louis Ianniello served as its first Project Director for 15 months. The project was cancelled in 1993 due to budget problems [Congress cancelled the Collider for having “no immediate economic value”].

    See the full article here .
    See also the possible future of HEP here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Slate is a daily magazine on the Web. Founded in 1996, we are a general-interest publication offering analysis and commentary about politics, news, business, technology, and culture. Slate’s strong editorial voice and witty take on current events have been recognized with numerous awards, including the National Magazine Award for General Excellence Online. The site, which is owned by Graham Holdings Company, does not charge for access and is supported by advertising revenues.

     
  • richardmitnick 11:11 am on May 24, 2017 Permalink | Reply
    Tags: Our failure in resolve, Superconducting super collider

    From FNAL: “Fermilab scientists set upper limit for Higgs boson mass” 

    FNAL II photo

    FNAL Art Image
    FNAL Art Image by Angela Gonzales

    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    In 1977, theoretical physicists at Fermilab — Ben Lee and Chris Quigg, along with Hank Thacker — published a paper setting an upper limit for the mass of the Higgs boson. This calculation helped guide the design of the Large Hadron Collider by setting the energy scale necessary for it to discover the particle. The Large Hadron Collider turned on in 2008, and in 2012, the LHC’s ATLAS and CMS discovered the long-sought Higgs boson — 35 years after the seminal paper.


    CERN CMS Higgs Event


    CERN/CMS Detector


    CERN ATLAS Higgs Event


    CERN/ATLAS detector

    Where it all started:

    FNAL Tevatron

    FNAL/Tevatron map


    FNAL/Tevatron DZero detector


    FNAL/Tevatron CDF detector

    Where we failed and handed it to Europe:

    Site of the planned Superconducting Super Collider, in the vicinity of Waxahachie, Texas. Cancelled by our idiot Congress under Bill Clinton in 1993. We could have had it all.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    FNAL Icon
    Fermilab Campus

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.

     
  • richardmitnick 7:56 pm on October 6, 2016 Permalink | Reply
    Tags: Superconducting super collider

    From Physics Today: “A bridge too far: The demise of the Superconducting Super Collider” A very important article. 

    Physics Today bloc

    Physics Today

    10.6.16
    Michael Riordan

    The largest basic scientific project ever attempted, the supercollider proved to be beyond the management capacity of the US high-energy physics community. A smaller proton collider would have been substantially more achievable.

    Map of the proposed Superconducting Super Collider.

    When the US Congress terminated the Superconducting Super Collider (SSC) in October 1993 after about $2 billion had been spent on the project, it ended more than four decades of American leadership in high-energy physics. To be sure, US hegemony in the discipline had been deteriorating for more than a decade, but the SSC cancellation was the ultimate blow that put Europe unquestionably in the driver’s seat and opened the door to the discovery of the Higgs boson at CERN (see Physics Today, September 2012, page 12). The causes and consequences of the SSC’s collapse, a watershed event in the history of science, have been discussed and debated ever since it happened.

    At least a dozen good reasons have been suggested for the demise of the SSC. Primary among them are the project’s continuing cost overruns, its lack of significant foreign contributions, and the end of the Cold War. But recent research and documents that have come to light have led me to an important new conclusion: The project was just too large and too expensive to have been pursued primarily by a single nation, however wealthy and powerful. Wolfgang “Pief” Panofsky, founding director of SLAC, voiced that possibility during a private conversation in the months after the project’s demise; he suggested that perhaps the SSC project was “a bridge too far” for US high-energy physics. That phrase became lodged firmly in my mind throughout the many years I was researching its history.

    View along the Superconducting Super Collider main-ring tunnel, in early 1993. (Courtesy of Fermilab Archives.)

    Some physicists will counter that the SSC was in fact being pursued as an international project, with the US taking the lead in anticipation that other nations would follow; it had done so on large physics projects in the past and was doing so with the much costlier International Space Station. But that argument ignores the inconvenient truth that the gargantuan project was launched by the Reagan administration as a deliberate attempt to reestablish US leadership in a scientific discipline the nation had long dominated. If other nations were to become involved, they would have had to do so as junior partners in a multibillion-dollar enterprise led by US physicists.

    That fateful decision, made by the leader of the world’s most powerful government, established the founding rhetoric for the SSC project, which proved difficult to abandon when it came time to enlist foreign partners.

    The SSC and the LHC

    In contrast, CERN followed a genuinely international approach in the design and construction of its successful Large Hadron Collider (LHC), albeit at a much more leisurely pace than had been the case for the SSC.

    CERN/LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    Serious design efforts begun during the late 1980s and early 1990s ramped up after the SSC’s termination. Although the LHC project also experienced trying growth problems and cost overruns—its cost increased from an estimated 2.8 billion Swiss francs ($2.3 billion at the time) in 1996 to more than 4.3 billion Swiss francs in 2009—it managed to survive and become the machine that allowed the Higgs-boson discovery using only about half of its originally designed 14 TeV energy.

    CERN CMS Higgs Event

    CERN ATLAS Higgs Event

    (The SSC, by comparison, was designed for 40 TeV collision energy.) When labor costs and in-kind contributions from participating nations are included, the total LHC price tag approached $10 billion, a figure often given in the press. Having faced problems similar to, though not as severe as, what the SSC project experienced, the LHC’s completion raises an obvious question: Why did CERN and its partner nations succeed where the US had failed?

    From the SSC’s early days, many scientists thought it should have been sited at or near Fermilab to take advantage of the existing infrastructure, both physical and human. University of Chicago physicist and Nobel laureate James Cronin explicitly stated that opinion in a letter he circulated to his fellow high-energy physicists in August 1988. CERN has followed that approach for decades, building one machine after another as extensions of its existing facilities and reusing parts of the older machines in new projects, thereby reducing costs. Perhaps as important, CERN had also gathered and developed some of the world’s most experienced accelerator physicists and engineers, who work together well. During the late 1980s, Fermilab had equally adept machine builders—plus substantial physical infrastructure—who could have turned to other productive endeavors when the inevitable funding shortfalls occurred during the annual congressional appropriations process.

    Troublesome clashes occurred at the SSC between the high-energy physicists and engineers who had been recruited largely from the shrinking US military–industrial complex as the Cold War wound down during the late 1980s and early 1990s. For example, the methods by which SSC general manager Edward Siskin and magnet division director Thomas Bush managed large projects and developed sophisticated components differed greatly from those customarily employed by high-energy physicists. A particular bone of contention was the project’s initial lack of a cost-and-schedule control system, which by then had become mandatory practice for managing large military-construction and development projects overseen by the Department of Defense. Such clashes would probably not have erupted in the already well-integrated Fermilab high-energy physicist culture, nor would the disagreements have been as severe.

    Those pro-Fermilab arguments, however, ignore the grim realities of the American political process. A lucrative new project that was to cost more than $5 billion and promised more than 2000 high-tech jobs could not be sole-sourced to an existing US laboratory, no matter how powerful its state congressional delegation. As politically astute leaders of the Department of Energy recognized, the SSC project had to be offered up to all states able to provide a suitable site, with the decision based (at least publicly) on objective, rational criteria. Given the political climate of the mid 1980s, a smaller project costing less than $1 billion and billed as an upgrade of existing facilities might have been sole-sourced to Fermilab, but not one as prominent and costly as the SSC. It had to be placed on the US auction block, and Texas made the best bid according to the official DOE criteria.

    Unlike the SSC, the LHC project benefited from the project management skills of a single physicist, Lyndon Evans, who came to the task with decades of experience on proton colliders. Despite the facility’s major problems and cost overruns, Evans enjoyed the strong support of the CERN management and a deeply experienced cadre of physicists and engineers. On the LHC project, engineers reported ultimately to physicists, who as the eventual users of the machine were best able to make the required tradeoffs when events did not transpire as originally planned. The project encountered daunting difficulties and major delays, including the September 2008 quench of dozens of superconducting magnets. But the core management team led by Evans worked through those problems, shared a common technological culture, and understood and supported the project’s principal scientific goals.

    Similar observations cannot be made regarding the military–industrial engineers who came to dominate the SSC lab’s collider construction. Until 1992 a succession of acting or ineffectual project managers could not come to grips with the demands of such a complex, enormous project that involved making countless decisions weekly. Secretary of energy James D. Watkins deliberately had Siskin inserted into the SSC management structure in late 1990 in an effort to wrest control of the project from the high-energy physicists. After SLAC physicist John Rees stepped in as the SSC project manager in 1992, he and Siskin began working together effectively and finally got a computerized cost-and-schedule control system up and running—and thus the project under better control. But it proved to be too late, as the SSC had already gained a hard-to-shake reputation in Congress as being mismanaged and out of control.

    CERN also enjoys an enviable internal structure, overseen by its governing council, that largely insulates its leaders and scientists from the inevitable political infighting and machinations of member nations. Unlike in the US, the director general or project manager could not be subpoenaed to appear before a parliamentary investigations subcommittee or be required to testify under oath about its management lapses or cost overruns—as SSC director Roy Schwitters had to do before Congress. Nor did the LHC project face annual congressional appropriations battles and threats of termination, as did major US projects like the SSC and the space station. Serious problems that arose with the LHC—for example, a large cost overrun in 2001—were addressed in the council, which represents the relevant ministries of its member nations and generally operates by consensus, especially on major laboratory initiatives. That supple governing structure helps keep control of a project within the hands of the scientists involved and hinders government officials from intervening directly.

    Because the council must also address the wider interests of individual European ministries, CERN leaders have to be sensitive to the pressures that the annual budget, new projects, and cost overruns can exert on other worthy science. In that manner, European scientists in other disciplines have a valuable voice in CERN governing circles. The LHC project consequently had to be tailored to address such concerns before the council would grant it final approval. In the US, the only mechanism available was for disgruntled scientists to complain openly, which Philip Anderson of Princeton University, Theodore Geballe of Stanford University, Rustum Roy of the Pennsylvania State University, and others did in prominent guest editorials or in congressional hearings when SSC costs got out of hand between 1989 and 1991. The resulting polarization of the US physics community helped undermine what had been fairly broad support for the SSC project in the House of Representatives, which in 1989 had voted 331–92 to proceed with construction.

    Because of financial pressures, CERN had to effectively internationalize the LHC project—obtaining monetary and material commitments from such nonmember nations as Canada, China, India, Japan, Russia and the US—before the council would give approval to go ahead with it. When that approval finally came in 1996, the LHC was a truly international scientific project with firm financial backing from more than 20 nations. Those contributions enabled Evans and his colleagues to proceed with the design of a collider able to reach the full 14 TeV collision energy as originally planned.

    Scale matters

    In hindsight, the LHC was (somewhat fortuitously) more appropriately sized to its primary scientific goal: the discovery of the Higgs boson. The possibility that this elusive quarry could turn up at a mass as low as 125 GeV was not widely appreciated until the late 1980s, when theories involving supersymmetry began to suggest the possibility of such a light Higgs boson emerging from collisions. But by then the SSC die had been cast in favor of a gargantuan 40 TeV collider, 87 km in circumference, that would be able to uncover the roots of spontaneous symmetry breaking even if the long-anticipated phenomenon required the protons’ constituent quarks and gluons to collide with energies as high as 2 TeV. When it became apparent in late 1989 that roughly $2 billion more would be needed to reduce design risks that could make it difficult for the SSC to attain its intended collision rate, Panofsky argued that the project should be down-scoped to 35 TeV to save hundreds of millions of dollars. But nearly everyone else countered that the full 40 TeV was required to make sure users could discover the Higgs boson—or whatever else was responsible for spontaneous symmetry breaking and elementary-particle masses.

    Schematic of the Superconducting Super Collider, depicting its main 87 km ring—designed to circulate and collide twin proton beams, each at energies up to 20 TeV—the injector accelerators, and experimental halls, where the protons were to collide. That ring circumference is more than three times the 27 km circumference of CERN’s Large Hadron Collider (orange). The footprints of yet smaller particle colliders at Fermilab (purple) and SLAC (green) are also shown for comparison.

    A US High-Energy Physics Advisory Panel (HEPAP) subpanel, chaired by SLAC deputy director Sidney Drell, unanimously endorsed that fateful decision in 1990. The US high-energy physics community had thus committed itself to an enormous project that became increasingly difficult to sustain politically amid the worsening fiscal climate of the early 1990s. With the end of the Cold War and subsequent absence of a hoped-for peace dividend during a stubborn recession, the US entered a period of fiscal austerity not unlike what is now occurring in many developed Western nations. In that constrained environment, a poorly understood basic-science project experiencing large, continuing cost overruns and lacking major foreign contributions presented an easy political target for congressional budget cutters.

    A 20 TeV proton collider—or perhaps just a billion-dollar extension of existing facilities such as the 4–5 TeV Dedicated Collider proposed by Fermilab in 1983—would likely have survived the budget axe and discovered the light Higgs boson long ago. Indeed, another option on the table during the 1983 meetings of a HEPAP subpanel chaired by Stanford physicist Stanley Wojcicki was for Brookhaven National Laboratory to continue construction of its Isabelle collider while Fermilab began the design work on that intermediate-energy proton–antiproton collider, whose costs were then projected at about $600 million.

    That more conservative, gradual approach would have maintained the high-energy physics research productivity of the DOE laboratories for at least another decade. And such smaller projects would certainly have been more defensible during the economic contractions of the early 1990s, for they aligned better with the high-energy physics community’s diminishing political influence in Washington. Their construction would also have been far easier for physicists to manage and control by themselves without having to involve military–industrial engineers.

    The Wojcicki subpanel had originally recommended that the US design a 20–40 TeV collider, but that was before European physicists led by CERN decided in 1984 to focus their long-range plans on a 14 TeV proton collider that they could eventually achieve by adding superconducting magnets to the Large Electron–Positron Collider (LEP) then under construction. (Actually, they considered 18 TeV achievable when they made this decision.) Lowering the SSC energy as Panofsky suggested thus risked Congress raising the awkward question that had already been voiced by SSC opponents, “Why don’t US physicists just join the LHC project and save US taxpayers billions of dollars?” Although justified on purely physics grounds, the 1990 decision to keep the original SSC energy clearly had a significant political dimension, too.

    The US high-energy physics community therefore elected to “bet the company” on an extremely ambitious 40 TeV collider, so large that it ultimately had to be sited at a new laboratory in the American Southwest, as was originally envisioned in 1982. Such a choice, however, meant abandoning the three-laboratory DOE system that had worked well for nearly two decades and had fostered US leadership in high-energy physics. (That was Cronin’s primary concern when he urged his fellow physicists and DOE to site the SSC at Fermilab.) But perceived European threats to US hegemony and Reagan administration encouragement tipped the balance toward making the SSC a national project and away from it becoming the truly international “world laboratory” that others had long been advocating.

    Infrastructure problems

    In retrospect, the SSC leadership faced two daunting tasks in establishing a new high-energy physics laboratory in Waxahachie, Texas:

    ► Building the physical infrastructure for a laboratory that would cost billions of US taxpayer dollars and was certain to be a highly visible, contentious project.

    ► Organizing the human infrastructure needed to ensure that the SSC became a world-class laboratory where scientists could do breakthrough high-energy physics research.

    Addressing those tasks meant having to draw resources away from other worthy programs and projects that competed with the SSC during a period of tight annual budgets. Reagan administration officials had insisted that the project would be funded by new money, but that was only a convenient fiction. Congress, not the president, holds the federal purse strings, so the SSC always had to compete against other powerful interests—especially energy and water projects—for its annual funding. And it usually came up short, which further delayed the project and increased its costs.

    Schwitters and other managers attempted to attract top-notch physicists to staff the laboratory, but after 1988 many of its original, primary advocates in the SSC Central Design Group (CDG) returned to their tenured positions in universities and national labs. For example, CDG director Maury Tigner, who returned to Cornell University, might have been the best choice for the original project manager. (Second-tier CDG managers did go to Texas, however, as did many younger, untenured physicists.) Despite the promise and likely prestige of building a world-class scientific laboratory, the Dallas–Fort Worth area was viewed as an intellectual backwater by many older, accomplished high-energy physicists. They might have come to work there on a temporary or consulting basis, as did Rees originally, but making a permanent, full-time commitment and bringing their spouses and families with them proved a difficult choice for many.

    Achieving the first daunting task in a cost-effective way thus required bringing in an alien, military–industrial culture that made realizing the second task much more difficult. Teaming with EG&G and Sverdrup Corporations helped the SSC laboratory to tap the growing surplus of military–industrial engineers. It was crucial to get capable engineers working on the project quickly so that all the detailed design and construction work could occur on schedule and costs could be controlled. But the presence of military–industrial engineers at high levels in the SSC organization served as an added deterrent to established physicists who might otherwise have moved to Texas to help organize and build the laboratory.

    Estimates of the infrastructure costs that could have been saved by siting the SSC adjacent to Fermilab range from $495 million to $3.28 billion. The official DOE figures came in at the lower end, from $495 million to $1.03 billion, but they ignored the value of the human infrastructure then available at Fermilab. In hindsight, the costs of establishing such infrastructure anew at a green-field site were not small. In Tunnel Visions, my coauthors and I estimate that the total added infrastructure costs—physical plus human—of building the SSC in Texas would have been about $2 billion.

    Unlike historians gazing into the past, however, physicists do not enjoy the benefit of hindsight when planning a new machine. Guided in part by the dominant theoretical paradigm, they work with a cloudy crystal ball through which they can only guess at phenomena likely to occur in a new energy range, and they must plan accordingly. And few can foresee what may transpire in the economic or political realms that could jeopardize an enormous project that requires about a decade to complete and will cost billions of dollars, euros, or Swiss francs—or, relevant today, a trillion yen. That climate of uncertainty thus argues for erring on the side of fiscal conservatism and for trying to reduce expenses by building a new machine at or near an existing laboratory. Such a gradual, incremental approach has been followed successfully at CERN for six decades now, and to a lesser extent at other high-energy physics labs.

    But US physicists, perhaps enticed by Reagan administration promises, elected to stray from that well-worn path in the case of the SSC. It took a giant leap of faith to imagine that they could construct an enormous new collider at a green-field site where everything had to be assembled from scratch—including the SSC management team—and defend the project before Congress in times of increasing fiscal austerity. A more modest project sited at Fermilab would likely have weathered less opposition and still be operating today.

    In the multibillion-dollar realm it had entered, the US high-energy physics community had to form uneasy alliances with powerful players in Washington and across the nation. And those alliances involved uncomfortable compromises that led, directly or indirectly, to the SSC project’s demise. That community of a few thousand physicists had a small and diminishing supply of what Beltway insiders recognize as “political capital.” It could not by itself lay claim to more than 5 billion taxpayer dollars when many other pressing demands were being made on the federal purse. Thus for the SSC to move forward as a principally national project meant that those physicists had to give up substantial control to powerful partners with their own competing agendas. The Texans’ yearning for high-tech jobs, for example, helped congressional opponents paint the SSC as a pork-barrel project in the public mind. In the process, the high-energy physics community effectively lost control of its most important project.

    A personal perspective

    Part of the problem driving up the SSC costs was the project’s founding rhetoric: the intention to leapfrog European advances and reassert US leadership in high-energy physics. The Reagan administration in particular was promoting US competitiveness over international cooperation; treating other nations as equal partners would not have gained the administration’s support. And a smaller, say 20 TeV, proton collider would not have sufficed either, for that was much too close in energy to what CERN could eventually achieve in the 27 km LEP tunnel then under construction. The SSC therefore had to shoot for 40 TeV, which was presented as a scientific necessity but was in fact mainly a political choice. That energy was more than 20 times the energy of the Fermilab Tevatron, and the SSC proved to be nearly 20 times as expensive. And along with its onerous price tag came other, unanticipated complications—managerial as well as political—that US physicists were ill-equipped to confront. As Panofsky suggested, the SSC was indeed “a bridge too far”—a phrase he probably borrowed from the title of Cornelius Ryan’s 1974 book about a disastrous Allied campaign to capture the Arnhem Bridge over the Rhine River during World War II.

    I became convinced of that interpretation only in April 2014, when previously suppressed documents surfaced at the William J. Clinton Presidential Library. The documents were memos to Clinton’s chief of staff regarding a draft letter being circulated among top administration officials in early 1993 by new secretary of energy Hazel O’Leary. In the letter, Clinton was to request a billion-dollar SSC contribution from Japanese prime minister Kiichi Miyazawa. Such a contribution would have helped tremendously to reassure House members that major foreign support was indeed forthcoming and perhaps would have kept the project alive. But the memos, one from science adviser John Gibbons and the other from assistant to the president John Podesta and staff secretary Todd Stern, recommended against the president sending such a letter. The latter memo was particularly adamant:

    NSC [the National Security Council] agrees that we should convey to the Japanese our firm backing for the SSC, but still objects strongly [emphasis in the original] to sending a letter to Miyazawa. Such a letter could be seen as suggesting that we attach greater importance to Japanese participation in the SSC than we do to Japanese efforts on other fronts, such as aid to Russia.

    The document underscored for me what insurmountable competition the SSC faced in securing the required billions of dollars in federal and foreign funding. Despite their political influence reaching back to the years after World War II, high-energy physicists were not accustomed to playing in the major leagues of US politics. No such letter was ever sent.

    In the final analysis, the Cold War model of doing Big Science projects, with the US taking the lead unilaterally and expecting other Western nations to follow in its footsteps, was no longer appropriate. By the 1980s the global scientific community had begun an epochal transition into a multipolar world in which other nations expect to be treated as equal partners in such major scientific endeavors—especially considering the large financial contributions involved. As US high-energy physicists have hopefully learned from the 1993 termination of the SSC, it should have been promoted from day one as a genuinely international world-laboratory project.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    “Our mission

    The mission of Physics Today is to be a unifying influence for the diverse areas of physics and the physics-related sciences.

    It does that in three ways:

    • by providing authoritative, engaging coverage of physical science research and its applications without regard to disciplinary boundaries;
    • by providing authoritative, engaging coverage of the often complex interactions of the physical sciences with each other and with other spheres of human endeavor; and
    • by providing a forum for the exchange of ideas within the scientific community.”

     