Tagged: Symmetry Magazine

  • richardmitnick 5:29 pm on September 14, 2019
    Tags: Symmetry Magazine

    From Symmetry: “A new way to study high-energy gamma rays” 


    Jim Daley

The Cherenkov Telescope Array will combine experimental and observatory-style approaches to investigate the universe’s highest energies.


    They permeate the cosmos, whizzing through galaxies and solar systems at energies far higher than what even our most powerful particle accelerators can achieve. Emitted by sources such as far-distant quasars, or, closer to Earth, occasionally ejected from the remnants of supernovae, high-energy cosmic rays are believed to play a role in the evolution of galaxies and the growth of black holes.

    Exactly how cosmic rays originate remains a mystery. Now, an ambitious project—part observatory, part experiment—is preparing to investigate them by studying the gamma rays they produce at sensitivities never achieved before.

The Cherenkov Telescope Array being built in Chile and Spain’s Canary Islands is the newest generation of ground-based gamma-ray detectors. CTA involves collaborators from 31 countries and comprises more than 100 telescopes of varying sizes. Its detectors will be 10 times more sensitive to gamma rays than existing instruments, which will allow scientists to investigate their properties at a breathtaking range of energy levels—from about 20 billion electronvolts up to 300 trillion electronvolts. This is far above current capabilities: Existing gamma-ray observatories’ energy ranges top out at about 50 trillion electronvolts.

    Rene Ong, an astrophysicist at UCLA and the co-spokesperson for the project, says that CTA is unique in that it will function as both an experiment—zeroing in to investigate specific points and topics of interest—and an observatory—creating an overall record of a portion of the night sky over time.

It will be the first ground-based gamma-ray facility run as an open observatory: users will be granted observing time for their own projects through a proposal-driven program. “CTA will operate like an astronomical facility with a mix of guest-observer time, dedicated time for major observation projects, and time reserved for the CTA observatory director,” he says.

Part of what makes CTA an astronomical observatory is that it will make its data freely available, explains Ulisses Barres, an astrophysicist at the Brazilian Center for Research in Physics who is leading part of that country’s contribution to CTA’s design and construction.

Until now, research in the very-high-energy gamma-ray band has been conducted by “closed” research groups, which have reserved most or all of their data for their own use. CTA will not only make its data public; like a typical observatory, it will also structure its data to be accessible even to nonspecialists and people in other scientific fields.

    “That’s because CTA wants to kind of kick-start astronomy in the [high-energy gamma-ray] band in a new way,” Barres says. “People from other fields can request data from CTA in a competitive way and analyze it, pretty much like what an experimental telescope does.”

Elisabete de Gouveia Dal Pino, an astrophysicist at the University of São Paulo and also one of the leaders of the CTA Consortium in Brazil, says the project’s design will allow scientists to investigate some of the most energetic events that occur anywhere in the universe. These events are theorized to come mostly from compact sources like supermassive black holes and supernova explosions.

    “There is a whole slew of processes and particles that we can decipher [by] observing the universe in gamma rays,” Dal Pino says. Other wavelengths have already been probed and are well-developed fields of study, she explains. “This is the last energy band window that we are currently able to open on the universe right now.”

CTA may also test physics beyond the Standard Model, Ong says. In particular, it will search for dark matter, which scientists think makes up about 85% of the matter in the universe but has yet to be detected, let alone fully understood. It’s possible that gamma rays are produced when dark matter particles bump into one another and self-annihilate.

    CTA’s dark matter program will attempt to discover the nature of this phenomenon by observing the galactic halo, a roughly spherical, thinly populated area that surrounds the visible galaxy and is believed to be home to these particles.

For now, the project is still in its design and construction phase. Barres says he expects a “critical mass” of telescopes—enough to begin taking usable data—in the northern hemisphere by 2022. “We expect that by the middle of the next decade, CTA may already be fully operational,” he says. “For now, there is a lot of coordination to be done among the partner institutions.”

See the full article here.


    Please help promote STEM in your local schools.

STEM Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.

  • richardmitnick 12:17 pm on September 10, 2019
    Tags: Francesca Ricci-Tam, Grace Cummings, Symmetry Magazine

From Symmetry: Women in STEM - “Finding happiness in hardware,” featuring Francesca Ricci-Tam and Grace Cummings


    Illustration by Sandbox Studio, Chicago with Corinne Mucha
    Francesca Ricci-Tam

    Sarah Charley

    Working on hardware doesn’t come easily to all physicists, but Francesca Ricci-Tam has learned that what matters most is a willingness to put in the practice.

    Francesca Ricci-Tam remembers an organic chemistry lab she took during her undergraduate studies, before she became a physicist.

    “The professor told us that the vacuum tubes were very expensive and delicate and that we shouldn’t destroy them,” she recalls.

    Five minutes later, her tube exploded.

    “I never considered myself very good at lab work,” she says. “I was very awkward.”

    The student who bashfully cleaned shards of glass from her lab bench is now a hardware specialist building electronics for one of the largest scientific experiments in the world. Over time, she has learned that this work is a skill to be learned through practice and that early mistakes like hers with the vacuum tube are an essential part of the process.

    Facing a fear of failure

    Ricci-Tam entered the University of California, Davis in 2006 as a premed student with a double major in biochemistry and physics. She was home-schooled for most of her education and had very little experience working with her hands.

    She describes herself as a perfectionist, a trait she struggled with while adjusting to the laboratory. “I was always worried about adding one too many drops of solution or breaking something,” she says.

    After being rejected from several medical schools, she was faced with two choices: Take a year to gain more experience through a clinical internship and then try again, or change course and apply to graduate school in physics. She chose the latter.

    Being a physicist requires learning the basic principles and equations that describe matter, and then performing experiments to test and possibly push beyond them. The transition from the classroom into the laboratory is where the next generation of physicists learns what being an experimentalist is all about—and that possessing a high level of intelligence means very little if you don’t cultivate an accompanying amount of persistence and just plain do the hard work.

    A few years into her PhD, Ricci-Tam’s advisor asked her to help the UC Davis team build components for the 14,000-ton CMS detector, which a collaboration of about 4000 scientists use to study the collisions generated by the Large Hadron Collider at CERN.


    CERN CMS Higgs Event May 27, 2012

    Ricci-Tam had never done anything like it. She closely watched her colleagues as they unscrewed electronics and attached cables to the CMS pixel detector.

    She remembers flipping into a completely different mindset when it was her turn to work with the electronics. “I would be completely focused—and panic later,” she says.

    One day, a colleague told her that she worked like a surgeon. Ricci-Tam says the comment changed her perception of herself. “I thought, I can do this,” she says.

    “I’ve been doing hardware work on and off ever since.”

    The more Ricci-Tam worked on hardware, the more she discovered her own capabilities. As she gained experience and confidence, she began to find a balance between being completely focused and relaxed while working on tasks. She gradually let go of her perfectionist mindset and learned to give herself more space and time to work through problems.

    “You cannot afford to be a perfectionist,” she says. “Working on hardware teaches you patience.”

    Illustration by Sandbox Studio, Chicago with Corinne Mucha
    Grace Cummings

    Gaining an ally

Ricci-Tam is now a postdoctoral researcher at the University of Maryland working on upgrades to the Hadronic Calorimeter, the part of the CMS detector that measures the energy and direction of the sprays of particles produced by quarks and gluons.

    Images of CMS HCAL Forward Calorimeter (HF) – CERN Document Server

    Scientists are preparing CMS for the High-Luminosity LHC, an upgrade to the LHC that will increase the collision rate by a factor of 10 and provide scientists with the huge amount of data they need to look for and study rare subatomic processes.

    The upgrades will make the CMS detector both more robust and more sensitive to the tiny particles produced in the collisions.

Last winter, Ricci-Tam started working with University of Virginia graduate student Grace Cummings on assembling and testing new electronics for the calorimeter called ngCCMs, or next-generation Clock and Control Modules. Cummings was the resident expert on the project, and Ricci-Tam was impressed with her organization and self-assurance. The two soon became friends.

    “I’m not a very confident person, so I look to other people to learn how to be more confident,” Ricci-Tam says. “Grace is one of them.”

    Unlike Ricci-Tam, Cummings started her pursuit of experimental physics with a strong desire to work on hardware. Cummings connects it to the satisfaction she found building massive towers out of blocks and creating three-dimensional sculptures during her art classes as a kid. “I’ve always liked working with my hands,” Cummings says. “It makes me feel connected to my work.”

    She applied to colleges as a physics major and early on knew she wanted to go to graduate school. “I knew I wouldn’t be happy if I wasn’t asking questions and answering them,” she says.

    During a summer internship at the US Department of Energy’s Fermi National Accelerator Laboratory, she was introduced to particle physics hardware and how a detector actually works. “I learned what scintillators are and how wavelength shifters work,” she says. “I got really excited. I wrote about how I wanted to do hardware in my graduate school applications.”

    Working on hardware showed Cummings that part of being an experimentalist is looking to answer questions she never realized she would need to ask—including “What’s that smudge?”

    In summer 2018 Cummings was tasked with inspecting freshly arrived electronics for the CMS calorimeter at Fermilab.

    CMS calorimeter at Fermilab. https://www.fnal.gov/pub/science/experiments/energy/lhc/cms.html

    She and her colleagues found an entire shipment of circuit boards, each with a strange blotch on one side.

“It wouldn’t come off, so we thought it might be something intrinsic to the printed circuit board,” Cummings says. “These are going to be in the detector for the rest of the lifetime of CMS, so we want to make sure that everything is as perfect as it possibly can be and think about all the ways it could fail. Even if you don’t think something’s a big deal, it could become a big deal later.”

    They ran through a series of tests and inspections, and the cards all seemed to be functioning as expected. She and her colleagues were scratching their heads when one of them thought to ask how the electronics had been packaged.

    “It turns out that the Fermilab logo hadn’t been completely dry when they were packaged,” Cummings says. “Those were our white smudges: the imprint of the screen-printing ink.”

    Cummings and her colleagues laugh about the situation today, but they know the work they do has serious implications for the experiments they’re building and repairing.

    Cummings says every time she goes underground to install electronics in the four-story CMS detector, she is amazed at just how important every little piece becomes. “Working on hardware for me has been the biggest thing that shows why CMS signs all its papers as ‘CMS collaboration,’” she says. “I’m flabbergasted it works. It’s really a wonder.

    “At the same time, I know how much time, effort and love I put into my work. If everyone cares half as much as I care, we’ll be fine.”


  • richardmitnick 1:54 pm on August 29, 2019
    Tags: Symmetry Magazine

    From Symmetry: “Upcycled instrument tied to auspicious accelerator” 


    Mika McKinnon


    A composer has given new life to an amplifier used within a historically significant particle accelerator.

In May 2019, a Berlin-based, classically trained symphony composer with a fondness for converting old scientific gear into musical instruments posted to a discussion forum: “Any nuclear scientists here? 184[-inch] cyclotron question.”

In a single photo, he presented a mystery: His latest find was a ’70s-era piece of equipment labeled a “LOCK-IN AMPLIFIER,” featuring analogue displays, four dials of varying sizes, and a neat blue sticker label reading “184″ CYCLOTRON, BLDG. 80, RM. 121, EXT. 5467”. Could anyone help him find the story behind it?

    Stefan Paul Goetsch, who uses the name Hainbach when creating electronic music, posted the plea hoping the equipment might be traceable via the label. He had no idea that blue sticker would provide a clear connection to a major piece of scientific history.

A cyclotron is a particle accelerator that uses a circular magnet to accelerate charged particles in a spiral from its center. And, it turns out, there was just one 184-inch cyclotron: a masterpiece that cyclotron inventor E.O. Lawrence built in the afterglow of his 1939 Nobel Prize, awarded for his invention of the technology.

    184-inch cyclotron. Flickr

    E.O.Lawrence original 27″ cyclotron

    During World War II, research at this 184-inch cyclotron was essential to the Manhattan Project. After the war, it was converted into a synchrotron—an accelerator in which particles travel in a fixed loop instead of a spiral—that was key to early research into pions and mesons.

Goetsch picked up the unit in an eBay auction from a user called Tminus7, who frequently sells scavenged test equipment like the amplifier. Tminus7 says he picked it up at a surplus auction from the University of California, Berkeley.

    “There might be oscilloscopes, power supplies, all kinds of oddball equipment and buyers bid on the pallet,” he explains in an email. “Frankly it’s pretty neat acquiring all this equipment with stickers all over it that shows it has a real history.”


    Lock-in amplifiers are used to find signals that are buried in noise, isolating and amplifying them. After running across Goetsch’s photo of the lock-in amplifier online, nuclear engineering graduate student Kathy Shield led a brainstorming session with her peers at UC Berkeley and Lawrence Berkeley National Laboratory to try to figure out how, exactly, the equipment could have been used at the cyclotron.

    Their most plausible theory was that the unit, called a PAR 220, may have been monitoring extremely minute oscillations of the magnetic field inside magnets that accelerated charged particles. Even an early-generation lock-in amplifier may have been able to monitor those oscillations down to the nanovolt.
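The lock-in principle behind that theory can be sketched numerically: multiply the noisy input by in-phase and quadrature copies of a reference oscillation and average, and everything not synchronized with the reference frequency washes out. A minimal illustration in Python (the frequency, amplitude, and noise level here are invented for the demo, not the PAR 220’s actual specifications):

```python
import numpy as np

# Toy lock-in: recover a weak sine wave buried in noise roughly 20x its size.
rng = np.random.default_rng(0)
fs = 10_000                     # sample rate, Hz
f_ref = 137.0                   # reference frequency, Hz
t = np.arange(0, 10.0, 1 / fs)  # 10 seconds of samples

signal_amp = 0.05
noisy = signal_amp * np.sin(2 * np.pi * f_ref * t) + rng.normal(0.0, 1.0, t.size)

# Multiply by in-phase and quadrature references and average:
# anything not at f_ref averages toward zero, leaving the buried signal.
x = 2 * np.mean(noisy * np.sin(2 * np.pi * f_ref * t))
y = 2 * np.mean(noisy * np.cos(2 * np.pi * f_ref * t))
recovered_amp = np.hypot(x, y)

print(f"noise level:         {np.std(noisy):.3f}")
print(f"recovered amplitude: {recovered_amp:.3f}")  # close to 0.05
```

A hardware lock-in does the same thing with analog mixers and a low-pass filter whose time constant sets the averaging window, which is one of the four knobs on units like the PAR 220.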

    But when was this equipment used? And what was the cyclotron doing at the time? Internet detectives began to narrow down the possibilities.

    Construction of the cyclotron began in 1940, and its last run took place on December 29, 1987, so the PAR 220 must have been used sometime within that window. Reddit user /u/CakeLie42 noted that the earliest it could have been used was in 1962, when the first commercial lock-in amplifiers were manufactured, and that the latest was in 1977, when Princeton Applied Research (the “PAR” in PAR 220) was acquired by another company.

Richard Burdett, product specialist at AMETEK SIGNAL RECOVERY, the company now responsible for the amplifier’s descendants, offered further clues. He pointed out that the LED overload indicator on the PAR 220 appeared only on modules manufactured between 1973 and 1975.

    By that time, the 184-inch cyclotron had started its third career: accelerating particles for medical treatment and research. In 1973 the cyclotron was upgraded with a unit where patients could be situated for particle therapy.

    That year the cyclotron also saw upgrades to help stretch and focus the particle beam, along with the creation of additional experimental bays, which may have necessitated the installation of the then-new PAR 220. In 1974 and 1975, scientists further upgraded the cyclotron’s medical capacities to include helium ion radiography, another possible reason to install the PAR 220. Room 121 has since been renovated into a high bay, but current building occupants believe it used to be a cyclotron control room.

    The mystery of the PAR 220’s historical purpose was mostly solved. But Goetsch had new plans for the amplifier: He wanted to use it to make music.

    “Every lock-in amplifier has its own sound,” Goetsch says. “The sound of these machines was just out of this world because they were designed for the absolute maximum. They were designed to shoot rockets in the sky or to listen to particles.”

    “All the materials that are in there such as coils and vacuum tubes are carefully selected and fine-tuned. The range that these instruments offer both in the frequency that they put out and the volume is unheard of in musical equipment.”

In a video demonstration, Goetsch gives the PAR 220 a short ping as a trigger signal that sets the whole machine resonating in a hypnotic, driving beat.

    “It’s a great audio processor, because that is what it was meant to do!” he laughs. He describes the PAR 220 as “The Beast of Princeton” and “the gnarliest and most interesting sounding module I’ve found.”

    Adjusting the four knobs—meant to control the phase, sensitivity, time constant and reference signal—now allows the module to be played from a resonating bass buzz that could drive a dance floor into a frenzy up to a stuttering high-pitched chirp.

    “It will give me so much back,” he says. “It has a texture that’s absolutely unique.”

    Using scientific equipment as a musical instrument is not without its hazards. “I’ve had a few things go up in smoke,” Goetsch admits. He makes a point of outlining safety concerns when he makes videos about transforming test equipment like the amplifier into musical instruments.

    But even the failures can be fascinating, he says. “Some units break so beautifully that their swan song is just amazing because it unlocks hidden layers of distortion that sing with overtones.”

    Despite the difficulties in acquiring new equipment, figuring out how to play it, and occasionally losing it in a catastrophic failure, Goetsch says he loves this work.

    “It’s very much zeitgeist,” he says, “as it’s the repurposing of abandoned equipment that would be thrown away.”

    His PAR 220 unit was destined for the landfill until Goetsch found a new use for it. “All these machines, they get lost unless someone comes along and does something beautiful with them again.”


  • richardmitnick 1:00 pm on August 22, 2019
    Tags: Symmetry Magazine

    From Symmetry: “Holography class gives students new perspective” 


[I must say, nothing in this article tells me why this is an important subject for Symmetry.]

    Bailey Bedford

    A holography class at the Ohio State University combines art and physics to provide a more complete picture of how we understand the world around us.

    Art and science are often seen as incompatible lenses through which to view the world. Science provides one perspective, characterized by detachment and certainty, and art provides another, characterized by emotion and unpredictability, and never the twain shall meet.

    But sometimes you need more than one perspective to understand the whole picture. Harris Kagan, an Ohio State University physics professor and collaborator on the ATLAS experiment at the Large Hadron Collider at CERN, proves this in his classes about the art and science of holography.

    The word “holography” derives from two Greek words that together mean “entire picture.” A hologram is essentially a 3-D picture that is designed to provide a complete image including different perspectives and parallax—the way an object’s position appears to vary for different lines of sight.

In physics terms, each part of a hologram records an interference pattern to recreate the light that was emitted or reflected from the subject of the image. This method allows the viewer to move around and see the object from different angles, as they could if the object were on the other side of a window.
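In slightly more formal terms (a standard textbook result, not spelled out in the article): if O is the light wave arriving from the object and R is the reference beam, the plate records the intensity of their superposition:

```latex
I = |O + R|^2 = |O|^2 + |R|^2 + O R^{*} + O^{*} R
```

The cross terms O R* and O* R are what make holography different from photography: they store the relative phase of the object wave, so illuminating the developed plate with the reference beam alone reconstructs the original wavefront, complete with parallax.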

    “My philosophy is that art and science are really the same thing,” he says. “The techniques you use to create a new idea in science are very, very similar [to the ones used in art]. To create a new idea in art, you’re using different tools, maybe different fundamentals, but the goals are the same; the honesty is the same.”

    Courtesy of Harris Kagan

    Courtesy of Harris Kagan

    Marrying art and science

    Kagan has been teaching holography classes since the mid-1980s. When OSU art professor Susan Dallas-Swan saw a hologram that he had produced for display using equipment from a laboratory class he taught, she arranged for Kagan to work with an art graduate student using the medium.

    The success with the graduate student led the pair of professors to set the blueprint for the classes. Some of Kagan’s classes have been in the physics department and some in the art department, with students from a variety of backgrounds mixed together in each. Kagan teaches beginner, advanced and honors undergraduate holography courses as well as a graduate course.

    Students in the class are not required to have any background in art or physics. The classes are meant to help students explore both subjects and how they intersect with math and visual perception. They include elements usually associated with science classes, such as unsupervised time in the lab working with lasers, and elements usually associated with art classes, such as artistic critiques of the students’ work. The students perform a series of projects culminating in an original piece for an art show.

    One point the critique process drove home was that the students’ art for the class should be concept-driven, says Shreyas Muralidharan, who participated as an undergraduate majoring in electrical and computer engineering and physics. By that, Kagan meant “that you need to really be able to clearly define what you want to achieve with this piece of art,” Muralidharan says. “From a physics and more scientific background, I haven’t really been exposed to [that idea].”

    Muralidharan, now a graduate student, says that Kagan would often challenge students to simplify the language in their explanations of their pieces and processes. Asking the students to explain concepts in simple terms ensured they actually understood them—a practice that he says remains useful in giving scientific presentations.

    Muralidharan says that idea encouraged him to think outside the box in his science classes as well. “A lot of the time, you can get stuck in the method of thinking in math,” he says. “We think of integrals, numbers, probability. And you kind of step back, and you realize that maybe you don’t have a good intuition for what’s actually happening.”

    Both art students and physics students benefited from the class, Muralidharan says. “I think talking to each other across that bridge helped solidify concepts.”

    Beyond the classroom

    Kagan estimates that between 2000 and 3000 students have gone through his classes. Those students have gone on to a wide variety of careers.

    “What comes with these lessons is a perspective with which to do art or to do science—a perspective with which you understand your role in the universe,” Kagan says.

Jeff Hazelden, who took Kagan’s classes as a photography major, says the courses introduced him to characteristics of light that are still useful in his career as a photographer and art teacher. He says he also uses parts of Kagan’s structured critique format with his own students who are new to the critique process.

    Katherine Hanlon, another former photography major, now works as a medical imaging specialist. She helps identify skin diseases by taking specialized photos using lasers and 3-D modeling. Kagan’s class introduced her to important aspects of those techniques.

    “I look back and realize that a lot of what I ended up doing in my career and my skill level and knowledge level was influenced specifically by this class,” Hanlon says. “I think it was easily the most important class I ever took in any of my education.”


  • richardmitnick 11:31 am on August 20, 2019
    Tags: "With open data scientists share their work", Gran Sasso, Symmetry Magazine

    From Symmetry: “With open data, scientists share their work” 


    Meredith Fore

    Illustration by Sandbox Studio, Chicago

    There are barriers to making scientific data open, but doing so has already contributed to scientific progress.

    It could be said that astronomy, one of the oldest sciences, was one of the first fields to have open data. The open records of Chinese astronomers from 1054 A.D. allowed astronomer Carlo Otto Lampland to identify the Crab Nebula as the remnant of a supernova in 1921.

    Supernova remnant Crab nebula. NASA/ESA Hubble

In 1705 Edmond Halley used the previous observations of Johannes Kepler and Petrus Apianus—who did their work before Halley was old enough to use a telescope—to deduce the orbit of his eponymous comet.

    Comet 1P/Halley as taken March 8, 1986 by W. Liller, Easter Island, part of the International Halley Watch (IHW) Large Scale Phenomena Network.
    NASA/W. Liller

    In science, making data open means making available, free of charge, the observations or other information collected in a scientific study for the purpose of allowing other researchers to examine it for themselves, either to verify it or to conduct new analyses.

Scientists continue to use open data to make new discoveries today. In 2010, a team of scientists led by Professor Doug Finkbeiner at Harvard University found vast gamma-ray bubbles above and below the Milky Way. The accomplishment was compared to the discovery of a new continent on Earth. The scientists didn’t find the bubbles by making their own observations; they did it by analyzing publicly available data from the Fermi Gamma-ray Space Telescope.

    NASA/Fermi LAT

    NASA/Fermi Gamma Ray Space Telescope

    “Open data often can be used to answer other kinds of questions that the people who collected the data either weren’t interested in asking, or they just never thought to ask,” says Kyle Cranmer, a professor at New York University. By making scientific data available, “you’re enabling a lot of new science by the community to go forward in a more efficient and powerful way.”

    Cranmer is a member of ATLAS, one of the two general-purpose experiments that, among other things, co-discovered the Higgs boson at the Large Hadron Collider at CERN.

    CERN ATLAS Image Claudia Marcelloni

    CERN ATLAS Higgs Event

    He and other CERN researchers recently published a letter in Nature Physics titled “Open is not enough,” which shares lessons learned about providing open data in high-energy physics. The CERN Open Data Portal, which facilitates public access of datasets from CERN experiments, now contains more than two petabytes of information.

    Computing at CERN

    The fields of both particle physics and astrophysics have seen rapid developments in the use and spread of open data, says Ulisses Barres, an astrophysicist at the Brazilian Center for Research in Physics. “Astronomy is going to, in the next decade, increase the amount of data that it produces by a factor of hundreds,” he says. “As the amount of data grows, there is more pressure for increasing our capacity to convert information into knowledge.”

The Square Kilometre Array—being built in Australia and South Africa and set to turn on in the 2020s—is expected to produce about 600 terabytes of data per year.

    SKA Square Kilometer Array

    SKA South Africa

    Raw data from studies conducted during the site selection process are already available on the SKA website, with a warning that “these files are very large indeed, and before you download them you should check whether your local file system will be able to handle them.”

    Barres sees the growth in open data as an opportunity for developing nations to participate in the global science community in new ways. He and a group of fellow astrophysicists helped develop something called the Open Universe Initiative “with the objective of stimulating a dramatic increase in the availability and usability of space science data, extending the potential of scientific discovery to new participants in all parts of the world and empowering global educational services.”

    The initiative, proposed by the government of Italy, is currently in the “implementation” phase within the United Nations Office for Outer Space Affairs.

    “I think that data is this proper entry point for science development in places that don’t have much science developed yet,” Barres says. “Because it’s there, it’s available, there is much more data than we can properly analyze.”

    There are barriers to implementing open data. One is the concept of ownership—a lab might not want to release data it could still use for another project, or it might worry about proper credit and attribution. Another is the natural human fear of being accused of being wrong or of having data used irresponsibly.

    But one of the biggest barriers, according to physics professor Jesse Thaler of MIT, is making the data understandable. “From the user perspective, every single aspect of using public data is challenging,” Thaler says.

    Think of a high school student’s chemistry lab notebook. A student might mark certain measurements in her data table with a star, to remind herself that she used a different instrument to take those measurements. Or she may use acronyms to name different samples. Unless she writes these schemes down, another student wouldn’t know the star’s significance or what the samples were.
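    The fix is simply to write the scheme down in a machine-readable form. Here is a minimal sketch of such a “data dictionary” sidecar; the column names, samples, and instruments are invented to mirror the notebook analogy, not any real experiment’s schema:

```python
import json

# Invented metadata "sidecar" that records the conventions a notebook would
# otherwise keep implicit: what a starred entry means, what each sample
# acronym stands for, and which instrument took which measurement.
data_dictionary = {
    "columns": {
        "sample_id": "short acronym for the sample; expansions under 'samples'",
        "mass_g": "sample mass in grams",
        "starred": "True if taken with the backup instrument (see 'instruments')",
    },
    "samples": {"NaCl-A": "table salt, batch A", "CuSO4-B": "copper sulfate, batch B"},
    "instruments": {"default": "analytical balance #1", "backup": "analytical balance #2"},
}

# Serialized alongside the data files, so another student (or scientist)
# doesn't have to guess what a star or an acronym means.
sidecar = json.dumps(data_dictionary, indent=2)
```

    Shipping a file like this next to each dataset is a small effort compared with curating the data itself, but it is exactly the “writing the scheme down” step that outside users depend on.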

    This has been a challenge for the CERN Open Data Portal, Cranmer says. “It’s very well curated, but it’s hard to use, because the data has got a lot of structure to it. It’s very complicated. You have to put in additional effort to make it more usable.”

    And for a lot of scientists already working to manage gigantic projects, doing extra work to make their data usable to outside groups—well, “that’s just not mission critical,” he says. But Thaler adds that the CMS experiment has been very responsive to the needs of outside users.

    CERN CMS Higgs Event

    “Figuring out how to release data is challenging because you want to provide as much relevant information to outside users as possible,” Thaler says. “But it’s often not obvious, until outside users actually get their hands on the data, what information is relevant.”

    Still, there are many examples of open data benefiting astrophysics and particle physics. Members of the wider scientific community have discovered exoplanets through public data from the Kepler Space Telescope. When the Gaia spacecraft mapped the positions of 1.7 billion stars and released the measurements as open data, scientists flocked to hackathons hosted by the Flatiron Institute to interpret them and produced about 20 papers’ worth of research.

    Open data policies have allowed for more accountability. The physics community was able to thoroughly check data from the first black hole collisions detected by LIGO and question a proposed dark-matter signal from the DAMA/LIBRA experiment.

    DAMA-LIBRA at Gran Sasso

    Gran Sasso LABORATORI NAZIONALI del GRAN SASSO, located in the Abruzzo region of central Italy

    Open data has also allowed for new collaborations and has nourished existing ones. Thaler, who is a theorist, says the dialogue between experimentalists and theorists has always been strong, but “open data is an opportunity to accelerate that conversation,” he says.

    For Cari Cesarotti, a graduate student who uses CMS Open Data for research in particle physics theory at Harvard, one of the most important benefits of open data is how it maximizes the scientific value of data experimentalists have to work very hard to obtain.

    “Colliders are really expensive and quite laborious to build and test,” she says. “So the more that we can squeeze out utility using the tools that we already have—to me, that’s the right thing to do, to try to get as much mileage as we possibly can out of the data set.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.

  • richardmitnick 11:44 am on August 13, 2019 Permalink | Reply
    Tags: , DAMA/LIBRA at Gran Sasso searching for WIMPS., , , , Symmetry Magazine, Two new experiments ANAIS and COSINE-100 are looking for WIMPs.   

    From Symmetry: “Testing DAMA” 

    Symmetry Mag
    From Symmetry

    Jim Daley

    An Italian experiment has a 20-year signal of what could be dark matter—and scientists are embarking on their most promising efforts yet to confirm or refute its results.

    Illustration by Sandbox Studio, Chicago

    For more than two decades, a detector deep beneath the Apennine Mountains in Italy has observed a regularly changing signal that its operators think comes from our planet’s movements through the “halo” of dark matter suffusing the Milky Way galaxy.

    Dark matter—a substance that scientists have otherwise only indirectly detected—is thought to make up 85% of the matter in the universe. DAMA, an experiment at the Gran Sasso National Laboratory in Italy, has since 1997 reported findings consistent with its discovery. But there has not been an overwhelming consensus in the physics community that they’ve found it.

    DAMA-LIBRA at Gran Sasso

    Gran Sasso LABORATORI NAZIONALI del GRAN SASSO, located in the Abruzzo region of central Italy

    DAMA/LIBRA—the full name of the current generation of the experiment—released its most recent results in 2018. Like previous data, those findings show a signal that cycles annually, peaking around June 2.

    This so-called annual modulation is what we would expect to see happen with a dark matter signal collected on Earth. Our planet orbits the sun, which in turn moves through the Milky Way. Half of the year they’re moving in the same direction, increasing the speed with which the Earth should pass through our galaxy’s dark matter; the other half they’re moving in opposite directions, making that speed slower.

    An observatory on Earth should move faster through any dark matter halo in spring than fall, resulting in an uptick in the rate of particle detection. “The effect is like a dark matter headwind,” says Jason Kumar, a physicist at the University of Hawaii at Manoa. In the Northern Hemisphere, “you would expect it to peak in June, and then see a dip in the event rate in the winter.”
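    That expectation is easy to state quantitatively. The sketch below models the event rate as a cosine peaking on June 2 (day 153 of the year); the mean rate and the 2% modulation amplitude are illustrative stand-ins, not DAMA’s measured values:

```python
import math

def modulated_rate(day_of_year, mean_rate=1.0, mod_amplitude=0.02, peak_day=153):
    # Toy annual-modulation model: the rate peaks around June 2 (day ~153).
    # mean_rate and mod_amplitude are illustrative numbers only.
    phase = 2 * math.pi * (day_of_year - peak_day) / 365.25
    return mean_rate * (1 + mod_amplitude * math.cos(phase))

june_rate = modulated_rate(153)      # expected maximum
december_rate = modulated_rate(336)  # roughly half a year later, near the minimum
```

    Fitting the phase and amplitude of exactly this kind of cosine to years of detector counts is how DAMA extracts its signal.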

    Scientists don’t know exactly what dark matter is. One candidate DAMA is searching for: WIMPs, or weakly interacting massive particles. Sodium iodide crystals in DAMA’s detectors emit bursts of radiation whenever a particle (possibly a WIMP) collides with the crystals’ atomic nuclei. DAMA’s signal shows these bursts occurring in the annual cycle physicists would expect to see.

    Rita Bernabei, a physicist at the University of Rome Tor Vergata and the longtime leader of DAMA, says the experiment’s signal indicates the presence of dark matter particles in the galactic halo. “There are no alternative explanations for the observed signal,” Bernabei says.

    There is still some room for doubt, though: Unaccounted-for seasonal variations in environmental conditions at Gran Sasso could be causing the modulation, for example. Or it could be “systematics,” a catch-all shorthand physicists use to refer to errors in equipment calibration or experimental design. Adding to the skepticism is the fact that other dark matter detectors have yet to confirm what DAMA is seeing. But that could soon change.

    Two new experiments, ANAIS and COSINE-100, are also looking for WIMPs in the galactic halo—and unlike other experiments that have attempted to verify DAMA’s signal, they’re using the same detection material as DAMA: sodium iodide.


    COSINE-100 at Yangyang underground laboratory in South Korea

    ANAIS, a Spanish detector that is effectively a smaller version of DAMA, began collecting data in 2017. In the next few years the project should start to have statistically significant results, says ANAIS spokesperson Marisa Sarsa, a physicist at the University of Zaragoza, Spain. “If we find a similar signal with annual modulation, it will really be quite an impact,” she says.

    If ANAIS finds no signal, Sarsa says the challenge will then be to determine what is actually causing the annual modulation observed by DAMA.

    COSINE-100 has been taking data since 2016. Located in South Korea, that experiment’s sodium iodide crystals are submerged in 2000 liters of liquid to help reduce background noise that can complicate analysis. The experiment will have enough data to search for modulation by about 2021.

    In July 2019, Reina Maruyama, a professor of physics at Yale University, was awarded a National Science Foundation grant to test DAMA’s results with the COSINE experiment. “With five years of running the experiment, if the signal is there, we should be able to see it,” Maruyama says. “If it’s not there—and this is a little harder to do, but—we’ll be able to refute it.”

    If COSINE or ANAIS does see a signal that appears to confirm DAMA’s results, the next step would be to confirm the finding with a detector in the Southern Hemisphere. Construction is slated to begin this fall on an experiment named SABRE, a dark matter detector that will be located in a former gold mine in western Australia.

    A confirmation that DAMA is indeed seeing dark matter would open another gold mine of sorts. Katherine Freese, a theoretical astrophysicist at the University of Texas, first proposed the technique of searching for an annual modulation of the dark matter signal in the galactic halo in 1986. Freese says that if the DAMA signal is confirmed, physicists would then have to start exploring the properties of the particles making the signal. “You would try to figure out what is the mass of the particle, what is the scattering strength of the particle, and what is the interaction [with the detector], in detail,” she says. “And then we’ll keep going with other experiments until we figure out exactly what its details are.”

    Meanwhile, DAMA/LIBRA is still taking data. Researchers at Gran Sasso are developing a new phase of the experiment that will increase the instrument’s sensitivity and decrease its energy threshold. Their goals, Bernabei says, are to improve the precision on the dark matter annual modulation parameters, potentially disentangle various scenarios that could explain the mysterious signal, and possibly explore a joint detection of its annual modulation.

    Whatever ANAIS and COSINE-100 do ultimately find, pinning down the source of the mysterious annually cycling signal will be a major step forward in the search for dark matter in the Milky Way.

    “If we don’t see anything, then I think the community can really move on and focus on clearing up the dark matter landscape,” Maruyama says. “If we see a signal?” She pauses, considering the consequence of confirming DAMA after all these years. “I think we’d really open up the field. And that’s really exciting.”

    See the full article here.



  • richardmitnick 12:37 pm on August 1, 2019 Permalink | Reply
    Tags: "Powered by pixels", , ArgonCube, , , , , , Liquid-argon neutrino detectors, Symmetry Magazine, University of Bern in Switzerland   

    From FNAL via Symmetry: “Powered by pixels” 

    FNAL Art Image
    FNAL Art Image by Angela Gonzales

    From Fermi National Accelerator Lab, an enduring source of strength for the US contribution to scientific research worldwide.


    Symmetry Mag

    Lauren Biron

    An innovative use of pixel technology is making liquid-argon neutrino detectors even better.

    Dan Dwyer and Sam Kohn

    It’s 2019. We want our cell phones fast, our computers faster and screens so crisp they rival a morning in the mountains. We’re a digital society, and blurry photos from potato-cameras won’t cut it for the masses. Physicists, it turns out, aren’t any different — and they want that same sharp snap from their neutrino detectors.

    Cue ArgonCube: a prototype detector under development that’s taking a still-burgeoning technology to new heights with a plan to capture particle tracks worthy of that 4K TV. The secret at its heart? It’s all about the pixels.

    But let’s take two steps back. Argon is an element that makes up about 1 percent of that sweet air you’re breathing. Over the past several decades, the liquid form of argon has grown into the medium of choice for neutrino detectors. Neutrinos are those pesky fundamental particles that rarely interact with anything but could be the key to understanding why there’s so much matter in the universe.

    Big detectors full of cold, dense argon provide lots of atomic nuclei for neutrinos to bump into and interact with — especially when accelerator operators are sending beams containing trillions of the little things. When the neutrinos interact, they create showers of other particles and light that the electronics in the detector capture and transform into images.

    Each image is a snapshot that captures an interaction by one of the most mysterious, flighty, elusive particles out there; a particle that caused Wolfgang Pauli, upon proposing it in 1930, to lament that he thought experimenters would never be able to detect it.

    Scientists are testing the ArgonCube technology in a prototype constructed at the University of Bern in Switzerland. James Sinclair

    Current state-of-the-art liquid-argon neutrino detectors — big players like MicroBooNE, ICARUS and ProtoDUNE — use wires to capture the electrons knocked loose by neutrino interactions.



    CERN ProtoDUNE

    Vast planes of thousands of wires crisscross the detectors, each set collecting coordinates that are combined by algorithms into 3-D reconstructions of a neutrino’s interaction.
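    As a toy illustration of that reconstruction, suppose two wire planes inclined at ±60 degrees each measure one projected coordinate of a hit, while the drift time fixes the third axis. The plane angle and drift speed below are illustrative numbers, not any experiment’s actual parameters:

```python
import math

def reconstruct_point(drift_time_us, u_cm, v_cm,
                      drift_speed_cm_per_us=0.16, plane_angle_deg=60.0):
    # Toy 3-D reconstruction from two inclined wire planes plus drift time.
    # u_cm and v_cm are the hit coordinates each plane measures along its
    # wire pitch; the angle and drift speed are illustrative values.
    x = drift_time_us * drift_speed_cm_per_us   # drift time fixes the depth
    a = math.radians(plane_angle_deg)
    # Invert the two projections u = y*cos(a) + z*sin(a), v = y*cos(a) - z*sin(a):
    y = (u_cm + v_cm) / (2 * math.cos(a))
    z = (u_cm - v_cm) / (2 * math.sin(a))
    return x, y, z
```

    Real reconstruction algorithms do this for thousands of hits at once, matching signals across planes before solving for each 3-D point.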

    These setups are effective, well-understood and a great choice for big projects — and you don’t get much bigger than the international Deep Underground Neutrino Experiment hosted by Fermilab.

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA

    SURF-Sanford Underground Research Facility, Lead, South Dakota, USA

    DUNE will examine how the three known types of neutrinos change as they travel long distances, further exploring a phenomenon called neutrino oscillations. Scientists will send trillions of neutrinos from Fermilab every second on a 1,300-kilometer journey through the earth — no tunnel needed — to South Dakota. DUNE will use wire chambers in some of the four enormous far detector modules, each one holding more than 17,000 tons of liquid argon.

    But scientists also need to measure the beam of neutrinos as it leaves Fermilab, where the DUNE near detector will be close to the neutrino source and see more interactions.

    “We expect the beam to be so intense that you will have a dozen neutrino interactions per beam pulse, and these will all overlap within your detector,” says Dan Dwyer, a scientist at Lawrence Berkeley National Laboratory who works on ArgonCube. Trying to disentangle a huge number of events using the 2-D wire imaging is a challenge. “The near detector will be a new range of complexity.”

    And new complexity, in this case, means developing a new kind of liquid-argon detector.

    This rough diagram of an ArgonCube detector module was drawn by Knut Skarpaas. James Sinclair.

    Pixel me this

    People had thought about making a pixelated detector before, but it never got off the ground.

    “This was a dream,” says Antonio Ereditato, father of the ArgonCube collaboration and a scientist at the University of Bern in Switzerland. “We developed this original idea in Bern, and it was clear that it could fly only with the proper electronics. Without it, this would have been just wishful thinking. Our colleagues from Berkeley had just what was required.”

    Pixels are small, and neutrino detectors aren’t. You can fit roughly 100,000 pixels per square meter. Each one is a unique channel that — once it is outfitted with electronics — can provide information about what’s happening in the detector. To be sensitive enough, the tiny electronics need to sit right next to the pixels inside the liquid argon. But that poses a challenge.

    “If they used even the power from your standard electronics, your detector would just boil,” Dwyer says. And a liquid-argon detector only works when the argon remains … well, liquid.

    Dan Dwyer points out features of the pixelated electronics. Roman Berner.

    So Dwyer and ASIC engineer Carl Grace at Berkeley Lab proposed a new approach: What if they left each pixel dormant?

    “When the signal arrives at the pixel, it wakes up and says, ‘Hey, there’s a signal here,’” Dwyer explains. “Then it records the signal, sends it out and goes back to sleep. We were able to drastically reduce the amount of power.”

    At less than 100 microwatts per pixel, this solution seemed like a promising design that wouldn’t turn the detector into a tower of gas. They pulled together a custom prototype circuit and started testing. The new electronics design worked.
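    The numbers quoted in the article make it easy to see why the sleep-mode trick matters. Here is a rough power budget; the milliwatt-class figure for conventional always-on readout is an assumption added for comparison, not a number from the article:

```python
# Back-of-envelope power budget from the figures quoted in the article.
pixels_per_m2 = 100_000
power_per_pixel_w = 100e-6          # "less than 100 microwatts per pixel"
n_pixels_near_detector = 500_000    # "around half a million pixels"

total_power_w = n_pixels_near_detector * power_per_pixel_w        # ~50 W in total
power_density_w_per_m2 = pixels_per_m2 * power_per_pixel_w        # ~10 W per m^2

# Assumed milliwatt-class always-on channel, for comparison only:
conventional_total_w = n_pixels_near_detector * 10e-3             # ~5 kW — enough to boil the argon
```

    Tens of watts spread through a cryostat is manageable; kilowatts would, as Dwyer puts it, make the detector boil.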

    The first test was a mere 128 pixels, but things scaled quickly. The team started working on the pixel challenge in December 2016. By January 2018 they had traveled with their chips to Switzerland, installed them in the liquid-argon test detector built by the Bern scientists and collected their first 3-D images of cosmic rays.

    For the upcoming installation at Fermilab, collaborators will need even more electronics. The next step is to work with manufacturers in industry to commercially fabricate the chips and readout boards that will sustain around half a million pixels. And Dwyer has received a Department of Energy Early Career Award to continue his research on the pixel electronics, complementing the Swiss SNSF grant for the Bern group.

    “We’re trying to do this on a very aggressive schedule — it’s another mad dash,” Dwyer says. “We’ve put together a really great team on ArgonCube and done a great job of showing we can make this technology work for the DUNE near detector. And that’s important for the physics, at the end of the day.”

    Samuel Kohn, Gael Flores, and Dan Dwyer work on ArgonCube technology at Lawrence Berkeley National Laboratory.
    Marilyn Chung, LBNL

    More innovations ahead

    While the pixel-centered electronics of ArgonCube stand out, they aren’t the only technological innovations that scientists are planning to implement for the upcoming near detector of DUNE. There’s research and development on a new kind of light detection system and new technology to shape the electric field that draws the signal to the electronics. And, of course, there are the modules.

    Most liquid-argon detectors use a large container filled with the argon and not too much else. The signals drift long distances through the fluid to the long wires strung across one side of the detector. But ArgonCube is going for something much more modular, breaking the detector up into smaller units still contained within the surrounding cryostat. This has certain perks: The signal doesn’t have to travel as far, the argon doesn’t have to be as pure for the signal to reach its destination, and scientists could potentially retrieve and repair individual modules if required.

    “It’s a little more complicated than the typical, wire-based detector,” says Min Jeong Kim, who leads the team at Fermilab working on the cryogenics and will be involved with the mechanical integration of the ArgonCube prototype test stand. “We have to figure out how these modules will interface with the cryogenic system.”

    That means figuring out everything from filling the detector with liquid argon and maintaining the right pressure during operation to properly filtering impurities from the argon and circulating the fluid around (and through) the modules to maintain an even temperature distribution.

    Researchers assemble components in the test detector at the University of Bern.
    James Sinclair

    The ArgonCube prototype under assembly at the University of Bern will run until the end of the year before being shipped to Fermilab and installed 100 meters underground, making it the first large prototype for DUNE sent to Fermilab and tested with neutrinos. After working out its kinks, researchers can finalize the design and build the full ArgonCube detector.

    Additional instrumentation and components such as a gas-argon chamber and a beam spectrometer will round out the near detector.

    It’s an exciting time for the 100-some physicists from 23 institutions working on ArgonCube — and for the more than 1,000 neutrino physicists from over 30 countries working on DUNE. What started as wishful thinking has become a reality — and no one knows how far the pixel technology might go.

    Ereditato even dreams of replacing the design of one of the four massive DUNE far detector modules with a pixelated version. But one thing at a time, he says.

    “Right now we’re concentrating on building the best possible near detector for DUNE,” Ereditato says. “It’s been a long path, with many people involved, but the liquid-argon technology is still young. ArgonCube technology is the proof that the technique has the potential to perform even better in the future.”

    See the full article here.



    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.

  • richardmitnick 12:19 pm on June 25, 2019 Permalink | Reply
    Tags: "The future of particle accelerators may be autonomous", , Fermilab | FAST Facility, FNAL PIP-II Injector Test (PIP2IT) facility, In December 2018 operators at LCLS at SLAC successfully tested an algorithm trained on simulations and actual data from the machine to tune the beam., , Symmetry Magazine   

    From Symmetry: “The future of particle accelerators may be autonomous” 

    Symmetry Mag
    From Symmetry

    Caitlyn Buongiorno

    Illustration by Sandbox Studio, Chicago with Ana Kova

    Particle accelerators are some of the most complicated machines in science. Scientists are working on ways to run them with a diminishing amount of direction from humans.

    In 2015, operators at the Linac Coherent Light Source particle accelerator looked into how they were spending their time managing the machine.


    They tracked the hours they spent on tasks like investigating problems and orchestrating new configurations of the particle beam for different experiments.

    They discovered that, if they could automate the process of tuning the beam—tweaking the magnets that keep the LCLS particle beam on its course through the machine—it would free up a few hundred hours each year.

    Scientists have been working to automate different aspects of the operation of accelerators since the 1980s. In today’s more autonomous era of self-driving cars and vacuuming robots, efforts are still going strong, and the next generation of particle accelerators promises to be more automated than ever. Scientists are using machine learning to optimize beamlines more efficiently, detect problems more effectively and create the simulations they need in real-time.

    Quicker fixes

    With any machine, there is a chance that a part might malfunction or break. In the case of an accelerator, that part might be one of the many magnets that direct the particle beam.

    If one magnet stops working, there are ways to circumvent the problem using the magnets around it. But it’s not easy. A particle accelerator is a nonlinear system; when an operator makes a change to it, all of the possible downstream effects of that change can be difficult to predict.

    “The human brain isn’t good at that kind of optimization,” says Dan Ratner, the leader of the strategic initiative for machine learning at the US Department of Energy’s SLAC National Accelerator Laboratory in California.

    An operator can find the solution by trial and error, but that can take some time. With machine learning, an autonomous accelerator could potentially do the same task many times faster.
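    A minimal sketch of the kind of loop being automated: a derivative-free optimizer nudging two corrector-magnet currents to minimize a toy beam-offset function. The coupled, nonlinear response function is invented, and a real machine-learning tuner would be far more sample-efficient than this random search:

```python
import random

def beam_offset(magnet_currents):
    # Invented stand-in for the machine: a coupled, nonlinear response of
    # the beam offset to two corrector-magnet currents (not a real lattice).
    a, b = magnet_currents
    return (a - 1.2 + 0.2 * b ** 2) ** 2 + (b + 0.7) ** 2

def tune(objective, start, iterations=2000, step=0.1, seed=42):
    # Keep random perturbations that improve the objective — the simplest
    # form of the trial-and-error loop an ML-based optimizer would replace.
    rng = random.Random(seed)
    best = list(start)
    best_val = objective(best)
    for _ in range(iterations):
        trial = [x + rng.uniform(-step, step) for x in best]
        val = objective(trial)
        if val < best_val:
            best, best_val = trial, val
    return best, best_val

settings, residual = tune(beam_offset, [0.0, 0.0])
```

    The point of the ML work is to replace those thousands of blind trials with a model that predicts, from past data, which setting to try next.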

    In December 2018, operators at LCLS at SLAC successfully tested an algorithm trained on simulations and actual data from the machine to tune the beam.

    Ratner doesn’t expect either LCLS or its upgrade, LCLS-II, scheduled to come online in 2021, to run without human operators, but he’s hoping to give operators a new tool. “Ultimately, we’re trying to free up operators for tasks that really need a human,” he says.

    SLAC/LCLS II projected view

    Practical predictions

    At Fermi National Accelerator Laboratory in Illinois, physicist Jean-Paul Carneiro is working on an upgrade to the lab’s accelerator complex in the hopes that it will one day run with little to no human intervention.

    He was recently awarded a two-year grant for the project through the University of Chicago’s FACCTS program—France And Chicago Collaborating in The Sciences. He is integrating a code developed by scientist Didier Uriot at France’s Saclay Nuclear Research Center into the lab’s PIP-II Injector Test (PIP2IT) facility.

    FNAL PIP-II Injector Test (PIP2IT) facility

    PIP2IT is the proving ground for technologies intended for PIP-II, the upgrade to Fermilab’s accelerator complex that will supply the world’s most intense beams of neutrinos for the international Deep Underground Neutrino Experiment.

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA

    Carneiro says autonomous accelerator operation would increase the usability of the beam for experiments by drastically reducing the accelerator’s downtime. On average, accelerators can currently expect to run at about 90% availability, he says. “If you want to achieve a 98 or 99% availability, the only way to do it is with a computer code.”

    Beyond quickly fixing tuning problems, another way to increase the availability of beam is to detect potential complications before they happen.

    Even in relatively stable areas, the Earth is constantly shifting under our feet—and shifting underground particle accelerators as well. People don’t feel these movements, but an accelerator beam certainly does. Over the course of a few days, these shifts can cause the beam to begin creeping away from its intended course. An autonomous accelerator could correct the beam’s path before a human would even notice the problem.
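    The idea can be sketched as a simple feedback loop: a slow drift accumulates each day, and an automated proportional correction pulls the beam back before the offset grows noticeable. The drift rate and feedback gain below are illustrative numbers only:

```python
def run_orbit_feedback(days=10, drift_per_day_um=5.0, gain=0.5):
    # Toy model of slow ground-motion drift (microns per day) with a
    # proportional correction applied once per day. Numbers are invented.
    offset = 0.0
    history = []
    for _ in range(days):
        offset += drift_per_day_um   # the ground slowly shifts the beamline
        offset -= gain * offset      # automated correction pulls it back
        history.append(offset)
    return history

with_feedback = run_orbit_feedback()
without_feedback = run_orbit_feedback(gain=0.0)  # no correction: drift accumulates
```

    With feedback the offset settles at a small steady value; without it, the drift just keeps adding up until a human notices.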

    Lia Merminga, PIP-II project director at Fermilab, says she thinks the joint project with CEA Saclay is a fantastic opportunity for the laboratory. “Part of our laboratory’s mission is to advance the science and technology of particle accelerators. These advancements will free up accelerator physicists to focus their talent more on developing new ideas and concepts, while providing users with higher reliability and more efficient beam delivery, ultimately increasing the scientific output.”

    Speedy simulations

    Accelerator operators don’t spend all of their time troubleshooting; they also make changes to the beam to optimize it for specific experiments. Scientists can apply for time on an accelerator to conduct a study. The parameters they originally wanted sometimes change as they begin to conduct their experiment. Finding ways to automate this process would save operators and experimental physicists countless hours.

    Auralee Edelen, a research associate at SLAC, is doing just that by exploring how scientists can improve their models of different beam configurations and how to best achieve them.

    To map the many parameters of an entire beam line from start to end, scientists have thus far needed to use thousands of hours on a supercomputer—not always ideal for online adjustments or finding the best way to obtain a particular beam configuration. A machine learning model, on the other hand, could be trained to simulate what would happen if variables were changed, in under a second.
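    The essence of such a surrogate model fits in a few lines: sample the “expensive” simulation a handful of times, fit a cheap interpolant, then evaluate the interpolant instantly. A real system trains a neural network on many runs; here a quadratic fit through three points stands in for that, and the simulation itself is an invented one-parameter formula:

```python
def expensive_simulation(quad_strength):
    # Stand-in for an hours-long physics simulation: beam size (arbitrary
    # units) as a function of one quadrupole strength. Toy formula only.
    return 2.0 + (quad_strength - 1.5) ** 2

# "Train" the surrogate on three expensive samples.
xs = [0.0, 1.0, 3.0]
ys = [expensive_simulation(x) for x in xs]

def surrogate(x):
    # Quadratic Lagrange interpolant through the three sampled points;
    # evaluates in microseconds instead of re-running the simulation.
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return y0 * l0 + y1 * l1 + y2 * l2
```

    Because the toy “truth” is itself quadratic, this surrogate happens to be exact; a neural-network surrogate of a real beamline is only approximate, but the speedup is the same idea.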

    “This is one of the new capabilities of machine learning that we want to leverage,” Edelen says. “We’re just now getting to a point where we can integrate these models into the control system for operators to use.”

    In 2016 a neural network—a machine learning algorithm designed to recognize patterns—put this idea to the test at the Fermilab Accelerator Science and Technology facility [FAST].

    Fermilab | FAST Facility

    It completed what had been a 20-minute process to compare a few different simulations in under a millisecond. Edelen is expanding on her FAST research at LCLS, pushing the limits of what is currently possible.

    Simulations also come in handy when it isn’t possible for a scientist to take a measurement they want, because doing so would interfere with the beam. To get around this, scientists can use an algorithm to correlate the measurement with others that don’t affect the beam and infer what the desired measurement would have shown.
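    A linear fit is the simplest stand-in for that inference step. The sketch below “trains” on invented calibration pairs of a non-invasive reading and the beam-disturbing measurement, then predicts the latter from the former; the real work uses neural networks and many correlated inputs, not a straight line:

```python
def fit_line(xs, ys):
    # Ordinary least squares with one predictor; returns (slope, intercept).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Invented calibration data: non-invasive readings paired with the
# destructive measurement, recorded during a dedicated run.
noninvasive = [1.0, 2.0, 3.0, 4.0]
destructive = [2.1, 4.0, 6.1, 8.0]

slope, intercept = fit_line(noninvasive, destructive)

def virtual_measurement(reading):
    # Infer what the destructive measurement would have shown,
    # without ever disturbing the beam.
    return slope * reading + intercept
```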

    Initial studies at FAST demonstrated that a neural network could use this technique to predict measurements. Now, SLAC’s Facility for Advanced Accelerator and Experimental Tests, or FACET, and its successor, FACET-II, are leading SLAC’s effort to refine this technique for the scientists who use their beam line.


    FACET-II Design, Parameters and Capabilities

    “It’s an exciting time,” says Merminga. “Any one of these improvements would help advance the field of accelerator physics. I am delighted that PIP2IT is being used to test new concepts in accelerator operation.”

    Who knows—within the next few decades, autonomous accelerators may seem as mundane as roaming robotic vacuums.

    See the full article here.



  • richardmitnick 11:09 am on June 4, 2019 Permalink | Reply
    Tags: , , , , , Symmetry Magazine   

    From Symmetry: “Engineering the world’s largest digital camera” 

    Symmetry Mag
    From Symmetry

    Erika K. Carlson

    Building the Large Synoptic Survey Telescope also means solving extraordinary technological challenges.


    LSST Camera, built at SLAC

    LSST telescope, currently under construction on the El Peñón peak of Cerro Pachón, a 2,682-meter-high mountain in Chile’s Coquimbo Region, alongside the existing Gemini South and Southern Astrophysical Research telescopes.

    LSST Data Journey, Illustration by Sandbox Studio, Chicago with Ana Kova

    In a brightly lit clean room at the US Department of Energy’s SLAC National Accelerator Laboratory, engineers are building a car-sized digital camera for the Large Synoptic Survey Telescope.

    When it’s ready, LSST will image almost all of the sky visible from its vantage point on a Chilean mountain, Cerro Pachón, every few nights for a decade to make an astronomical movie of unprecedented proportions.

    The camera is a combination of many extremes. Its largest lens is one of the biggest ever created for astronomy and astrophysics. The ceramic grid that will hold its imaging sensors is so flat that no feature larger than a human red blood cell sticks up from its surface. The electronics that control the sensors are customized to fit in a very tight space and use as little power as possible.

    All of these specifications are vital for letting LSST achieve its scientific goals. And not many of them are easy to achieve. The LSST camera will do what no camera has been capable of doing before, and building it requires solving technical problems that have never been solved before.

    A game of ‘Operation’

    “When you consider a project this complex, you can’t just dive in and say ‘Here, I’m going to design and build this in one shot,’ right?” says Tim Bond, head of the LSST Camera Integration and Test team at SLAC. “You have to divide and conquer. So you break it up into smaller pieces that individual groups can work on.”

    One of those pieces is figuring out how to get the camera’s sensors into place.

    The 3.2-billion-pixel LSST camera will be the largest digital camera ever constructed. Much like handheld digital cameras, the LSST camera will be made up of imaging sensors called charge-coupled devices—189 of them. These sensors and their bundles of electronics are arranged into 21 nine-sensor pallets called “rafts.” Each one weighs more than 20 pounds and stands almost 2 feet tall.

    Each sensor is fragile enough to chip if it even touches one of the other rafts. And, to minimize gaps in the sensors’ images, all of the rafts must sit just two hundredths of an inch (about half a millimeter) apart inside the camera’s ceramic grid.
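    The quoted numbers are easy to sanity-check. A minimal sketch, assuming each CCD is a 4096-by-4096-pixel device (a standard configuration consistent with the article's totals, not a figure stated in it):

```python
# Sanity check of the LSST camera's sensor and pixel counts.
# Assumption (not stated in the article): each CCD is 4096 x 4096 pixels.
RAFTS = 21
SENSORS_PER_RAFT = 9
PIXELS_PER_SIDE = 4096

sensors = RAFTS * SENSORS_PER_RAFT           # 189 CCDs, as quoted
total_pixels = sensors * PIXELS_PER_SIDE**2  # 3,170,893,824 ~ "3.2 billion pixels"

print(sensors, total_pixels)
```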

    The LSST engineers couldn’t possibly install the delicate rafts by hand without destroying them, so they took on the challenge of creating a device that could do this very specific task in their place.

    They concocted one concept after another. Travis Lange, a SLAC mechanical engineer, created computer models of each to find a design that could both do the job and be built with the level of machining precision available.

    “One of the bigger challenges for this is just the tolerance of all the individual pieces and how it corresponds to how much motion I am allowed to use,” Lange says. If a part is the wrong size by even just the width of a human hair, it’s a problem. “If you have many of those parts that are off by that much, those errors all stack up.”
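    Lange’s point about stacking errors is the classic tolerance stack-up problem. A minimal sketch with made-up part tolerances (the article gives no actual numbers) contrasts the worst-case sum with the root-sum-square estimate used when errors are independent:

```python
import math

# Illustrative per-part tolerances (inches) for a chain of mating parts.
tolerances = [0.001, 0.002, 0.0015, 0.001, 0.002]

# Worst case: every part is off in the same direction, so tolerances add.
worst_case = sum(tolerances)

# Root-sum-square: independent random errors partially cancel on average.
rss = math.sqrt(sum(t**2 for t in tolerances))

print(round(worst_case, 4))  # 0.0075
print(round(rss, 4))         # 0.0035
```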

    One of the designs that the team drew up resembled a claw-machine game. The device would sit on a structure above the cryostat, the apparatus that keeps the camera cold. With a long arm, it would reach through to a raft waiting for installation below. Over the course of several hours, it would pull the raft up through a very precisely sized slot and into place in the grid.

    Four specialized cameras pointing at the edges of the imaging sensors would help steer the raft into place without hitting neighboring sensors, and unique imaging software would measure the gaps between rafts in real time. “It’s a crazy game of ‘Operation,’” Lange says.
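    The steering idea can be sketched as a simple feedback loop. Everything below is illustrative (the article does not describe the actual control algorithm): four simulated edge cameras report the gaps around the raft, and a proportional controller nudges the raft until opposing gaps are equal:

```python
import random

SLOT = 0.500             # illustrative slot width (inches)
CENTERED_GAP = SLOT / 2  # gap on each side when the raft is perfectly centered

def measure_gaps(x, y, noise=0.0005):
    """Four edge cameras: (left, right, bottom, top) gaps, with sensor noise."""
    return (CENTERED_GAP + x + random.gauss(0, noise),
            CENTERED_GAP - x + random.gauss(0, noise),
            CENTERED_GAP + y + random.gauss(0, noise),
            CENTERED_GAP - y + random.gauss(0, noise))

def center_raft(x, y, gain=0.4, steps=60):
    """Proportional control: step toward equal opposing gaps (the slot center)."""
    for _ in range(steps):
        left, right, bottom, top = measure_gaps(x, y)
        x -= gain * (left - right) / 2   # (left - right)/2 estimates the offset x
        y -= gain * (bottom - top) / 2
    return x, y

# Start five hundredths of an inch off-center; finish within the sensor noise.
random.seed(1)
print(center_raft(0.05, -0.03))
```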

    The team went with the claw-machine plan. In May 2018, they put it to the test with its first practice raft and a mock-up of the camera. After most of a day had passed, the raft was successfully in place.

    The installation robot has since gone through several other successful test runs. Now that they’ve worked out the kinks in the process, installing each raft takes about two hours. Engineers plan to start the real installation process this summer.

    Not your everyday refrigerator

    The electronics and sensors crammed together inside the camera heat up as electricity runs through them. But heat is the enemy of astronomical observation. A warm sensor will sabotage its own observations by behaving as if it senses light where there is none. And as anyone who has ever heard their laptop fan working overtime before the computer crashed may know, heat can also cause electronics to stop working.
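    The false light a warm sensor “sees” is dark current: thermally generated charge that mimics photons. A common rule of thumb (an illustration, not an LSST specification) is that CCD dark current roughly doubles for every ~7 °C of warming, which is why deep cooling pays off so dramatically:

```python
def relative_dark_current(temp_c, ref_temp_c=20.0, doubling_c=7.0):
    """Dark current relative to the rate at ref_temp_c.

    Rule-of-thumb model: thermally generated charge roughly doubles
    for every `doubling_c` degrees Celsius of warming.
    """
    return 2.0 ** ((temp_c - ref_temp_c) / doubling_c)

# Warming by one doubling step doubles the false signal...
print(relative_dark_current(27.0))   # 2.0
# ...while cooling far below room temperature suppresses it by orders of magnitude.
print(relative_dark_current(-80.0))
```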

    To keep the camera cold enough, the engineers needed to create a customized refrigeration system. They eventually made a system of eight refrigeration circuits—two for the electronics and six for the sensors.

    Each of these systems works similarly to a kitchen refrigerator, in which a fluid refrigerant carries heat away from the object or area it’s supposed to cool. Networks of tubes carry the refrigerant into and out of the camera.
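    Sizing such a circuit comes down to the basic vapor-compression energy balance: the heat a circuit absorbs equals the refrigerant mass flow times the enthalpy the refrigerant gains in the evaporator. A minimal sketch with made-up numbers (none of these values come from the article):

```python
def cooling_power_kw(mass_flow_kg_s, h_in_kj_kg, h_out_kj_kg):
    """Heat absorbed by the refrigerant (kW): Q = m_dot * (h_out - h_in)."""
    return mass_flow_kg_s * (h_out_kj_kg - h_in_kj_kg)

# Illustrative: 10 g/s of refrigerant gaining 150 kJ/kg while evaporating.
print(cooling_power_kw(0.010, 250.0, 400.0))  # 1.5 kW of heat removed
```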

    At first, the team used only metal tubes for this job. Metal is good at keeping moisture out, which is important because any water that gets into the tubes from the surrounding air would freeze and clog the system. At parts of the system where the camera would need to move around with the telescope as it points to different parts of the sky, the tubes were corrugated to make them into flexible metal hoses.

    But there was a problem. The refrigeration system’s compressor, a device that forces the refrigerant to dump its absorbed heat outside the camera, uses lubricating oil to work smoothly. As the refrigeration system ran, some of the oil would leave the compressor and travel through the tubes.

    This wouldn’t have been a problem if the oil had traveled at a consistent pace all the way through the circuit, back to the compressor. But that wasn’t happening; the oil was getting slowed down and sometimes trapped by the grooves in the corrugated metal hoses. The compressor was getting oil back in trickles or spurts rather than in a steady stream. This made the refrigeration system unpredictable and harder to maintain.

    So the team switched to a different kind of hose for the refrigeration system’s “joints,” says Diane Hascall, a SLAC mechanical engineer on the LSST camera team. “You can almost think of it like a garden hose. But it’s a very special garden hose that’s made to work with refrigerants.”

    The new hoses, called smooth-bore hoses, are made of layers of rubber, braid and other flexible materials, and they are smooth on the inside. The smooth hose lets oil return to the compressor more effectively, Hascall says.

    But there was a trade-off. Unlike the metal hoses, the smooth-bore hoses do let some moisture in.

    To deal with that, the team installed filter dryers that absorb moisture from the system. They are still figuring out how often the dryers need to be replaced to keep the camera in good shape.

    Building next-gen technology

    Building each component of a piece of technology as sophisticated as LSST is a challenge in itself, but the challenges don’t end there. Engineers must also design specialized equipment, software and procedures to test different pieces; put the pieces together; and determine what maintenance the technology will need to run smoothly.

    “There’s a huge number of subsystems,” Bond says. “All of those subsystems have to present their products. And all those products have to be assembled and tested and work in the final finished full product.”

    Bond says working on the project has been a great boon to the engineering team. He says figuring out all of the unexpected challenges that have come with making such an advanced piece of technology has been a great experience, and he looks forward to seeing what future projects the team will tackle together.

    “It’s like picking a bunch of players to set up a hockey team or something,” Bond says. “We’ve actually put together a very good team, and we’re just getting some of our younger people up to speed and trained for the next generation of experiments and projects that will come along.”

    See the full article here.

  • richardmitnick 12:04 pm on May 14, 2019 Permalink | Reply
    Tags: Model-dependent vs model-independent research, Symmetry Magazine

    From Symmetry: “Casting a wide net” 

    From Symmetry

    Jim Daley

    Illustration by Sandbox Studio, Chicago

    In their quest to discover physics beyond the Standard Model, physicists weigh the pros and cons of different search strategies.

    On October 30, 1975, theorists John Ellis, Mary K. Gaillard and D.V. Nanopoulos published a paper titled “A Phenomenological Profile of the Higgs Boson.” They ended their paper with a note to their fellow scientists.

    “We should perhaps finish with an apology and a caution,” it said. “We apologize to experimentalists for having no idea what is the mass of the Higgs boson… and for not being sure of its couplings to other particles, except that they are probably all very small.

    “For these reasons, we do not want to encourage big experimental searches for the Higgs boson, but we do feel that people performing experiments vulnerable to the Higgs boson should know how it may turn up.”

    What the theorists were cautioning against was a model-dependent search, a search for a particle predicted by a certain model—in this case, the Standard Model of particle physics.

    Standard Model of Particle Physics

    It shouldn’t have been too much of a worry. Around then, most particle physicists’ experiments were general searches, not based on predictions from a particular model, says Jonathan Feng, a theoretical particle physicist at the University of California, Irvine.

    Using early particle colliders, physicists smashed electrons and protons together at high energies and looked to see what came out. Samuel Ting and Burton Richter, who shared the 1976 Nobel Prize in physics for the discovery of the charm quark, for example, were not looking for the particle with any theoretical prejudice, Feng says.

    That began to change in the 1980s and ’90s. That’s when physicists began exploring elegant new theories such as supersymmetry, which could tie up many of the Standard Model’s theoretical loose ends—and which predict the existence of a whole slew of new particles for scientists to try to find.

    Of course, there was also the Higgs boson. Even though scientists didn’t have a good prediction of its mass, they had good motivations for thinking it was out there waiting to be discovered.

    And it was. Almost 40 years after the theorists’ tongue-in-cheek warning about searching for the Higgs, Ellis found himself sitting in the main auditorium at CERN next to experimentalist Fabiola Gianotti, the spokesperson of the ATLAS experiment at the Large Hadron Collider, who, along with CMS spokesperson Joseph Incandela, had just co-announced the discovery of the particle he had once so pessimistically described.

    CERN CMS Higgs Event

    CERN ATLAS Higgs Event

    Model-dependent vs model-independent

    Scientists’ searches for particles predicted by certain models continue, but in recent years, searches for new physics independent of those models have begun to enjoy a resurgence as well.

    “A model-independent search is supposed to distill the essence from a whole bunch of specific models and look for something that’s independent of the details,” Feng says. The goal is to find an interesting common feature of those models, he explains. “And then I’m going to just look for that phenomenon, irrespective of the details.”
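    A classic example of Feng’s “common feature” strategy is a bump hunt: many different models predict a resonance somewhere in a smooth spectrum, so one can scan for a localized excess without committing to any model. A toy sketch (all numbers invented; real analyses fit the background and correct for the look-elsewhere effect):

```python
import math
import random

random.seed(42)

NBINS = 60
# Smoothly falling "background" spectrum with a bump injected near bin 30.
background = [1000.0 * math.exp(-0.05 * i) for i in range(NBINS)]
signal = [80.0 * math.exp(-0.5 * ((i - 30) / 2.0) ** 2) for i in range(NBINS)]
observed = [random.gauss(b + s, math.sqrt(b + s)) for b, s in zip(background, signal)]

# Model-independent scan: local significance in every bin, no model assumed.
z = [(o - b) / math.sqrt(b) for o, b in zip(observed, background)]
peak = max(range(NBINS), key=lambda i: z[i])
print(peak)  # the injected excess stands out near bin 30
```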

    Particle physicist Sara Alderweireldt uses model-independent searches in her work on the ATLAS experiment at the Large Hadron Collider.

    CERN ATLAS Image Claudia Marcelloni CERN/ATLAS

    Alderweireldt says that while many high-energy particle physics experiments are designed to make very precise measurements of a specific aspect of the Standard Model, a model-independent search allows physicists to take a wider view and search more generally for new particles or interactions. “Instead of zooming in, we try to look in as many places as possible in a consistent way.”

    Such a search makes room for the unexpected, she says. “You’re not dependent on the prior interpretation of something you would be looking for.”

    Theorist Patrick Fox and experimentalist Anadi Canepa, both at Fermilab, collaborate on searches for new physics.

    In Canepa’s work on the CMS experiment, the other general-purpose particle detector at the LHC, many of the searches are model-independent.

    While the nature of these searches allows them to “cast a wider net,” Fox says, “they are in some sense shallower, because they don’t manage to strongly constrain any one particular model.”

    At the same time, “by combining the results from many independent searches, we are getting closer to one dedicated search,” Canepa says. “Developing both model-dependent and model-independent searches is the approach adopted by the CMS and ATLAS experiments to fully exploit the unprecedented potential of the LHC.”

    Driven by data and powered by machine learning

    Model-dependent searches focus on a single assumption or look for evidence of a specific final state following an experimental particle collision. Model-independent searches are far broader—and how broad is largely driven by the speed at which data can be processed.

    “We have better particle detectors, and more advanced algorithms and statistical tools that are enabling us to understand searches in broader terms,” Canepa says.

    One reason model-independent searches are gaining prominence is because now there is enough data to support them. Particle detectors are recording vast quantities of information, and modern computers can run simulations faster than ever before, she says. “We are able to do model-independent searches because we are able to better understand much larger amounts of data and extreme regions of parameter and phase space.”

    Machine learning is a key part of this processing power, Canepa says. “That’s really a change of paradigm, because it really made us make a major leap forward in terms of sensitivity [to new signals]. It really allows us to benefit from understanding the correlations that we didn’t capture in a more classical approach.”
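    The value of correlation-aware scoring shows up even in a classical stand-in for those methods: a Mahalanobis-distance anomaly score. In this invented two-feature example, an event looks ordinary in each feature separately but breaks the correlation between them, so only the correlation-aware score flags it (modern machine learning extends this idea to nonlinear correlations):

```python
import math
import random

random.seed(7)

RHO = 0.9  # correlation between the two (invented) event features

# Toy "events": two unit-variance features with correlation RHO.
events = []
for _ in range(1000):
    a = random.gauss(0, 1)
    events.append((a, RHO * a + math.sqrt(1 - RHO**2) * random.gauss(0, 1)))

def score(x, y, rho=RHO):
    """Squared Mahalanobis distance for unit-variance features with correlation rho."""
    return (x*x - 2*rho*x*y + y*y) / (1 - rho*rho)

# Each feature of this event is a modest 1.5 sigma, yet it defies the correlation.
anomaly = (1.5, -1.5)

typical = sum(score(x, y) for x, y in events) / len(events)
print(round(typical, 1))          # ~2.0 for ordinary events (2-D chi-squared mean)
print(round(score(*anomaly), 1))  # 45.0: the correlation-aware score flags it
```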

    These broader searches are an important part of modern particle physics research, Fox says.

    “At a very basic level, our job is to bequeath to our descendants a better understanding of nature than we got from our ancestors,” he says. “One way to do that is to produce lots of information that will stand the test of time, and one way of doing that is with model-independent searches.”

    Models go in and out of fashion, he adds. “But model-independent searches don’t feel like they will.”

    See the full article here.
