Tagged: Symmetry Magazine

  • richardmitnick 12:55 pm on December 6, 2016
    Tags: Deep learning takes on physics, Symmetry Magazine

    From Symmetry: “Deep learning takes on physics” 

    Symmetry Mag

    12/06/16
    Molly Olmstead

    Illustration by Sandbox Studio, Chicago with Ana Kova

    Can the same type of technology Facebook uses to recognize faces also recognize particles?

    When you upload a photo of one of your friends to Facebook, you set into motion a complex behind-the-scenes process. An algorithm whirs away, analyzing the pixels in the photo until it spits out your friend’s name. This same cutting-edge technique enables self-driving cars to distinguish pedestrians and other vehicles from the scenery around them.

    Can this technology also be used to tell a muon from an electron? Many physicists believe so. Researchers in the field are beginning to adapt it to analyze particle physics data.

    Proponents hope that using deep learning will save experiments time, money and manpower, freeing physicists to do other, less tedious work. Others hope it will improve the experiments’ performance, making them better able to identify particles and analyze data than any algorithm used before. And while physicists don’t expect deep learning to be a cure-all, some think it could be key to warding off an impending data-processing crisis.

    Neural networks

    Up until now, computer scientists have often coded algorithms by hand, a task that requires countless hours of work with complex computer languages. “We still do great science,” says Gabe Perdue, a scientist at Fermi National Accelerator Laboratory. “But I think we could do better science.”

    Deep learning, on the other hand, requires a different kind of human input.

    One way to conduct deep learning is to use a convolutional neural network, or CNN. CNNs are modeled after human visual perception. Humans process images using a network of neurons in the body; CNNs process images through layers of inputs called nodes. People train CNNs by feeding them pre-processed images. Using these inputs, an algorithm continuously tweaks the weight it places on each node and learns to identify patterns and points of interest. As the algorithm refines these weights, it becomes more and more accurate, often outperforming humans.

    Convolutional neural networks streamline data processing by tying multiple weights together: the same small set of weights is reused across the whole image, so fewer elements of the algorithm have to be adjusted.
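    As a rough illustration of that weight-sharing idea, here is a toy sketch in Python (the image size, kernel size and NumPy implementation are illustrative assumptions, not code from any experiment described here):

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((28, 28))   # a toy 28x28 "detector image"
    kernel = rng.random((3, 3))    # one shared 3x3 set of weights

    def convolve2d(img, k):
        # Slide the same small kernel over the image (valid convolution, stride 1).
        kh, kw = k.shape
        out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
        out = np.zeros((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
        return out

    feature_map = convolve2d(image, kernel)

    # The convolutional layer reuses the same 9 weights at every position.
    # A fully connected layer producing an output of the same size would need
    # a separate weight for every (input pixel, output node) pair.
    print("shared weights:", kernel.size)                   # 9
    print("dense weights:", image.size * feature_map.size)  # 784 * 676

    During training, those few shared weights are what the algorithm continuously tweaks, which is why tying them together leaves far fewer elements to adjust.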

    CNNs have been around since the late ’90s. But in recent years, breakthroughs have led to more affordable hardware for processing graphics, bigger data sets for training and innovations in the design of the CNNs themselves. As a result, more and more researchers are starting to use them.

    The development of CNNs has led to advances in speech recognition and translation, as well as in other tasks traditionally completed by humans. A London-based company owned by Google used a CNN to create AlphaGo, a computer program that in March beat the second-ranked international player of Go, a strategy board game far more complex than chess.

    CNNs have made it much more feasible to handle amounts of image-based data that were previously prohibitively large—the kind of volumes often seen in high-energy physics.

    Reaching the field of physics

    CNNs became practical around the year 2006 with the emergence of big data and graphics processing units, which have the necessary computing power to process large amounts of information. “There was a big jump in accuracy, and people have been innovating like wild on top of that ever since,” Perdue says.

    Around a year ago, researchers at various high-energy experiments began to consider the possibility of applying CNNs to their experiments. “We’ve turned a physics problem into, ‘Can we tell a car from a bicycle?’” says SLAC National Accelerator Laboratory researcher Michael Kagan. “We’re just figuring out how to recast problems in the right way.”

    For the most part, CNNs will be used for particle identification and classification and particle-track reconstruction. A couple of experiments are already using CNNs to analyze particle interactions, with high levels of accuracy. Researchers at the NOvA neutrino experiment, for example, have applied a CNN to their data.

    FNAL/NOvA experiment

    “This thing was really designed for identifying pictures of dogs and cats and people, but it’s also pretty good at identifying these physics events,” says Fermilab scientist Alex Himmel. “The performance was very good—equivalent to 30 percent more data in our detector.”

    Scientists on experiments at the Large Hadron Collider hope to use deep learning to make their experiments more autonomous, says CERN physicist Maurizio Pierini.

    CERN/LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    “We’re trying to replace humans on a few tasks. It’s much more costly to have a person watching things than a computer.”

    CNNs promise to be useful outside of detector physics as well. On the astrophysics side, some scientists are working on developing CNNs that can discover new gravitational lenses, massive celestial objects such as galaxy clusters that can distort light from distant galaxies behind them. The process of scanning the telescope data for signs of lenses is highly time-consuming, and normal pattern-recognizing programs have a hard time distinguishing their features.

    “It’s fair to say we’ve only begun to scratch the surface when it comes to using these tools,” says Alex Radovic, a postdoctoral fellow at The College of William & Mary who works on the NOvA experiment at Fermilab.

    Illustration by Sandbox Studio, Chicago with Ana Kova

    The upcoming data flood

    Some believe neural networks could help avert what they see as an upcoming data processing crisis.

    An upgraded version of the Large Hadron Collider planned for 2025 will produce roughly 10 times as much data as the current machine.

    CERN HL-LHC bloc

    The Dark Energy Spectroscopic Instrument will collect data from about 35 million cosmic objects, and the Large Synoptic Survey Telescope will capture high-resolution video of nearly 40 billion galaxies.

    LBNL/DESI spectroscopic instrument on the Mayall 4-meter telescope at Kitt Peak National Observatory starting in 2018

    LSST
    LSST/Camera, built at SLAC
    LSST Interior
    LSST telescope, currently under construction at Cerro Pachón Chile

    Data streams promise to grow, but previously exponential growth in the power of computer chips is predicted to falter. That means greater amounts of data will become increasingly expensive to process.

    “You may need 100 times more capability for 10 times more collisions,” Pierini says. “We are going toward a dead end for the traditional way of doing things.”

    Not all experiments are equally fit for the technology, however.

    “I think this’ll be the right tool sometimes, but it won’t be all the time,” Himmel says. “The more dissimilar your data is from natural images, the less useful the networks are going to be.”

    Most physicists would agree that CNNs are not appropriate for data analysis at experiments that are just starting up, for example—neural networks are not very transparent about how they do their calculations. “It would be hard to convince people that they have discovered things,” Pierini says. “I still think there’s value to doing things with paper and pen.”

    In some cases, the challenges of running a CNN will outweigh the benefits. For one, the data need to be converted to image form if they aren’t already. And the networks require huge amounts of data for the training—sometimes millions of images taken from simulations. Even then, simulations aren’t as good as real data. So the networks have to be tested with real data and other cross-checks.

    “There’s a high standard for physicists to accept anything new,” says Amir Farbin, an associate professor of physics at The University of Texas, Arlington. “There’s a lot of hoops to jump through to convince everybody this is right.”

    Looking to the future

    For those who are already convinced, CNNs spawn big dreams for faster physics and the possibility of something unexpected.

    Some look forward to using neural networks for detecting anomalies in the data—which could indicate a flaw in a detector or possibly a hint of a new discovery. Rather than trying to find specific signs of something new, researchers looking for new discoveries could simply direct a CNN to work through the data and try to find what stands out. “You don’t have to specify which new physics you’re searching for,” Pierini says. “It’s a much more open-minded way of taking data.”

    Someday, researchers might even begin to tackle physics data with unsupervised learning. In unsupervised learning, as the name suggests, an algorithm would train with vast amounts of data without human guidance. Scientists would give the algorithms data, and the algorithms would figure out for themselves what conclusions to draw.

    “If you had something smart enough, you could use it to do all types of things,” Perdue says. “If it could infer a new law of nature or something, that would be amazing.”

    “But,” he adds, “I would also have to go look for new employment.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 2:08 pm on December 2, 2016
    Tags: Symmetry Magazine

    From Symmetry: “Viewing our turbulent universe” 

    Symmetry Mag

    12/02/16
    Liz Kruesi

    Construction has begun for the Cherenkov Telescope Array [CTA], a discovery machine that will study the highest energy objects and events across the entire sky.

    Daniel Mazin, CTA Observatory

    Billions of light-years away, a supermassive black hole is spewing high-energy radiation, launching it far outside of the confines of its galaxy. Some of the gamma rays released by that turbulent neighborhood travel unimpeded across the universe, untouched by the magnetic fields threading the cosmos, toward our small, rocky, blue planet.

    We have space-based devices, such as the Fermi Gamma-ray Space Telescope, that can detect those messengers, allowing us to see into the black hole’s extreme environment or search for evidence of dark matter.

    NASA/Fermi Telescope

    But Earth’s atmosphere blocks gamma rays. When they meet the atmosphere, sequences of interactions with gas molecules break them into a shower of fast-moving secondary particles. Some of those generated particles—which could be, for example, fast-moving electrons and their antiparticles, positrons—speed through the atmosphere so quickly that they generate a faint flash of blue light, called Cherenkov radiation.

    A special type of telescope—large mirrors fitted with small reflective cones to funnel the faint light—can detect this blue flash in the atmosphere. Three observatories equipped with Cherenkov telescopes look at the sky during moonless hours of the night: VERITAS in Arizona has an array of four; MAGIC in La Palma, Spain, has two; and HESS in Namibia, Africa, has an array of five.

    CfA/VERITAS, AZ, USA

    MAGIC Cherenkov gamma ray telescope on the Canary island of La Palma, Spain

    HESS Cherenkov Array, located on the Cranz family farm, Göllschau, in Namibia, near the Gamsberg

    All three observatories have operated for at least 10 years, revealing a gamma-ray sky to astrophysicists.

    “Those telescopes really have helped to open the window, if you like, on this particular region of the electromagnetic spectrum,” says Paula Chadwick, a gamma-ray astronomer at Durham University in the United Kingdom. But that new window has also hinted at how much more there is to learn.

    “It became pretty clear that what we needed was a much bigger instrument to give us much better sensitivity,” she says. And so gamma-ray scientists have been working since 2005 to develop the next-generation Cherenkov observatory: “a discovery machine,” as Stefan Funk of Germany’s Erlangen Centre for Astroparticle Physics calls it, that will reveal the highest energy objects and events across the entire sky. This is the Cherenkov Telescope Array (CTA), and construction has begun.

    Ironing out the details

    As of now, nearly 1400 researchers and engineers from 32 countries are members of the CTA collaboration, and membership continues to grow. “If we look at the number of CTA members as a function of time, it’s essentially a linear increase,” says CTA spokesperson Werner Hofmann.

    Technology is being developed in laboratories spread across the globe: in Germany, Italy, the United Kingdom, Japan, the United States (supported by the NSF; given CTA’s primarily astrophysical science mission, it is not part of the Department of Energy’s High Energy Physics program) and elsewhere. Those nearly 1400 researchers are working together to gain a better understanding of how our universe works. “It’s the science that’s got everybody together, got everybody excited, and devoting so much of their time and energy to this,” Chadwick says.

    G. Pérez, IAC, SMM

    The CTA will be split between two locations, with one array in the Northern Hemisphere and a larger one in the Southern Hemisphere. The dual location enables a view of the entire sky.

    CTA’s northern site will host four large telescopes (23 meters wide) and 15 medium telescopes (12 meters wide). The southern site will also host four large telescopes, plus 25 medium and 70 small telescopes (4 meters) that will use three different designs. The small telescopes are equipped to capture the highest energy gamma rays, which emanate, for example, from the center of our galaxy. That high-energy source is visible only from the Southern Hemisphere.
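    Tallying those numbers (all taken from the paragraph above): 4 + 15 = 19 telescopes in the north and 4 + 25 + 70 = 99 in the south, for 19 + 99 = 118 telescopes in total, matching the 19, 99 and 118 figures quoted later in this article.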

    In July 2015, the CTA Observatory (CTAO) council—the official governing body that acts on behalf of the observatory—chose their top locations in each hemisphere. And in 2016, the council has worked to make those preferences official. On September 19 the council and the Instituto de Astrofísica de Canarias signed an agreement stating that the Roque de los Muchachos Observatory on the Canary Island of La Palma would host the northern array and its 19 constituent telescopes. This same site hosts the current-generation Cherenkov array MAGIC.

    IAC

    Construction of the foundation is progressing at the La Palma site to prepare for a prototype of the large telescope. The telescope itself is expected to be complete in late 2017.

    “It’s an incredibly aggressive schedule,” Hofmann says. “With a bit of luck we’ll have the first of these big telescopes operational at La Palma a year from now.”

    While the large telescope prototype is being built on the La Palma site, the medium and small prototype telescopes are being built in laboratories across the globe and installed at observatories similarly scattered. The prototypes’ optical designs and camera technologies need to be tested in a variety of environments. For example, the team working on one of the small telescope designs has a prototype on the slope of Mount Etna in Sicily. There, volcanic ash sometimes batters the mirrors and attached camera, providing a test to ensure CTA telescopes and instruments can withstand the environment. Unlike optical telescopes, which sit in protective domes, Cherenkov telescopes are exposed to the open air.

    The CTAO council expects to complete negotiations with the European Southern Observatory before the end of 2016 to finalize plans for the southern array. The current plan is to build 99 telescopes in Chile.

    ESO Bloc Icon

    This year, the council also chose the location of the CTA Science Management Center, which will be the central point of data processing, software updates and science coordination. This building, which will be located at Deutsches Elektronen-Synchrotron (also known as DESY) outside of Berlin, has not yet been built, but Hofmann says that should happen in 2018.

    DESY

    The observatory is on track for the first trial observations (essentially, testing) in 2021 and the first regular observations beginning in 2022. How close the project’s construction stays to this outlined schedule depends on funding from nations across the globe. But if the finances remain on track, then in 2024, the full observatory should be complete, and its 118 telescopes will then look for bright flashes of Cherenkov light signaling a violent event or object in the universe.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 1:13 pm on November 21, 2016
    Tags: Symmetry Magazine, What to do with the data that is here and what is coming.   

    From Symmetry: “What to do with the data?” 

    Symmetry Mag

    11/15/16
    Manuel Gnida

    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    Physicists and scientific computing experts prepare for an onslaught of petabytes.

    Rapid advances in computing constantly translate into new technologies in our everyday lives. The same is true for high-energy physics. The field has always been an early adopter of new technologies, applying them in ever more complex experiments that study fine details of nature’s most fundamental processes. However, these sophisticated experiments produce floods of complex data that become increasingly challenging to handle and analyze.

    Researchers estimate that a decade from now, computing resources may have a hard time keeping up with the slew of data produced by state-of-the-art discovery machines. CERN’s Large Hadron Collider, for example, already generates tens of petabytes (a petabyte is a million gigabytes) of data per year today, and it will produce ten times more after a future high-luminosity upgrade.

    CERN HL-LHC bloc
    HL-LHC

    CERN/LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    Big data challenges like these are not limited to high-energy physics. When the Large Synoptic Survey Telescope begins observing the entire southern sky in never-before-seen detail, it will create a stream of 10 million time-dependent events every night and a catalog of 37 billion astronomical objects over 10 years.

    LSST
    LSST/Camera, built at SLAC
    LSST Interior
    LSST telescope, currently under construction at Cerro Pachón Chile

    Another example is the future LCLS-II X-ray laser at the Department of Energy’s SLAC National Accelerator Laboratory, which will fire up to a million X-ray pulses per second at materials to provide unprecedented views of atoms in motion. It will also generate tons of scientific data.

    SLAC/LCLS II schematic
    SLAC/LCLS II

    To make things more challenging, all big data applications will have to compete for available computing resources, for example when shuttling information around the globe via shared networks.

    What are the tools researchers will need to handle future data piles, sift through them and identify interesting science? How will they be able to do it as fast as possible? How will they move and store tremendous data volumes efficiently and reliably? And how can they possibly accomplish all of this while facing budgets that are expected to stay flat?

    “Clearly, we’re at a point where we need to discuss in what direction scientific computing should be going in order to address increasing computational demands and expected shortfalls,” says Richard Mount, head of computing for SLAC’s Elementary Particle Physics Division.

    Mount co-chaired the 22nd International Conference on Computing in High-Energy and Nuclear Physics (CHEP 2016), held Oct. 10-14 in San Francisco, where more than 500 physicists and computing experts brainstormed possible solutions.

    Here are some of their ideas.

    Exascale supercomputers

    Scientific computing has greatly benefited from what is known as Moore’s law—the observation that the performance of computer chips has doubled every 18 months or so for the past decades. This trend has allowed scientists to handle data from increasingly sophisticated machines and perform ever more complex calculations in reasonable amounts of time.

    Moore’s law, based on the fact that hardware engineers were able to squeeze more and more transistors into computer chips, has recently reached its limits because transistor densities have begun to cause problems with heat.

    Instead, modern hardware architectures involve multiple processor cores that run in parallel to speed up performance. Today’s fastest supercomputers, which are used for demanding calculations such as climate modeling and cosmological simulations, have millions of cores and can perform tens of millions of billions of computing operations per second.

    “In the US, we have a presidential mandate to further push the limits of this technology,” says Debbie Bard, a big-data architect at the National Energy Research Scientific Computing Center. “The goal is to develop computing systems within the next 10 years that will allow calculations on the exascale, corresponding to at least a billion billion operations per second.”
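    As a rough translation of those figures into powers of ten (an order-of-magnitude sketch, not an official target): exascale means at least 10^9 × 10^9 = 10^18 operations per second, while the “tens of millions of billions” per second of today’s fastest machines is roughly 10^16 to 10^17, so the goal amounts to roughly a ten- to hundred-fold jump.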

    Software reengineering

    Running more data analyses on supercomputers could help address some of the foreseeable computing shortfalls in high-energy physics, but the approach comes with its very own challenges.

    “Existing analysis codes have to be reengineered,” Bard says. “This is a monumental task, considering that many have been developed over several decades.”

    Maria Girone, chief technology officer at CERN openlab, a collaboration of public and private partners developing IT solutions for the global LHC community and other scientific research, says, “Computer chip manufacturers keep telling us that our software only uses a small percentage of today’s processor capabilities. To catch up with the technology, we need to rewrite software in a way that it can be adapted to future hardware developments.”

    Part of this effort will be educating members of the high-energy physics community to write more efficient software.

    “This was much easier in the past when the hardware was less complicated,” says Makoto Asai, who leads SLAC’s team for the development of Geant4, a widely used simulation toolkit for high-energy physics and many other applications. “We must learn the new architectures and make them more understandable for physicists, who will have to write software for our experiments.”

    Smarter networks and cloud computing

    Today, LHC computing is accomplished with the Worldwide LHC Computing Grid, or WLCG, a network of more than 170 linked computer centers in 42 countries that provides the necessary resources to store, distribute and analyze the tens of petabytes of data produced by LHC experiments annually.

    “The WLCG is working very successfully, but it doesn’t always operate in the most cost-efficient way,” says Ian Fisk, deputy director for computing at the Simons Foundation and former computing coordinator of the CMS experiment at the LHC.

    “We need to move large amounts of data and store many copies so that they can be analyzed in various locations. In fact, two-thirds of the computing-related costs are due to storage, and we need to ask ourselves if computing can evolve so that we don’t have to distribute LHC data so widely.”

    More use of cloud services that offer internet-based, on-demand computing could be a viable solution for remote data processing and analysis without reproducing data.

    Commercial clouds have the capacity and capability to take on big data: Google receives billions of photos per day and hundreds of hours of video every minute, posing technical challenges that have led to the development of powerful computing, storage and networking solutions.

    Deep machine learning for data analysis

    While conventional computer algorithms perform only operations that they are explicitly programmed to perform, machine learning uses algorithms that learn from the data and successively become better at analyzing them.

    In the case of deep learning, data are processed in several computational layers that form a network of algorithms inspired by neural networks. Deep learning methods are particularly good at finding patterns in data. Search engines, text and speech recognition, and computer vision are all examples.
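    To make “layers” concrete, here is a minimal sketch of data flowing through two stacked layers (illustrative only; the sizes, random weights and NumPy implementation are assumptions, not any experiment’s analysis code):

    import numpy as np

    rng = np.random.default_rng(1)

    def layer(x, w, b):
        # One layer: a weighted sum of the inputs followed by a simple nonlinearity (ReLU).
        return np.maximum(0.0, x @ w + b)

    x = rng.random((5, 16))                  # 5 toy "events", 16 input features each
    w1, b1 = rng.standard_normal((16, 8)), np.zeros(8)
    w2, b2 = rng.standard_normal((8, 2)), np.zeros(2)

    hidden = layer(x, w1, b1)                # first layer picks out low-level patterns
    scores = hidden @ w2 + b2                # second layer combines them into 2 class scores
    print(scores.shape)                      # (5, 2): e.g. "signal" vs. "background"

    Training consists of adjusting the weights w1 and w2 so that the output scores match labeled examples, which is the learning-from-data step described above.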

    “There are many areas where we can learn from technology developments outside the high-energy physics realm,” says Craig Tull, who co-chaired CHEP 2016 and is head of the Science Software Systems Group at Lawrence Berkeley National Laboratory. “Machine learning is a very good example. It could help us find interesting patterns in our data and detect anomalies that could potentially hint at new science.”

    At present, machine learning in high-energy physics is in its infancy, but researchers have begun implementing it in the analysis of data from a number of experiments, including ATLAS at the LHC, the Daya Bay neutrino experiment in China and multiple experiments at Fermi National Accelerator Laboratory near Chicago.

    CERN/ATLAS detector

    Daya Bay, approximately 52 kilometers northeast of Hong Kong and 45 kilometers east of Shenzhen, China

    FNAL LBNF/DUNE from FNAL to SURF

    Quantum computing

    The most futuristic approach to scientific computing is quantum computing, an idea that goes back to the 1980s when it was first brought up by Richard Feynman and other researchers.

    Unlike conventional computers, which encode information as a series of bits that can have only one of two values, quantum computers use a series of quantum bits, or qubits, that can exist in several states at once. Because the number of available states grows exponentially with the number of qubits, this can enormously increase computing power for certain problems.

    A simple one-qubit system could be an atom that can be in its ground state, excited state or a superposition of both, all at the same time.

    “A quantum computer with 300 qubits will have more states than there are atoms in the universe,” said Professor John Martinis from the University of California, Santa Barbara, during his presentation at CHEP 2016. “We’re at a point where these qubit systems work quite well and can perform simple calculations.”
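    A quick check of that 300-qubit statement, using the commonly quoted estimate of roughly 10^80 atoms in the observable universe (the atom count is an added assumption, not from the article):

    2^{300} = (2^{10})^{30} \approx (10^{3})^{30} = 10^{90} \gg 10^{80}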

    Martinis has teamed up with Google to build a quantum computer. In a year or so, he says, they will have built the first 50-qubit system. Then, it will take days or weeks for the largest supercomputers to validate the calculations done within a second on the quantum computer.

    We might soon find out in what directions scientific computing in high-energy physics will develop: The community will give the next update at CHEP 2018 in Bulgaria.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 9:22 am on November 21, 2016
    Tags: Symmetry Magazine

    From Symmetry: “Q and A: What more can we learn about the Higgs?” 

    Symmetry Mag

    11/17/16
    Angela Anderson

    Illustration by Sandbox Studio, Chicago with Ana Kova

    Four physicists discuss Higgs boson research since the discovery.

    CERN CMS Higgs Event

    CERN ATLAS Higgs Event

    More than two decades before the discovery of the Higgs boson, four theoretical physicists wrote a comprehensive handbook called The Higgs Hunter’s Guide. The authors—Sally Dawson of the Department of Energy’s Brookhaven National Laboratory; John F. Gunion from the University of California, Davis; Howard E. Haber from the University of California, Santa Cruz; and Gordon Kane from the University of Michigan—were recently recognized for “instrumental contributions to the theory of the properties, reactions and signatures of the Higgs boson” as recipients of the American Physical Society’s 2017 J.J. Sakurai Prize for Theoretical Physics.

    They are still investigating the particle that completed the Standard Model, and some are hunting different Higgs bosons that could take particle physics beyond that model.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    Dawson, Gunion and Haber recently attended the Higgs Couplings 2016 workshop at SLAC National Accelerator Laboratory, where physicists gathered to talk about the present and future of Higgs research. Symmetry interviewed all four to find out what’s on the horizon.

    S: What is meant by “Higgs couplings”?
    JG: The Higgs is an unstable particle that lasts a very short time in the detector before it decays into pairs of things like top quarks, gluons, and photons. The rates and relative importance of these decays are determined by the couplings of the Higgs boson to these different particles. And that’s what the workshop is all about, trying to determine whether or not the couplings predicted in the Standard Model agree with the couplings that are measured experimentally.

    SD: Right, we can absolutely say how much of the time we expect the Higgs to decay to the known particles, so a comparison of our predictions with the experimental measurements tells us whether there’s any possible deviation from our Standard Model.

    JG: For us what would be really exciting is if we did see deviations. However, that probably requires more precision than we currently have experimentally.

    GK: But we don’t all agree on that, in the sense that I would prefer that it almost exactly agree with the Standard Model predictions because of a theory that I like that says it should. But most of the people in the world would prefer what John and Sally said.
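    For readers who want the relation behind this exchange spelled out: in generic textbook terms (a sketch, not a formula specific to any analysis discussed here), each partial decay width scales with the square of the corresponding coupling, and a branching ratio is a partial width divided by the total width,

    \Gamma(H \to X) \propto g_X^{2}, \qquad \mathrm{BR}(H \to X) = \frac{\Gamma(H \to X)}{\Gamma_{\mathrm{tot}}}, \qquad \Gamma_{\mathrm{tot}} = \sum_{i} \Gamma(H \to i),

    so measuring how often the Higgs decays to each final state is effectively a measurement of the couplings g_X, which is what gets compared with the Standard Model predictions.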

    S: How many people are working in Higgs research now worldwide?

    GK: I did a search for “Higgs” in the title of scientific papers after 2011 on arXiv.org and came up with 5211 hits; there are several authors per paper, of course, and some have written multiple papers, so we can only estimate.

    SD: There are roughly 5000 people on each experiment, ATLAS and CMS, and some fraction of those work on Higgs research, but it’s really too hard to calculate. They all contribute in different ways. Let’s just say many thousands of experimentalists and theorists worldwide.

    S: What are Higgs researchers hoping to accomplish?

    HH: There are basically two different avenues. One is called the precision Higgs program designed to improve precision in the current data. The other direction addresses a really simple question: Is the Higgs boson a solo act or not? If additional Higgs-like particles exist, will they be discovered in future LHC experiments?

    SD: I think everybody would like to see more Higgs bosons. We don’t know if there are more, but everybody is hoping.

    JG: If you were Gordy [Kane] who only believes in one Higgs boson, you would be working to confirm with greater and greater precision that the Higgs boson you see has precisely the properties predicted in the Standard Model. This will take more and more luminosity and maybe some future colliders like a high luminosity LHC or an e+e- collider.

    HH: The precision Higgs program is a long-term effort because the high luminosity LHC is set to come online in the mid 2020s and is imagined to continue for another 10 years. There are a lot of people trying to predict what precision could you ultimately achieve in the various measurements of Higgs boson properties that will be made by the mid 2030s. Right now we have a set of measurements with statistical and systematic errors of about 20 percent. By the end of the high luminosity LHC, we anticipate that the size of the measurement errors can be reduced to around 10 percent and maybe in some cases to 5 percent.

    S: How has research on the topic changed since the Higgs discovery?

    SD: People no longer build theoretical models that don’t have a Higgs in them. You have to make sure that your model is consistent with what we know experimentally. You can’t just build a crazy model; it has to be a model with a Higgs with roughly the properties we’ve observed, and that is actually pretty restrictive.

    JG: Many theoretical models have either been eliminated or considerably constrained. For example, the supersymmetric models that are theoretically attractive kind of expect a Higgs boson of this mass, but only after pushing parameters to a bit of an extreme. There’s also an issue called naturalness: In the Standard Model alone there is no reason why the Higgs boson should have such a light mass as we see, whereas in some of these theories it is natural to see the Higgs boson at this mass. So that’s a very important topic of research—looking for those models that are in a certain sense naturally predicting what we see and finding additional experimental signals associated with such models.

    GK: For example, the supersymmetric theories predict that there will be five Higgs bosons with different masses. The extent to which the electroweak symmetry is broken by each of the five depends on their couplings, but there should be five discovered eventually if the others exist.

    HH: There’s also a slightly different attitude to the research today. Before the Higgs boson was discovered it was known that the Standard Model was theoretically inconsistent without the Higgs boson. It had to be there in some form. It wasn’t going to be that we ran the LHC and saw nothing—no Higgs boson and nothing else. This is called a no-lose theorem. Now, having discovered the Higgs boson, you cannot guarantee that additional new phenomena exist that must be discovered at the LHC. In other words, the Standard Model itself, with the Higgs boson, is a theoretically consistent theory. Nevertheless, not all fundamental phenomena can be explained by Standard Model physics (such as neutrino masses, dark matter and the gravitational force), so we know that new phenomena beyond the Standard Model must be present at some very high-energy scale. However, there is no longer a no-lose theorem that states that these new phenomena must appear at the energy scale that is probed at the LHC.

    S: How have the new capabilities of the LHC changed the game?

    SD: We have way more Higgs bosons; that’s really how it’s changed. Since the energy is higher we can potentially make heavier new particles.

    GK: There were about a million Higgs bosons produced in the first run of the LHC, and there will be more than twice that in the second run, but they only can find a small fraction of those in the detector because of background noise and some other things. It’s very hard. It takes clever experimenters. To find a couple of hundred Higgs you need to produce a million.

    CERN/LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    SD: Most of the time the Higgs decays into something we can’t see in our detector. But as the measurements get better and better, experimentalists who have been extracting the couplings are quantifying more properties of the Higgs decays. So instead of just counting how many Higgs bosons decay to two Z bosons, they will look at where the two Z bosons are in the detector or the energy of the Z bosons.

    S: Are there milestones you are looking forward to?

    GK: Confirming the Standard Model Higgs with even more precision. The decay the Higgs boson was discovered in—two photons—could happen for other kinds of particles as well. But the decay to W boson pairs is the one that you need for it to break the electroweak symmetry [a symmetry between the masses of the particles associated with the electromagnetic and weak forces], which is what it should do according to the Standard Model.

    SD: So, one of the things we will see a lot of in the next year or two is better measurements of the Higgs decay into the bottom quarks. Within a few years, we should learn whether or not there are more Higgs bosons. Measuring the couplings to the desired precision will take 20 years or more.

    JG: There’s another thing people are thinking about, which is how the Higgs can be connected to the important topic of dark matter. We are working on models that establish such a connection, but most of these models, of course, have extra Higgs bosons. It’s even possible that one of those extra Higgs bosons might be invisible dark matter. So the question is whether the Higgs we can see tells us something about dark matter Higgs bosons or other dark matter particles, such as the invisible particles that are present in supersymmetry.

    S: Are there other things still to learn?

    JG: There are many possible connections between Higgs bosons, in a generic sense, and the history of the universe. For example, it could be that a Higgs-like particle called the inflaton is responsible for the expansion of the universe. As a second example, generalized Higgs boson models could explain the preponderance of matter over antimatter in the current universe.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 12:14 pm on November 8, 2016
    Tags: Symmetry Magazine

    From Symmetry: “The origins of dark matter” 

    Symmetry Mag

    11/08/16
    Matthew R. Francis

    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    Theorists think dark matter was forged in the hot aftermath of the Big Bang.

    Transitions are everywhere we look. Water freezes, melts, or boils; chemical bonds break and form to make new substances out of different arrangements of atoms.

    The universe itself went through major transitions in early times. New particles were created and destroyed continually until things cooled enough to let them survive.

    CMB per ESA/Planck

    Those particles include ones we know about, such as the Higgs boson or the top quark.

    CERN CMS Higgs Event

    CERN ATLAS Higgs Event

    But they could also include dark matter, invisible particles which we presently know only because of their gravitational effects.

    Dark matter cosmic web and the large-scale structure it forms The Millenium Simulation, V. Springel et al.

    In cosmic terms, dark matter particles could be a “thermal relic,” forged in the hot early universe and then left behind during the transitions to more moderate later eras. One of these transitions, known as “freeze-out,” changed the nature of the whole universe.

    The hot cosmic freezer

    On average, today’s universe is a pretty boring place. If you pick a random spot in the cosmos, it’s far more likely to be in intergalactic space than, say, the heart of a star or even inside an alien solar system. That spot is probably cold, dark and quiet.

    The same wasn’t true for a random spot shortly after the Big Bang.

    “The universe was so hot that particles were being produced from photons smashing into other photons, or photons hitting electrons, and electrons hitting positrons and producing these very heavy particles,” says Matthew Buckley of Rutgers University.

    The entire cosmos was a particle-smashing party, but parties aren’t meant to last. This one lasted only a trillionth of a second. After that came the cosmic freeze-out.

    During the freeze-out, the universe expanded and cooled enough for particles to collide far less frequently and catastrophically.

    “One of these massive particles floating through the universe is finding fewer and fewer antimatter versions of itself to collide with and annihilate,” Buckley says.

    “Eventually the universe would get large enough and cold enough that the rate of production and the rate of annihilation basically goes to zero, and you just get a relic abundance, these few particles that are floating out there lonely in space.”

    Many physicists think dark matter is a thermal relic, created in huge numbers before the cosmos was a half-second old and lingering today because it barely interacts with any other particle.

    A WIMPy miracle

    One reason to think of dark matter as a thermal relic is an interesting coincidence known as the “WIMP miracle.”

    WIMP stands for “weakly-interacting massive particle,” and WIMPs are the most widely accepted candidates for dark matter. Theory says WIMPs are likely heavier than protons and interact via the weak force, or at least via interactions related to the weak force.

    The last bit is important, because freeze-out for a specific particle depends on what forces affect it and the mass of the particle. Thermal relics made by the weak force were born early in the universe’s history because particles need to be jammed in tight for the weak force, which only works across short distances, to be a factor.

    “If dark matter is a thermal relic, you can calculate how big the interaction [between dark matter particles] needs to be,” Buckley says.

    Both the primordial light known as the cosmic microwave background and the behavior of galaxies tell us that most dark matter must be slow-moving (“cold” in the language of physics). That means interactions between dark matter particles must be low in strength.

    “Through what is perhaps a very deep fact about the universe,” Buckley says, “that interaction turns out to be the strength of what we know as the weak nuclear force.”

    That’s the WIMP miracle: The numbers are perfect to make just the right amount of WIMPy matter.
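    The coincidence can be made semi-quantitative with the standard freeze-out estimate (an order-of-magnitude textbook relation, not a calculation by the researchers quoted here):

    \Omega_{\chi} h^{2} \approx \frac{3 \times 10^{-27}\ \mathrm{cm^{3}\,s^{-1}}}{\langle \sigma v \rangle}

    An annihilation cross section typical of the weak interaction, roughly 3 × 10⁻²⁶ cm³ per second, then gives Ω_χ h² ≈ 0.1, close to the measured dark matter density of about 0.12.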

    The big catch, though, is that experiments haven’t found any WIMPs yet. It’s too soon to say WIMPs don’t exist, but it does rule out some of the simpler theoretical predictions about them.

    Ultimately, the WIMP miracle could just be a coincidence. Instead of the weak force, dark matter could involve a new force of nature that doesn’t affect ordinary matter strongly enough to detect. In that scenario, says Jessie Shelton of the University of Illinois at Urbana-Champaign, “you could have thermal freeze-out, but the freeze-out is of dark matter to some other dark field instead of [something in] the Standard Model.”

    In that scenario, dark matter would still be a thermal relic but not a WIMP.

    For Shelton, Buckley, and many other physicists, the dark matter search is still full of possibilities.

    “We have really compelling reasons to look for thermal WIMPs,” Shelton says. “It’s worth remembering that this is only one tiny corner of a much broader space of possibilities.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 10:14 am on October 28, 2016
    Tags: International Neutrino Experiment Xenon TPC, Symmetry Magazine

    From Symmetry: “A bright idea” 

    Symmetry Mag

    10/27/16
    Ricarda Laasch

    Can a biochemistry technique win the battle against background for scientists studying the nature of neutrinos?

    Hermina Nedelescu and Satoru Yoshioka

    While we read, think, move or just perceive the world around us, thousands of neurons fire in our brain. Ions, like little messengers, jump from neuron to neuron and create a cascade of information transfer. Using a technique called single-molecule fluorescence imaging, neuroscientists can make these cascades glow.

    David Nygren, a physics professor who studies neutrinos at the University of Texas at Arlington, was reading about how neuroscientists watch brain cells think. If Nygren’s brain cells had been lit up with fluorescence while he read, they would have looked like glowing trees branching up into the sky. He had an idea.

    To conduct single-molecule fluorescence imaging, scientists release a dye into brain cells—they use rat brain cells—and hit them with light. The dye gets excited and starts to glow. This works because the dye attaches only to certain ions—calcium ions, which act as messengers between neurons.

    “It hit me,” Nygren says, “calcium and barium are not that different.”

    Just as calcium is important to neuroscientists, barium is important to Nygren. That’s because he is part of the Neutrino Experiment with Xenon TPC, also known as NEXT.

    The NEXT (Neutrino Experiment with Xenon TPC) experiment advances with a new ultra-high vacuum vessel

    NEXT is searching for proof of a theoretical process called neutrinoless double beta decay. During a double-beta decay, two neutrons within one nucleus transform into two protons, two electrons and two neutrinos. For a double-beta decay to be called “neutrinoless,” the two neutrinos created would need to annihilate one another.

    If this happened, scientists would have the answer to an important question in particle physics: Are neutrinos their own antiparticles? If two neutrinos canceled one another, physicists would know that neutrinos and antineutrinos are one and the same.

    NEXT looks for neutrinoless double-beta decay in xenon. If xenon went through the process of neutrinoless double-beta decay, its nucleus would transform into barium. Nygren’s neurons were firing because he realized he might be able to use single-molecule fluorescence imaging to search for that new barium nucleus.
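    Written out explicitly (NEXT uses xenon-136; the isotope label is an added detail for concreteness), the two processes are:

    \text{two-neutrino mode: } {}^{136}\mathrm{Xe} \to {}^{136}\mathrm{Ba} + 2e^{-} + 2\bar{\nu}_{e}
    \text{neutrinoless mode: } {}^{136}\mathrm{Xe} \to {}^{136}\mathrm{Ba} + 2e^{-}

    The barium ion left behind is what the fluorescent dye would tag.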

    Neutrinoless double-beta decay would be an extremely rare process. In a ton of xenon this decay might happen only a few times in a year.

    Most radioactive processes happen far more often than neutrinoless double-beta decay. And most materials on Earth include a small amount of naturally occurring radioactive elements. All of this creates a sea of ambient background radiation that scientists must find a way to filter out if they ever want to see evidence of neutrinoless double-beta decay.

    Using single-molecule fluorescence imaging could make neutrinoless double-beta decay stand out from the crowd.

    “If they succeed in proving the principle of their detector concept, they will eliminate all background except for normal double-beta decay,” says Steven Elliott, a scientist on the Majorana Demonstrator at the Sanford Underground Research Facility. “This would be a great leap for our field and our understanding of the universe.”

    Nygren took his idea to a group of fellow scientists at Arlington.

    “We all had no background in biochemistry,” says Nygren’s colleague Ben Jones, an assistant professor. “But we dove into the topic to explore and adapt the technique for our needs.”

    The group at Arlington released a calcium-tagging dye in an aqueous environment and found that it was able to grab barium ions and glow. The next step will be to test the technique in a dry environment; they want to eventually employ the dye in a xenon gas chamber.

    “We are still at the beginning, but so far the idea to use this technique for our detector looks less crazy every single day,” says Austin McDonald, a research assistant at the University of Texas at Arlington. “We really hope we can realize this in a large-scale project.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 1:55 pm on October 25, 2016
    Tags: Symmetry Magazine

    From Symmetry: “A primer on gravitational-wave detectors” 

    Symmetry Mag

    10/25/16
    Diana Kwon

    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    Physicists are searching for gravitational waves all across the spectrum.

    Gravitational waves, or ripples in the fabric of space-time, have captured the imagination of physicists since Albert Einstein first predicted them in 1916. But it wasn’t until the 1960s that Joseph Weber, an experimental physicist at the University of Maryland, built the first machine meant to find them.

    About 50 years later, scientists finally did it; the Laser Interferometer Gravitational-Wave Observatory detected gravitational waves coming from the merger of two black holes.

    LSC LIGO Scientific Collaboration
    Caltech/MIT Advanced aLigo Hanford, WA, USA installation
    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    The merging black holes LIGO discovered emit gravitational waves at relatively high frequencies. But more massive objects, such as supermassive black holes and merging galaxies, produce waves with longer periods and lower frequencies.

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Astronomers are using a wide variety of instruments to seek out gravitational waves at these different frequencies to detect the cosmic events that produce them.

    Resonant-mass detectors

    Weber’s first gravitational wave detector was a resonant bar detector, or Weber bar. These detectors are big cylindrical metal bars that vibrate at their resonant frequencies when a gravitational wave passes by, a bit like massive tuning forks.

    After many generations of detectors following Weber’s first attempts, most resonant-mass detectors are now out of commission. Physicists used them to search for gravitational waves around the 700- to 3000-hertz region, where they expect to find supernovae, merging neutron stars and possibly even mini black holes. The major limitation of these instruments is that they are sensitive to a very small frequency range.

    To increase their chances, some physicists decided to switch from a bar-shaped resonant-mass detector to a spherical one that could detect gravitational waves in all directions and with any polarization, not just some.

    One of the most recently built spherical detectors is the Mario Schenberg gravitational-wave detector, which is now at the National Institute for Space Research (INPE) in Brazil. The sphere is around 65 centimeters in diameter and weighs around 1150 kilograms.

    Development of superconducting Klystron cavity for the Mario Schenberg gravitational wave detector [IMA]

    This project is still active, though its members are now part of the LIGO collaboration and devote most of their time there.

    “We keep going, slowly, but our objective is to make these detectors run perhaps five or 10 years from now,” says Odylio Denys Aguiar, a physicist at INPE and the leader of the project.

    Ground-based interferometers

    Ground-based interferometers are probably the most well known gravitational-wave detectors, thanks to LIGO’s breakthrough. These detectors have two arms that form the shape of an L. In LIGO’s case, each arm is 4 kilometers long.

    In ground-based interferometers, physicists split a laser beam and send it down each arm. The beam bounces off mirrors at each end, travelling back and forth. A passing gravitational wave changes the relative lengths of the arms slightly and shifts the beam’s path, creating a change that physicists can identify.
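    For a sense of how slight that change is (approximate numbers; the strain value is typical of LIGO’s first detection and is not quoted in this article):

    h = \frac{\Delta L}{L} \sim 10^{-21} \quad \Rightarrow \quad \Delta L \sim 10^{-21} \times 4\ \mathrm{km} \approx 4 \times 10^{-18}\ \mathrm{m},

    several hundred times smaller than the diameter of a proton (about 10^{-15} meters).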

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    These observatories can detect short wavelengths, primarily with frequencies in the hundreds of hertz range, making them sensitive to mergers of neutron stars and black holes that are between a few times to tens of times the mass of the sun.

    There are a number of ground-based interferometers, both active and under construction. LIGO operates out of twin observatories in Louisiana and Washington state. There are plans to build a third LIGO observatory in India. Virgo and GEO600, which have similar set-ups but shorter arms, are located in Italy and Germany, respectively. KAGRA, an underground interferometer, is under construction in Japan.

    These detectors are sensitive to a similar range of frequencies, but there is a key benefit to having many detectors in different parts of the world. Gravitational-wave detectors act like microphones, surveying massive patches of the cosmos from all directions. This increases their chances of finding signs of gravitational waves, but it also makes it difficult to see where exactly they came from. Having more than one detector allows physicists to triangulate a signal to better locate its position on the sky.

    Space-based interferometers

    Some astronomers plan to bring gravitational-wave astronomy to space. The Laser Interferometer Space Antenna (LISA) has a set-up similar to LIGO, except with three arms over a million kilometers long.

    ESA/eLISA

    Instead of an L-shape, LISA would form an equilateral triangle orbiting the sun, with a satellite placed at each of the vertices. Like in LIGO, a laser beam would go back and forth along the arms, and physicists could detect changes in the length of the arms as a gravitational wave passed through.

    The LISA collaboration hopes to launch a space-based observatory around 2034. So far, they have launched the LISA Pathfinder, a short version of one of the arms of the observatory, to test how well it works.

    ESA/LISA Pathfinder

    “With the success of LISA Pathfinder, we already know that we can do large parts of the mission,” says Martin Hewitson, a physicist at the Max Planck Institute for Gravitational Physics working on both LISA and LISA Pathfinder. “So there is a lot of scientific and political momentum to make this mission happen earlier.”

    In space, the detector will be sensitive to much lower frequencies than the ground-based ones—in LISA’s case, frequencies in the millihertz range. Here, astronomers expect to see gravitational waves from mergers of the supermassive black holes at the center of galaxies. “By looking at these and how they evolve, there is a hope to trace how these galaxies merged and how these black holes have grown over the whole cosmic time,” Hewitson says.

    Pulsar timing arrays

    Pulsars, spinning neutron stars that constantly emit beams of electromagnetic radiation, are natural timekeepers. Of these, millisecond pulsars are the most regular—to the point that astronomers can predict the time they will arrive on Earth with nanosecond precision.

    Physicists use pulsar timing arrays to search for gravitational waves. When a gravitational wave passes, space-time warps between the pulsar and Earth. This changes the time of arrival of the pulses, which physicists can then detect with radio telescopes.

    “With LIGO, they are trying to detect a deformation much smaller than the diameter of a proton across an instrument that is many kilometers in length—an incredibly tiny signature,” explains Shami Chatterjee, an astronomer at Cornell University working on the North American Nanohertz Observatory for Gravitational Waves (NANOGrav).

    NANOGrav gravitational waves. JPL-Caltech/David Champion

    “For pulsar timing arrays, it’s the same scaling—our arms are hundreds or thousands of light years long, but we’re trying to measure the same kind of fractional change.”
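
    A minimal sketch of the scaling Chatterjee describes: both techniques chase the same dimensionless quantity, the strain (the fractional change in length), but apply it to wildly different baselines. The strain value below is a single illustrative number; the real target strains differ between the two frequency bands.

    LIGHT_YEAR_M = 9.461e15   # metres in a light-year

    baselines_m = {
        "LIGO arm (4 km)": 4.0e3,
        "pulsar baseline (1000 light-years)": 1000 * LIGHT_YEAR_M,
    }

    strain = 1e-21  # illustrative fractional change, not a measured value

    for name, length_m in baselines_m.items():
        # The absolute length change is the strain times the baseline.
        print(f"{name}: strain {strain:g} -> length change ~ {strain * length_m:.1e} m")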

    This technique is sensitive to even lower frequencies than LISA, in the nanohertz range. Here, scientists expect to see a stochastic background from merging supermassive black holes (the combined signal of all the mergers), individual binary supermassive black holes, and more exotic sources such as cosmic strings and memory bursts, the permanent imprint left on space-time by merging supermassive black holes.

    There are three major pulsar timing array experiments in operation: NANOGrav, the European Pulsar Timing Array, and the Parkes Pulsar Timing Array in Australia.

    Cosmic microwave background detectors

    Finally, astronomers are also looking for primordial gravitational waves. These are waves created in the chaos of the very early universe.

    One of the ways astronomers do this is quite different from the techniques described above. Rather than watching moving light originating from a laser or a pulsar, they look at a still image of light left over from the time just after the Big Bang—the cosmic microwave background—and try to see evidence of gravitational waves imprinted in it.

    “It’s the difference between finding something bobbing up and down in the ocean and taking a snapshot of the ocean and seeing the crests and troughs,” Chatterjee says.

    This is extremely difficult because there are many sources of noise, making the feat a bit like finding a specific small ripple in a pool while people are splashing around in it.

    Interferometers and pulsar timing arrays are searching for these ancient waves as well. “The primordial gravitational wave background can, in principle, be observed in a very broad range of frequencies, from very low to very high ones,” says Pablo Rosado, an astrophysicist at Monash University studying gravitational wave detection. But according to Rosado, detectors like LIGO might not be able to see this signal because there may be too many binary black holes masking it.

    LIGO’s discovery was just the beginning. Just as the electromagnetic spectrum spans everything from long radio waves to short gamma rays, the gravitational wave spectrum extends across a huge range of frequencies that require very different instruments to find. Astronomers hope that together, these detectors will find the invisible signals that will help them understand the universe in a whole new light.



     
  • richardmitnick 3:13 pm on October 20, 2016 Permalink | Reply
    Tags: , , , Symmetry Magazine   

    From Symmetry: “99 percent invisible” 

    Symmetry Mag
    Symmetry

    10/20/16
    Laura Dattaro

    1
    Dragonfly. Pieter van Dokkum

    With a small side project, astronomers discover a new type of galaxy.

    In 2011, astronomers Pieter van Dokkum and Roberto “Bob” Abraham found themselves in a restaurant in Toronto nursing something of a mid-life crisis. Abraham, a professor at the University of Toronto, and van Dokkum, at Yale, had become successful scientists, but they discovered that often meant doing less and less science and more and more managing large, complex projects.

    “They’re important and they’re great and you feel this tremendous obligation once you’ve reached a certain age to serve on these committees because you have to set things up for the next generation,” Abraham says. “At the same time, it was no longer very much fun.”

    The two friends fantasized about finding a small, manageable project that might still have some impact. By the time a few hours had passed, they had settled on an idea: using new camera lenses to find objects in the sky that emit very little light.

    They had no way of knowing then that within the next five years, they’d discover an entirely new class of galactic object.

    From the handmade telescopes of Galileo to spacefaring technological marvels like Hubble, all telescopes are designed for one basic task: gathering light. Telescope technology has advanced far enough that Hubble can pick up light from stars that were burning just 400 million years after the universe first popped into existence.

    But telescopes often miss objects with light that’s spread out, or diffuse, which astronomers describe as having low surface brightness. Telescopes like Hubble have large mirrors that scatter light from bright objects in the sky, masking anything more diffuse. “There’s this bit of the universe that’s really quite unexplored because our telescope designs are not good at detecting these things,” Abraham says.

    When van Dokkum and Abraham sat down at that bar, they decided to try their hands at studying these cosmic castaways. The key turned out to be van Dokkum’s hobby as an amateur insect photographer. He had heard of new camera lenses developed by Canon that were coated with nanoparticles designed to prevent light scattering. Although they were intended for high-contrast photography—say, snapping a photo of a boat in a sunny bay—van Dokkum thought these lenses might be able to spot diffuse objects in the sky.

    Abraham was skeptical at first: “Yeah, I’m sure the Canon corporation has come up with a magical optical coating,” he recalls thinking. But when the pair took one to a parking lot in a dark sky preserve in Quebec, they were sold on its capabilities. They acquired more and more lenses—not an easy task, at $12,000 a pop—eventually gathering 48 of them, and arranged them in an ever-growing honeycomb shape to form what can rightly be called a telescope. They named it Dragonfly.

    In 2014, both van Dokkum and Abraham were at a conference in Oxford when van Dokkum examined an image that had come in from Dragonfly. (At the time, it had just eight lenses.) It was an image of the Coma Cluster, one of the most photographed galaxy clusters in the universe, and it was dotted with faint smudges that didn’t match any objects in Coma Cluster catalogs.

    Van Dokkum realized these smudges were galaxies, and that they were huge, despite their hazy light. They repeated their observations using the Keck telescope, which enabled them to calculate the velocities of the stars inside their mysterious galaxies. In one galaxy, the stars were measured moving at 50 kilometers per second, 10 times the speed expected based on the mass of its stars alone.
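
    The reasoning can be sketched with a rough dynamical-mass estimate: for a gravitationally bound system, the enclosed mass scales roughly as the stellar speed squared times the radius, so stars moving ten times faster than the visible matter can explain imply roughly a hundred times more total mass. The radius below is an assumed, illustrative value, and prefactors of order one are ignored.

    G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30  # kg
    KPC_M = 3.086e19  # metres in a kiloparsec

    def dynamical_mass_solar(velocity_km_s, radius_kpc):
        # Very rough enclosed mass (in solar masses) implied by a
        # characteristic stellar speed and radius: M ~ v^2 * R / G.
        v = velocity_km_s * 1e3
        r = radius_kpc * KPC_M
        return v * v * r / G / M_SUN

    # 50 km/s versus the ~5 km/s the visible stars alone would suggest,
    # over an assumed few-kiloparsec radius: the implied mass grows ~100-fold.
    print(f"{dynamical_mass_solar(50, 4):.1e} Msun vs {dynamical_mass_solar(5, 4):.1e} Msun")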

    “We realized that for these extremely tenuous objects to survive as galaxies and not be ripped apart by their movement through space and interactions with other galaxies, there must be much more than meets the eye,” van Dokkum says.

    The galaxy, dubbed Dragonfly 44, has less than 1 percent as many stars as the Milky Way, and yet it has to be just as massive.

    1
    Dragonfly 44

    That means that the vast majority of its matter is not the matter that makes up stars and planets and people—everything we can see—but dark matter, which seems to interact with regular matter through gravity alone.

    Astronomers have known for decades that galaxies can be made almost entirely of dark matter. But those galaxies were always small, a class known as dwarf galaxies, which have between 100 million and a few billion stars. A dark-matter-dominated galaxy as large as the Milky Way, with its 200 billion or more stars, needed an entirely new category. Van Dokkum and Abraham coined a term for them: ultradiffuse.

    “You look at a galaxy and you see this beautiful spiral structure and they’re gorgeous. I love galaxies,” Abraham says. “But what you see is really just kind of the frosting on the cake. The cake is the dark matter.”

    No one knows how many of these galaxies might exist, or whether they can have an even larger percentage of dark matter than Dragonfly 44. Perhaps there are galaxies that have no luminous matter at all, simply massive dark blobs hurtling through empty space. Though such galaxies have thus far evaded observation, evidence of their existence may be lurking in unexamined data from the past.

    And Dragonfly could be the key for finding them. “When people knew they were real and that these things could exist and could be part of these galaxy clusters, suddenly they turned up in large numbers,” van Dokkum says. “They just escaped attention for all these decades.”



     
  • richardmitnick 1:11 pm on October 12, 2016 Permalink | Reply
    Tags: , , , Symmetry Magazine,   

    From Symmetry: “Citizen scientists join search for gravitational waves” 

    Symmetry Mag
    Symmetry

    10/12/16
    Amanda Solliday

    1
    Artwork by Sandbox Studio, Chicago with Ana Kova

    A new project pairs volunteers and machine learning to sort through data from LIGO.

    Barbara Téglás was looking to try something different while on a break from her biotechnology work.

    So she joined Zooniverse, a website dedicated to citizen science projects, and began to hunt pulsars and classify cyclones from her home computer.

    “It’s a great thing that scientists share data and others can analyze it and participate,” Téglás says. “The project helps me stay connected with science in other fields, from anywhere.”

    In April, at her home in the Caribbean Islands, Téglás saw a request for volunteers to help with a new gravitational-wave project called Gravity Spy. Inspired by the discovery of gravitational waves by the Laser Interferometer Gravitational-wave Observatory, or LIGO, she signed up the same day.

    LSC LIGO Scientific Collaboration
    Caltech/MIT Advanced aLigo Hanford, WA, USA installation
    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    “To be a complete outsider and have the opportunity to contribute to an astrophysics project such as LIGO, it’s extraordinary,” Téglás says.

    Tuning out the noise

    It took a century after Albert Einstein predicted the existence of gravitational waves—or ripples in space-time—for scientists to build an instrument sophisticated enough to see them. LIGO observed these ripples for the first (and second) time, using two L-shaped detectors called interferometers designed to measure infinitesimal changes in distance. These changes were generated by two black holes that collided a billion years in the past, giving off gravitational waves that eventually passed through Earth. As they traveled through our planet, these gravitational waves stretched and shrank the 4-kilometer arms of the detectors.

    The LIGO detectors can measure a change in distance about 10,000 times smaller than the diameter of a proton. Because the instruments are so sensitive, they are also prone to picking up other vibrations, such as those from earthquakes or heavy vehicles driving near the detectors. Fluctuations in the equipment itself can also create noise.

    The noise, also called a glitch, can move the arms of the detector and potentially mimic an astrophysical signal.

    The two detectors are located nearly 2000 miles apart, one in Louisiana and the other in Washington state. Gravitational waves from astrophysical events will hit both detectors at nearly the same time, since gravitational waves travel straight through Earth at the speed of light. However, the distance between the two makes it unlikely that other types of vibrations will be felt simultaneously.

    “But that’s really not enough,” says Mike Zevin, a physics and astronomy graduate student at Northwestern University and a member of the Gravity Spy science team. “Glitches happen often enough that similar vibrations can appear in both detectors at nearly the same time. The glitches can tarnish the data and make it unusable.”

    Gravity Spy enlists the help of volunteers to analyze noise that appears in LIGO detectors.

    This information is converted into an image called a spectrogram, whose patterns show the times and frequencies of the noise. Shifts in blue, green and yellow indicate the loudness of the glitch, or how much the noise moved the arms of the detector. Glitches show up frequently in the large amount of information generated by the detectors.
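
    A minimal sketch of that conversion, using a synthetic, made-up glitch rather than real LIGO strain data:

    import numpy as np
    from scipy.signal import spectrogram

    fs = 4096                                  # assumed sample rate, Hz
    t = np.arange(0, 4.0, 1.0 / fs)            # 4 seconds of toy data
    noise = np.random.normal(scale=1.0, size=t.size)
    # A short, loud burst around t = 2 s standing in for a glitch.
    glitch = 5.0 * np.exp(-((t - 2.0) ** 2) / 0.01) * np.sin(2 * np.pi * 200 * t)
    strain = noise + glitch                    # toy detector output

    freqs, times, power = spectrogram(strain, fs=fs, nperseg=256, noverlap=192)
    # `power` is a (frequency x time) array; plotted with a colour map, loud
    # glitches appear as bright patches at particular times and frequencies.
    print(power.shape, freqs.max(), times.max())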

    “Some of these glitches in the spectrograms are easily identified by computers, while others aren’t,” Zevin says. “Humans are actually better at spotting new patterns in the images.”

    The Gravity Spy volunteers are tasked with labeling these hard-to-identify categories of glitches. In addition, the information is used to create training sets for computer algorithms.

    As the training sets grow larger, the computers become better at classifying glitches. That can help scientists eliminate the noise from the detectors or find ways to account for glitches as they look at the data.
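
    Those labelled spectrograms are the kind of data that could train a small convolutional neural network for image classification. The sketch below is only illustrative: the layer sizes, input shape and number of glitch classes are assumptions, not Gravity Spy's actual model.

    import tensorflow as tf

    NUM_GLITCH_CLASSES = 20          # assumed number of glitch categories
    INPUT_SHAPE = (128, 128, 1)      # assumed spectrogram image size

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=INPUT_SHAPE),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_GLITCH_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(spectrogram_images, volunteer_labels, epochs=10)
    # (hypothetical arrays of volunteer-labelled spectrograms)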

    “One of our goals is to create a new way of doing citizen science that scales with the big-data era we live in now,” Zevin says.

    Gravity Spy is a collaboration between Adler Planetarium, California State University-Fullerton, Northwestern University, Syracuse University, University of Alabama at Huntsville, and Zooniverse. The project is supported by an interdisciplinary grant from the National Science Foundation.

    About 1400 people volunteered for initial tests of Gravity Spy. Once the beta testing of Gravity Spy is complete, the volunteers will look at new images created when LIGO begins to collect data during its second observing run.

    2
    Artwork by Sandbox Studio, Chicago with Ana Kova

    A human endeavor

    The project also provides an avenue for human-computer interaction research.

    Another goal for Gravity Spy is to learn the best ways to keep citizen scientists motivated while looking at immense data sets, says Carsten Oesterlund, information studies professor at Syracuse University and member of the Gravity Spy research team.

    “What is really exciting from our perspective is that we can look at how human learning and machine learning can go hand-in-hand,” Oesterlund says. “While the humans are training the machines, how can we organize the task to also facilitate human learning? We don’t want them simply looking at image after image. We want developmental opportunities for the volunteers.”

    The researchers are examining how to encourage the citizen scientists to collaborate as a team. They also want to support new discoveries, or make it easier for people to find unique sets of glitches.

    One test involves incentives: in an earlier study, the computing researchers found that if a volunteer knows they are the first to classify an image, they go on to classify more images.

    “We’ve found that the sense of novelty is actually quite motivating,” says Kevin Crowston, a member of the Gravity Spy science team and associate dean for research at Syracuse University’s School of Information Studies.

    Almost every day, Téglás works on the Gravity Spy project. When she has spare time, she sits down at her computer and looks at glitches. Since April, she’s classified nearly 15,000 glitches and assisted other volunteers with hundreds of additional images through talk forums on Zooniverse.

    She’s pleased that her professional skills developed while inspecting genetics data can also help many citizen science projects.

    On her first day with Gravity Spy, Téglás helped identify a new type of glitch. Later, she classified another unique glitch called “paired doves” after its repeating, chirp-like patterns, which closely mimic the signal created by binary black holes. She’s also found several new variations of known glitches. Her work is recognized in LIGO’s log, and the newly found glitches are now part of the official workflow for the experiment.

    Different experiences, backgrounds and ways of thinking can make citizen science projects stronger, she says.

    “For this project, you’re not only using your eyes,” Téglás says. “It’s also an opportunity to understand an important experiment in modern science.”



     
  • richardmitnick 8:09 am on October 12, 2016 Permalink | Reply
    Tags: , , , , , , , Symmetry Magazine   

    From Symmetry: “Recruiting team geoneutrino” 

    Symmetry Mag
    Symmetry

    10/11/16
    Leah Crane

    1
    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    Physicists and geologists are forming a new partnership to study particles from inside the planet.

    The Earth is like a hybrid car.

    Deep under its surface, it has two major fuel tanks. One is powered by dissipating primordial energy left over from the planet’s formation. The other is powered by the heat that comes from radioactive decay.

    We have only a shaky understanding of these heat sources, says William McDonough, a geologist at the University of Maryland. “We don’t have a fuel gauge on either one of them. So we’re trying to unravel that.”

    One way to do it is to study geoneutrinos, a byproduct of the process that burns Earth’s fuel. Neutrinos rarely interact with other matter, so these particles can travel straight from within the Earth to its surface and beyond.

    Geoneutrinos hold clues as to how much radioactive material the Earth contains. Knowing that could lead to insights about how our planet formed and its modern-day dynamics. In addition, the heat from radioactive decay plays a key role in driving plate tectonics.

    The tectonic plates of the world were mapped in 1996, USGS

    Understanding the composition of the planet and the motion of the plates could help geologists model seismic activity.

    To effectively study geoneutrinos, scientists need knowledge both of elementary particles and of the Earth itself. The problem, McDonough says, is that very few geologists understand particle physics, and very few particle physicists understand geology. That’s why physicists and geologists have begun coming together to build an interdisciplinary community.

    “There’s really a need for a beyond-superficial understanding of the physics for the geologists and likewise a nonsuperficial understanding of the Earth by the physicists,” McDonough says, “and the more that we talk to each other, the better off we are.”

    There are hurdles to overcome in order to get to that conversation, says Livia Ludhova, a neutrino physicist and geologist affiliated with Forschungzentrum Jülich and RWTH Aachen University in Germany. “I think the biggest challenge is to make a common dictionary and common understanding—to get a common language. At the basic level, there are questions on each side which can appear very naïve.”

    In July, McDonough and Gianpaolo Bellini, emeritus scientist of the Italian National Institute of Nuclear Physics and retired physics professor at the University of Milan, led a summer institute for geology and physics graduate students to bridge the divide.

    “In general, geology is more descriptive,” Bellini says. “Physics is more structured.”

    This can be especially troublesome when it comes to numerical results, since most geologists are not used to working with the defined errors that are so important in particle physics.

    At the summer institute, students began with a sort of remedial “preschool,” in which geologists were taught how to interpret physical uncertainty and the basics of elementary particles and physicists were taught about Earth’s interior. Once they gained basic knowledge of one another’s fields, the scientists could begin to work together.

    This is far from the first interdisciplinary community within science or even particle physics. Ludhova likens it to the field of radiology: There is one expert to take an X-ray and another to determine a plan of action once all the information is clear. Similarly, particle physicists know how to take the necessary measurements, and geologists know what kinds of questions they could answer about our planet.

    Right now, only two major experiments are looking for geoneutrinos: KamLAND at the Kamioka Observatory in Japan and Borexino at the Gran Sasso National Laboratory in Italy. Between the two of them, these observatories detect fewer than 20 geoneutrinos a year.

    KamLAND
    KamLAND at the Kamioka Observatory in Japan

    INFN/Borexino Solar Neutrino detector, Gran Sasso, Italy


    Because of the limited results, geoneutrino physics is by necessity a small discipline: According to McDonough, there are only about 25 active neutrino researchers with a deep knowledge of both geology and physics.

    Over the next decade, though, several more neutrino detectors are anticipated, some of which will be much larger than KamLAND or Borexino. The Jiangmen Underground Neutrino Observatory (JUNO) in China, for example, should be ready in 2020.

    JUNO Neutrino detector China

    Whereas Borexino’s detector is made up of 300 tons of active material and KamLAND’s contains 1000 tons, JUNO’s will hold 20,000 tons.
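
    To first order, the detection rate scales with the mass of active material, which is a back-of-the-envelope way to see why a bigger detector matters. The sketch below assumes equal efficiency, overburden and backgrounds, which real experiments do not have; the rate and mass figures are taken from the text.

    CURRENT_RATE_PER_YEAR = 20        # fewer than ~20 events per year today
    CURRENT_MASS_TONS = 300 + 1000    # Borexino plus KamLAND active material
    JUNO_MASS_TONS = 20_000

    # Naive linear scaling of the event rate with detector mass.
    naive_juno_rate = CURRENT_RATE_PER_YEAR * JUNO_MASS_TONS / CURRENT_MASS_TONS
    print(f"naive JUNO estimate: ~{naive_juno_rate:.0f} geoneutrinos per year")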

    The influx of data over the next decade will allow the community to emerge into the larger scientific scene, Bellini says. “There are some people who say ‘now this is a new era of science’—I think that is exaggerated. But I do think that we have opened a new chapter of science in which we use the methods of particle physics to study the Earth.”



     