Tagged: Physics

  • richardmitnick 12:03 pm on August 29, 2016
    Tags: Physics, Quantum metrology, Rayleigh criterion

    From Physics: “Unlocking the Hidden Information in Starlight” 

    Physics

    August 29, 2016
    Gabriel Durkin, Berkeley Quantum Information and Computation Center, University of California, Berkeley

    Quantum metrology shows that it is always possible to estimate the separation of two stars, no matter how close together they are.

    Figure 1: The Rayleigh criterion states that in direct imaging, two light sources are only discernable when the centers of their diffraction patterns, or peaks of their point spread functions, are farther apart than their widths. (Top) The sources are farther apart than the Rayleigh criterion distance. (Middle) The sources meet the Rayleigh criterion distance. (Bottom) The sources are closer than the Rayleigh criterion distance. Tsang and collaborators [1] used quantum metrology techniques to show that the Rayleigh criterion is not a fundamental limitation, finding that the separation between two objects can always be estimated with a precision that is independent of the size of the separation.

    A provocative new result [1] by Mankei Tsang, Ranjith Nair, and Xiao-Ming Lu of the National University of Singapore suggests that a long-standing limitation to the precision of astronomical imaging, the Rayleigh criterion, proposed in 1879 [2], is itself only an apparition. Using quantum metrology techniques, the researchers have shown that two uncorrelated point-like light sources, such as stars, can be discriminated to arbitrary precision even as their separation decreases to zero.

    Quantum metrology, a field that has existed since the late 1960s with the pioneering work of Carl Helstrom [3], is a peculiar hybrid of quantum mechanics and the classical estimation theory developed by statisticians in the 1940s. The methodology is a powerful one, quantifying resources needed for optimal estimation of elementary variables and fundamental constants. These resources include preparation of quantum systems in a characteristic (entangled) state, followed by judiciously chosen measurements, from which a desired parameter, itself not directly measurable, may be inferred.

    In the context of remote sensing, for example, in the imaging of objects in the night sky, the ability to prepare a physical system in an optimal state does not exist. In the case of starlight, the typical assumption is that the source is classical thermal light, the state of maximum entropy or “uninformativeness.” Imaging such sources is plagued by the limits of diffraction when the objects are in close proximity. The wave-like nature of light causes it to spread as it moves through space, bending around obstacles, for example when traversing a telescope aperture. This results in a diffraction pattern described by a so-called point spread function (PSF) in the image plane. The Rayleigh criterion states that two closely spaced objects are just resolvable—that is, discernable from one another—when the center of the diffraction pattern, or peak of the PSF, of one object is directly over the first minimum of the diffraction pattern of the other. Roughly, the PSF maxima must be farther apart than their widths (Fig. 1).

    Some astronomers say they are able to resolve objects that are slightly closer than the Rayleigh limit allows. Yet inevitably, as the angular separation between the objects decreases, the information that can be obtained about that separation using direct detection becomes negligible, and even the most optimistic astronomer, utilizing the most sophisticated signal-processing techniques, must admit defeat. Correspondingly, as the separation approaches zero, the minimum error on any unbiased estimation of the separation blows up to infinity, which has limited angular resolution in imaging since the time of Galileo. Typically, the mean-squared error on the estimation of a parameter scales with the number of repeated measurements or data points, ν, as 1∕ν. Even for a large error per measurement, any desired precision is attained by taking multiple data points. When, however, the lower bound on direct estimation of the separation is divergent because of the Rayleigh limit, the 1∕ν factor makes no impact. This is what Tsang and collaborators call Rayleigh’s curse.
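    To make Rayleigh’s curse concrete, here is a minimal numerical sketch (my own illustration, not code from the paper): it computes the classical Fisher information per detected photon for estimating the separation of two equally bright Gaussian PSFs from direct imaging, assuming the centroid is known and taking the PSF width as the unit of length. Since the Cramér-Rao bound on the mean-squared error is 1/(ν × Fisher information), a vanishing Fisher information means that no number of photons ν can rescue the estimate.

    import numpy as np

    SIGMA = 1.0                      # PSF width (arbitrary units)
    x = np.linspace(-12, 12, 4001)   # image-plane coordinate grid
    dx = x[1] - x[0]

    def psf(x0):
        """Normalized Gaussian point-spread function centred at x0."""
        return np.exp(-(x - x0) ** 2 / (2 * SIGMA ** 2)) / np.sqrt(2 * np.pi * SIGMA ** 2)

    def intensity(d):
        """Image-plane photon distribution for two incoherent sources at +/- d/2."""
        return 0.5 * psf(-d / 2) + 0.5 * psf(+d / 2)

    def fisher_direct(d, eps=1e-4):
        """Classical Fisher information per photon, via a central difference in d."""
        p = intensity(d)
        dp = (intensity(d + eps) - intensity(d - eps)) / (2 * eps)
        return np.sum(dp ** 2 / np.maximum(p, 1e-300)) * dx

    for d in [2.0, 1.0, 0.5, 0.2, 0.1, 0.05]:
        print(f"d = {d:4.2f} sigma   Fisher information per photon = {fisher_direct(d):.4e}")

    The printed values fall toward zero as the separation shrinks, so the lower bound on the estimation error diverges no matter how many photons are collected; that divergence is the curse described above.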

    Using a quantum metrology formalism to minimize the estimation error, the initial achievement of their work has been to show that there is no fundamental obstacle to the estimation of the separation of two PSFs in one dimension (that is, for sources that sit on a line). As the separation of two PSFs decreases to zero, the amount of obtainable information stays constant. This discovery is nicely summed up by Tsang, who says we should apologize to the starlight “as it travels billions of light years to reach us, yet our current technology and even our space telescopes turn out to be wasting a lot of the information it carries.” [4]

    It could be suggested that this is merely a theoretical proof; the quantum metrology formalism indicates that there is always an optimal measurement, which minimizes the estimation error for the separation parameter. Paradoxically, this optimal measurement can, however, depend on the value of the parameter. To obviate such concerns, Tsang and his colleagues propose a strategy, based on state-of-the-art quantum optics technology, that produces a minimal error in the estimation of the separation variable—counterintuitively, this error remains constant for all separation values, under the assumption that the PSFs have a Gaussian shape. The method, which the authors call spatial mode demultiplexing (SPADE), splits the light from the two sources into optical waveguides that have a quadratic refractive-index lateral profile. Mathematically, this SPADE measurement decomposes the overlapping PSFs (a real function in one dimension) into the complete basis of Hermite functions, just as a Fourier transform provides a decomposition of a real function into a superposition of sine and cosine terms. A posteriori, one may be tempted to use intuition to explain why this Hermite basis measurement seems not to suffer Rayleigh’s curse, but then again, were intuition forthcoming, the result may not have been hidden from view for so long. (This elusiveness relates to subtleties in the estimation of a single parameter extracted from the joint statistics of two incoherent light sources.)
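    For contrast with direct imaging, the sketch below (again my own illustration, with an arbitrary mode cutoff and grid rather than the authors’ code) numerically projects the light from two incoherent Gaussian sources onto Hermite-Gaussian modes, in the spirit of the SPADE measurement described above, and evaluates the classical Fisher information for the separation carried by the mode populations.

    import numpy as np
    from numpy.polynomial.hermite import hermval

    SIGMA = 1.0
    x = np.linspace(-15, 15, 6001)
    dx = x[1] - x[0]
    N_MODES = 12                                  # truncation; ample for d up to ~2 sigma

    def psf_amp(s):
        """Amplitude point-spread function of a source at position s (|psf|^2 is Gaussian)."""
        a = np.exp(-(x - s) ** 2 / (4 * SIGMA ** 2))
        return a / np.sqrt(np.sum(a ** 2) * dx)

    def hg_mode(q):
        """q-th Hermite-Gaussian mode matched to the PSF width, normalized on the grid."""
        u = x / (np.sqrt(2) * SIGMA)
        h = hermval(u, [0] * q + [1]) * np.exp(-x ** 2 / (4 * SIGMA ** 2))
        return h / np.sqrt(np.sum(h ** 2) * dx)

    MODES = [hg_mode(q) for q in range(N_MODES)]

    def mode_probs(d):
        """Probability that a detected photon lands in each HG mode, for separation d."""
        amp_p, amp_m = psf_amp(+d / 2), psf_amp(-d / 2)
        return np.array([0.5 * (np.sum(m * amp_p) * dx) ** 2
                         + 0.5 * (np.sum(m * amp_m) * dx) ** 2 for m in MODES])

    def fisher_spade(d, eps=1e-4):
        """Per-photon Fisher information for d from the (truncated) mode counts."""
        p = mode_probs(d)
        dp = (mode_probs(d + eps) - mode_probs(d - eps)) / (2 * eps)
        keep = p > 1e-15
        return np.sum(dp[keep] ** 2 / p[keep])

    for d in [2.0, 1.0, 0.5, 0.2, 0.1, 0.05]:
        print(f"d = {d:4.2f} sigma   Fisher information per photon = {fisher_spade(d):.4f}")

    Unlike the direct-imaging case, the information stays essentially constant (about 1/(4σ²) in these units) as the separation shrinks to zero, which is the sense in which the Hermite-basis measurement escapes Rayleigh’s curse.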

    One minor caveat of the approach is that full imaging of two point sources at positions X1 and X2 requires estimation of both the separation X1−X2 and the centroid (X1+X2)∕2. SPADE is only optimal when the centroid is already known to high precision. Centroid estimation, however, suffers no analog of Rayleigh’s curse; it may be made via direct imaging, with errors reduced by the factor 1∕ν when the number of data points ν is much greater than 1.

    A second detail worth pondering is that this result utilized techniques from the quantum domain to reveal a classical result. (All of the physical assumptions about starlight admit a classical model.) The quantum metrology formalism has been used to optimally estimate a parameter, but no quantum correlations exist in the system for any value of that parameter, that is, for any angular separation of two stars. When no quantum correlations are present, the formalism will still indicate the best possible measurement strategy and the smallest achievable estimation error.

    An added blessing of quantum metrology is that it allows the development of generalized uncertainty relationships, for example between temperature and energy for a system at equilibrium [5], or photon number and path-length difference between the two arms of an interferometer. The result of Tsang and his colleagues can be presented as another type of generalized uncertainty, between source separation and “momentum.” The mean-squared error associated with separation estimation scales inversely with the momentum (Fourier) space variance of the overlapping PSFs.
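    Stated compactly (my paraphrase of the relationship described above, written in standard Cramér-Rao form rather than quoted from the paper), the trade-off for ν detected photons reads:

    % nu   = number of detected photons
    % Dk^2 = variance of the PSF's momentum-space (spatial-frequency) distribution
    % The right-hand side is independent of the separation d itself.
    \mathrm{MSE}\left(\hat{d}\right) \;\ge\; \frac{1}{\nu\,\Delta k^{2}}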

    Regarding impact on the field, the authors’ study has produced a flurry of generalizations and other experimental proposals. During the past six months there have been four proof-of-principle experiments, first in Singapore by Tsang’s colleague Alex Ling and collaborators [6], and then in Canada and Europe [7–9]. A subsequent theory paper from researchers at the University of York [10] extends Tsang and colleagues’ theory result, which was for incoherent thermal sources such as starlight, to a general quantum state shared by the two sources. This work exploits the roles of squeezing (of quantum fluctuations) and of quantum entanglement to improve measurement precision, extending applicability to domains in which control of the source light is possible, such as microscopy.

    Tsang and his colleagues have provided a new perspective on the utility of quantum metrology, and they have reminded us that even in observational astronomy—one of the oldest branches of science—there are (sometimes) still new things to be learned, at the most basic level.

    This research is published in Physical Review X.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Physicists are drowning in a flood of research papers in their own fields and coping with an even larger deluge in other areas of physics. How can an active researcher stay informed about the most important developments in physics? Physics highlights a selection of papers from the Physical Review journals. In consultation with expert scientists, the editors choose these papers for their importance and/or intrinsic interest. To highlight these papers, Physics features three kinds of articles: Viewpoints are commentaries written by active researchers, who are asked to explain the results to physicists in other subfields. Focus stories are written by professional science writers in a journalistic style and are intended to be accessible to students and non-experts. Synopses are brief editor-written summaries. Physics provides a much-needed guide to the best in physics, and we welcome your comments (physics@aps.org).

     
  • richardmitnick 4:51 pm on August 23, 2016
    Tags: Physics

    From Symmetry: “Five facts about the Big Bang” 

    Symmetry

    08/23/16
    Matthew R. Francis

    Illustration by Sandbox Studio, Chicago with Corinne Mucha

    Inflationary Universe. NASA/WMAP

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    It’s the cornerstone of cosmology, but what is it all about?

    In the early 20th century, astronomers Edwin Hubble and Milton Humason discovered that galaxies are moving away from the Milky Way. More to the point: Every galaxy is moving away from every other galaxy on average, which means the whole universe is expanding. In the past, then, the whole cosmos must have been much smaller, hotter and denser.

    That description, known as the Big Bang model, has stood up against new discoveries and competing theories for the better part of a century. So what is this “Big Bang” thing all about?

    The Big Bang happened everywhere at once.

    The universe has no center or edge, and every part of the cosmos is expanding. That means if we run the clock backward, we can figure out exactly when everything was packed together—13.8 billion years ago. Because every place we can map in the universe today occupied the same place 13.8 billion years ago, there wasn’t a location for the Big Bang: Instead, it happened everywhere simultaneously.

    The Big Bang may not describe the actual beginning of everything.

    “Big Bang” broadly refers to the theory of cosmic expansion and the hot early universe. However, sometimes even scientists will use the term to describe a moment in time—when everything was packed into a single point. The problem is that we don’t have either observations or theory that describes that moment, which is properly (if clumsily) called the “initial singularity.”

    The initial singularity is the starting point for the universe we observe, but there might have been something that came before.

    The difficulty is that the very hot early cosmos and the rapid expansion called “inflation” that likely happened right after the singularity wiped out most—if not all—of the information about any history that preceded the Big Bang. Physicists keep thinking of new ways to check for signs of an earlier universe, and though we haven’t seen any of them so far, we can’t rule it out yet.

    The Big Bang theory explains where all the hydrogen and helium in the universe came from.

    In the 1940s, Ralph Alpher and George Gamow calculated that the early universe was hot and dense enough to make virtually all the helium, lithium and deuterium (hydrogen with a neutron attached) present in the cosmos today; later research showed where the primordial hydrogen came from. This is known as “Big Bang nucleosynthesis,” and it stands as one of the most successful predictions of the theory. The heavier elements (such as oxygen, iron and uranium) were formed in stars and supernova explosions.

    The best evidence for the Big Bang is in the form of microwaves.

    Early on, the whole universe was dense enough to be completely opaque. But at a time roughly 380,000 years after the Big Bang, expansion spread everything out enough to make the universe transparent.

    The light released from this transition, known as the cosmic microwave background (CMB), still exists.

    Cosmic Microwave Background per ESA/Planck

    It was first observed in the 1960s by Arno Penzias and Robert Wilson.

    Big Ear, Arno Penzias and Robert Wilson, AT&T, Holmdel, NJ USA

    That discovery cemented the Big Bang theory as the best description of the universe; since then, observatories such as WMAP and Planck have used the CMB to tell us a lot about the total structure and content of the cosmos.

    One of the first people to think scientifically about the origin of the universe was a Catholic priest.

    In addition to his religious training and work, Georges Lemaître was a physicist who studied the general theory of relativity and worked out some of the conditions of the early cosmos in the 1920s and ’30s.


    His preferred metaphors for the origin of the universe were “cosmic egg” and “primeval atom,” but they never caught on, which is too bad, because …

    It seems nobody likes the name “Big Bang.”

    Until the 1960s, the idea of a universe with a beginning was controversial among physicists. The name “Big Bang” was actually coined by astronomer Fred Hoyle, who was the leading proponent of an alternative theory, the steady-state model, in which the universe continues forever without a beginning.

    His shorthand for the theory caught on, and now we’re kind of stuck with it. Calvin and Hobbes’ attempt to get us to adopt “horrendous space kablooie” has failed so far.

    The Big Bang is the cornerstone of cosmology, but it’s not the whole story. Scientists keep refining the theory of the universe, motivated by our observation of all the weird stuff out there. Dark matter (which holds galaxies together) and dark energy (which makes the expansion of the universe accelerate) are the biggest mysteries that aren’t described by the Big Bang theory by itself.

    Our view of the universe, like the cosmos itself, keeps evolving as we discover more and more new things. But rather than fading away, our best explanation for why things are the way they are has remained—the fire at the beginning of the universe.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 9:55 am on August 23, 2016
    Tags: NuMI horn, Physics

    From FNAL: “Funneling fundamental particles” 

    FNAL Art Image by Angela Gonzales

    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    August 22, 2016
    Molly Olmstead

    The NuMI horn in the Main Injector brings particles into focus. Photo: Reidar Hahn

    Neutrinos are tricky. Although trillions of these harmless, neutral particles pass through us every second, they interact so rarely with matter that, to study them, scientists send a beam of neutrinos to giant detectors. And to be sure they have enough of them, scientists have to start with a very concentrated beam of neutrinos.

    To concentrate the beam, an experiment needs a special device called a neutrino horn.

    An experiment’s neutrino beam is born from a shower of short-lived particles, created when protons traveling close to the speed of light slam into a target. But that shower doesn’t form a tidy beam itself: That’s where the neutrino horn comes in.

    Once the accelerated protons smash into the target to create pions and kaons — the short-lived charged particles that decay into neutrinos — the horn has to catch and focus them by using a magnetic field. The pions and kaons have to be focused immediately, before they decay into neutrinos: Unlike the pions and kaons, neutrinos don’t interact with magnetic fields, which means we can’t focus them directly.

    Without the horn, an experiment would lose 95 percent of the neutrinos in its beam. Scientists need to maximize the number of neutrinos in the beam because neutrinos interact so rarely with matter. The more you have, the more opportunities you have to study them.

    “You have to have tremendous numbers of neutrinos,” said Jim Hylen, a beam physicist at Fermilab. “You’re always fighting for more and more.”

    Also known as magnetic horns, neutrino horns were invented at CERN by the Nobel Prize-winning physicist Simon van der Meer in 1961. A few different labs used neutrino horns over the following years, and Fermilab and J-PARC in Japan are the only major laboratories now hosting experiments with neutrino horns. Fermilab is one of the few places in the world that makes neutrino horns.

    “Of the major labs, we currently have the most expertise in horn construction here at Fermilab,” Hylen said.

    How they work

    The proton beam first strikes the target that sits inside or just upstream of the horn. The powerful proton beam would punch through the aluminum horn if it hit it, but the target, which is made of graphite or beryllium segments, is built to withstand the beam’s full power. When the target is struck by the beam, its temperature jumps by more than 700 degrees Fahrenheit, making the process of keeping the target-horn system cool a challenge involving a water-cooling system and a wind stream.

    Once the beam hits the target, the neutrino horn directs resulting particles that come out at wide angles back toward the detector. To do this, it uses magnetic fields, which are created by pulsing a powerful electrical current — about 200,000 amps — along the horn’s surfaces.

    “It’s essentially a big magnet that acts as a lens for the particles,” said physicist Bob Zwaska.

    The horns come in slightly different shapes, but they generally look on the outside like a metal cylinder sprouting a complicated network of pipes and other supporting equipment. On the inside, an inner conductor leaves a hollow tunnel for the beam to travel through.

    Because the current flows in one direction on the inner conductor and the opposite direction on the outer conductor, a magnetic field forms between them. A particle traveling along the center of the beamline will zip through that tunnel, escaping the magnetic field between the conductors and staying true to its course. Any errant particles that angle off into the field between the conductors are kicked back in toward the center.
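    As a rough illustration of that geometry (an idealized coaxial estimate, not a Fermilab engineering model; only the 200,000-amp figure comes from the article, and the radii are hypothetical), the azimuthal field between the conductors falls off as 1/r:

    import math

    MU0 = 4 * math.pi * 1e-7      # vacuum permeability, T*m/A
    I_HORN = 200_000.0            # horn current quoted in the article, amperes

    def field_tesla(r_m: float) -> float:
        """Azimuthal magnetic field at radius r (metres) between coaxial conductors."""
        return MU0 * I_HORN / (2 * math.pi * r_m)

    for r_cm in (2, 5, 10, 20):
        print(f"r = {r_cm:3d} cm  ->  B = {field_tesla(r_cm / 100):.2f} T")

    In this idealized picture the field is of order a tesla in the region between the conductors and exactly zero on the axis, so particles travelling down the centre are untouched while strays get a kick back toward the beam direction.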

    The horn’s current flows in a way that funnels positively charged particles that decay into neutrinos toward the beam and deflects negatively charged particles that decay into antineutrinos outward. Reversing the current can swap the selection, creating an antimatter beam. Experiments can run either beam and compare the data from the two runs. By studying neutrinos and antineutrinos, scientists try to determine whether neutrinos are responsible for the matter-antimatter asymmetry in the universe. Similarly, experiments can control what range of neutrino energies they target most by tuning the strength of the field or the shape or location of the horn.

    Making and running a neutrino horn can be tricky. A horn has to be engineered carefully to keep the current flowing evenly. And the inner conductor has to be as slim as possible to avoid blocking particles. But despite its delicacy, a horn has to handle extreme heat and pressure from the current that threaten to tear it apart.

    “It’s like hitting it with a hammer 10 million times a year,” Hylen said.

    Because of the various pressures acting on the horn, its design requires extreme attention to detail, down to the specific shape of the washers used. And as Fermilab is entering a precision era of neutrino experiments running at higher beam powers, the need for the horn engineering to be exact has only grown.

    “They are structural and electrical at the same time,” Zwaska said. “We go through a huge amount of effort to ensure they are made extremely precisely.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition


    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.

     
  • richardmitnick 2:41 pm on August 16, 2016
    Tags: Pan Jian-Wei, Physics

    From Nature: “China’s quantum space pioneer: We need to explore the unknown”

    Nature

    14 January 2016 [Just appeared in social media, probably because of the new Chinese spacecraft that went up today.]
    Celeste Biever

    1
    Pan Jian-Wei is leading a satellite project that will probe quantum entanglement. Tengyun Chen

    Physicist Pan Jian-Wei is the architect of the world’s first attempt to set up a quantum communications link between Earth and space — an experiment that is set to begin with the launch of a satellite in June.

    The satellite will test whether the quantum property of entanglement extends over record-breaking distances of more than 1,000 kilometres, by beaming individual entangled photons between space and various ground stations on Earth. It will also test whether it is possible, using entangled photons, to teleport information securely between Earth and space.

    On 8 January, Pan, who works at the University of Science and Technology of China in Hefei, won a major national Chinese science prize (worth 200,000 yuan, or US$30,000) for his contributions to quantum science. He spoke to Nature about why his experiments are necessary and about the changing nature of Chinese space-science missions.

    How are preparations for the launch going?

    We always have two feelings. We feel, “Yes, everything is all right,” and then we are happy and excited. But we have, a couple of times, thought, “Probably our project will collapse and never work.” I think the satellite should be launched on time.

    What technical challenges do you face?

    The satellite will fly so fast (it takes just 90 minutes to orbit Earth) and there will be turbulence and other problems — so the single-photon beam can be seriously affected. Also we have to overcome background noise from sunlight, the Moon and light noise from cities, which are much stronger than our single photon.

    What is the aim of the satellite?

    Our first mission is to see if we can establish quantum key distribution [the encoding and sharing of a secret cryptographic key using the quantum properties of photons] between a ground station in Beijing and the satellite, and between the satellite and Vienna. Then we can see whether it is possible to establish a quantum key between Beijing and Vienna, using the satellite as a relay.

    The second step will be to perform long-distance entanglement distribution, over about 1,000 kilometres. We have technology on the satellite that can produce pairs of entangled photons. We beam one photon of an entangled pair to a station in Delingha, Tibet, and the other to a station in Lijiang or Nanshan. The distance between the two ground stations is about 1,200 kilometres. Previous tests were done on the order of 100 kilometres.

    Does anyone doubt that entanglement happens no matter how far apart two particles are?

    Not too many people doubt quantum mechanics, but if you want to explore new physics, you must push the limit. Sure, in principle, quantum entanglement can exist for any distance. But we want to see if there is some physical limit. People ask whether there is some sort of boundary between the classical world and the quantum world: we hope to build some sort of macroscopic system in which we can show that the quantum phenomena can still exist.

    In future, we also want to see if it is possible to distribute entanglement between Earth and the Moon. We hope to use the Chang’e programme (China’s Moon programme) to send a quantum satellite to one of the gravitationally-stable points [Lagrangian points] in the Earth-Moon system.

    How does entanglement relate to quantum teleportation?

    We will beam one photon from an entangled pair created at a ground station in Ali, Tibet, to the satellite. The quantum state of a third photon in Ali can then be teleported to the particle in space, using the entangled photon in Ali as a conduit.

    The quantum satellite is a basic-science space mission, as is the Dark Matter Particle Explorer (DAMPE), which China launched in December.

    Are basic-research satellites a new trend for China?

    Yes, and my colleagues at the Chinese Academy of Sciences (CAS) and I helped to force things in this direction. In the past, China had only two organizations that could launch satellites: the army and the Ministry of Industry and Information Technology. So scientists had no way to launch a satellite for scientific research. One exception is the Double Star probe, launched in collaboration with the European Space Agency in 2003 to study magnetic storms on Earth.

    What changed?

    We at CAS really worked hard to convince our government that it is important that we have a way to launch science satellites. In 2011, the central government established the Strategic Priority Program on Space Science, which DAMPE and our quantum satellite are part of. This is a very important step.

    I think China has an obligation not just to do something for ourselves — many other countries have been to the Moon, have done manned spaceflight — but to explore something unknown.

    Will scientists also be involved in China’s programme to build a space station, Tiangong?

    The mechanism to make decisions for which projects can go to the space station has been significantly changed. Originally, the army wanted to take over the responsibility, but it was finally agreed that CAS is the right organization.

    We will have a quantum experiment on the space station and it will make our studies easier because we can from time to time upgrade our experiment (unlike on the quantum satellite). We are quite happy with this mechanism. We need only talk to the leaders of CAS — and they are scientists, so you can communicate with them much more easily.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

     
  • richardmitnick 7:32 am on August 16, 2016
    Tags: Physics, U California Irvine, UCI physicists confirm possible discovery of fifth force of nature

    From UC Irvine: “UCI physicists confirm possible discovery of fifth force of nature”

    UC Irvine

    August 15, 2016
    Brian Bell
    949-824-8249
    bpbell@uci.edu

    “If confirmed by further experiments, this discovery of a possible fifth force would completely change our understanding of the universe,” says UCI professor of physics & astronomy Jonathan Feng, including what holds together galaxies such as this spiral one, called NGC 6814. ESA/Hubble & NASA; Acknowledgement: Judy Schmidt

    Recent findings indicating the possible discovery of a previously unknown subatomic particle may be evidence of a fifth fundamental force of nature, according to a paper published in the journal Physical Review Letters by theoretical physicists at the University of California, Irvine.

    “If true, it’s revolutionary,” said Jonathan Feng, professor of physics & astronomy. “For decades, we’ve known of four fundamental forces: gravitation, electromagnetism, and the strong and weak nuclear forces. If confirmed by further experiments, this discovery of a possible fifth force would completely change our understanding of the universe, with consequences for the unification of forces and dark matter.”

    The UCI researchers came upon a mid-2015 study by experimental nuclear physicists at the Hungarian Academy of Sciences who were searching for “dark photons,” particles that would signify unseen dark matter, which physicists say makes up about 85 percent of the universe’s mass. The Hungarians’ work uncovered a radioactive decay anomaly that points to the existence of a light particle just 30 times heavier than an electron.

    “The experimentalists weren’t able to claim that it was a new force,” Feng said. “They simply saw an excess of events that indicated a new particle, but it was not clear to them whether it was a matter particle or a force-carrying particle.”

    The UCI group studied the Hungarian researchers’ data as well as all other previous experiments in this area and showed that the evidence strongly disfavors both matter particles and dark photons. They proposed a new theory, however, that synthesizes all existing data and determined that the discovery could indicate a fifth fundamental force. Their initial analysis was published in late April on the public arXiv online server, and a follow-up paper amplifying the conclusions of the first work was released Friday on the same website.

    The UCI work demonstrates that instead of being a dark photon, the particle may be a “protophobic X boson.” While the normal electric force acts on electrons and protons, this newfound boson interacts only with electrons and neutrons – and at an extremely limited range. Analysis co-author Timothy Tait, professor of physics & astronomy, said, “There’s no other boson that we’ve observed that has this same characteristic. Sometimes we also just call it the ‘X boson,’ where ‘X’ means unknown.”

    Feng noted that further experiments are crucial. “The particle is not very heavy, and laboratories have had the energies required to make it since the ’50s and ’60s,” he said. “But the reason it’s been hard to find is that its interactions are very feeble. That said, because the new particle is so light, there are many experimental groups working in small labs around the world that can follow up the initial claims, now that they know where to look.”

    Like many scientific breakthroughs, this one opens entirely new fields of inquiry.

    One direction that intrigues Feng is the possibility that this potential fifth force might be joined to the electromagnetic and strong and weak nuclear forces as “manifestations of one grander, more fundamental force.”

    Citing physicists’ understanding of the standard model, Feng speculated that there may also be a separate dark sector with its own matter and forces. “It’s possible that these two sectors talk to each other and interact with one another through somewhat veiled but fundamental interactions,” he said. “This dark sector force may manifest itself as this protophobic force we’re seeing as a result of the Hungarian experiment. In a broader sense, it fits in with our original research to understand the nature of dark matter.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition


    Since 1965, the University of California, Irvine has combined the strengths of a major research university with the bounty of an incomparable Southern California location. UCI’s unyielding commitment to rigorous academics, cutting-edge research, and leadership and character development makes the campus a driving force for innovation and discovery that serves our local, national and global communities in many ways.

    With more than 29,000 undergraduate and graduate students, 1,100 faculty and 9,400 staff, UCI is among the most dynamic campuses in the University of California system. Increasingly a first-choice campus for students, UCI ranks among the top 10 U.S. universities in the number of undergraduate applications and continues to admit freshmen with highly competitive academic profiles.

    UCI fosters the rigorous expansion and creation of knowledge through quality education. Graduates are equipped with the tools of analysis, expression and cultural understanding necessary for leadership in today’s world.

    Consistently ranked among the nation’s best universities – public and private – UCI excels in a broad range of fields, garnering national recognition for many schools, departments and programs. Times Higher Education ranked UCI No. 1 among universities in the U.S. under 50 years old. Three UCI researchers have won Nobel Prizes – two in chemistry and one in physics.

    The university is noted for its top-rated research and graduate programs, extensive commitment to undergraduate education, and growing number of professional schools and programs of academic and social significance. Recent additions include highly successful programs in public health, pharmaceutical sciences and nursing science; an expanding education school; and a law school already ranked among the nation’s top 10 for its scholarly impact.

     
  • richardmitnick 7:08 pm on August 15, 2016
    Tags: Physics, Programmable memory cells

    From SURF: “Testing the sensitivity of memory cells” 

    Sanford Underground Research Facility

    August 15, 2016
    Constance Walter

    Particle physics researchers go deep underground to escape the constant bombardment of cosmic radiation that creates background “noise” in their sensitive experiments. And what’s good for particle physics, it turns out, is also good for programmable memory cells.


    Xilinx is one of the world’s leading providers of semiconductor devices called field programmable gate arrays (FPGA). Based around a matrix of configurable logic blocks (CLBs) connected through programmable interconnects, FPGAs are designed and built using tens of millions of SRAM (static random access memory) cells, which can be sensitive to single event upsets (SEU). So, for the past year, Xilinx has been running tests on its FPGAs on the 4850 Level of Sanford Lab.

    SEUs occur when the logic state of an SRAM memory cell is changed by ionizing radiation. “When a neutron, proton, or alpha particle hits the silicon in the semiconductor, it leaves a trail of charged particles, which in some cases can cause a transistor of a SRAM cell to change its logic state,” said John Latimer, senior director with Xilinx’s Customer Quality Engineering division. “When that happens, it can potentially affect the design programmed into the FPGA, in rare cases causing it to function improperly.”

    FPGAs are used in such applications as 3-D video recording and movie projection, driver assistance, datacenters, cellular communication and networking, smart electricity grid management, avionics instrumentation, satellite instruments, and space vehicles like the Mars Rover. It’s important that they function correctly.

    What Xilinx is trying to understand is just what causes the single event upsets. “We need to separate out the cosmic radiation effects from the effects of the alpha particle-producing package material of the FPGA,” Latimer said. “Placing the arrays deep underground allows us to block the cosmic radiation and only measure SEU events that are caused by the package material itself.”

    Latimer said that so far, “the results have been excellent with the alpha upset rates right in line with our predictions. We are very happy to be able to use the Sanford Lab facilities and look forward to working with the fine staff at Sanford for many years to come.”

    The company plans to install and begin testing additional arrays over the next couple of months.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About us.
    The Sanford Underground Research Facility in Lead, South Dakota, advances our understanding of the universe by providing laboratory space deep underground, where sensitive physics experiments can be shielded from cosmic radiation. Researchers at the Sanford Lab explore some of the most challenging questions facing 21st century physics, such as the origin of matter, the nature of dark matter and the properties of neutrinos. The facility also hosts experiments in other disciplines—including geology, biology and engineering.

    The Sanford Lab is located at the former Homestake gold mine, which was a physics landmark long before being converted into a dedicated science facility. Nuclear chemist Ray Davis earned a share of the Nobel Prize for Physics in 2002 for a solar neutrino experiment he installed 4,850 feet underground in the mine.

    Homestake closed in 2003, but the company donated the property to South Dakota in 2006 for use as an underground laboratory. That same year, philanthropist T. Denny Sanford donated $70 million to the project. The South Dakota Legislature also created the South Dakota Science and Technology Authority to operate the lab. The state Legislature has committed more than $40 million in state funds to the project, and South Dakota also obtained a $10 million Community Development Block Grant to help rehabilitate the facility.

    In 2007, after the National Science Foundation named Homestake as the preferred site for a proposed national Deep Underground Science and Engineering Laboratory (DUSEL), the South Dakota Science and Technology Authority (SDSTA) began reopening the former gold mine.

    In December 2010, the National Science Board decided not to fund further design of DUSEL. However, in 2011 the Department of Energy, through the Lawrence Berkeley National Laboratory, agreed to support ongoing science operations at Sanford Lab, while investigating how to use the underground research facility for other longer-term experiments. The SDSTA, which owns Sanford Lab, continues to operate the facility under that agreement with Berkeley Lab.

    The first two major physics experiments at the Sanford Lab are 4,850 feet underground in an area called the Davis Campus, named for the late Ray Davis. The Large Underground Xenon (LUX) experiment is housed in the same cavern excavated for Ray Davis’s experiment in the 1960s.
    LUX/Dark matter experiment at SURF

    In October 2013, after an initial run of 80 days, LUX was determined to be the most sensitive detector yet to search for dark matter—a mysterious, yet-to-be-detected substance thought to be the most prevalent matter in the universe. The Majorana Demonstrator experiment, also on the 4850 Level, is searching for a rare phenomenon called “neutrinoless double-beta decay” that could reveal whether subatomic particles called neutrinos can be their own antiparticle. Detection of neutrinoless double-beta decay could help determine why matter prevailed over antimatter. The Majorana Demonstrator experiment is adjacent to the original Davis cavern.

    Another major experiment, the Long Baseline Neutrino Experiment (LBNE)—a collaboration with Fermi National Accelerator Laboratory (Fermilab) and Sanford Lab—is in the preliminary design stages. The project got a major boost last year when Congress approved and the president signed an Omnibus Appropriations bill that will fund LBNE operations through FY 2014. Called the “next frontier of particle physics,” LBNE will follow neutrinos as they travel 800 miles through the earth, from Fermilab in Batavia, Ill., to Sanford Lab.


     
  • richardmitnick 4:05 pm on August 14, 2016
    Tags: Jets, Physics

    From particlebites: “Jets: More than Riff, Tony, and a rumble” 

    particlebites

    July 26, 2016 [Just appeared today in social media.]
    Reggie Bain

    Ubiquitous in the LHC’s ultra-high energy collisions are collimated sprays of particles called jets. The study of jet physics is a rapidly growing field where experimentalists and theorists work together to unravel the complex geometry of the final state particles at LHC experiments. If you’re totally new to the idea of jets…this bite from July 18th, 2016 by Julia Gonski is a nice experimental introduction to the importance of jets. In this bite, we’ll look at the basic ideas of jet physics from a more theoretical perspective. Let’s address a few basic questions:

    1. What is a jet? Jets are highly collimated collections of particles that are frequently observed in detectors. In visualizations of collisions in the ATLAS detector, one can often identify jets by eye.

    A nicely colored visualization of a multi-jet event in the ATLAS detector. Reason #172 that I’m not an experimentalist…actually sifting out useful information from the detector (or even making a graphic like this) is insanely hard.

    Jets are formed in the final state of a collision when a particle showers off radiation in such a way as to form a focused cone of particles. The most commonly studied jets are formed by quarks and gluons that fragment into hadrons like pions, kaons, and sometimes more exotic particles like the J/Ψ, Υ, χc, and many others. This process is often referred to as hadronization.

    2. Why do jets exist? Jets are a fundamental prediction of Quantum Field Theories like Quantum Chromodynamics (QCD). One common process studied in field theory textbooks is electron–positron annihilation into a quark–antiquark pair, e+e− → qq̄. In order to calculate the cross-section of this process, it turns out that one has to consider the possibility that additional gluons are produced along with the qq̄. Since no detector has infinite resolution, it’s always possible that there are gluons that go unobserved by your detector. This could be because they are incredibly soft (low energy) or because they travel almost exactly collinear to the q or q̄ itself. In this region of momenta, the cross-section gets very large and the process favors the creation of this extra radiation. Since these gluons carry color/anti-color, they begin to hadronize and decay so as to become stable, colorless states. When the q and q̄ have high momenta, the zoo of particles formed from the hadronization all have momenta clustered around the direction of the original q and q̄ and form a cone shape in the detector…thus a jet is born! The details of exactly how hadronization works are where theory can get a little hazy. At the energy and distance scales where quarks/gluons start to hadronize, perturbation theory breaks down, making many of our usual calculational tools useless. This, of course, makes the realm of hadronization—often referred to as parton fragmentation in the literature—a hot topic in QCD research.

    3. How do we measure/study jets? Now comes the tricky part. As experimentalists will tell you, actually measuring jets can be a messy business. By taking the signatures of the final state particles in an event (i.e. a collision), one can reconstruct a jet using a jet algorithm. One of the first concepts of such jet definitions was introduced by George Sterman and Steven Weinberg in 1977. There they defined a jet using two parameters θ, E. These restricted the angle and energy of particles that are in or out of a jet. Today, we have a variety of jet algorithms that fall into two categories:

    Cone Algorithms — These algorithms identify stable cones of a given angular size. These cones are defined in such a way that if one or two nearby particles are added to or removed from the jet cone, that it won’t drastically change the cone location and energy
    Recombination Algorithms — These look pairwise at the 4-momenta of all particles in an event and combine them, according to a certain distance metric (there’s a different one for each algorithm), in such a way as to be left with distinct, well-separated jets.

    Figure 2: From Cacciari and Salam’s original paper on the “Anti-kT” jet algorithm (See arXiv:0802.1189). The picture shows the application of 4 different jet algorithms: the kT, Cambridge/Aachen, Seedless-Infrared-Safe Cone, and anti-kT algorithms to a single set of final state particles in an event. You can see how each algorithm reconstructs a slightly different jet structure. These are among the most commonly used clustering algorithms on the market (the anti-kT being, at least in my experience, the most popular)
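    To make the recombination idea concrete, here is a toy implementation of the anti-kT distance measure of Cacciari, Salam and Soyez (arXiv:0802.1189), written from the published definition as a teaching sketch rather than production code (real analyses use FastJet); the sample event at the end is made up for illustration.

    import math

    def pt(p):  return math.hypot(p[1], p[2])                 # p = (E, px, py, pz)
    def phi(p): return math.atan2(p[2], p[1])
    def rap(p): return 0.5 * math.log((p[0] + p[3]) / (p[0] - p[3]))

    def delta_r2(a, b):
        dphi = abs(phi(a) - phi(b))
        if dphi > math.pi:
            dphi = 2 * math.pi - dphi
        return (rap(a) - rap(b)) ** 2 + dphi ** 2

    def antikt_cluster(particles, R=0.4):
        """Inclusive anti-kT clustering; returns the list of jet four-momenta."""
        objs, jets = [list(p) for p in particles], []
        while objs:
            best, merge = None, None
            for i, a in enumerate(objs):
                dib = 1.0 / pt(a) ** 2                        # beam distance d_iB
                if best is None or dib < best:
                    best, merge = dib, (i, None)
                for j in range(i + 1, len(objs)):             # pair distance d_ij
                    b = objs[j]
                    dij = min(1.0 / pt(a) ** 2, 1.0 / pt(b) ** 2) * delta_r2(a, b) / R ** 2
                    if dij < best:
                        best, merge = dij, (i, j)
            i, j = merge
            if j is None:                                     # closest to the beam: call it a jet
                jets.append(objs.pop(i))
            else:                                             # merge i and j (add four-momenta)
                objs[i] = [x + y for x, y in zip(objs[i], objs[j])]
                objs.pop(j)
        return jets

    # Hypothetical final state: two hard cores, each with a nearby softer particle,
    # plus one isolated soft particle.
    event = [(100.0, 60.0, 0.0, 79.9), (20.0, 11.9, 1.0, 16.0),
             (90.0, -55.0, 5.0, -71.0), (10.1, -6.0, 0.5, -8.0),
             (3.0, 0.5, 2.9, 0.2)]
    for jet in antikt_cluster(event, R=0.4):
        print(f"jet: pT = {pt(jet):6.1f}, y = {rap(jet):+.2f}")

    Because the pair distance is weighted by the inverse squared transverse momentum of the harder particle, soft particles cluster onto nearby hard cores before clustering among themselves, which is what gives anti-kT jets their regular, cone-like shapes.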

    4. Why are jets important? On the frontier of high energy particle physics, CERN leads the world’s charge in the search for new physics. From deepening our understanding of the Higgs to observing never before seen particles, projects like ATLAS, CMS, and LHCb promise to uncover interesting physics for years to come. As it turns out, a large amount of Standard Model background to these new physics discoveries comes in the form of jets. Understanding the origin and workings of these jets can thus help us in the search for physics beyond the Standard Model.

    Figure 3: An illustration of an interesting type of jet substructure observable called “N-subjettiness” from the original paper by Jesse Thaler and Ken van Tilburg (see arXiv:1011.2268). N-subjettiness aims to study how momenta within a jet are distributed by dividing them up into n sub-jets. The diagram on the left shows an example of 2-subjettiness where a jet contains two sub-jets. The diagram on the right shows a jet with 0 sub-jets.

    Additionally, there are a number of interesting questions that remain about the Standard Model itself. From studying heavy hadron production and decay in pp and heavy-ion collisions to providing precision measurements of the strong coupling, jet physics has a wide range of applicability and relevance to Standard Model problems. In recent years, the physics of jet substructure, which studies the distributions of particle momenta within a jet, has also seen increased interest. By studying the geometry of jets, a number of clever observables have been developed that can help us understand what particles they come from and how they are formed. Jet substructure studies will be the subject of many future bites!

    Going forward…With any luck, this should serve as a brief outline to the uninitiated on the basics of jet physics. In a world increasingly filled with bigger, faster, and stronger colliders, jets will continue to play a major role in particle phenomenology. In upcoming bites, I’ll discuss the wealth of new and exciting results coming from jet physics research. We’ll examine questions like:

    How do theoretical physicists tackle problems in jet physics?
    How does the process of hadronization/fragmentation of quarks and gluons really work?
    Can jets be used to answer long outstanding problems in the Standard Model?

    I’ll also bite about how physicists use theoretical smart bombs called “effective field theories” to approach these often nasty theoretical calculations. But more on that later…

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    What is ParticleBites?

    ParticleBites is an online particle physics journal club written by graduate students and postdocs. Each post presents an interesting paper in a brief format that is accessible to undergraduate students in the physical sciences who are interested in active research.

    The papers are accessible on the arXiv preprint server. Most of our posts are based on papers from hep-ph (high energy phenomenology) and hep-ex (high energy experiment).

    Why read ParticleBites?

    Reading a technical paper from an unfamiliar subfield is intimidating. It may not be obvious how the techniques used by the researchers really work or what role the new research plays in answering the bigger questions motivating that field, not to mention the obscure jargon! For most people, it takes years for scientific papers to become meaningful.

    Our goal is to solve this problem, one paper at a time. With each brief ParticleBite, you should not only learn about one interesting piece of current work, but also get a peek at the broader picture of research in particle physics.

    Who writes ParticleBites?

    ParticleBites is written and edited by graduate students and postdocs working in high energy physics. Feel free to contact us if you’re interested in applying to write for ParticleBites.

    ParticleBites was founded in 2013 by Flip Tanedo following the Communicating Science (ComSciCon) 2013 workshop.

    Flip Tanedo, UCI Chancellor’s ADVANCE postdoctoral scholar in theoretical physics. As of July 2016, I will be an assistant professor of physics at the University of California, Riverside.

    It is now organized and directed by Flip and Julia Gonski, with ongoing guidance from Nathan Sanders.

     
  • richardmitnick 12:00 pm on August 13, 2016
    Tags: Light sources, Physics

    From CERN Courier: “MAX IV paves the way for ultimate X-ray microscope” 

    CERN Courier

    Sweden’s MAX IV facility is the first storage ring to employ a multi-bend achromat. Mikael Eriksson and Dieter Einfeld describe how this will produce smaller and more stable X-ray beams, taking synchrotron science closer to the X-ray diffraction limit.

    Aug 12, 2016

    Mikael Eriksson, Maxlab, Lund, Sweden,
    Dieter Einfeld, ESRF, Grenoble, France.

    http://www.lightsources.org/facility/maxiv

    Since the discovery of X-rays by Wilhelm Röntgen more than a century ago, researchers have striven to produce smaller and more intense X-ray beams. With a wavelength similar to interatomic spacings, X-rays have proved to be an invaluable tool for probing the microstructure of materials. But a higher spectral power density (or brilliance) enables a deeper study of the structural, physical and chemical properties of materials, in addition to studies of their dynamics and atomic composition.

    For the first few decades following Röntgen’s discovery, the brilliance of X-rays remained fairly constant due to technical limitations of X-ray tubes. Significant improvements came with rotating-anode sources, in which the heat generated by electrons striking an anode could be distributed over a larger area. But it was the advent of particle accelerators in the mid-1900s that gave birth to modern X-ray science. A relativistic electron beam traversing a circular storage ring emits X-rays in a tangential direction. First observed in 1947 by researchers at General Electric in the US, such synchrotron radiation has taken X-ray science into new territory by providing smaller and more intense beams.

    Generation game

    First-generation synchrotron X-ray sources were accelerators built for high-energy physics experiments, which were used “parasitically” by the nascent synchrotron X-ray community. As this community started to grow, stimulated by the increased flux and brilliance at storage rings, the need for dedicated X-ray sources with different electron-beam characteristics resulted in several second-generation X-ray sources. As with previous machines, however, the source of the X-rays was the bending magnets of the storage ring.

    The advent of special “insertion devices” led to present-day third-generation storage rings – the first being the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, and the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory in Berkeley, California, which began operation in the early 1990s.

    ESRF. Grenoble, France

    LBL/ALS

    Instead of using only the bending magnets as X-ray emitters, third-generation storage rings have straight sections that allow periodic magnet structures called undulators and wigglers to be introduced. These devices consist of rows of short magnets with alternating field directions so that the net beam deflection cancels out. Undulators can house 100 or so permanent short magnets, each emitting X-rays in the same direction, which boosts the intensity of the emitted X-rays by two orders of magnitude. Furthermore, interference effects between the emitting magnets can concentrate X-rays of a given energy by another two orders of magnitude.
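    The interference effect mentioned above is governed by the standard on-axis undulator equation (a textbook relation, not one quoted in this article): constructive interference between the emissions of successive magnet periods selects the wavelengths

    % lambda_u = undulator period, gamma = electron Lorentz factor,
    % K = dimensionless deflection parameter of the device,
    % theta = observation angle, n = harmonic number.
    \lambda_{n} \;=\; \frac{\lambda_{u}}{2 n \gamma^{2}}\left(1 + \frac{K^{2}}{2} + \gamma^{2}\theta^{2}\right)

    so a given undulator can be tuned in photon energy by changing K (in practice, the magnet gap) or the electron energy.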

    Third-generation light sources have been a major success story, thanks in part to the development of excellent modelling tools that allow accelerator physicists to produce precise lattice designs. Today, there are around 50 third-generation light sources worldwide, with a total number of users in the region of 50,000. Each offers a number of X-ray beamlines (up to 40 at the largest facilities) that fan out from the storage ring: X-rays pass through a series of focusing and other elements before being focused on a sample positioned at the end station, with the longest beamlines (measuring 150 m or more) at the largest light sources able to generate X-ray spot sizes a few tens of nanometres in diameter. Facilities typically operate around the clock, during which teams of users spend anywhere between a few hours to a few days undertaking experimental shifts, before returning to their home institutes with the data.

    Although the corresponding storage-ring technology for third-generation light sources has been regarded as mature, a revolutionary new lattice design has led to another step up in brightness. The MAX IV facility at Maxlab in Lund, Sweden, which was inaugurated in June, is the first such facility to demonstrate the new lattice. Six years in construction, the facility has demanded numerous cutting-edge technologies – including vacuum systems developed in conjunction with CERN – to become the most brilliant source of X-rays in the world.

    Iron-block magnets

    Initial ideas for the MAX IV project started at the end of the 20th century. Although the flagship of the Maxlab laboratory, the low-budget MAX II storage ring, was one of the first third-generation synchrotron radiation sources, it was soon outcompeted by several larger and more powerful sources entering operation. Something had to be done to maintain Maxlab’s accelerator programme.

    The dominant magnetic lattice at third-generation light sources consists of double-bend achromats (DBAs), which have been around since the 1970s.

    DBAs
    4
    MAX IV undulator

    A typical storage ring contains 10–30 achromats, each consisting of two dipole magnets and a number of magnet lenses: quadrupoles for focusing and sextupoles for chromaticity correction (at MAX IV we also added octupoles to compensate for amplitude-dependent tune shifts). The achromats are flanked by straight sections housing the insertion devices, and the dimensions of the electron beam in these sections are minimised by adjusting the dispersion of the beam (which describes the dependence of an electron’s transverse position on its energy) to zero. Other storage-ring improvements, for example faster correction of the beam orbit, have also helped to boost the brightness of modern synchrotrons. The key quantity underpinning these advances is the electron-beam emittance, defined as the product of the electron-beam size and its divergence.

    Despite such improvements, however, today’s third-generation storage rings have typical electron-beam emittances of 2–5 nm rad, several hundred times larger than the diffraction limit of the X-ray beam itself. This is the point at which the size and spread of the electron beam approach the diffraction properties of the X-rays, analogous to the Abbe diffraction limit for visible light. Models of machine lattices with even smaller electron-beam emittances predict instabilities and/or short beam lifetimes that make the goal of reaching the diffraction limit at hard X-ray energies seem very distant.
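
    As a rough numerical check on these figures, the diffraction-limited emittance at a wavelength λ is approximately λ/4π. The sketch below compares that value at 1 Å with the 2–5 nm rad emittances quoted above:

    import math

    lam = 1e-10                    # hard X-ray wavelength: 1 Angstrom
    eps_dl = lam / (4 * math.pi)   # diffraction-limited emittance [m rad]
    print(f"diffraction limit at 1 Angstrom: {eps_dl * 1e12:.1f} pm rad")

    for eps_ring in (2e-9, 5e-9):  # typical third-generation emittances
        print(f"{eps_ring * 1e9:.0f} nm rad is {eps_ring / eps_dl:.0f}x above the limit")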

    It had long been known that increasing the number of bending magnets decreases the emittance (and therefore increases the brilliance) of a storage ring. In the early 1990s, one of the present authors (DE) and others recognised that this could be exploited by incorporating more bends into each achromat. Such a multi-bend achromat (MBA) guides electrons around corners more smoothly, reducing the growth of the horizontal emittance. A few synchrotrons already employ triple-bend achromats, and the design has also been used in several particle-physics machines, including PETRA at DESY, PEP at SLAC and LEP at CERN, proving that a storage ring with an energy of a few GeV can reach a very low emittance.

    DESY Petra III interior
    DESY Petra III

    4
    PEP II at SLAC. http://www.sciencephoto.com/media/613/view

    5
    CERN LEP

    To avoid prohibitively large machines, however, the MBA demands much smaller magnets than are currently employed at third-generation synchrotrons.

    In 1995, our calculations showed that a seven-bend achromat could yield an emittance of 0.4 nm rad for a 400 m-circumference machine – 10 times lower than the ESRF’s value at the time. The accelerator community also considered a six-bend achromat for the Swiss Light Source and a five-bend achromat for a Canadian light source, but the small number of achromats in these lattices meant that it was difficult to make significant progress towards a diffraction-limited source. One of us (ME) took the seven-bend achromat idea and turned it into a real engineering proposal for the design of MAX IV. But the design then went through a number of evolutions. In 2002, the first layout of a potential new source was presented: a 277 m-circumference, seven-bend lattice that would reach an emittance of 1 nm rad for a 3 GeV electron beam. By 2008, we had settled on an improved design: a 520 m-circumference, seven-bend lattice with an emittance of 0.31 nm rad, which will be reduced by a factor of two once the storage ring is fully equipped with undulators. This is more or less the design of the final MAX IV storage ring.
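
    The gain from adding bends can be anticipated from a simple scaling: for a fixed number of achromats, the horizontal emittance falls roughly with the cube of the bending angle per dipole, i.e. as 1/N³ for N bends per achromat. The sketch below applies this rough rule; lattice-dependent prefactors are ignored, so the numbers are only indicative:

    def relative_emittance(n_bends, n_ref=2):
        """Emittance relative to a double-bend achromat with the same number of achromats."""
        return (n_ref / n_bends) ** 3

    for n in (2, 3, 5, 7):
        print(f"{n}-bend achromat: ~{relative_emittance(n):.3f} x the DBA emittance")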

    In total, the team at Maxlab spent almost a decade finding ways to keep the lattice circumference at a value that was financially realistic, and even constructed a 36 m-circumference storage ring called MAX III to develop the necessary compact magnet technology. There were dozens of problems to overcome. Because the electron density was so high, for example, we had to elongate the electron bunches by a factor of four using a second radio-frequency (RF) cavity system.

    Block concept

    MAX IV stands out in that it contains two storage rings, operated at energies of 1.5 and 3 GeV. Because the rings have different energies and share an injector and other infrastructure, high-quality undulator radiation can be produced over a wide spectral range at marginal additional cost. The storage rings are fed electrons by a 3 GeV S-band linac made up of 18 accelerator units, each comprising one SLAC Energy Doubler RF station. To optimise the economy over a potential three-decade-long operating lifetime, and also to favour redundancy, a low accelerating gradient is used.

    The 1.5 GeV ring at MAX IV consists of 12 DBAs, each comprising one solid-steel block that houses all the DBA magnets (bends and lenses). This magnet-block concept, which is also used in the 3 GeV ring, has several advantages. First, it enables the magnets to be machined with high precision and aligned to a tolerance of less than 10 μm without having to invest in alignment laboratories. Second, blocks containing a handful of individual magnets arrive wired and plumbed directly from the supplier, and no special girders are needed because the magnet blocks are rigidly self-supporting. Last, the magnet-block concept is a low-cost solution.

    We also needed to build a different vacuum system, because the small vacuum tube dimensions (2 cm in diameter) yield a very poor vacuum conductance. Rather than try to implement closely spaced pumps in such a compact geometry, our solution was to build 100% NEG-coated vacuum systems in the achromats. NEG (non-evaporable getter) technology, which was pioneered at CERN and other laboratories, uses metallic surface sorption to achieve extreme vacuum conditions. The construction of the MAX IV vacuum system raised some interesting challenges, but fortunately CERN had already developed the NEG coating technology to perfection. We therefore entered a collaboration that saw CERN coat the most intricate parts of the system, and licences were granted to companies who manufactured the bulk of the vacuum system. Later, vacuum specialists from the Budker Institute in Novosibirsk, Russia, mounted the linac and 3 GeV-ring vacuum systems.

    Due to the small beam size and high beam current, intra-beam scattering and “Touschek” lifetime effects must also be addressed. Both arise from the high electron density in small-emittance, high-current rings, in which electrons within a bunch scatter off one another. Large energy changes kick some electrons outside the energy acceptance of the ring, while smaller energy deviations cause the beam size to grow too much. For these reasons, a low-frequency (100 MHz) RF system with bunch-elongating harmonic cavities was introduced to decrease the electron density and stabilise the beam. This RF system also allows powerful commercial solid-state FM transmitters to be used as RF sources.
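
    A back-of-the-envelope sketch shows why stretching the bunches helps: for a Gaussian bunch of fixed charge and fixed transverse size, the peak electron density scales as 1/σ_z, and the Touschek loss rate scales roughly with that density. The numbers below are illustrative, not MAX IV design values:

    from math import pi

    def peak_density(charge, sigma_x, sigma_y, sigma_z):
        # peak density of a 3-D Gaussian bunch
        return charge / ((2 * pi) ** 1.5 * sigma_x * sigma_y * sigma_z)

    q, sx, sy, sz = 1.0, 50e-6, 5e-6, 0.010   # arbitrary charge, sizes in metres
    reference = peak_density(q, sx, sy, sz)
    for factor in (1, 4):
        ratio = peak_density(q, sx, sy, sz * factor) / reference
        print(f"bunch elongated x{factor}: relative peak density {ratio:.2f}")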

    When we first presented the plans for the radical MAX IV storage ring in around 2005, people working at other light sources thought we were crazy. The new lattice promised a factor of 10–100 increase in brightness over existing facilities at the time, offering users unprecedented spatial resolutions and taking storage rings within reach of the diffraction limit. Construction of MAX IV began in 2010 and commissioning began in August 2014, with regular user operation scheduled for early 2017.

    On 25 August 2015, an amazed accelerator staff sat watching the beam-position-monitor read-outs of MAX IV’s 3 GeV ring. With just the calculated magnet settings plugged in, and with the precisely CNC-machined magnet blocks each containing a handful of integrated magnets, the beam circulated turn after turn with the expected behaviour. A number of problems nevertheless remained to be solved for the 3 GeV ring. These included dynamic issues – such as betatron tunes, dispersion, chromaticity and emittance – in addition to more mundane technical problems such as sparking RF cavities and faulty power supplies.

    As of MAX IV’s inauguration on 21 June, the injector linac and the 3 GeV ring are operational, with the linac also delivering X-rays to the Short Pulse Facility. A circulating current of 180 mA can be stored in the 3 GeV ring with a lifetime of around 10 h, and we have verified the design emittance with a value in the region of 300 pm rad. Beamline commissioning is also well under way, with some 14 beamlines under construction and a goal to increase that number to more than 20.

    Sweden has a well-established synchrotron-radiation user community, although around half of MAX IV users will come from other countries. A variety of disciplines and techniques are represented nationally, which must be mirrored by MAX IV’s beamline portfolio. Detailed discussions between universities, industry and the MAX IV laboratory therefore take place prior to any major beamline decisions. The high brilliance of the MAX IV 3 GeV ring and the temporal characteristics of the Short Pulse Facility are a prerequisite for the most advanced beamlines, with imaging being one promising application.

    Towards the diffraction limit

    MAX IV could not have reached its goals without a dedicated staff and help from other institutes. Just as CERN helped us with the intricate NEG-coated vacuum system and the Budker Institute with the installation of the linac and ring vacuum systems, the brand-new Solaris light source in Krakow, Poland (an exact copy of the MAX IV 1.5 GeV ring) has helped with operations, and many other labs have offered advice. The MAX IV facility has also been singled out for its environmental credentials: its energy consumption is reduced by high-efficiency RF amplifiers and small magnets with low power consumption, and the water-cooling system even transfers waste heat to the nearby city of Lund to warm houses.

    The MAX IV ring is the first of the MBA kind, but several MBA rings are now under construction at other facilities, including the ESRF, Sirius in Brazil and the Advanced Photon Source (APS) at Argonne National Laboratory [ANL] in the US.

    ANL APS
    ANL/APS

    The ESRF is developing a hybrid MBA lattice that would enter operation in 2019 and achieve a horizontal emittance of 0.15 nm rad. The APS has decided to pursue a similar design that could enter operation by the end of the decade and, being larger than the ESRF, can strive for an even lower emittance of around 0.07 nm rad. Meanwhile, the ALS in California is moving towards a conceptual design report, and SPring-8 in Japan is pursuing a hybrid MBA that will enter operation on a similar timescale.

    Indeed, a total of some 10 rings are currently under construction or planned. We can therefore look forward to a new generation of synchrotron storage rings delivering X-rays with very high transverse coherence. We will then have witnessed an increase of 13–14 orders of magnitude in the brightness of synchrotron X-ray sources over a period of seven decades, and put the diffraction limit at high X-ray energies firmly within reach.

    One proposal would see such a diffraction-limited X-ray source installed in the 6.3 km-circumference tunnel that once housed the Tevatron collider at Fermilab, near Chicago. Perhaps a more plausible scenario is PETRA IV at DESY in Hamburg, Germany. The existing PETRA III ring is already one of the brightest in the world, but this upgrade (if it is funded) could result in an emittance of 0.007 nm rad (7 pm rad) or even lower. Storage rings will then have reached the diffraction limit at an X-ray wavelength of 1 Å. This is the Holy Grail of X-ray science, providing the highest possible resolution and signal-to-noise ratio, in addition to the lowest radiation damage and the fastest data collection. Such an X-ray microscope will allow the study of ultrafast chemical reactions and other processes, taking us to the next chapter in synchrotron X-ray science.

    Further reading

    E Al-Dmour et al. 2014 J. Synchrotron Rad. 21 878.
    D Einfeld et al. 1995 Proceedings: PAC p177.
    M Eriksson et al. 2008 NIM-A 587 221.
    M Eriksson et al. 2016 IPAC 2016, MOYAA01, Busan, Korea.
    MAX IV Detailed Design Report http://www.maxlab.lu.se/maxlab/max4/index.html.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 4:00 pm on August 12, 2016 Permalink | Reply
    Tags: , , Mechanical chains made of soft matter that can transmit signals across long distances, Physics   

    From Caltech: “The Utility of Instability” 

    Caltech Logo

    Caltech

    08/08/2016
    Robert Perkins
    (626) 395-1862
    rperkins@caltech.edu

    1
    A 3-D–printed logic gate with bistable elements linked together by springs to transmit signals. Credit: Dennis Kochmann/Caltech

    A team of researchers from Caltech and Harvard has designed and created mechanical chains made of soft matter that can transmit signals across long distances. Because they are flexible, the circuits could be used in machines such as soft robots or lightweight aircraft constructed from pliable, nonmetallic materials.

    Unlike hard materials, which transmit signals readily, soft materials tend to absorb energy as a signal passes through them. An analogy is hitting a firm punching bag versus a soft one: with the firm bag, the energy of your punch moves through the bag and sends it swinging, but the soft bag absorbs the blow like a lump of dough and therefore swings far less.

    To overcome that response, Caltech’s Dennis Kochmann, Chiara Daraio, and their colleagues created an unstable, “nonlinear” system. Their findings have appeared in three papers published over the past few months.

    “Engineers tend to shy away from instability. Instead, we take advantage of it,” says Kochmann, assistant professor of aerospace in the Division of Engineering and Applied Sciences, and one of the lead researchers on the project.

    Stable, or “linear,” systems are attractive to engineers because they are easy to model and predict. Take, for example, a spring: If you push on a spring, it will respond by pushing back with a force that is linearly proportional to how much force you apply. The response of a nonlinear system to that same push, by comparison, is not proportional, and can include sudden changes in the direction or amplitude of the responsive force.

    The nonlinear systems that Kochmann and his colleagues designed rely on bistable elements, or elements that can be stable in two distinct states. The bistable elements that the team developed consist of arches of an elastic material, each a few millimeters in size. The elements can be in either a convex or a concave position—and are stable in either configuration. However, if you push on the element in its convex position, it responds by pushing back against the direction of force until it snaps into a concave position, accompanied by a sudden release of energy in the opposite direction.
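
    A generic way to picture such a bistable element (an illustrative toy model, not the specific arch geometry used in the experiments) is a particle in a double-well potential: the two wells are the convex and concave states, and the restoring force first resists a push and then reverses sign, producing the snap-through:

    import numpy as np

    k, a = 1.0, 1.0                       # illustrative stiffness and well spacing

    def restoring_force(x):
        # force from the double-well potential V(x) = k/4 * (x**2 - a**2)**2
        return -k * x * (x**2 - a**2)     # zero at x = -a, 0, +a; only +/-a are stable

    for xi in np.linspace(-1.0, 1.0, 9):
        F = restoring_force(xi)
        regime = "resists the push" if F < 0 else ("snaps through" if F > 0 else "equilibrium")
        print(f"x = {xi:+.2f}   F = {F:+.3f}   {regime}")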

    “It’s an elastic response, and then a snap-through,” explains Daraio, professor of aeronautics and applied physics.

    Collaborating with Katia Bertoldi, Jennifer Lewis, and Jordan Raney of Harvard University, Kochmann, Daraio, and Caltech graduate student Neel Nadkarni designed chains of the bistable elements, connected to one another by springs. When one link “pops” from the concave to the convex state, its spring tugs at the link that is next downstream in the chain, popping it to a convex position as well. The signal travels unidirectionally down the chain. The energy released by the popping balances out any energy absorbed by the soft material, allowing the process to continue down the chain across long distances and at constant speed.
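
    The following minimal simulation sketches this mechanism with a generic model (not the authors’ published model): each element sits in a tilted double-well potential, the tilt standing in for the stored elastic energy released on snapping; linear springs couple neighbours; and viscous damping stands in for the losses in the soft material. Parameter values are purely illustrative; with them, holding the first element in its snapped state launches a transition wave that should travel down the whole chain:

    import numpy as np

    N = 40               # number of bistable elements
    m = 1.0              # mass per element
    k_onsite = 1.0       # scale of the double-well force
    a = 1.0              # well separation
    bias = 0.25          # tilt of the double well: energy released on snapping
    k_spring = 0.5       # coupling spring between neighbours
    damping = 0.1        # viscous losses in the soft material
    dt, steps = 0.01, 40000

    def onsite_force(x):
        # force from the tilted potential V(x) = k/4*(x^2 - a^2)^2 - bias*x
        return -k_onsite * x * (x**2 - a**2) + bias

    x = -a * np.ones(N)  # every element starts in the higher-energy (metastable) well
    v = np.zeros(N)

    for _ in range(steps):
        stretch = x[1:] - x[:-1]
        spring = np.zeros(N)
        spring[:-1] += k_spring * stretch
        spring[1:] -= k_spring * stretch
        acc = (onsite_force(x) + spring - damping * v) / m
        v += acc * dt
        x += v * dt
        x[0], v[0] = 1.2 * a, 0.0   # hold the first element snapped to launch the wave

    print("elements snapped into the low-energy state:", int(np.sum(x > 0)))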

    A proof-of-concept version of the design constructed from 3-D printed elements is described in a paper published August 8, 2016 in the Proceedings of the National Academy of Sciences. This paper was the third in the series of publications outlining the new concept for transmitting signals. It outlined how the design can be used to build mechanical AND and OR logic gates such as those used in computer processors. Logic gates are the building blocks of circuits, allowing signals to be processed.

    “These systems could be used as actuators to control robotic limbs, while passively performing simple logic decisions,” Daraio says. Actuators use the transfer of energy to perform mechanical work, and in this case, the transfer of energy would occur via a mechanical rather than an electrical system.

    The first paper in the series was published in March in the journal Physical Review B, and it described Kochmann’s theoretical, mathematical framework for the system. The second paper was published in Physical Review Letters in June, and it describes Daraio’s first experimental model for the system.

    While springs can be employed between the bistable elements, the team also demonstrated in the Physical Review Letters paper how magnets could be used to connect the elements—potentially allowing the chain to be reset to its original position with a reversal of polarity.

    “Though there are many applications, the fundamental principles that we explore are most exciting to me,” Kochmann says. “These nonlinear systems show very similar behavior to materials at the atomic scale but these are difficult to access experimentally or computationally. Now we have built a simple macroscale analogue that mimics how they behave.”

    The PNAS paper is titled “Stable propagation of mechanical signals in soft media using stored elastic energy”. The authors are Nadkarni, Daraio and Kochmann of Caltech and Jordan Raney, Jennifer Lewis and Katia Bertoldi of Harvard University. The work was funded by the National Science Foundation.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”
    Caltech buildings

     
  • richardmitnick 10:04 am on August 12, 2016 Permalink | Reply
    Tags: , Physicists have calculated a whole new way to generate super-strong magnetic fields, Physics,   

    From Science Alert: “Physicists have calculated a whole new way to generate super-strong magnetic fields” 

    ScienceAlert

    Science Alert

    11 AUG 2016
    FIONA MACDONALD

    1
    designer_an/Shutterstock.com

    Researchers have proposed a new way to use lasers to generate magnetic fields that are at least one order of magnitude stronger than anything we can currently produce on Earth.
    In nature, such super-strong fields exist only in space, and they could be the key to harnessing the clean power of nuclear fusion and to modelling astrophysical processes in the lab.

    It’s exciting stuff, but so far, physicists have only used theoretical calculations to show that the technique could work, and it hasn’t been experimentally verified as yet for a good reason – we currently don’t have lasers strong enough to test it out.

    But on paper, the premise works, thanks to something known as the Faraday effect, which is the result of a strange interaction between light and a magnetic field.

    It’s a little complicated, but basically the Faraday effect refers to the fact that if an electromagnetic wave, such as visible light, travels through a non-magnetic medium, its polarisation plane will rotate in the presence of a constant magnetic field.

    To break that down a bit further, when light is polarised, it means all the light waves are vibrating in a single plane. But the angle of that plane can rotate.

    And, because of the Faraday effect, as light passes through a medium, its polarisation plane rotates by an amount set by the constant magnetic field along the direction of travel.
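
    For ordinary light and fields, that rotation is captured by a simple relation: the rotation angle equals the Verdet constant of the medium times the field strength times the path length. A minimal sketch, using an assumed order-of-magnitude Verdet constant (real values depend strongly on the material and the wavelength):

    import math

    V = 100.0    # Verdet constant [rad / (T m)] -- assumed, order-of-magnitude value
    B = 1.0      # magnetic field along the propagation direction [T]
    d = 0.01     # path length through the medium [m]

    beta = V * B * d    # rotation of the polarisation plane [rad]
    print(f"rotation: {beta:.3f} rad = {math.degrees(beta):.1f} degrees")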

    What does any of that have to do with lasers? Well, the spin-off of the Faraday effect is that, if you mess with the polarisation of the light travelling through the medium, it will generate a magnetic field.

    The stronger the electromagnetic wave, the higher the magnetic field it can produce – so if you use really strong lasers, you should be able to produce a really badass field.

    This is an idea physicists toyed around with back in the 1960s, but the reason it never went anywhere is because the Faraday effect also requires absorption to take place – something that usually happens through electrons colliding.

    Once you get to a certain intensity of laser, the electrons become ultra-relativistic, which means they collide a whole lot less often, and conventional absorption eventually stops happening.

    Because of this, researchers have assumed that a laser powerful enough to generate a super-strong magnetic field would also stop the absorption process from happening, which would void the Faraday effect.

    But now researchers from Russia, Italy, and Germany have hypothesised that, at very high laser wave intensities, the absorption can be effectively provided by radiation friction, instead of electron collisions.

    And this specific type of friction, on paper at least, can lead to the generation of a super-strong magnetic field.

    According to the team’s calculations, a powerful enough laser would be able to produce magnetic fields with a strength of several giga-Gauss (Gauss is the unit used to measure magnetic fields).

    To put that into perspective, a giga-Gauss is 10⁹ Gauss, or 1,000,000,000 Gauss. The crazy strong magnetic field produced by an MRI machine can only get up to 70,000 Gauss, whereas the surface of a neutron star is around 10¹² Gauss.

    Magnetic fields that we can produce in the lab today max out at around 10⁸ Gauss – and they struggle to efficiently control nuclear fusion for long periods of time, which is where this new technique would come in handy.
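
    For readers more used to SI units, the conversion is simple: 1 tesla is 10⁴ Gauss. A quick sketch converting the figures quoted above (the field values are the article’s, not independent measurements):

    fields_gauss = {
        "MRI machine": 7e4,
        "strongest lab fields quoted above": 1e8,
        "one giga-Gauss": 1e9,
        "neutron-star surface": 1e12,
    }
    for name, gauss in fields_gauss.items():
        print(f"{name:35s} {gauss:.0e} G = {gauss / 1e4:.0e} T")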

    It would also allow researchers to recreate the crazy strong magnetic conditions in space inside the lab.

    “A new research field – laboratory astrophysics – has emerged relatively recently, and now it is very fast-developing,” said one of the researchers, Sergey Popruzhenko from the Moscow Engineering Physics Institute in Russia. “Our work is of particular interest because it suggests new opportunities in this field.”

    The challenge will be to experimentally test this new technique, to see if it works in real life, just as it does on paper. But while Popruzhenko predicts we’ll be able to do this in the “near future”, we need to wait until we have a laser powerful enough.

    The good news is that three of them are now under construction as part of the European project, Extreme Light Infrastructure, being built in the Czech Republic, Romania, and Hungary, so we’re already making progress.

    “These laser facilities will be capable of the intensities required for the generation of super-strong magnetic fields due to radiation friction and also for the observation of many other fundamental strong-field effects,” said Popruzhenko.

    The research has been published in the New Journal of Physics.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     