Tagged: physicsworld.com

  • richardmitnick 1:53 pm on January 17, 2018
    Tags: physicsworld.com

    From Physics World: “Neutrino hunter” 


    Nigel Lockyer

    Nigel Lockyer, director of Fermilab in the US, talks to Michael Banks about the future of particle physics – and why neutrinos hold the key.

    Fermilab is currently building the Deep Underground Neutrino Experiment (DUNE). How are things progressing?

    Construction began last year with a ground-breaking ceremony held in July at the Sanford Underground Research Facility, which is home to DUNE.

    FNAL LBNF/DUNE, from Fermilab to SURF, Lead, South Dakota, USA

    FNAL DUNE argon tank at SURF

    SURF-DUNE/LBNF caverns at Sanford

    SURF building in Lead, South Dakota, USA

    By 2022 the first of four tanks of liquid argon, each 17,000 tonnes, will be in place detecting neutrinos from space. Then in 2026, when all four are installed, Fermilab will begin sending the first beam of neutrinos to DUNE, which is some 1300 km away.

    Why neutrinos?

    Neutrinos have kept throwing up surprises ever since we began studying them and we expect a lot more in the future. In many ways, the best method to study physics beyond the Standard Model is with neutrinos.

    Standard Model of Particle Physics from Symmetry Magazine

    What science do you plan when DUNE comes online?

    One fascinating aspect is detecting neutrinos from supernova explosions. Liquid argon is very good at picking up electron neutrinos, so we would expect to see a signal if a supernova occurred in our galaxy. We could then study how the explosion results in a neutron star or black hole. That would really be an amazing discovery.

    And what about when Fermilab begins firing neutrinos towards DUNE?

    One of the main goals is to investigate charge–parity (CP) violation in the lepton sector. We would be looking for the appearance of electron neutrinos and electron antineutrinos. If there is a statistically significant difference between the two appearance rates, that would be a sign of CP violation and could give us hints as to why there is more matter than antimatter in the universe. Another aspect of the experiment is to search for proton decay.
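    To make the phrase "statistically significant difference" concrete, here is a minimal counting-experiment sketch. The event numbers are purely illustrative assumptions, not DUNE projections, and a real analysis would also fold in fluxes, cross-sections, efficiencies and matter effects.

```python
import math

# Purely illustrative event counts -- NOT DUNE projections.
n_nue = 1200     # hypothetical electron-neutrino appearance events
n_nuebar = 1000  # hypothetical electron-antineutrino appearance events (same exposure assumed)

# Simple counting asymmetry and its Poisson error
asym = (n_nue - n_nuebar) / (n_nue + n_nuebar)
sigma = math.sqrt(4 * n_nue * n_nuebar / (n_nue + n_nuebar) ** 3)

print(f"asymmetry = {asym:.3f} +/- {sigma:.3f} "
      f"({abs(asym) / sigma:.1f} sigma away from zero)")
```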

    How will Fermilab help in the effort?

    To produce neutrinos, the protons smash into a graphite target that is currently the shape of a pencil. We are aiming to roughly quadruple the proton beam power, from 700 kW to 2.5 MW. But graphite cannot withstand that beam power once the accelerator has been upgraded, so we need a rigorous R&D effort in materials physics.

    What kind of materials are you looking at?

    The issue we face is how to dissipate heat better. We are looking at beryllium alloys for the target, and at rotating the target so that it cools more effectively.

    What are some of the challenges in building the liquid argon detectors?

    The largest liquid-argon detector built so far is a 170 tonne detector at Fermilab in the US. As each full-sized tank at DUNE will hold 17,000 tonnes, we face a challenge in scaling up the technology. One particular issue is that the electronics are contained within the liquid argon, and we need to do some more R&D in this area to make sure they can operate effectively. The other area is the purity of the liquid argon itself. Argon is a noble gas and, if pure, an electron can drift through it indefinitely. But any impurities will limit how well the detector can operate.
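    As a rough illustration of why purity matters so much, the sketch below uses an often-quoted rule of thumb for liquid argon (an assumption on my part, not a figure from the interview): an electron lifetime of roughly 0.3 ms per ppb of oxygen-equivalent contamination, together with an assumed drift velocity of about 1.6 mm/μs and an illustrative DUNE-like drift length.

```python
import math

# Rule-of-thumb sketch (assumed values, not from the interview):
#   electron lifetime [ms] ~ 0.3 / (O2-equivalent impurity [ppb])
#   drift velocity ~ 1.6 mm/us at a field of ~0.5 kV/cm
def surviving_charge_fraction(drift_length_m: float, impurity_ppb: float) -> float:
    lifetime_ms = 0.3 / impurity_ppb
    drift_time_ms = (drift_length_m * 1e3 / 1.6) * 1e-3  # mm / (mm/us) -> us, then -> ms
    return math.exp(-drift_time_ms / lifetime_ms)

for ppb in (0.1, 1.0):  # 3.5 m is an illustrative, DUNE-like drift length
    print(f"{ppb} ppb O2-eq -> {surviving_charge_fraction(3.5, ppb):.3f} of the drifting charge survives")
```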

    How will you go about developing this technology?

    The amount of data you get out of liquid argon detectors is enormous, so we need to make sure we have all the technology tried and tested. We are in the process of building two 600 tonne prototype detectors, the first of which will be tested at CERN in June 2018.

    CERN ProtoDUNE (Credit: Maximilien Brice)

    The UK recently announced it will contribute £65m towards DUNE. How will that be used?

    The UK is helping build components for the detector and contributing with the data-acquisition side. It is also helping to develop the new proton target, and to construct the new linear accelerator that will enable the needed beam power.

    The APA being prepped for shipment at Daresbury Laboratory (Credit: STFC)

    First APA (anode plane assembly) ready to be installed in the ProtoDUNE-SP detector (Photograph: Julien Marius Ordan)

    Are you worried Brexit might derail such an agreement?

    I don’t think so. The agreement is between the UK and US governments and we expect the UK to maintain its support.

    Japan is planning a successor to its Super Kamiokande neutrino detector – Hyper Kamiokande – that would carry out similar physics. Is it a collaborator or competitor?

    Well, it’s not a collaborator. Like Super Kamiokande, Hyper Kamiokande would be a water-based detector, the technology of which is much more established than liquid argon. However, in the long run liquid argon is a much more powerful detector medium – you can get a lot more information about the neutrino from it. I think we are pursuing the right technology. We also have a longer baseline that would let us look for additional interactions between neutrinos and we will create neutrinos with a range of energies. Additionally, the DUNE detectors will be built a mile underground to shield them from cosmic interference.

    Super-Kamiokande experiment, located under Mount Ikeno near the city of Hida, Gifu Prefecture, Japan

    Hyper-Kamiokande, a neutrino physics laboratory located underground in the Mozumi Mine of the Kamioka Mining and Smelting Co. near the Kamioka section of the city of Hida in Gifu Prefecture, Japan.

    _____________________________________________________
    In the long run liquid argon is a much more powerful detector medium – you can get a lot more information about the neutrino from it.
    _____________________________________________________

    Regarding the future at the high-energy frontier, does the US support the International Linear Collider (ILC)?

    ILC schematic, being planned for the Kitakami highland, in the Iwate prefecture of northern Japan

    The ILC began as an international project and in recent years Japan has come forward with an interest to host it. We think that Japan now needs to take a lead on the project and give it the go-ahead. Then we can all get around the table and begin negotiations.

    And what about plans by China to build its own Higgs factory?

    The Chinese government is looking at the proposal carefully and trying to gauge how important it is for the research community in China. Currently, Chinese accelerator scientists are busy with two upcoming projects in the country: a free-electron laser in Shanghai and a synchrotron in Beijing. That will keep them busy for the next five years, but after that this project could really take off.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 11:38 am on January 5, 2018
    Tags: HANARO research reactor at the Korean Atomic Energy Research Institute South Korea, physicsworld.com, Spallation neutron source

    From physicsworld.com: “Neutrons probe gravity’s inverse square law” 


    Jan 4, 2018
    Edwin Cartlidge

    Gravitating toward Newton's law: the J-PARC neutron facility

    A spallation neutron source in Japan has been used by physicists to search for possible violations of the inverse-square law of gravity. By scattering neutrons off noble-gas nuclei, the researchers found no evidence of any deviation from the tried and tested formula. However, they were able to slightly reduce the wiggle room for any non-conventional interactions at distances of less than 0.1 nm, and are confident they can boost the sensitivity of their experiment over the next few months.

    According to Newton’s law of universal gravitation, the gravitational force between two objects is proportional to each of their masses and inversely proportional to the square of the distance between them. This relationship can also be derived using general relativity, when the field involved is fairly weak and objects are travelling significantly slower than the speed of light. However, there are many speculative theories – some designed to provide a quantum description of gravity – that predict that the relationship breaks down at small distances.
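    Searches of this kind are usually framed as limits on a Yukawa-type correction to the Newtonian potential, a standard parametrization assumed here for context rather than quoted from the article:

```latex
V(r) = -\frac{G\,m_1 m_2}{r}\left(1 + \alpha\, e^{-r/\lambda}\right)
```

    where α is the strength of the putative new interaction relative to gravity and λ its range; experiments report upper limits on α as a function of λ.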

    Physicists have done a wide range of different experiments to look for such a deviation. These include torsion balances, which measure the tiny gravitational attraction between two masses suspended on a fibre and two fixed masses. However, this approach is limited by environmental noise such as seismic vibrations and even the effects of dust particles. As a result such experiments cannot probe gravity at very short distances, with the current limit being about 0.01 mm.

    Scattered in all directions

    Neutrons, on the other hand, can get down to the nanoscale and beyond. The idea is to fire a beam of neutrons at a gas and record how the neutrons are scattered by the constituent nuclei. In the absence of any new forces modifying gravity at short scales, the neutrons and nuclei essentially only interact via the strong force (neutrons being electrically neutral). But the strong force acts over extremely short distances – roughly the size of the nucleus, about 10^-14 m – while the neutrons have a de Broglie wavelength of around 1 nm. The neutrons therefore perceive the nuclei as point sources and as such are scattered equally in all directions.

    Any new force, however, would likely extend beyond the nucleus. If its range were comparable to the neutrons’ wavelength then those neutrons would be scattered more frequently in a forward direction than at other angles. Evidence of such a force, should it exist, can therefore be sought by firing in large numbers of neutrons and measuring the distribution of their scattering angles.

    In 2008, Valery Nesvizhevsky of the Institut Laue-Langevin in France and colleagues looked for evidence of such forward scattering in data from previous neutron experiments. They ended up empty handed but could place new upper limits on the strength of any new forces, improving on the existing constraints for scales between 1 pm and 5 nm by several orders of magnitude. Those limits were then pushed back by about another order of magnitude two years ago, when Sachio Komamiya at the University of Tokyo and team scattered neutrons off atomic xenon at the HANARO research reactor at the Korean Atomic Energy Research Institute in South Korea.

    HANARO research reactor at the Korean Atomic Energy Research Institute in South Korea

    Time of flight

    In the new research, Tamaki Yoshioka of Kyushu University in Japan and colleagues use neutrons from a spallation source at the Japan Proton Accelerator Research Complex (J-PARC) in Tokai, which they fire at samples of xenon and helium. Because the J-PARC neutrons come in pulses, the researchers can easily measure their time of flight, and, from that, work out their velocity and hence their wavelength.
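    A minimal sketch of that conversion, using a made-up flight path and arrival time rather than the actual J-PARC beamline numbers:

```python
# Time of flight -> velocity -> de Broglie wavelength for a pulsed neutron beam.
# The flight path and arrival time below are illustrative placeholders, not J-PARC values.
H = 6.62607015e-34             # Planck constant, J s
M_NEUTRON = 1.67492749804e-27  # neutron mass, kg

def de_broglie_wavelength_nm(flight_path_m: float, time_of_flight_s: float) -> float:
    velocity = flight_path_m / time_of_flight_s   # m/s
    return H / (M_NEUTRON * velocity) * 1e9       # metres -> nanometres

# e.g. a 15 m flight path and a 7.5 ms arrival time -> ~2000 m/s -> ~0.2 nm
print(f"{de_broglie_wavelength_nm(15.0, 7.5e-3):.3f} nm")
```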


    J-PARC (Japan Proton Accelerator Research Complex), located in Tokai village, Ibaraki prefecture, Japan

    Armed with this information, the team can establish whether any forward scattering is due to a new force or simply caused by neutrons bouncing off larger objects in the gas, such as trace amounts of atmospheric gases. At any given wavelength, both types of scattering would be skewed in the forward direction and so would be indistinguishable from one another. But across a range of wavelengths different patterns would emerge. For atmospheric gases, the scattering angle would simply be proportional to the neutrons’ wavelength. In the case of a new force, on the other hand, the relationship would be more complex because the effective size of the nucleus would itself vary with neutron wavelength.

    Reactors can also be used to generate pulses, by “chopping” a neutron beam, but that process severely limits the beam’s intensity. Taking advantage of the superior statistics at J-PARC, Yoshioka and colleagues were able to reduce the upper limit on any new forces at distances below 0.1 nm by about an order of magnitude compared with the HANARO results – showing that the inherent strength of any such force can be at most 10^24 times that of gravity (gravity being an exceptionally weak force).

    Cost-effective search

    That is still nowhere near the sensitivity of torsion balance searches at bigger scales – which can get down to the strength of gravity itself. As Nesvizhevsky points out, torsion balances use macroscopic masses with “Avogadro numbers” (10^23) of atoms, whereas neutron scattering experiments involve at most a few tens of millions of neutrons. Nevertheless, he believes that the new line of research is well worth pursuing, pointing out that many theories positing additional gravity-like forces “predict forces in this range of observations”. Such experiments, he argues, represent “an extremely cost-effective way of looking for a new fundamental force” when compared to searches carried out in high-energy physics.

    Spurred on by the prospect of discovery, Yoshioka and colleagues are currently taking more data. The lead author of a preprint on arXiv describing the latest research, Christopher Haddock of Nagoya University, says that they hope to have new results by the summer. A series of improvements to the experiment, including less scattering from the beam stop, he says, could boost sensitivity to new forces in the sub-nanometre range by up to a further order of magnitude and should also improve existing limits at distances of up to 10 nm.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 4:54 pm on January 2, 2018
    Tags: physicsworld.com, Seismic-wave attenuation studies have great potential to add knowledge in this field, Wave attenuation hints at nature of Earth’s asthenosphere

    From physicsworld.com: “Wave attenuation hints at nature of Earth’s asthenosphere” 


    Jan 2, 2018

    Soft on the inside. No image credit.

    Researchers in Japan have used measurements of the aftershocks of the 2011 Tohoku earthquake to gain insight into the dynamics of the Earth’s crust and upper mantle. Nozomu Takeuchi and colleagues at the University of Tokyo, Kobe University, and the Japan Agency for Marine–Earth Science and Technology, analysed the attenuation of seismic waves as they propagated through the rigid lithosphere and the less viscous asthenosphere beneath. The team found that the rate of attenuation in the lithosphere showed a marked frequency dependence, whereas in the asthenosphere the relationship was much weaker. The result demonstrates the possibility of using broadband seismic attenuation data to characterize the properties of the Earth’s subsurface.

    Weak, deep and mysterious

    The lithosphere is the rigid outermost layer of the Earth. It comprises two compositional units – the crust and the upper mantle. The movement of individual fragments of the lithosphere (the tectonic plates) is responsible for the phenomenon of continental drift, and is possible due to the low mechanical strength of the underlying asthenosphere.

    Away from the active mid-ocean ridges, the lithosphere–asthenosphere boundary (LAB) lies at least tens of kilometres below the ocean floor, making direct investigation impossible for now. The LAB is even less accessible beneath the continents, where the lithosphere can be hundreds of kilometres thick. Nevertheless, seismic wave velocities and the way the continents have rebounded after deglaciation have allowed the viscosity of the asthenosphere to be estimated even though the physical cause of the mechanical contrast between the layers remains mysterious. A rise in temperature across the boundary presumably contributes, but probably does not explain the disparity completely; partial melting and differences in water content have also been proposed.

    Complex signal

    To help discriminate between these mechanisms, Takeuchi and collaborators looked to differences in the attenuating effects of the lithosphere and asthenosphere. This is a promising approach, because the process of anelastic attenuation is closely related to a material’s thermomechanical properties. The situation is complicated, however, by the fact that high-frequency seismic waves are also attenuated by scattering from small-scale features, and low-frequency waves are attenuated by geometrical spreading.

    Using a dataset obtained after the 2011 earthquake by an array of ocean-floor seismometers in the northwest Pacific, the group compared actual records of seismic waves with a series of probabilistic models. To isolate the anelastic attenuation signature for high-frequency (>3 Hz) waves, the researchers conducted simulations in which the scattering properties of the lithosphere and asthenosphere were varied. The model that most closely matched observations indicated a rate of attenuation for the asthenosphere 50 times that for the lithosphere, and suggested that this attenuation is not related to frequency. Seismic waves in the lithosphere, in contrast, seem strongly frequency dependent.
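    For a feel of what frequency-dependent and frequency-independent attenuation mean in practice, here is a generic illustration using the textbook amplitude-decay law A/A₀ = exp(−πfx/Qv) for a wave of frequency f travelling a distance x in a medium with quality factor Q and speed v. The numbers are made up for illustration and are not the parameters of the Japanese study.

```python
import math

def amplitude_ratio(f_hz: float, x_km: float, q: float, v_km_s: float = 4.5) -> float:
    """Fraction of amplitude surviving after x km at frequency f for quality factor Q."""
    return math.exp(-math.pi * f_hz * x_km / (q * v_km_s))

# Frequency-independent Q (asthenosphere-like in this study) versus a Q that grows
# with frequency (one simple form a frequency-dependent attenuation could take).
for f in (1.0, 3.0, 10.0):
    print(f, round(amplitude_ratio(f, 200.0, q=150.0), 3),
             round(amplitude_ratio(f, 200.0, q=150.0 * f**0.5), 3))
```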

    More experiments needed

    Although Takeuchi and colleagues’ research shows that seismic-wave attenuation studies have great potential to add knowledge in this field, the results themselves do not immediately support one model over another. Laboratory experiments reveal that partial melting of a sample can produce a weak frequency dependence similar to that determined by this study for the asthenosphere, which on its own would strongly suggest partial melting as the reason for the layer’s low viscosity. However, a similar effect has been observed for samples below the material’s solidus, undermining that explanation somewhat, and also failing to explain why the same response is not observed in the solid lithosphere. Further experiments involving additional factors will be needed to settle the issue.

    Full details of the research are published in Science. A commentary on the research, written by Colleen Dalton of Brown University in the US, is also published in the same issue.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 4:00 pm on December 28, 2017
    Tags: Can Bose-Einstein condensates simulate cosmic inflation?, physicsworld.com

    From physicsworld.com: “Can Bose-Einstein condensates simulate cosmic inflation?” 


    Dec 28, 2017
    Tim Wogan

    Rolling downhill: illustration of a coherent quantum phase transition

    Cosmological inflation, first proposed by Alan Guth in 1979, describes a hypothetical period when the early Universe expanded exponentially, with distant regions separating faster than the speed of light.

    Alan Guth, Highland Park High School and MIT, who first proposed cosmic inflation

    Lambda-Cold Dark Matter, accelerated expansion of the Universe, Big Bang-inflation (timeline of the Universe), 2010. Credit: Alex Mittelmann, Coldcreation

    Alan Guth’s notes. http://www.bestchinanews.com/Explore/4730.html

    The model, which answers fundamental questions about the formation of the Universe we know today, has become central to modern cosmology, but many details remain uncertain. Now atomic physicists in the US have developed a laboratory analogue by shaking a Bose-Einstein condensate (BEC). The team’s initial results suggest that the Universe may have remained quantum coherent throughout inflation and beyond. The researchers hope their condensate model may provide further insights into inflation in a more accessible system; however, not everyone agrees on its usefulness.

    Dynamical instability occurs in all sorts of physical systems that are out of equilibrium. A ball perched at the top of a hill, for example, may stay put for a short time. But the tiniest perturbation will send the ball falling towards a lower-energy state at the bottom of the hill. Guth realized that a very short, very rapid period of expansion could occur if the Universe got stuck out of equilibrium around 10^-35 s after the Big Bang, causing it to expand by a factor of around 10^26 in a tiny fraction of a second. The details of the inflationary model have been revised many times, and numerous questions remain. “This is where I can contribute, even though I’m not a cosmologist,” says Cheng Chin of the University of Chicago in Illinois: “We have only one Universe, so it becomes very hard to say whether our theories really capture the whole physics as we can’t repeat the experiment.”

    Shake it up

    Chin and colleagues created their model system by cooling 30,000 atoms in an optical trap into a BEC, in which all the atoms occupy a single quantum state. Initially, this BEC was sitting still in the centre of the trap. The researchers then began to shake the condensate by moving the trapping potential from side to side with increasing amplitude. This raised the energy of the state in which the condensate was stationary relative to the trapping potential. When the shaking amplitude was increased past a critical value, the energy of this “stationary” state became higher than the energy of two other states with the condensate oscillating in opposite directions inside the trap. The condensate therefore underwent a dynamical phase transition, splitting into two parts that each entered one of these two momentum states.

    Between 20 and 30 ms after the phase transition, the researchers saw a clear interference pattern in the density of the condensate. This shows, says Chin, that the condensate had undergone a quantum coherent separation, with each atom entering a superposition of both momentum states. After this, the clear interference pattern died out. This later period corresponds, says Chin, to the period of cosmological relaxation in which, after inflation had finished, different parts of the Universe relaxed to their new ground states. More detailed analysis of the condensate in this phase showed that, although its quantum dynamics were more complicated – with higher harmonics of the oscillation frequencies becoming more prominent – the researchers’ observations could not be described classically.
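    The link between fringes and coherence can be seen in an idealized one-dimensional sketch (my illustration, not the team’s analysis): an equal, coherent superposition of the two momentum states ±k gives a cos²(kx) density modulation, whereas an incoherent 50/50 mixture of the same two states gives a flat density.

```python
import numpy as np

k = 2 * np.pi / 5e-6            # assumed fringe wavenumber (5 micron period), 1/m
x = np.linspace(0.0, 10e-6, 9)  # sample positions across the cloud, m

# Coherent superposition |psi|^2 with psi ~ (e^{ikx} + e^{-ikx})/sqrt(2): fringes, mean density 1
coherent = np.abs(np.exp(1j * k * x) + np.exp(-1j * k * x))**2 / 2
# Incoherent 50/50 mixture of the two momentum states: no fringes, same mean density
incoherent = np.ones_like(x)

print(np.round(coherent, 2))
print(incoherent)
```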

    Chin says that cosmologists may find this observation interesting. Although “in principle, everything is quantum mechanical,” he explains, the practical impossibility of performing a full quantum simulation of the Universe as its complexity grows leads cosmologists to fall back on classical models. “The value of our research is to try and point out that we shouldn’t give up [on quantum simulation] that early,” he says. “Even in inflation and the subsequent relaxation process, we have one concrete example to show that quantum mechanics and coherence still play a very essential role.”

    Inflated claims?

    James Anglin of the University of Kaiserslautern in Germany is impressed by the research. “Understanding what happens to small initial quantum fluctuations after a big instability has saturated is an important and basic question in physics, and it really is an especially relevant question for cosmology,” he explains. “The big difference, of course, is that the cosmic inflation scenario includes gravity as curved spacetime in general relativity, such that space expands enormously while the inflaton field [the field thought to drive inflation] finds its true ground state. A malicious critic might say that this experiment is a perfect analogue for cosmological inflation, except for the inflation part.”

    “This is indeed nice work,” he concludes: “The language is simply a little bit inflated!” The research is described in Nature Physics.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 11:10 am on August 12, 2017
    Tags: physicsworld.com, Solar core spins four times faster than expected

    From physicsworld.com: “Solar core spins four times faster than expected” 


    Aug 11, 2017
    Keith Cooper

    Sunny science: the Sun still holds some mysteries for researchers. No image credit.

    The Sun’s core rotates four times faster than its outer layers – and the elemental composition of its corona is linked to the 11 year cycle of solar magnetic activity. These two findings have been made by astronomers using a pair of orbiting solar telescopes – NASA’s Solar Dynamics Observatory (SDO) and the joint NASA–ESA Solar and Heliospheric Observatory (SOHO). The researchers believe their conclusions could revolutionize our understanding of the Sun’s structure.

    NASA/SDO

    ESA/NASA SOHO

    Onboard SOHO is an instrument named GOLF (Global Oscillations at Low Frequencies) – designed to search for millimetre-sized gravity, or g-mode, oscillations on the Sun’s surface (the photosphere). Evidence for these g-modes has, however, proven elusive – convection of energy within the Sun disrupts the oscillations, and the Sun’s convective layer exists in its outer third. If solar g-modes exist then they do so deep within the Sun’s radiative core.

    A team led by Eric Fossat of the Université Côte d’Azur in France has therefore taken a different tack. The researchers realized that acoustic pressure, or p-mode, oscillations that penetrate all the way through to the core – which Fossat dubs “solar music” – could be used as a probe for g-mode oscillations. Assessing over 16 years’ worth of observations by GOLF, Fossat’s team has found that p-modes passing through the solar core are modulated by the g-modes that reverberate there, slightly altering the spacing between the p-modes.

    Fossat describes this discovery as “a fantastic result” in terms of what g-modes can tell us about the solar interior. The properties of the g-mode oscillations depend strongly on the structure and conditions within the Sun’s core, including the ratio of hydrogen to helium, and the periods of the g-modes indicate that the Sun’s core rotates approximately once per week. This is around four times faster than the Sun’s outer layers, which rotate once every 25 days at the equator and once every 35 days at the poles.
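    The quoted factor follows directly from the periods; a quick check with the numbers given in the article:

```python
# ~7 day core rotation period versus 25 days (equator) and 35 days (poles) for the outer layers
core_period_days = 7.0
for surface_period_days in (25.0, 35.0):
    print(round(surface_period_days / core_period_days, 1))  # ~3.6 and 5.0 -> "around four times faster"
```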

    Diving into noise

    Not everyone is convinced by the results. Jeff Kuhn of the University of Hawaii describes the findings as “interesting”, but warns that independent verification is required.

    “Over the last 30 years there have been several claims for detecting g-modes, but none have been confirmed,” Kuhn told physicsworld.com. “In their defence, [Fossat’s researchers] have tried several different tests of the GOLF data that give them confidence, but they are diving far into the noise to extract this signal.” He thinks that long-term ground-based measurements of some p-mode frequencies should also contain the signal and confirm Fossat’s findings further.

    If the results presented in Astronomy & Astrophysics can be verified, then Kuhn is excited about what a faster spinning core could mean for the Sun. “It could pose some trouble for our basic understanding of the solar interior,” he says. When stars are born, they are spinning fast but over time their stellar winds rob their outer layers of angular momentum, slowing them down. But Fossat suggests that conceivably their cores could somehow retain their original spin rate.

    Solar links under scrutiny

    Turning attention from the Sun’s core to its outer layers reveals another mystery. The energy generated by nuclear reactions in the Sun’s core ultimately powers the activity in the Sun’s outer layers, including the corona. But the corona is more than a million degrees hotter than the layers of the chromosphere and photosphere below it. The source of this coronal heating is unknown, but a new paper published in Nature Communications has found a link between the elemental composition of the corona, which features a broad spectrum of atomic nuclei including iron and neon, and the Sun’s 11 year cycle of magnetic activity.

    Observations made by SDO between 2010 (when the Sun was near solar minimum) and 2014 (when its activity peaked) revealed that when at minimum, the corona’s composition is dominated by processes of the quiet Sun. However, when at maximum the corona’s composition is instead controlled by some unidentified process that takes place around the active regions of sunspots.

    That the composition of the corona is not linked to a fixed property of the Sun (such as its rotation) but is instead connected to a variable property could “prompt a new way of thinking about the coronal heating problem,” says David Brooks of George Mason University in the US, who is lead author on the paper. This is because the way in which elements are transported into the corona is thought to be closely related to how the corona is being heated.

    Quest for consensus

    Many explanations for the corona’s high temperature have been proposed, ranging from magnetic reconnection to fountain-like spicules, and magnetic Alfvén waves to nanoflares, but none have yet managed to win over a consensus of solar physicists.

    “If there’s a model that explains everything – the origins of the solar wind, coronal heating and the observed preferential transport – then that would be a very strong candidate,” says Brooks. The discovery that the elemental abundances vary with the magnetic cycle is therefore a new diagnostic against which to test models of coronal heating.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 9:06 pm on July 1, 2017
    Tags: In contrast the quadruple form of the decay would allow the neutrino to be a Dirac particle: like every other particle in the Standard Model, It would defy the never previously violated conservation of lepton number, NEMO-3 hunts for ultra-rare beta decay, Neutrinoless double beta decay would also mean that the neutrino is its own antiparticle: a so-called Majorana particle, Neutrinoless quadruple beta decay, physicsworld.com, says Rodejohann: “We pointed out that Dirac particles can in fact violate lepton number but by four units say rather than two.”, Such a violation in turn might explain the dominance of matter over antimatter, The potential prize on offer: an explanation for the universe’s matter/antimatter asymmetry

    From physicsworld.com: “NEMO-3 hunts for ultra-rare beta decay” 


    Jun 30, 2017

    Out of the blue: NEMO-3 being built. No image credit.

    For the best part of 30 years, physicists have been looking for a very rare nuclear process known as neutrinoless double beta decay. With discovery still elusive, researchers in France have now turned their attention to an even rarer process called neutrinoless quadruple beta decay. As expected, their first search has drawn a blank. But they say it is worth persisting, given the potential prize on offer: an explanation for the universe’s matter/antimatter asymmetry.

    In normal beta decay, an electron and an antineutrino are emitted from a nucleus within which a neutron transforms into a proton. There are also several dozen isotopes that have been shown to undergo double beta decay, in which two neutrons turn into two protons and emit two electrons plus two antineutrinos. But what physicists have been keen to observe, so far without success, is the neutrinoless version without the emission of any antineutrinos.

    The discovery of this phenomenon, if real, would be huge news in physics, since it would defy the never previously violated conservation of lepton number – protons and neutrons having a lepton number of zero while electrons and neutrinos are +1 and their antimatter counterparts –1. Such a violation in turn might explain the dominance of matter over antimatter, since it would reveal a process that yields a slight excess of matter.
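    Written out as bare nuclear transitions, with the standard assignments L = +1 for electrons and neutrinos and L = −1 for their antiparticles, the bookkeeping looks like this (a restatement in conventional notation, not a formula from the article):

```latex
\begin{aligned}
2\nu\beta\beta :&\quad (A,Z) \to (A,Z+2) + 2e^- + 2\bar{\nu}_e, &\quad \Delta L &= 0,\\
0\nu\beta\beta :&\quad (A,Z) \to (A,Z+2) + 2e^-,                &\quad \Delta L &= 2,\\
0\nu 4\beta    :&\quad (A,Z) \to (A,Z+4) + 4e^-,                &\quad \Delta L &= 4.
\end{aligned}
```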

    Majorana particles

    Neutrinoless double beta decay would also mean that the neutrino is its own antiparticle, a so-called Majorana particle. This is because an antineutrino emitted by one of the two decaying neutrons could be absorbed by the other neutron as a neutrino, leading to no neutrino output. In contrast, the quadruple form of the decay would allow the neutrino to be a Dirac particle, like every other particle in the Standard Model, which is not the mirror image of itself.

    Neutrinoless quadruple beta decay was proposed theoretically by Julian Heeck and Werner Rodejohann of the Max Planck Institute for Nuclear Physics in Heidelberg, Germany, in 2013. The pair found that by adding three right-handed neutrinos to the existing trio of left-handed neutrinos in the Standard Model, as well as two new scalar particles, which are similar to the Higgs boson, the (virtual) neutrinos emitted in the simultaneous beta decay of four neutrons would annihilate one another before they could be emitted from the nucleus in question.

    “Before we published our paper the common opinion was that Dirac neutrinos conserve lepton number,” says Rodejohann. “We pointed out that Dirac particles can in fact violate lepton number, but by four units, say, rather than two.”

    Energy boost

    The Heidelberg researchers point out that the only nuclei that could undergo this decay are those for which the transformation of just one neutron into a proton boosts their energy – so forbidding normal beta decay, which would otherwise predominate – while the transformation of four neutrons makes them less energetic. They have identified just three such isotopes – zirconium-96, xenon-136 and neodymium-150 – of which the latter is best, they say, because it releases the greatest amount of energy during the decay, so making it more detectable.
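    In terms of atomic masses M(A,Z), that selection rule reads (again a restatement in standard notation):

```latex
M(A,Z) < M(A,Z+1) \quad \text{(single $\beta$ decay forbidden)}, \qquad
M(A,Z) > M(A,Z+4) \quad \text{(quadruple $\beta$ decay energetically allowed)}.
```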

    Indeed, it is that nucleus that has been used in the experimental work, which was carried out at the NEMO-3 experiment at the Modane Underground Laboratory in France.

    Edelweiss Dark Matter Experiment, located at the Modane Underground Laboratory in France

    NEMO-3 comprises a 3 × 5 m cylindrical detector consisting of thin foils of various isotopes – including 37 g of neodymium-150 – surrounded by tracking chambers and calorimeters. Although optimized to search for neutrinoless double beta decay, the detector’s ability to plot the trajectories of individual emitted particles also makes it well suited to the new line of research.

    NEMO-3 has not collected new data; rather, physicists have analysed existing events recorded in its detector between 2003 and 2011. The researchers looked for events generating either three or four particles (in the former, one of the emitted electrons would be reabsorbed by the neodymium foil). Doing so, they found no evidence for events beyond those expected from background radioactive processes. But they were able to set a first lower bound on the process’s half-life – some 10^21 years.
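    The order of magnitude of such a bound follows from the standard counting argument T½ ≳ ln 2 · ε · N · t / S, where N is the number of candidate nuclei, t the live time, ε the detection efficiency and S the upper limit on the number of signal counts. Below is a sketch using the 37 g of neodymium-150 quoted above; the live time, efficiency and count limit are assumed, illustrative values and not the NEMO-3 analysis numbers.

```python
import math

AVOGADRO = 6.02214076e23

n_atoms = 37.0 / 150.0 * AVOGADRO  # 37 g of Nd-150 (figure from the article)

live_time_yr = 5.0   # assumed live time within the 2003-2011 data set
efficiency = 0.01    # assumed efficiency for reconstructing three or four electrons
s_upper = 2.44       # assumed 90% CL upper limit on signal counts (no events, negligible background)

t_half_limit_yr = math.log(2) * efficiency * n_atoms * live_time_yr / s_upper
print(f"T1/2 > {t_half_limit_yr:.1e} yr")  # a few times 1e21 yr -- the order of magnitude quoted
```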

    Forty orders of magnitude

    Steven Elliott of the Los Alamos National Laboratory in the US praises NEMO-3 for reaching “an interesting milestone that no other existing experiment can reach”. But he doubts that the group will be able to detect the putative decay, pointing out that Heeck and Rodejohann predicted a half-life (of around 10^65 years) that is “at least 40 orders of magnitude” beyond the experiment’s sensitivity. Ettore Fiorini of the University of Milano-Bicocca shares that scepticism, arguing that a positive sighting “seems to be outside any realistic hope”.

    Former NEMO-3 member Xavier Sarazin of the Linear Accelerator Laboratory in Orsay, France, acknowledges that the group is very unlikely to make a discovery. But he maintains that it will still be worthwhile carrying out a new search with the upgraded “SuperNEMO”, which should start taking data in about a year and which could contain up to a kilogram of neodymium-150. “You would never design an experiment from scratch to look for this decay,” he says, “but if you can increase the amount of neodymium, why not?”

    Indeed, Heeck says that potential alternatives to the model developed by himself and Rodejohann might feature much shorter decay half-lives. “Our hope would be that NEMO-3’s first experimental search for quadruple beta decay will motivate people to explore models that could lead to testable rates,” he says.

    The research has been accepted for publication in Physical Review Letters.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 10:17 am on May 13, 2017
    Tags: Laser-plasma accelerator, Optocouplers, physicsworld.com, Space radiation brought down to Earth

    From physicsworld.com: “Space radiation brought down to Earth” 


    May 12, 2017
    Sarah Tesh

    Space on Earth: scientists mimic the radiation of space

    Space radiation has been reproduced in a lab on Earth. Scientists have used a laser-plasma accelerator to replicate the high-energy particle radiation that surrounds our planet. The research could help study the effects of space exploration on humans and lead to more resilient satellite and rocket equipment.

    The radiation in space is a major obstacle for our ambitions to explore the solar system. Highly energetic ionizing particles from the Sun and deep space are extremely dangerous for human health because they can pass right through the skin and deposit energy, irreversibly damaging cells and DNA. On top of that, the radiation can also wreak havoc on satellites and equipment.

    While the most obvious way to study these effects is to take experiments into space, this is very expensive and impractical. Yet doing the reverse – producing space-like radiation on Earth – is surprisingly difficult. Scientists have tried using conventional cyclotrons and linear particle accelerators. However, these can only produce monoenergetic particles that do not accurately represent the broad range of particle energies found in space radiation.

    Now, researchers led by Bernhard Hidding from the University of Strathclyde in the UK have found a solution. The team used laser-plasma accelerators at the University of Dusseldorf and the Rutherford Appleton Laboratory to produce broadband electrons and protons typical of those found in the Van Allen belts – zones of energetic charged particles trapped by Earth’s magnetic field.

    Laser-plasma accelerator (LBNL)

    Laser to plasma

    The accelerator works by firing a high-energy, high-intensity laser at a tiny spot, just a few μm² in area, on a thin metal-foil target. “The sheer intensity of the laser pulse means that the electric fields involved are orders of magnitude larger than the inner-atomic Coulomb forces,” explains Hidding. “The metal-foil target is therefore instantly converted into a plasma.” The plasma particles – electrons and protons – are accelerated by the intense electromagnetic fields of the laser and the collective fields of the other plasma particles. The extent to which this happens depends on each particle’s initial position, resulting in the huge range of energies.

    The team studied its plasma particles using electron-sensitive image plates, radiochromic films for protons and scintillating phosphor screens. Then, to prove the lab-made radiation was comparable to space radiation, the team used simulations from NASA. “The NASA codes are based on models as well as a few measurements, so they represent the best knowledge we have,” says Hidding.

    Monitoring the damage

    The next task was to prove that the system could be used to test the effects of space radiation, by subjecting optocouplers to the particle radiation. Optocouplers are common devices that transfer electric signals between isolated circuits and are characterized by their current transfer ratio, so Hidding and his team were able to monitor radiation-induced degradation by measuring this quantity.
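    Since the current transfer ratio (CTR) is simply the output (phototransistor) current divided by the input (LED) current, the degradation measurement reduces to comparing that ratio before and after exposure. A minimal sketch with made-up currents:

```python
# Current transfer ratio (CTR) of an optocoupler; the currents are illustrative, made-up values.
def ctr(i_out_ma: float, i_in_ma: float) -> float:
    return i_out_ma / i_in_ma

ctr_before = ctr(i_out_ma=2.0, i_in_ma=10.0)  # 0.20 before irradiation
ctr_after = ctr(i_out_ma=1.2, i_in_ma=10.0)   # 0.12 after an assumed dose
print(f"CTR dropped by {100 * (1 - ctr_after / ctr_before):.0f}%")  # -> 40%
```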

    The proof-of-concept experiment, described in Scientific Reports, could represent a major breakthrough towards understanding the effects of space radiation without the need to leave Earth. The next step will be to develop a testing standard that can be used to test electronics and biological samples – “After all, radiation in space is one of the key showstoppers for human spaceflight,” Hidding remarks.

    Strathclyde’s newly installed laser will also play a key role in future research – “[It is] the highest-average-power laser system in the world today,” says Hidding. Housed in three radiation-shielded bunkers at the Scottish Centre for the Application of Plasma-based Accelerators (SCAPA), the system will power up to seven beamlines. “The vision is to develop a dedicated beamline for space-radiation reproduction and testing, and to put this to use for the growing space industry in the UK and beyond.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 2:38 pm on May 6, 2017
    Tags: physicsworld.com, Simulating the universe

    From physicsworld.com: “Simulating the universe” 


    May 4, 2017
    Tom Giblin
    James Mertens
    Glenn Starkman

    Powerful computers are now allowing cosmologists to solve Einstein’s frighteningly complex equations of general relativity in a cosmological setting for the first time. Tom Giblin, James Mertens and Glenn Starkman describe how this new era of simulations could transform our understanding of the universe.

    A visualization of the curved space–time “sea”. No image credit.

    From the Genesis story in the Old Testament to the Greek tale of Gaia (Mother Earth) emerging from chaos and giving birth to Uranus (the god of the sky), people have always wondered about the universe and woven creation myths to explain why it looks the way it does. One hundred years ago, however, Albert Einstein gave us a different way to ask that question. Newton’s law of universal gravitation, which was until then our best theory of gravity, describes how objects in the universe interact. But in Einstein’s general theory of relativity, spacetime (the marriage of space and time) itself evolves together with its contents. And so cosmology, which studies the universe and its evolution, became at least in principle a modern science – amenable to precise description by mathematical equations, able to make firm predictions, and open to observational tests that could falsify those predictions.

    Our understanding of the mathematics of the universe has advanced alongside observations of ever-increasing precision, leading us to an astonishing contemporary picture. We live in an expanding universe in which the ordinary material of our everyday lives – protons, neutrons and electrons – makes up only about 5% of the contents of the universe. Roughly 25% is in the form of “dark matter” – material that behaves like ordinary matter as far as gravity is concerned, but is so far invisible except through its gravitational pull. The other 70% of the universe is something completely different, whose gravity pushes things apart rather than pulling them together, causing the expansion of the universe to accelerate over the last few billion years. Naming this unknown substance “dark energy” teaches us nothing about its true nature.

    Universe map: Sloan Digital Sky Survey (SDSS) / 2dF Galaxy Redshift Survey

    Now, a century into its work, cosmology is brimming with existential questions. If there is dark matter, what is it and how can we find it? Is dark energy the energy of empty space, also known as vacuum energy, or is it the cosmological constant, Λ, as first suggested by Einstein in 1917? He introduced the constant after mistakenly thinking it would stop the universe from expanding or contracting, and so – in what he later called his “greatest blunder” – failed to predict the expansion of the universe, which was discovered a dozen years later. Or is one or both of these invisible substances a figment of the cosmologist’s imagination and it is general relativity that must be changed?

    At the same time as being faced with these fundamental questions, cosmologists are testing their currently accepted model of the universe – dubbed ΛCDM – to greater and greater precision observationally.

    Lambda-Cold Dark Matter. No image credit.

    (CDM indicates the dark-matter particles are cold because they must move slowly, like the molecules in a cold drink, so as not to evaporate from the galaxies they help bind together.) And yet, while we can use general relativity to describe how the universe expanded throughout its history, we are only just starting to use the full theory to model specific details and observations of how galaxies, clusters of galaxies and superclusters are formed and created. The reason is simple – the equations of general relativity aren’t.

    Horribly complex

    While they fit neatly onto a T-shirt or a coffee mug, Einstein’s field equations are horrible to solve even using a computer. The equations involve 10 separate functions of the four dimensions of space and time, which characterize the curvature of space–time in each location, along with 40 functions describing how those 10 functions change, as well as 100 further functions describing how those 40 changes change, all multiplied and added together in complicated ways. Exact solutions exist only in highly simplified approximations to the real universe. So for decades cosmologists have used those idealized solutions and taken the departures from them to be small perturbations – reckoning, in particular, that any departures from homogeneity can be treated independently from the homogeneous part and from one another.
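    For reference, the equations in question; the 10 functions mentioned above are the independent components of the symmetric metric g_{μν}:

```latex
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu},
\qquad
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}R\, g_{\mu\nu}.
```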

    Not at your leisure. No image credit.

    This “first-order perturbation theory” has taught us a lot about the early development of cosmic structures – galaxies, clusters of galaxies and superclusters – from barely perceptible concentrations of matter and dark matter in the early universe. The theory also has the advantage that we can do much of the analysis by hand, and follow the rest on computer. But to track the development of galaxies and other structures from after they were formed to the present day, we’ve mostly reverted to Newton’s theory of gravity, which is probably a good approximation.

    To make progress, we will need to improve on first-order perturbation theory, which treats cosmic structures as independent entities that are affected by the average expansion of the universe, but neither alter the average expansion themselves, nor influence one another. Unfortunately, higher-order perturbation theory is much more complicated – everything affects everything else. Indeed, it’s not clear there is anything to gain from using these higher-order approximations rather than “just solving” the full equations of general relativity instead.

    Improving the precision of our calculations – how well we think we know the answer – is one thing, as discussed above. But the complexity of Einstein’s equations has made us wonder just how accurate the perturbative description really is. In other words, it might give us answers, but are they the right ones? Nonlinear equations, after all, can have surprising features that appear unexpectedly when you solve them in their full glory, and it is hard to predict surprises. Some leading cosmologists, for example, claim that the accelerating expansion of the universe, which dark energy was invented to explain, is caused instead by the collective effects of cosmic structures in the universe acting through the magic of general relativity. Other cosmologists argue this is nonsense.

    The only way to be sure is to use the full equations of general relativity. And the good news is that computers are finally becoming fast enough that modelling the universe using the full power of general relativity – without the traditional approximations – is not such a crazy prospect. With some hard work, it may finally be feasible over the next decade.

    Computers to the rescue

    Numerical general relativity itself is not new. As far back as the late 1950s, Richard Arnowitt, Stanley Deser and Charles Misner – together known as ADM – laid out a basic framework in which space–time could be carefully separated into space and time – a vital first step in solving general relativity with a computer. Other researchers also got in on the act, including Thomas Baumgarte, Stuart Shapiro, Masaru Shibata and Takashi Nakamura, who made important improvements to the numerical properties of the ADM system in the 1980s and 1990s so that the dynamics of systems could be followed accurately over long enough times to be interesting.
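    The ADM split referred to here writes the space–time interval in terms of a lapse function N, a shift vector β^i and a spatial metric γ_ij, so that Einstein’s equations can be evolved forward in time slice by slice:

```latex
ds^2 = -N^2\,dt^2 + \gamma_{ij}\,\bigl(dx^i + \beta^i\,dt\bigr)\bigl(dx^j + \beta^j\,dt\bigr).
```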

    Beam on. No image credit.

    Other techniques for obtaining such long-time stability were also developed, including one imported from fluid mechanics. Known as adaptive mesh refinement, it allowed scarce computer memory resources to be focused only on those parts of problems where they were needed most. Such advances have allowed numerical relativists to simulate with great precision what happens when two black holes merge and create gravitational waves – ripples in space–time. The resulting images are more than eye candy; they were essential in allowing members of the US-based Laser Interferometer Gravitational-Wave Observatory (LIGO) collaboration to announce last year that they had directly detected gravitational waves for the first time.


    Caltech/MIT Advanced LIGO installation at Hanford, WA, USA

    Caltech/MIT Advanced LIGO detector installation at Livingston, LA, USA

    Gravitational waves. Credit: MPI for Gravitational Physics/W. Benger-Zib

    By modelling many different possible configurations of pairs of black holes – different masses, different spins and different orbits – LIGO’s numerical relativists produced a template of the gravitational-wave signal that would result in each case. Other researchers then compared those simulations over and over again to what the experiment had been measuring, until the moment came when a signal was found that matched one of the templates. The signal in question was coming to us from a pair of black holes a billion light-years away spiralling into one another and merging to form a single larger black hole.

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Using numerical relativity to model cosmology has its own challenges compared to simulating black-hole mergers, which are just single astrophysical events. Some qualitative cosmological questions can be answered by reasonably small-scale simulations, and there are state-of-the-art “N-body” simulations that use Newtonian gravity to follow trillions of independent masses over billions of years to see where gravity takes them. But general relativity offers at least one big advantage over Newtonian gravity – it is local.

    The difficulty with calculating the gravity experienced by any particular mass in a Newtonian simulation is that you need to add up the effects of all the other masses. Even Isaac Newton himself regarded this “action at a distance” as a failing of his model, since it means that information travels from one side of the simulated universe to the other instantly, violating the speed-of-light limit. In general relativity, however, all the equations are “local”, which means that to determine the gravity at any time or location you only need to know what the gravity and matter distribution were nearby just moments before. This should, in other words, simplify the numerical calculations.
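    A toy, one-dimensional illustration of the contrast (schematic only, and not taken from either group’s code): updating a single particle under Newtonian gravity requires a sum over every other mass in the simulation, whereas an explicit update of a local field equation touches only neighbouring grid points.

```python
import numpy as np

def newtonian_accel_1d(positions, masses, i, G=1.0, soft=1e-3):
    """Acceleration on particle i: a sum over *every* other particle ("action at a distance")."""
    dx = positions - positions[i]
    dx[i] = np.inf                                # drop the self-interaction term
    return G * np.sum(masses * np.sign(dx) / (dx**2 + soft))

def local_wave_step(phi, phi_prev, dt=0.1, dx=1.0, c=1.0):
    """One explicit step of a wave-like equation: each point needs only its nearest neighbours."""
    laplacian = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
    return 2.0 * phi - phi_prev + (c * dt)**2 * laplacian

positions = np.array([0.0, 1.0, 2.5, 4.0])
masses = np.ones_like(positions)
print(newtonian_accel_1d(positions, masses, i=0))

phi = np.exp(-np.linspace(-3.0, 3.0, 64)**2)   # a smooth initial field profile
print(local_wave_step(phi, phi.copy())[:5])
```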

    Recently, the three of us at Kenyon College and Case Western Reserve University showed that the cosmological problem is finally becoming tractable (Phys. Rev. Lett. 116 251301 and Phys. Rev. D 93 124059). Just days after our paper appeared, Eloisa Bentivegna at the University of Catania in Italy and Marco Bruni at the University of Portsmouth, UK, had similar success (Phys. Rev. Lett. 116 251302). The two groups each presented the results of low-resolution simulations, where grid points are separated by 40 million light-years, with only long-wavelength perturbations. The simulations followed the universe for only a short time by cosmic standards – long enough only for the universe to somewhat more than double in size – but both tracked the evolution of these perturbations in full general relativity with no simplifications or approximations whatsoever. As the eminent Italian cosmologist Sabino Matarese wrote in Nature Physics, “the era of general relativistic numerical simulations in cosmology ha[s] begun”.

    These preliminary studies are still a long way from competing with modern N-body simulations for resolution, duration or dynamic range. To do so will require advances in the software so that the code can run on much larger computer clusters. We will also need to make the code more stable numerically so that it can model much longer periods of cosmic expansion. The long-term goal is for our numerical simulations to match as far as possible the actual evolution of the universe and its contents, which means using the full theory of general relativity. But given that our existing simulations using full general relativity have revealed no fluctuations driving the accelerated expansion of the universe, it appears instead that accelerated expansion will need new physics – whether dark energy or a modified gravitational theory.

    Both groups also observe what appear to be small corrections to the dynamics of space–time when compared with simple perturbation theory. Bentivegna and Bruni studied the collapse of structures in the early universe and suggested that they appear to coalesce somewhat more quickly than in the standard simplified theory.
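
    For readers who want to see what the “standard simplified theory” looks like, cosmological perturbation theory evolves small inhomogeneities on top of a smooth expanding background. Schematically (in the Newtonian gauge and ignoring anisotropic stress – a textbook form, not one taken from either paper), the line element is

    \[ ds^2 = -(1 + 2\Phi)\,dt^2 + a^2(t)\,(1 - 2\Phi)\,\delta_{ij}\,dx^i\,dx^j , \]

    where a(t) is the scale factor of the background expansion and the potential Φ is assumed to be small. The new simulations drop that assumption and evolve the full Einstein equations instead, which is why even small departures from the perturbative answer are interesting.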

    Future perfect

    Drawing specific conclusions about simulations is a subtle matter in general relativity. At the mathematical heart of the theory is the principle of “co-ordinate invariance”, which essentially says that the laws of physics should be the same no matter what set of labels you use for the locations and times of events. We are all familiar with milder versions of this symmetry: we wouldn’t expect the equations governing basic scientific laws to depend on whether we measure our positions in, say, New York or London, and we don’t need new versions of science textbooks whenever we switch from standard time to daylight saving time and back. Co-ordinate invariance in the context of general relativity is just a more extreme version of that, but it means we must ensure that any information we extract from our simulations does not depend on how we label the points within them.

    Our Ohio group has taken particular care with this subtlety by sending simulated beams of light from far-off points in the distant past through the evolving space–time to arrive at the here and now. We then use those beams to simulate observations of the expansion history of our universe. The universe that emerges exhibits an average behaviour that agrees with a corresponding smooth, homogeneous model, but with inhomogeneous structures on top. These additional structures produce deviations in observable quantities across the simulated observer’s sky that should soon be accessible to real observers.
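
    For comparison, the smooth, homogeneous baseline is straightforward to compute. The sketch below (Python with NumPy; the Hubble constant of 70 km/s/Mpc and the matter-only Einstein–de Sitter expansion law are illustrative assumptions, not numbers from our simulations) integrates along a light ray in such a model to get the comoving distance to a given redshift – the kind of observable against which the simulated beams are checked.

    # Comoving distance D_C(z) = c * integral_0^z dz'/H(z') for a flat,
    # matter-only universe, where H(z) = H0 * (1 + z)^(3/2).
    import numpy as np

    C_KM_S = 299792.458   # speed of light in km/s
    H0 = 70.0             # Hubble constant in km/s/Mpc (assumed value)

    def comoving_distance(z, n=10000):
        """Comoving distance in Mpc to redshift z in an Einstein-de Sitter model."""
        zs = np.linspace(0.0, z, n)
        integrand = 1.0 / (H0 * (1.0 + zs) ** 1.5)   # 1 / H(z')
        return C_KM_S * np.trapz(integrand, zs)

    for z in (0.1, 0.5, 1.0):
        print(f"z = {z}: D_C ~ {comoving_distance(z):.0f} Mpc")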

    This work is therefore just the start of a journey. Creating codes that are accurate and sensitive enough to make realistic predictions for future observational programmes – such as the all-sky surveys to be carried out by the Large Synoptic Survey Telescope or the Euclid satellite – will require us to study larger volumes of space.


    LSST Camera, built at SLAC



    LSST telescope, currently under construction on Cerro Pachón, a 2,682-metre-high mountain in the Coquimbo Region of northern Chile, alongside the existing Gemini South and Southern Astrophysical Research telescopes.

    ESA/Euclid spacecraft

    These studies will also have to incorporate ultra-large-scale structures some hundreds of millions of light-years across as well as much smaller-scale structures, such as galaxies and clusters of galaxies, and they will have to follow these volumes for longer stretches of time than is currently possible.

    All this will require us to introduce some of the same refinements that made it possible to predict the gravitational-wave ripples produced by a merging black hole, such as adaptive mesh refinement to resolve the smaller structures like galaxies, and N-body simulations to allow matter to flow naturally across these structures. These refinements will let us characterize more precisely and more accurately the statistical properties of galaxies and clusters of galaxies – as well as the observations we make of them – taking general relativity fully into account. Doing so will, however, require clusters of computers with millions of cores, rather than the hundreds we use now.

    These improvements to code will take time, effort and collaboration. Groups around the world – in addition to the two mentioned – are likely to make important contributions. Numerical general-relativistic cosmology is still in its infancy, but the next decade will see huge strides to make the best use of the new generation of cosmological surveys that are being designed and built today. This work will either give us increased confidence in our own scientific genesis story – ΛCDM – or teach us that we still have a lot more thinking to do about how the universe got itself to where it is today.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Physics World is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 1:07 pm on May 5, 2017 Permalink | Reply
    Tags: physicsworld.com

    From physicsworld.com: “Flash Physics: Matter-wave tractor beams” 

    physicsworld
    physicsworld.com

    May 5, 2017
    Sarah Tesh

    Flash Physics is our daily pick of the latest need-to-know developments from the global physics community selected by Physics World’s team of editors and reporters

    Tractor beams could be made from matter waves

    Grabbing hold: a matter-wave tractor beam

    It should be possible to create a matter-wave tractor beam that grabs hold of an object by firing particles at it – according to calculations by an international team of physicists. Tractor beams work by firing cone-like “Bessel beams” of light or sound at an object. Under the right conditions, the light or sound waves will bounce off the object in such a way that the object experiences a force in the opposite direction to that of the beam. If this force is greater than the outward pressure of the beam, the object will be pulled inwards. Now, Andrey Novitsky and colleagues at Belarusian State University, ITMO University in St Petersburg and the Technical University of Denmark have done calculations that show that beams of particles can also function as tractor beams. Quantum mechanics dictates that these particles also behave as waves and the team found that cone-like beams of matter waves should also be able to grab hold of objects. There is, however, an important difference regarding the nature of the interaction between the particles and the object. Novitsky and colleagues found that if the scattering is defined by the Coulomb interaction between charged particles, then it is not possible to create a matter-wave tractor beam. However, tractor beams are possible if the scattering is defined by a Yukawa potential, which is used to describe interactions between some subatomic particles. The calculations are described in Physical Review Letters.
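
    For reference, the distinction being drawn is between a long-range and a short-range interaction; in textbook form (not taken from the paper itself) the two potentials are

    \[ V_{\mathrm{Coulomb}}(r) = \frac{q_1 q_2}{4\pi\varepsilon_0\, r}, \qquad V_{\mathrm{Yukawa}}(r) = -g^2\,\frac{e^{-r/\lambda}}{r}, \]

    where the length λ cuts the Yukawa interaction off at large distances. According to the calculations, it is only this short-range, Yukawa-type scattering that allows the reflected matter waves to pull an object back towards the source.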

    See the full article here.


     
  • richardmitnick 5:12 pm on May 4, 2017 Permalink | Reply
    Tags: physicsworld.com

    From physicsworld.com: “Optical chip gives microscopes nanoscale resolution” 

    physicsworld
    physicsworld.com

    May 3, 2017
    Michael Allen

    Super resolution: image taken using the new chip. No image credit.

    A photonic chip that allows a conventional microscope to work at nanoscale resolution has been developed by a team of physicists in Germany and Norway. The researchers claim that as well as opening up nanoscopy to many more people, the mass-producible optical chip also offers a much larger field of view than current nanoscopy techniques, which rely on complex microscopes.

    Nanoscopy, which is also known as super-resolution microscopy, allows scientists to see features smaller than the diffraction limit – about half the wavelength of visible light. It can be used to produce images with resolutions as high as 20–30 nm – approximately 10 times better than a normal microscope. Such techniques have important implications for biological and medical research, with the potential to provide new insights into disease and improve medical diagnostics.
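
    The “half the wavelength” figure follows from the Abbe diffraction limit: for an objective with numerical aperture NA collecting light of wavelength λ, the smallest resolvable feature is roughly (the worked numbers below are illustrative)

    \[ d \approx \frac{\lambda}{2\,\mathrm{NA}}, \qquad \text{e.g. } \lambda = 500\ \mathrm{nm},\ \mathrm{NA} \approx 1 \;\Rightarrow\; d \approx 250\ \mathrm{nm}, \]

    consistent with the 200–300 nm limit quoted below for conventional optical microscopes.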

    “The resolution of the standard optical microscope is basically limited by the diffraction barrier of light, which restricts the resolution to 200–300 nm for visible light,” explains Mark Schüttpelz, a physicist at Bielefeld University in Germany. “But many structures, especially biological structures like compartments of cells, are well below the diffraction limit. Here, super-resolution will open up new insights into cells, visualizing proteins ‘at work’ in the cell in order to understand structures and dynamics of cells.”

    Expensive and complex

    There are a number of different nanoscopy techniques that rely on fluorescent dyes to label molecules within the specimen being imaged. A special microscope illuminates and determines the position of individual fluorescent molecules with nanometre precision to build up an image. The problem with these techniques, however, is that they use expensive and complex equipment. “It is not very straightforward to acquire super-resolved images,” says Schüttpelz. “Although there are some rather expensive nanoscopes on the market, trained and experienced operators are required to obtain high-quality images with nanometer resolution.”

    To tackle this, Schüttpelz and his colleagues turned current techniques on their head. Instead of using a complex microscope with a simple glass slide to hold the sample, their method uses a simple microscope for imaging combined with a complex, but mass-producible, optical chip to hold and illuminate the sample.

    “Our photonic chip technology can be retrofitted to any standard microscope to convert it into an optical nanoscope,” explains Balpreet Ahluwalia, a physicist at The Arctic University of Norway, who was also involved in the research.

    Etched channels

    The chip is essentially a waveguide that completely removes the need for the microscope to contain a light source that excites the fluorescent molecules. It consists of five 25–500 μm-wide channels etched into a combination of materials that causes total internal reflection of light.
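
    Total internal reflection requires the guiding channel to have a higher refractive index than its surroundings; writing n₁ for the core and n₂ < n₁ for the surrounding material (generic labels, not values from the paper), light striking the boundary at an angle beyond the critical angle

    \[ \theta_c = \arcsin\!\left(\frac{n_2}{n_1}\right) \]

    is reflected back into the channel rather than escaping, which is what keeps the illumination confined to the chip while the sample sits on its surface.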

    The chip is illuminated by two solid-state lasers that are coupled to the chip by a lens or lensed fibres. Light with two different wavelengths is tightly confined within the channels and illuminates the sample, which sits on top of the chip. A lens and camera on the microscope record the resulting fluorescent signal, and the data obtained are used to construct a high-resolution image of the sample.

    To test the effectiveness of the chip, the researchers imaged liver cells. They demonstrated that a field of view of 0.5 × 0.5 mm² can be achieved at a resolution of around 340 nm in less than half a minute. In principle, this is fast enough to capture live events in cells. For imaging times of up to 30 min, a similar field of view at a resolution better than 140 nm is possible. Resolutions of less than 50 nm are also achievable with the chip, but require higher-magnification lenses, which limit the field of view to around 150 μm.

    Many cells

    Ahluwalia told Physics World that the advantage of using the photonic chip for nanoscopy is that it “decouples illumination and detection light paths” and the “waveguide generates illumination over large fields of view”. He adds that this has enabled the team to acquire super-resolved images over an area 100 times larger than with other techniques. This makes single images of as many as 50 living cells possible.

    According to Schüttpelz, the technique represents “a paradigm shift in optical nanoscopy”. “Not only highly specialized laboratories will have access to super-resolution imaging, but many scientists all over the world can convert their standard microscope into a super-resolution microscope just by retrofitting the microscope in order to use waveguide chips,” he says. “Nanoscopy will then be available to everyone at low costs in the near future.”

    The chip is described in Nature Photonics.

    See the full article here.


     