Tagged: physicsworld.com

  • richardmitnick 1:52 pm on January 5, 2017 Permalink | Reply
    Tags: , , , physicsworld.com, Semiconductor discs could boost night vision   

    From physicsworld.com: “Semiconductor discs could boost night vision” 

    physicsworld
    physicsworld.com

    Frequency double: Maria del Rocio Camacho-Morales studies the new optical material.

    A new method of fabricating nanoscale optical crystals capable of converting infrared to visible light has been developed by researchers in Australia, China and Italy. The new technique allows the crystals to be placed onto glass and could lead to improvements in holographic imaging – and even the development of improved night-vision goggles.

    Second-harmonic generation, or frequency doubling, is an optical process whereby two photons with the same frequency are combined within a nonlinear material to form a single photon with twice the frequency (and half the wavelength) of the original photons. The process is commonly used by the laser industry, in which green 532 nm laser light is produced from a 1064 nm infrared source. Recent developments in nanotechnology have opened up the potential for efficient frequency doubling using nanoscale crystals – potentially enabling a variety of novel applications.
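
    As a quick numerical check of the frequency-doubling arithmetic quoted above, here is a minimal sketch in Python (it uses standard physical constants; nothing in it comes from the paper itself):

    ```python
    # Second-harmonic generation: two photons at the fundamental frequency
    # combine into one photon at twice the frequency (half the wavelength).
    h = 6.626e-34   # Planck constant, J s
    c = 2.998e8     # speed of light, m/s

    fundamental_wavelength = 1064e-9                   # m, infrared Nd:YAG line
    fundamental_frequency = c / fundamental_wavelength

    doubled_frequency = 2 * fundamental_frequency
    doubled_wavelength = c / doubled_frequency         # half of 1064 nm

    print(f"{doubled_wavelength * 1e9:.0f} nm")        # 532 nm, green laser light

    # Energy is conserved: the output photon carries the energy of both inputs.
    ratio = (h * doubled_frequency) / (2 * h * fundamental_frequency)
    print(ratio)                                       # 1.0 (to within rounding)
    ```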

    Materials with second-order nonlinear susceptibilities – such as gallium arsenide (GaAs) and aluminium gallium arsenide (AlGaAs) – are of particular interest for these applications because this lowest-order nonlinearity makes them efficient frequency converters.

    Substrate mismatch

    To be able to exploit second-harmonic generation in a practical device, these nanostructures must be fabricated on a substrate with a relatively low refractive index (such as glass), so that light may pass through the optical device. This is challenging, however, because the growth of GaAs-based crystals in a thin film – and type III-V semiconductors in general – requires a crystalline substrate.

    “This is why growing a layer of AlGaAs on top of a low-refractive-index substrate, like glass, leads to unmatched lattice parameters, which causes crystalline defects,” explains Dragomir Neshev, a physicist at the Australian National University (ANU). These defects, he adds, result in unwanted changes in the electronic, mechanical, optical and thermal properties of the films.

    Previous attempts to overcome this issue have led to poor results. One approach, for example, relies on placing a buffer layer under the AlGaAs films, which is then oxidized. However, these buffer layers tend to have higher refractive indices than regular glass substrates. Alternatively, AlGaAs films can be transferred to a glass surface prior to the fabrication of the nanostructures. In this case the result is poor-quality nanocrystals.

    Best of both

    The new study was done by Neshev and colleagues at ANU, Nankai University and the University of Brescia, who combined the advantages of the two different approaches to develop a new fabrication method. First, high-quality disc-shaped nanocrystals about 500 nm in diameter are fabricated using electron-beam lithography on a GaAs wafer, with a layer of AlAs acting as a buffer between the two. The buffer is then dissolved, and the discs are coated in a transparent layer of benzocyclobutene. This can then be attached to the glass substrate, and the GaAs wafer peeled off with minimal damage to the nanostructures.

    The development could have various applications. “The nanocrystals are so small they could be fitted as an ultrathin film to normal eye glasses to enable night vision,” says Neshev, explaining that, by combining frequency doubling with other nonlinear interactions, the film might be used to convert invisible, infrared light to the visible spectrum.

    If they could be made, such modified glasses would be an improvement on conventional night-vision binoculars, which tend to be large and cumbersome. To this end, the team is working to scale up the size of the nanocrystal films to cover the area of typical spectacle lenses, and expects to have a prototype device completed within the next five years.

    Security holograms

    Alongside frequency doubling, the team was also able to tune the nanodiscs to control the direction and polarization of the emitted light, which makes the film more efficient. “Next, maybe we can even engineer the light and make complex shapes such as nonlinear holograms for security markers,” says Neshev, adding: “Engineering of the exact polarization of the emission is also important for other applications such as microscopy, which allows light to be focused to a smaller volume.”

    “Vector beams with spatially arranged polarization distributions have attracted great interest for their applications in a variety of technical areas,” says Qiwen Zhan, an engineer at the University of Dayton in Ohio, who was not involved in this study. The novel fabrication technique, he adds, “opens a new avenue for generating vector fields at different frequencies through nonlinear optical processes”.

    With their initial study complete, Neshev and colleagues are now looking to refine their nanoantennas, both to increase the efficiency of the wavelength-conversion process and to extend the effects to other nonlinear interactions such as down-conversion.

    The research is described in the journal Nano Letters.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 9:41 am on October 10, 2016 Permalink | Reply
    Tags: Laser-scanning confocal microscopes, Mesolens, physicsworld.com, ‘Radical’ new microscope lens combines high resolution with large field of view

    From physicsworld.com: “‘Radical’ new microscope lens combines high resolution with large field of view” 

    physicsworld
    physicsworld.com

    Oct 10, 2016
    Michael Allen

    Zooming in: image of mouse embryo

    A new microscope lens that offers the unique combination of a large field of view with high resolution has been created by researchers in the UK. The new “mesolens” for confocal microscopes can create 3D images of much larger biological samples than was previously possible – while providing detail at the sub-cellular level. According to the researchers, the ability to view whole specimens in a single image could assist in the study of many biological processes and ensure that important details are not overlooked.

    Laser-scanning confocal microscopes are an important tool in modern biological sciences. They emerged in the 1980s as an improvement on fluorescence microscopes, which view specimens that have been dyed with a substance that emits light when illuminated. Standard fluorescence microscopes are not ideal because they pick up fluorescence from behind the focal point, creating images with blurry backgrounds. To eliminate the out-of-focus background, confocal microscopes use a small spot of illuminating laser light and a tiny aperture so that only light close to the focal plane is collected. The laser is scanned across the specimen and many images are taken to create the full picture. Due to the small depth of focus, confocal microscopes are also able to focus a few micrometres through samples to build up a 3D image.

    In microscopy there is a trade-off between resolution and the size of the specimen that can be imaged, or field-of-view – you either have a large field-of-view and low resolution or a small field-of-view and high resolution. Current confocal microscopes struggle to image large specimens, because low magnification produces poor resolution.

    Stitched together

    “Normally, when a large object is imaged with a low-magnification lens, rays of light are collected from only a small range of angles (i.e. the lens has a low numerical aperture),” explains Gail McConnell from the Centre for Biophotonics at the University of Strathclyde, in Glasgow. “This reduces the resolution of the image and has an even more serious effect in increasing the depth of focus, so all the cells in a tissue specimen are superimposed and you cannot see them individually.” Large objects can be imaged by stitching smaller images together. But variations in illumination and focus affect the quality of the final image.

    McConnell and colleagues set out to design a lens that could image larger samples, while retaining the detail produced by confocal microscopy. They focused on creating a lens that could be used to image an entire 12.5 day-old mouse embryo – a specimen that is typically about 5 mm across. This was to “facilitate the recognition of developmental abnormalities” in such embryos, which “are routinely used to screen human genes that are suspected of involvement in disease”, says McConnell.

    Dubbed a mesolens, their optical system is more than half a metre long and contains 15 optical elements. This is unlike most confocal lenses, which are only a few centimetres in length. The mesolens has a magnification of 4× and a numerical aperture of 0.47, which is a significant improvement over the 0.1–0.2 apertures currently available. The system is also able to obtain 3D images of objects 6 mm wide and long, and 3 mm thick.

    The high numerical aperture also provides a very good depth resolution. “This makes it possible to focus through tissue and see a completely different set of sub-cellular structures in focus every 1/500th of a millimetre through a depth of 3 mm,” explains McConnell. The distortion of the images is less than 0.7% at the periphery of the field and the lens works across the full visible spectrum of light, enabling imaging with multiple fluorescent labels.
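
    To get a feel for why the numerical aperture matters so much, the textbook diffraction-limit formulas can be evaluated for a conventional low-magnification objective and for the mesolens. This is only an order-of-magnitude sketch using standard approximations (Rayleigh criterion, imaging in air at a mid-visible wavelength); the figures quoted by the team come from their own measurements, not from this estimate:

    ```python
    # Rough diffraction-limited resolution estimates as a function of
    # numerical aperture (NA):
    #   lateral resolution ~ 0.61 * wavelength / NA        (Rayleigh criterion)
    #   axial resolution   ~ 2 * n * wavelength / NA**2    (n = refractive index)
    wavelength = 500e-9   # m, mid-visible light (assumed)
    n = 1.0               # assume imaging in air for simplicity

    def lateral_resolution(na):
        return 0.61 * wavelength / na

    def axial_resolution(na):
        return 2 * n * wavelength / na**2

    for na in (0.15, 0.47):   # typical low-magnification objective vs the mesolens
        print(f"NA = {na}: lateral ~{lateral_resolution(na) * 1e6:.1f} um, "
              f"axial ~{axial_resolution(na) * 1e6:.1f} um")
    ```

    By these rough numbers, going from NA ≈ 0.15 to 0.47 improves the lateral resolution roughly threefold and the depth discrimination nearly tenfold, which is why sub-cellular detail survives at such a low magnification.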

    Engineering and design

    The lens was made possible through a combination of skilled engineering and optical design, and the use of components with very small aberrations. “Making the new lens is very expensive and difficult: to achieve the required very low field curvature across the full 6 mm field of view and because we need chromatic correction through the entire visible spectrum, the lens fabrication and mounting must be unusually accurate and the glass must be selected very carefully and tested before use,” explains McConnell.

    The researchers used the lens in a customized confocal microscope to image 12.5 day-old mouse embryos. They were able to image single cells, heart muscle fibres and sub-cellular details, not just near the surface of the sample but throughout the depth of the embryo. Writing in the journal eLife, the researchers claim “no existing microscope can show all of these features simultaneously in an intact mouse embryo in a single image.”

    The researchers also write that their mesolens “represents the most radical change in microscope objective design for over a century” and “has the potential to transform optical microscopy through the acquisition of sub-cellular resolution 3D data sets from large tissue specimens”.

    Rafael Yuste, a neuroscientist at Columbia University in New York, saw an earlier prototype of the mesolens microscope. He told physicsworld.com that McConnell and colleagues “have completely redesigned the objective lens to achieve an impressive performance”. He adds that it could enable “wide-field imaging of neuronal circuits and tissues while preserving single-cell resolution”, which could help produce a dynamic picture of how cells and neural circuits in the brain interact.

    Video images taken by the mesolens can be viewed in the eLife paper describing the microscope.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 11:52 am on October 7, 2016 Permalink | Reply
    Tags: , , Correlation between galaxy rotation and visible matter puzzles astronomers, , , physicsworld.com   

    From physicsworld: “Correlation between galaxy rotation and visible matter puzzles astronomers” 

    physicsworld
    physicsworld.com

    Oct 7, 2016
    Keith Cooper

    Strange correlation: why is galaxy rotation defined by visible mass? No image credit.

    A new study of the rotational velocities of stars in galaxies has revealed a strong correlation between the motion of the stars and the amount of visible mass in the galaxies. This result comes as a surprise because it is not predicted by conventional models of dark matter.

    Stars on the outskirts of rotating galaxies orbit just as fast as those nearer the centre. This appears to be in violation of Newton’s laws, which predict that these outer stars would be flung away from their galaxies. The extra gravitational glue provided by dark matter is the conventional explanation for why these galaxies stay together. Today, our most cherished models of galaxy formation and cosmology rely entirely on the presence of dark matter, even though the substance has never been detected directly.

    These new findings, from Stacy McGaugh and Federico Lelli of Case Western Reserve University, and James Schombert of the University of Oregon, threaten to shake things up. They measured the gravitational acceleration of stars in 153 galaxies with varying sizes, rotations and brightness, and found that the measured accelerations can be expressed as a relatively simple function of the visible matter within the galaxies. Such a correlation does not emerge from conventional dark-matter models.
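
    The correlation is usually expressed as a one-parameter fitting function linking the observed radial acceleration to the acceleration expected from the visible (baryonic) matter alone. The sketch below reproduces that functional form as I understand it from the published paper; both the formula and the acceleration scale should be treated as quoted inputs rather than anything derived here:

    ```python
    import math

    G_DAGGER = 1.2e-10  # m/s^2, the fitted acceleration scale quoted by the team
                        # (taken here as a given constant)

    def observed_acceleration(g_baryonic):
        """Radial acceleration relation: g_obs as a function of g_bar."""
        return g_baryonic / (1.0 - math.exp(-math.sqrt(g_baryonic / G_DAGGER)))

    # At high accelerations (inner regions) the relation tends to g_obs ~ g_bar,
    # i.e. ordinary Newtonian behaviour with no extra mass needed; at low
    # accelerations (galaxy outskirts) it tends to sqrt(g_bar * G_DAGGER), i.e.
    # systematically more acceleration than the visible matter alone provides.
    for g_bar in (1e-8, 1e-10, 1e-12):
        print(f"g_bar = {g_bar:.0e} -> g_obs = {observed_acceleration(g_bar):.2e} m/s^2")
    ```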

    Mass and light

    This correlation relies strongly on the calculation of the mass-to-light ratio of the galaxies, from which the distribution of their visible mass and gravity is then determined. McGaugh attempted this measurement in 2002 using visible light data. However, these results were skewed by hot, massive stars that are millions of times more luminous than the Sun. This latest study is based on near-infrared data from the Spitzer Space Telescope.

    NASA/Spitzer Telescope

    Since near-infrared light is emitted by the more common low-mass stars and red giants, it is a more accurate tracer for the overall stellar mass of a galaxy. Meanwhile, the mass of neutral hydrogen gas in the galaxies was provided by 21 cm radio-wavelength observations.

    McGaugh told physicsworld.com that the team was “amazed by what we saw when Federico Lelli plotted the data.”

    The result is confounding because galaxies are supposedly ensconced within dense haloes of dark matter.

    Spherical halo of dark matter. cerncourier.com

    Furthermore, the team found a systematic deviation from Newtonian predictions, implying that some other force is at work beyond simple Newtonian gravity.

    “It’s an impressive demonstration of something, but I don’t know what that something is,” admits James Binney, a theoretical physicist at the University of Oxford, who was not involved in the study.

    This systematic deviation from Newtonian mechanics was predicted more than 30 years ago by an alternate theory of gravity known as modified Newtonian dynamics (MOND). According to MOND’s inventor, Mordehai Milgrom of the Weizmann Institute in Israel, dark matter does not exist, and instead its effects can be explained by modifying how Newton’s laws of gravity operate over large distances.

    “This was predicted in the very first MOND paper of 1983,” says Milgrom. “The MOND prediction is exactly what McGaugh has found, to a tee.”

    However, Milgrom is unhappy that McGaugh hasn’t outright attributed his results to MOND, and suggests that there’s nothing intrinsically new in this latest study. “The data here are much better, which is very important, but this is really the only conceptual novelty in the paper,” says Milgrom.

    No tweaking required

    McGaugh disagrees with Milgrom’s assessment, saying that previous results had incorporated assumptions that tweak the data to get the desired result for MOND, whereas this time the mass-to-light ratio is accurate enough that no tweaking is required.

    Furthermore, McGaugh says he is “trying to be open-minded”, by pointing out that exotic forms of dark matter like superfluid dark matter or even complex galactic dynamics could be consistent with the data. However, he also feels that there is implicit bias against MOND among members of the astronomical community.

    “I have experienced time and again people dismissing the data because they think MOND is wrong, so I am very consciously drawing a red line between the theory and the data.”

    Much of our current understanding of cosmology relies on cold dark matter, so could the result threaten our models of galaxy formation and large-scale structure in the universe? McGaugh thinks it could, but not everyone agrees.

    Way too complex

    Dark-matter simulations struggle on the scale of individual galaxies because “the physics of galaxy formation is way too complex to compute properly,” Binney points out, the implication being that it is currently impossible to say whether dark matter can explain these results or not. “It’s unfortunately beyond the powers of humankind at the moment to know.”

    That leaves the battle between dark matter and alternate models of gravitation at an impasse. However, Binney points out that dark matter has an advantage because it can also be studied through observations of galaxy mergers and collisions between galaxy clusters. Also, there are many experiments that are currently searching for evidence of dark-matter particles.

    McGaugh’s next step is to extend the study to elliptical and dwarf spheroidal galaxies, as well as to galaxies at greater distances from the Milky Way.

    The research is to be published in Physical Review Letters and a preprint is available on arXiv.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 10:09 am on August 29, 2016 Permalink | Reply
    Tags: , physicsworld.com,   

    From physicsworld.com: “Nonlinear optical quantum-computing scheme makes a comeback” 

    physicsworld
    physicsworld.com

    Aug 29, 2016
    Hamish Johnston

    A debate that has been raging for 20 years about whether a certain interaction between photons can be used in quantum computing has taken a new twist, thanks to two physicists in Canada. The researchers have shown that it should be possible to use “cross-Kerr nonlinearities” to create a cross-phase (CPHASE) quantum gate. Such a gate has two photons as its input and outputs them in an entangled state. CPHASE gates could play an important role in optical quantum computers of the future.

    Photons are very good carriers of quantum bits (qubits) of information because the particles can travel long distances without the information being disrupted by interactions with the environment. But photons are far from ideal qubits when it comes to creating quantum-logic gates because photons so rarely interact with each other.

    One way around this problem is to design quantum computers in which the photons do not interact with each other. Known as “linear optical quantum computing” (LOQC), it usually involves preparing photons in a specific quantum state and then sending them through a series of optical components, such as beam splitters. The result of the quantum computation is derived by measuring certain properties of the photons.

    Simpler quantum computers

    One big downside of LOQC is that it requires a lot of optical components to perform even basic quantum-logic operations – and the number quickly becomes unwieldy in an integrated quantum computer capable of useful calculations. In contrast, quantum computers made from logic gates in which photons interact with each other would be much simpler – at least in principle – which is why some physicists are keen on developing them.

    This recent work on cross-Kerr nonlinearities has been carried out by Daniel Brod and Joshua Combes at the Perimeter Institute for Theoretical Physics and Institute for Quantum Computing in Waterloo, Ontario. Brod explains that a cross-Kerr nonlinearity is a “superidealized” interaction between two photons that can be used to create a CPHASE quantum-logic gate.

    This gate takes zero, one or two photons as input. When the input is zero or one photon, the gate does nothing. But when two photons are present, the gate outputs both with a phase shift between them. One important use of such a gate is to entangle photons, which is vital for quantum computing.
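
    In the two-qubit (dual-rail) picture this behaviour corresponds to a simple diagonal unitary: nothing happens unless both photons are present, in which case the joint state picks up a phase. The snippet below is my own toy illustration of that action, not the authors’ model of the nonlinearity:

    ```python
    import numpy as np

    # Basis ordering: |00>, |01>, |10>, |11> (photon occupation of two modes).
    def cphase(phi):
        """Ideal CPHASE gate: a phase phi applied only to the two-photon term."""
        return np.diag([1, 1, 1, np.exp(1j * phi)])

    gate = cphase(np.pi)  # phi = pi is the maximally entangling setting

    # Input: each photon in an equal superposition of its two modes,
    # i.e. a simple product (unentangled) state.
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    state_in = np.kron(plus, plus)

    state_out = gate @ state_in
    print(np.round(state_out, 3))
    # (|00> + |01> + |10> - |11>)/2 -- no longer factorisable: the photons
    # leave the gate entangled, which is what makes CPHASE useful.
    ```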

    The problem is that there is no known physical system – trapped atoms, for example – that behaves exactly like a cross-Kerr nonlinearity. Physicists have therefore instead looked for systems that are close enough to create a practical CPHASE. Until recently, it looked like no appropriate system would be found. But now Brod and Combes argue that physicists have been too pessimistic about cross-Kerr nonlinearities and have shown that it could be possible to create a CPHASE gate – at least in principle.

    From A to B via an atom

    Their model is a chain of interaction sites through which the two photons propagate in opposite directions. These sites could be pairs of atoms, in which the atoms themselves interact with each other. The idea is that one photon “A” will interact with one of the atoms in a pair, while the other photon “B” interacts with the other atom. Because the two atoms interact with each other, they will mediate an interaction between photons A and B.

    Unlike some previous designs that implemented quantum error correction to protect the integrity of the quantum information, this latest design is “passive” and therefore simpler.

    Brod and Combes reckon that a high-quality CPHASE gate could be made using five such atomic pairs. Brod told physicsworld.com that creating such a gate in the lab would be difficult, but if successful it could replace hundreds of components in a LOQC system.

    As well as pairs of atoms, Brod says that the gate could be built from other interaction sites such as individual three-level atoms or optical cavities. He and Combes are now hoping that experimentalists will be inspired to test their ideas in the lab. Brod points out that measurements on a system with two interaction sites would be enough to show that their design is valid.

    The work is described in Physical Review Letters. Brod and Combes have also teamed up with Julio Gea-Banacloche of the University of Arkansas to write a related paper that appears in Physical Review A. This second work looks at their design in more detail.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 12:10 pm on August 12, 2016 Permalink | Reply
    Tags: , , physicsworld.com, X-ray pulsars   

    From physicsworld.com: “X-ray pulsars plot the way for deep-space GPS” 

    physicsworld
    physicsworld.com

    Aug 11, 2016
    Keith Cooper

    Pulsar phone home: X-ray pulsars could be great for interstellar navigation. No image credit.

    An interstellar navigation technique that taps into the highly periodic signals from X-ray pulsars is being developed by a team of scientists from the National Physical Laboratory (NPL) and the University of Leicester. Using a small X-ray telescope on board a craft, it should be possible to determine its position in deep space to an accuracy of 2 km, according to the researchers.

    Referred to as XNAV, the system would use careful timing of pulsars – which are highly magnetized spinning neutron stars – to triangulate a spacecraft’s position relative to a standardized location, such as the solar system’s centre of mass, which lies within the Sun’s corona. As pulsars spin, they emit beams of electromagnetic radiation, including strong radio emission, from their magnetic poles. If these beams point towards Earth, they appear to “pulse” with each rapid rotation.

    Some pulsars in binary systems also accrete gas from their companion star, which can gather over the pulsar’s poles and grow hot enough to emit X-rays. It is these X-ray pulsars that can be used for stellar navigation – radio antennas are big and bulky, whereas X-ray detectors are smaller, often armed with just a single-pixel sensor, and are easier to include within a spacecraft’s payload.

    X-ray payload

    By 2013, theoretical work describing XNAV techniques had developed to the point where the European Space Agency commissioned a team, led by Setnam Shemar at NPL, to conduct a feasibility study, with an eye to one day using it on their spacecraft.

    Shemar’s team analysed two techniques. The simplest is called “delta correction”, and works by timing incoming X-ray pulses – from a single pulsar – using an on-board atomic clock and comparing them to their expected time-of-arrival at the standardized location. The offset between these two timings, taken together with an initial estimated spacecraft position from ground tracking, can be used to obtain a more precise spacecraft position. This method is designed to be used in conjunction with ground-based tracking by NASA’s Deep Space Network or the European Space Tracking Network to provide more positional accuracy. Simulations indicated an accuracy of 2 km when locked onto a pulsar for 10 hours, or 5 km with just one hour of observation.

    The benefits of this method would be most apparent in missions to the outer solar system, says Shemar, where the distance means that ground tracking is less accurate than within the inner solar system, where the XNAV system could be calibrated. However, Werner Becker of the Max Planck Institute for Extraterrestrial Physics, who was not involved in the current work, points out that such a system would not be automated and would still rely on communication with Earth.

    Shemar agrees, which is why his team also considered a second technique, known as “absolute navigation”. To determine a location in 3D space, one must have the x, y and z co-ordinates, plus a time co-ordinate. If a spacecraft has an atomic clock on board, then this could be achieved by monitoring a minimum of three pulsars – if there is no atomic clock, a fourth pulsar would be required. The team’s simulations indicate that at the distance of Neptune, a spacecraft could autonomously measure its position to within 30 km in 3D space using the four-pulsar system.
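
    The geometry behind absolute navigation reduces to a small linear-algebra problem: each pulsar contributes one equation relating the spacecraft’s offset along that pulsar’s line of sight to a measured pulse-timing offset. The following least-squares sketch uses entirely hypothetical pulsar directions and a made-up spacecraft offset, purely to show the principle (a real system must also handle pulse ambiguity, clock error and measurement noise):

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    # Unit vectors towards three pulsars (hypothetical directions).
    directions = np.array([
        [1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.5, 0.5, 0.707],
    ])
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)

    # "True" spacecraft offset from the reference point (e.g. the solar system's
    # centre of mass), used here only to generate consistent fake measurements.
    true_offset = np.array([1.5e7, -2.0e7, 5.0e6])  # metres

    # Each timing offset is the light-travel time of the position component
    # along that pulsar's line of sight. With an on-board atomic clock three
    # pulsars suffice; a fourth would be needed to solve for a clock error too.
    timing_offsets = directions @ true_offset / C

    # Recover the position: solve directions @ r = C * timing_offsets.
    r_estimate, *_ = np.linalg.lstsq(directions, C * timing_offsets, rcond=None)
    print(r_estimate)  # ~[ 1.5e+07 -2.0e+07  5.0e+06 ]
    ```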

    Limits to technology

    The downside to absolute navigation is that either more X-ray detectors are required – one for each pulsar – or a mechanism to allow the X-ray detector to slew to each pulsar in turn would need to be implemented. It’s a trade-off, points out Shemar, between accuracy and the practical limits of technology and cost. Becker, for instance, advocates using up to 10 pulsars to provide the highest accuracy, but implementing this on a spacecraft may be more difficult.

    While the engineering behind such a steering mechanism is complex, “it’s not miles out of the scope of existing technology,” says Adrian Martindale of the University of Leicester, who participated in the feasibility study. In terms of the cost, complexity and size of X-ray detector required for XNAV, the team cites the example of the Mercury Imaging X-ray Spectrometer (MIXS) instrument that will launch to the innermost planet on the upcoming BepiColombo mission in 2018.

    MIXS: Mercury Imaging X-ray Spectrometer

    ESA/BepiColombo

    “We’ve shown that we think it is feasible to achieve,” Shemar told physicsworld.com, adding the caveat that some of the technology needs to catch up with the theoretical work. “Reducing the mass of the detector as far as possible, reducing the observation time for each pulsar and having a suitable steering mechanism are all significant challenges to be overcome.”

    In February 2017, NASA plans to launch the Neutron star Interior Composition Explorer (NICER) to the International Space Station. Although primarily for X-ray astronomy, NICER will also perform a demonstration of XNAV. As this idea of pulsar-based navigation continues to grow, “space agencies may begin to take a more proactive role and start developing strategies for how an XNAV system could be implemented on a space mission,” says Shemar.

    Becker is a little more sceptical about how soon XNAV will be ushered in for use on spacecraft. “The technology will become available when there is a need for it,” he says. “Autonomous pulsar navigation becomes attractive for deep-space missions but there are none planned for many years.”

    The research is published in the journal Experimental Astronomy.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 11:36 am on August 7, 2016 Permalink | Reply
    Tags: , , , , physicsworld.com,   

    From physicsworld.com: “And so to bed for the 750 GeV bump” 

    physicsworld
    physicsworld.com

    Aug 5, 2016
    Tushna Commissariat

    No bumps: ATLAS diphoton data – the solid black line shows the 2015 and 2016 data combined. (Courtesy: ATLAS Experiment/CERN)

    Smooth dips: CMS diphoton data – blue lines show 2015 data, red are 2016 data and black are the combined result. (Courtesy: CMS collaboration/CERN)

    After months of rumours, speculation and some 500 papers posted to the arXiv in an attempt to explain it, the ATLAS and CMS collaborations have confirmed that the small excess of diphoton events, or “bump”, at 750 GeV detected in their preliminary data is a mere statistical fluctuation that has disappeared in the light of more data. Most folks in the particle-physics community will have been unsurprised if a bit disappointed by today’s announcement at the International Conference on High Energy Physics (ICHEP) 2016, currently taking place in Chicago.

    The story began around this time last year, soon after the LHC was rebooted and began its impressive 13 TeV run, when the ATLAS collaboration saw more events than expected around the 750 GeV mass window. This bump immediately caught the interest of physicists the world over, simply because there was a sniff of “new physics” about it, meaning that the Standard Model of particle physics did not predict the existence of a particle at that energy. It was also the first interesting data to emerge from the LHC after its momentous discovery of the Higgs boson in 2012 and, had it held, it would have been one of the most exciting discoveries in modern particle physics.

    According to ATLAS, “Last year’s result triggered lively discussions in the scientific communities about possible explanations in terms of new physics and the possible production of a new, beyond-Standard-Model particle decaying to two photons. However, with the modest statistical significance from 2015, only more data could give a conclusive answer.”

    And that is precisely what both ATLAS and CMS did, by analysing the 2016 dataset that is nearly four times larger than that of last year. Sadly, both years’ data taken together reveal that the excess is not large enough to be an actual particle. “The compatibility of the 2015 and 2016 datasets, assuming a signal with mass and width given by the largest 2015 excess, is on the level of 2.7 sigma. This suggests that the observation in the 2015 data was an upward statistical fluctuation.” The CMS statement is succinctly similar: “No significant excess is observed over the Standard Model predictions.”
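
    A toy Poisson-counting estimate shows why a modest excess can simply melt away as the dataset grows. This is nothing like the collaborations’ actual statistical treatment, and the event counts below are invented purely for illustration:

    ```python
    import math

    def naive_significance(excess, background):
        """Crude significance of an excess of counts over an expected background."""
        return excess / math.sqrt(background)

    # Hypothetical numbers: a 15-event excess over an expected background of 40
    # events in the first dataset looks mildly interesting...
    print(f"{naive_significance(15, 40):.1f} sigma")      # ~2.4 sigma

    # ...but if the excess was just a fluctuation, quadrupling the data
    # quadruples the background while the original blip does not grow with it,
    # so its apparent significance is diluted rather than reinforced.
    print(f"{naive_significance(15, 4 * 40):.1f} sigma")  # ~1.2 sigma
    ```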

    Tommaso Dorigo, blogger and CMS collaboration member, tells me that it is wisest to “never completely believe in a new physics signal until the data are confirmed over a long time” – preferably by multiple experiments. More interestingly, he tells me that the 750 GeV bump data seemed to be a “similar signal” to the early Higgs-to-gamma-gamma data the LHC physicists saw in 2011, when they were still chasing the particle. In much the same way, more data were obtained and the Higgs “bump” went on to be an official discovery. With the 750 GeV bump, the opposite is true. “Any new physics requires really really strong evidence to be believed because your belief in the Standard Model is so high and you have seen so many fluctuations go away,” says Dorigo.

    And this is precisely what Columbia University’s Peter Woit – who blogs at Not Even Wrong – told me in March this year when I asked him how he thought the bump would play out. Woit pointed out that particle physics has a long history of “bumps” that may look intriguing at first glance, but will most likely be nothing. “If I had to guess, this will disappear,” he said, adding that the real surprise for him was that “there aren’t more bumps” considering how good the LHC team is at analysing its data and teasing out any possibilities.

    It may be fair to wonder just why so many theorists decided to work with the unconfirmed data from last year and look for a possible explanation of what kind of particle it may have been. Indeed, Dorigo says that “theorists should have known better”. But on the flip side, the Standard Model predicted many a particle long before it was eventually discovered, so it is easy to see why many were keen to come up with the perfect new model.

    Despite the hype and the eventual letdown, Dorigo is glad that this bump has got folks talking about high-energy physics. “It doesn’t matter even if it fizzles out; it’s important to keep asking ourselves these questions,” he says. The main reason for this, Dorigo explains, is that “we are at a very special junction in particle physics as we decide what new machine to build” and some input from current colliders is necessary. “Right now there is no clear direction,” he says. In light of the fact that there has been no new physics (or any hint of supersymmetry) from the LHC to date, the most likely future devices would be an electron–positron collider or, in the long term, a muon collider. But a much clearer indication is necessary before these choices are made, and for now much more data are needed.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 9:37 am on August 1, 2016 Permalink | Reply
    Tags: , LiFi, physicsworld.com   

    From physicsworld.com: “A light-connected world” 

    physicsworld
    physicsworld.com

    Aug 1, 2016
    Harald Haas
    h.haas@ed.ac.uk

    The humble household light bulb – once a simple source of illumination – could soon be transformed into the backbone of a revolutionary new wireless communications network based on visible light. Harald Haas explains how this “LiFi” system works and how it could shape our increasingly data-driven world.


    Over the past year the world’s computers, mobile phones and other devices generated an estimated 12 zettabytes (1021 bytes) of information. By 2020 this data deluge is predicted to increase to 44 zettabytes – nearly as many bits as there are stars in the universe. There will also be a corresponding increase in the amount of data transmitted over communications networks, from 1 to 2.3 zettabytes. The total mobile traffic including smartphones will be 30 exabytes (1018 bytes). A vast amount of this increase will come from previously uncommunicative devices such as home appliances, cars, wearable electronics and street furniture as they become part of the so-called “Internet of Things”, transmitting some 335 petabytes (1015 bytes) of status information, maintenance data and video to their owners and users for services such as augmented reality.

    In some fields, this data-intensive future is already here. A wind turbine, for example, creates 10 terabytes of data per day for operational and maintenance purposes and to ensure optimum performance. But by 2020 there could be as many as 80 billion data-generating devices all trying to communicate with us and with each other – often across large distances, and usually without a wired connection.

    1 A crowded field. No image credit

    So far, the resources required to achieve this wireless connectivity have been taken almost entirely from the radio frequency (RF) part of the electromagnetic spectrum (up to 300 GHz). However, the anticipated exponential increase in data volumes during the next decade will make it increasingly hard to accomplish this with RF alone. The RF spectrum “map” of the US is already very crowded (figure 1), with large chunks of frequency space allocated to services such as satellite communication, military and defence, aeronautical communication, terrestrial wireless communication and broadcast. In many cases, the same frequency band is used for multiple services. So how are we going to accommodate perhaps 70 billion additional communication devices?

    At this point it is helpful to remember that RF is only one small part of the electromagnetic spectrum. The visible-light portion of the spectrum stretches from about 430 to 770 THz, more than 1000 times the bandwidth of the RF portion. These frequencies are seldom used for communication, even though visible-light-based data transmission has been successfully demonstrated for decades in the fibre-optics industry. The difference, of course, is that the coherent laser light used in fibre optics is confined to cables rather than being transmitted in free space. But might it be possible to exploit the communication potential of the visible-light region of the spectrum while also benefitting from the convenience and reach of wireless RF?

    With the advent of high-brightness light-emitting diodes (LEDs), I believe the logical answer is “yes”. Using this new “LiFi” system (a term I coined in a TED talk in 2011), it will be possible to achieve high-speed, secure, bi-directional and fully networked wireless communications with data encoded in visible light. In a LiFi network, every light source – a light bulb, a street lamp, the head and/or tail light of a car, a reading light in a train or an aircraft – can become a wireless access point or wireless router like our WiFi routers at home. However, instead of using RF signals, a LiFi network modulates the intensity of visible light to send and receive data at high speeds – 10 gigabits per second (Gbps) per light source are technically feasible. Thus, our lighting networks can be transformed into high-speed wireless communications networks where illumination is only a small part of what they do.

    The ubiquitous nature of light sources means that LiFi would guarantee seamless and mobile wireless services (figure 2). A single LiFi access point will be able to communicate to multiple terminals in a bi-directional fashion, providing access for multiple users. If the terminals move (for example, if someone walks around while using their phone) the wireless connection will not be interrupted, as the next-best-placed light source will take over – a phenomenon referred to as “handover”. And because there are so many light sources, each of them acting as an independent wireless access point, the effective data rate that a mobile user will experience could be orders of magnitude higher than is achievable with current wireless networks. Specifically, the average data rate that is delivered to a user terminal by current WiFi networks is about 10 megabits per second; with a future LiFi network this can be increased to 1 Gbps.

    2 Data delights. No image credit.

    This radically new type of wireless network also offers other advantages. One is security. The next time you walk around in an urban environment, note how many WiFi networks appear in a network search on your smartphone. In contrast, because light does not propagate through opaque objects such as plastered walls, LiFi can be much more tightly controlled, significantly enhancing the security of wireless networks. LiFi networks are also more energy efficient, thanks to the relatively short distance between a light source and the user terminal (in the region of metres) and the relatively small coverage area of a single light source (10 m2 or less). Moreover, because LiFi piggybacks on existing lighting systems, the energy efficiency of this new type of wireless network can be improved by three orders of magnitude compared with WiFi networks. A final advantage is that because LiFi systems don’t use an antenna to receive signals, they can be used in environments that need to be intrinsically safe such as petrochemical plants and oil-drilling platforms, where a spark to or from an antenna can cause an explosion.

    LiFi misconceptions

    A number of misconceptions commonly arise when I talk to people about LiFi. Perhaps the biggest of these is that LiFi must be a “line-of-sight” technology. In other words, people assume that the receiver needs to be directly in line with the light source for the data connection to work. In fact, this is not the case. My colleagues and I have shown that for a particular light-modulation technology, the data rate scales with the signal-to-noise ratio (SNR), and that it is possible to transmit data at SNRs as low as 6 dB. This means LiFi can tolerate signal blockages between 46  and 66 dB (signal attenuation factors of 40,000 – 4 million). This is important because in a typical office environment where the lights are on the ceiling and the minimum level of illumination for reading purposes is 500 lux, the SNR at table height is between 40 and 60 dB, as shown by Jelena Grubor and colleagues at the Fraunhofer Institute for Telecommunications in Berlin, Germany (2008 Proceedings of the 6th International Symposium Communication Systems, Networks and Digital Signal Processing 165). In our own tests we transmitted video to a laptop over a distance of about 3 m. The LED light fixture was pointing against a white wall, in the opposite direction to the location of the receiver, therefore there was no direct line-of-sight component reaching the receiver, yet the video was successfully received via reflected light.
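
    The decibel arithmetic in that argument is easy to reproduce, and the Shannon capacity formula gives a feel for how data rate scales with SNR. The sketch below uses an arbitrary 20 MHz modulation bandwidth purely for illustration; it is not a LiFi specification:

    ```python
    import math

    def db_to_linear(db):
        """Convert a decibel power ratio to a linear factor."""
        return 10 ** (db / 10)

    # The attenuation factors quoted above follow directly from the dB figures:
    print(f"{db_to_linear(46):,.0f}")   # ~40,000
    print(f"{db_to_linear(66):,.0f}")   # ~4,000,000

    # Shannon capacity C = B * log2(1 + SNR) shows how the achievable data rate
    # grows (only logarithmically) with SNR for a fixed bandwidth B.
    bandwidth = 20e6  # Hz -- an assumed value for illustration only
    for snr_db in (6, 40, 60):
        snr = db_to_linear(snr_db)
        capacity = bandwidth * math.log2(1 + snr)
        print(f"SNR {snr_db} dB -> ~{capacity / 1e6:.0f} Mbit/s")
    ```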

    Another misconception is that LiFi does not work when it is sunny. If true, this would be a serious limitation, but in fact, the interference from sunlight falls outside the bandwidth used for data modulation. The LiFi signal is modulated at frequencies typically greater than 1 MHz, so sunlight (even flickering sunlight) can simply be filtered out, and has negligible impact on the performance as long as the receiver is not saturated (saturation can be avoided by using algorithms that automatically control the gain at the receiver). Indeed, my colleagues and I argue that sunlight is hugely beneficial for LiFi, as it is possible to create solar-cell-based LiFi receivers where the solar cell acts as a data receiver device at the same time as it converts sunlight into electricity.

    A third misconception relates to the behaviour of the light sources. Some have suggested that the light sources used in LiFi cannot be dimmed, but in fact, sophisticated modulation techniques make it possible for LiFi to operate very close to the “turn on voltage” of the LEDs. This means that the lights can be operated at very low light output levels while maintaining high data rates. Another, related concern is that the modulation of LiFi lights might be visible as “flicker”. In reality, the lowest frequency at which the lights are modulated, 1 MHz, is 10,000 times higher than the refresh rate of computer screens (100 Hz). This means the “flicker-rate” of a LiFi light bulb is far too quick for human or animal eyes to perceive.

    A final misconception is that LiFi is a one-way street, good for transmitting data but not for receiving it. Again, this is not true. The fact that LiFi can be combined with LED illumination does not mean that both functions always have to be used together. The two functions – illumination and data – can easily be separated (note my previous comment on dimming), so LiFi can also be used very effectively in situations where lighting is not required. In these circumstances, the infrared output of an LED light on the data-generating device would be very suitable for the “uplink” (i.e. for sending data). Because infrared sensors are already incorporated into many LED lights (as motion sensors, for example), no new technology would be necessary, and sending a signal with infrared requires very little power: my colleagues and I have conducted an experiment where we sent data at a speed of 1.1 Gbps over a distance of 10 m using an LED with an optical output power of just 4.5 mW. Using infrared for the uplink has the added advantage of spectrally separating uplink and downlink transmissions, avoiding interference.

    Nuts and bolts

    Now that we know what LiFi can and cannot do, let’s examine how it works. At the most basic level, you can think of LiFi as a network of point-to-point wireless communication links between LED light sources and receivers equipped with some form of light-detection device, such as a photodiode. The data rate achievable with such a network depends on both the light source and the technology used to encode digital information into the light itself.

    First, let’s consider the available light sources. Most commercial LEDs have a blue high-brightness LED with a phosphor coating that converts blue light into yellow; the blue light and yellow light then combine to produce white light. This is the most cost-efficient way to produce white light today, but the colour-converting material slows down the light’s response to intensity modulation, meaning that higher frequencies (blue light) are heavily attenuated. Consequently, the light intensity from this type of LED can only be modulated at a fairly low rate, about 2 MHz. It is also not possible to modulate the individual spectral components (red, green and blue) of the resulting white light; all you can do is vary the intensity of the composite light spectrum. Even so, one can achieve data rates of about 100 Mbps with these devices by placing a blue filter at the receiver to remove the slow yellow spectral components.

    More advanced red, green and blue (RGB) LEDs produce white light by mixing these base colours instead of using a colour-converting chemical. This eases the restrictions on modulation rates, making it possible to achieve data rates of up to 5 Gbps. In addition, one can encode different data onto each wavelength (a technique known as wavelength division multiplexing), meaning that for an RGB LED there are effectively three independent data channels available. However, because they require three separate light sources, these devices are more expensive than single blue LEDs.

    3 Faster, brighter, longer. No image credit

    A third alternative – gallium-nitride micro-LEDs – comprises small devices that achieve very high current densities, with a bandwidth of up to 1 GHz. Data rates of up to 10 Gbps have recently been demonstrated with these devices by Hyunchae Chun and colleagues (2016 Journal of Lightwave Technology, in press). This type of LED is currently a relatively poor source of illumination compared with phosphor-coated white LEDs or RGB LEDs, but it would be ideal for uplink communications – for example, in an Internet of Things where an indicator light on an oven is capable of sending data to a light bulb in the ceiling – and, thanks to rapid improvements in the technology, we may also see these devices in light bulbs in the future.

    Lastly, white light can also be generated with multiple colour laser diodes combined with a diffuser. This technology may be used in the future for lighting due to the very high efficiency of lasers, but currently its cost is excessive and technical issues such as speckle have to be overcome. However, my University of Edinburgh colleagues Dobroslav Tsonev, Stefan Videv and I have recently demonstrated a white light beam of 1000 lux covering 1 m2 at a distance of 3 m, and the achievable data rate for this scenario is 100 Gbps (2015 Opt. Express 23 1627).

    As for the modulation, my group at Edinburgh has been pioneering a digital modulation technique called orthogonal frequency division multiplexing (OFDM) for the past 10 years. The principle of OFDM is to divide the entire modulation spectrum (that is, the range of frequencies used to change the light intensity into modulated data) into many smaller frequency bins. Some of these frequencies are less attenuated than others (due to the nature of the propagation channel and LED and photodetector device characteristics), and information theory tells us that the less-attenuated frequency bins are able to carry more information bits than those that are more attenuated. Hence, dividing the spectrum into many smaller bins allows us to “load” each individual bin with the optimum number of information bits. This makes it possible to achieve higher data rates than one gets with more traditional modulation techniques, such as on–off keying.

    These high data rates make it easier to adapt to varying propagation channels, where the frequency bin attenuation changes with location – something that is important for a wireless communications system. The whole process can be compared to an audio sound equalizer system that individually adjusts low frequencies (bass), middle frequencies and high frequencies (treble) to suit a particular optimum sound profile, independent of where the listener is in the room. My former students Mostafa Afgani and Hany Elgala, together with me and my colleague Dietmar Knipp, have demonstrated what is, to the best of our knowledge, the first OFDM implementation for visible light communication (2006 IEEE Tridentcom 129).
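
    A toy version of the per-bin “bit loading” idea described above is sketched below. The per-bin SNRs and the SNR gap factor are invented for illustration; the group’s real OFDM implementation is far more sophisticated:

    ```python
    import math

    # Hypothetical per-subcarrier SNRs (linear): high where the channel and the
    # LED attenuate little, falling off towards higher modulation frequencies.
    bin_snrs = [1000, 800, 400, 150, 60, 20, 8, 3]

    GAMMA = 4.0  # assumed SNR "gap" between Shannon capacity and a practical scheme

    def bits_per_bin(snr, gamma=GAMMA):
        """Load each OFDM frequency bin with as many bits as its SNR supports."""
        return max(0, math.floor(math.log2(1 + snr / gamma)))

    loading = [bits_per_bin(s) for s in bin_snrs]
    print(loading)                 # [7, 7, 6, 5, 4, 2, 1, 0]
    print(sum(loading), "bits carried per OFDM symbol across these bins")
    ```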

    The bright future

    LiFi is a disruptive technology that is poised to affect a large number of industries. Most importantly, I expect it to catalyse the merger of wireless communications and lighting, which are at the moment entirely separate businesses. Within the lighting industry, the concept of light as a service, rather than a physical object you buy and replace, will become a dominant theme, requiring industry to develop new business models to succeed in a world where individual LED lamps can last more than 20 years. In combination with LiFi, therefore, light-as-a-service will pull the lighting industry into what has traditionally been the wireless communications market.

    In terms of how it affects daily life, I believe LiFi will contribute to the fifth generation of mobile telephony systems (5G) and beyond. As the Internet of Things grows, LiFi will unlock its potential, making it possible to create “smart” cities and homes. In the transport sector, it will enable new intelligent transport systems and enhance road safety as more and more driverless cars begin operating. It will create new cyber-secure wireless networks and enable new ways of health monitoring in ageing societies. Perhaps most importantly, it will offer new ways of closing the “digital divide”; despite considerable advances, there are still about four billion people in the world who cannot access the Internet. The bottom line, though, is that we need to stop thinking of light bulbs as little heaters that also provide light. In 25 years, my colleagues and I believe that the LED light bulb will serve thousands of purposes, not just illumination.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 4:37 pm on July 14, 2016 Permalink | Reply
    Tags: , Blue supernovae, physicsworld.com   

    From physicsworld: “Blue is the colour of the universe’s first supernovae” 

    physicsworld
    physicsworld.com

    Jul 14, 2016
    Tushna Commissariat
    tushna.commissariat@iop.org

    Rich and poor: the evolution of old and young supernovae

    Astronomers hoping to spot “first-generation” supernova explosions from the oldest and most distant stars in our universe should look out for the colour blue. So says an international team of researchers, which has discovered that the colour of the light from a supernova during a specific phase of its evolution is an indicator of its progenitor star’s elemental content. The work will help astronomers to directly detect the oldest stars, and their eventual supernova explosions, in our universe.

    Early days

    Following the Big Bang, the universe mainly consisted of light elements such as hydrogen, helium and trace amounts of lithium. It was only 200 million years later, after the formation of the first massive stars, that heavier elements such as oxygen, nitrogen, carbon and iron – all of which astronomers call “metals” – were forged in their extremely high-pressure centres. The first stars – called “population III” – are thought to have been so massive and unstable that they would have quickly burnt out and exploded in supernovae, which would have scattered the metals across the cosmos. Indeed, these first explosions will most likely have sown the seeds of the next-generation “population II” stars, which are still “metal poor” compared with “population I” stars like the Sun.

    Unfortunately, astronomers have yet to detect a true population-III star or spot a first-generation supernova. The hunt for such old stars continues: the best evidence for them so far was found last year in an extremely bright and distant galaxy in the early universe, and there are also some candidate stars in our own galaxy.

    Old timers

    The constituents and properties of the first generation of stars, and of their supernova explosions, remain a mystery owing to the lack of direct observations – especially of the supernovae. Studying first-generation supernovae would provide rare insights into the early universe, but astronomers have struggled to distinguish these early explosions from the ordinary supernovae we detect today.

    Now, though, Alexey Tolstov and Ken’ichi Nomoto from the Kavli Institute for the Physics and Mathematics of the Universe, together with colleagues, have identified characteristic differences between new and old supernovae by experimenting with supernova models based on stars with virtually no metals. Such stars make good candidates because they preserve the chemical abundances they had at formation.

    “The explosions of first-generation stars have a great impact on subsequent star and galaxy formation. But first, we need a better understanding of what these explosions look like to discover this phenomenon in the near future,” says Tolstov, adding that the “most difficult thing here is the construction of reliable models based on our current studies and observations. Finding the photometric characteristics of metal-poor supernovae, I am very happy to make one more step towards our understanding of the early universe.”

    Blue hue

    Just like ordinary supernovae, a first-generation supernova should show a characteristic rise to peak brightness followed by a steady decline – a record of brightness over time that astronomers call a “light curve”. A bright flash would signal the shock wave emerging from the star’s surface as its core collapses. This “shock breakout” is followed by a several-month-long “plateau” phase, during which the luminosity remains relatively constant, before a slow exponential decay.
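
    To make the shape of such a light curve concrete, here is a minimal toy sketch in Python; the luminosities and phase durations are illustrative placeholders I have assumed, not values from the study.

        import math

        def toy_light_curve(t_days,
                            breakout_lum=1e45,   # erg/s, illustrative placeholder
                            plateau_lum=1e42,    # erg/s, illustrative placeholder
                            breakout_end=0.1,    # days: brief flash as the shock reaches the surface
                            plateau_end=100.0,   # days: end of the roughly constant plateau
                            decay_tau=111.0):    # days: e-folding time of the late tail (~56Co mean lifetime)
            """Piecewise toy model: shock breakout -> plateau -> exponential decay."""
            if t_days < breakout_end:
                return breakout_lum
            if t_days < plateau_end:
                return plateau_lum
            return plateau_lum * math.exp(-(t_days - plateau_end) / decay_tau)

        # Example: luminosity one hour, 50 days and 200 days after the explosion
        for t in (1 / 24, 50, 200):
            print(f"t = {t:6.2f} d  L ~ {toy_light_curve(t):.2e} erg/s")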

    Nomoto’s team calculated the light curves of supernovae produced by metal-poor blue supergiant stars and by “metal-rich” red supergiants. They found that both the shock-breakout and plateau phases are shorter, bluer and fainter for the metal-poor supernovae than for the metal-rich ones. The researchers conclude that a blue light curve could therefore be used as an indicator of a low-metallicity progenitor star.
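
    The colour difference itself reflects photospheric temperature: hotter, more compact blue supergiants radiate at shorter wavelengths than cool, extended red supergiants. A quick illustration using Wien’s displacement law, with typical textbook temperatures that I am assuming rather than taking from the paper:

        # Wien's displacement law: lambda_peak = b / T, with b ~ 2.898e-3 m*K
        WIEN_B = 2.898e-3  # m*K

        def peak_wavelength_nm(temperature_k):
            return WIEN_B / temperature_k * 1e9  # metres -> nanometres

        # Assumed, typical photospheric temperatures (not from the study):
        for name, temp in [("blue supergiant", 20_000), ("red supergiant", 3_500)]:
            print(f"{name}: ~{peak_wavelength_nm(temp):.0f} nm")
        # Blue supergiants peak in the ultraviolet/blue (~145 nm);
        # red supergiants peak in the near-infrared (~830 nm).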

    Unfortunately, the expansion of our universe makes it difficult to detect light from the first stars and supernovae, because it is redshifted into the near-infrared. But the team says that upcoming large telescopes such as the James Webb Space Telescope, currently scheduled for launch in 2018, should be able to detect the distant light from the first supernovae, and that their method could be used to identify them. The findings could also help to pick out low-metallicity supernovae in the nearby universe.
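
    To get a feel for that redshifting, here is a one-line calculation assuming a first-generation supernova at z ≈ 15 (an assumed example; the article quotes no specific redshift):

        def observed_wavelength_nm(rest_nm, z):
            """Cosmological redshift: lambda_obs = (1 + z) * lambda_rest."""
            return rest_nm * (1.0 + z)

        # Assumed example: rest-frame 450 nm (blue) light emitted at z = 15
        print(observed_wavelength_nm(450, 15))  # 7200 nm = 7.2 microns, in the infrared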

    The work is published in the Astrophysical Journal.

    See the full article here.


     
  • richardmitnick 8:44 pm on July 7, 2016 Permalink | Reply
    Tags: , , physicsworld.com, Relativistic codes reveal a clumpy universe   

    From physicsworld: “Relativistic codes reveal a clumpy universe” 

    physicsworld
    physicsworld.com

    Jun 28, 2016
    Keith Cooper

    General universe: visualization of the large-scale structure of the universe. No image credit.

    Two international teams of physicists have independently developed codes that, for the first time, apply Einstein’s complete general theory of relativity to simulate how our universe evolved. The codes pave the way for cosmologists to confirm whether our interpretations of observations of large-scale structure and cosmic expansion are telling us the true story.

    The impetus to develop codes designed to apply general relativity to cosmology stems from the limitations of traditional numerical simulations of the universe. Currently, such models invoke Newtonian gravity and assume a homogeneous universe when describing cosmic expansion, for reasons of simplicity and computing power. On the largest scales the universe is homogeneous and isotropic, meaning that matter is distributed evenly in all directions; but on smaller scales the universe is clearly inhomogeneous, with matter clumped into chains of galaxies and filaments of dark matter assembled around vast voids.

    Uneven expansion?

    However, the expansion of the universe could be proceeding at different rates in different regions, depending on the density of matter in those areas. Where matter is densely clumped together, its gravity slows the expansion, whereas in the relatively empty voids the universe can expand unhindered. This could affect how light propagates through such regions, and it would show up in the relationship between the distances to objects of known intrinsic luminosity – the “standard candles” whose distances astronomers infer from how bright they appear – and their cosmological redshifts.
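
    The standard-candle idea boils down to the inverse-square law: if the intrinsic luminosity is known, the measured flux gives the distance. A minimal sketch, with numbers assumed purely for illustration:

        import math

        MPC_IN_M = 3.086e22  # metres per megaparsec

        def luminosity_distance_mpc(luminosity_w, flux_w_per_m2):
            """Inverse-square law F = L / (4*pi*d^2), solved for d."""
            d_m = math.sqrt(luminosity_w / (4.0 * math.pi * flux_w_per_m2))
            return d_m / MPC_IN_M

        # Assumed example: a ~1e36 W candle observed at a flux of 1e-15 W/m^2
        print(f"{luminosity_distance_mpc(1e36, 1e-15):.0f} Mpc")  # ~290 Mpc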

    Now, James Mertens and Glenn Starkman of Case Western Reserve University in Ohio, together with John T Giblin at Kenyon College, have written one such code, while Eloisa Bentivegna of the University of Catania in Italy and Marco Bruni of the Institute of Cosmology and Gravitation at the University of Portsmouth have independently developed a second, similar code.

    Fast voids and slow clumps

    The distances to supernovae and their cosmological redshifts are related to one another in a specific way in a homogeneous universe, but the question is, according to Starkman: “Are they related in the same way in a lumpy universe?” The answer to this will have obvious repercussions for the universe’s expansion rate and the strength of dark energy, which can be measured using standard candles such as supernovae.
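
    In a homogeneous universe at low redshift, that relation reduces to Hubble’s law, distance ≈ cz/H0; a minimal sketch, where the redshift is an assumed example value:

        C_KM_S = 299_792.458  # speed of light in km/s
        H0 = 73.0             # Hubble parameter in km/s/Mpc, as quoted below

        def hubble_distance_mpc(z):
            """Low-redshift, homogeneous-universe approximation: d ~ c*z / H0."""
            return C_KM_S * z / H0

        print(f"{hubble_distance_mpc(0.05):.0f} Mpc")  # ~205 Mpc for an assumed z = 0.05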

    The rate of expansion of our universe is described by the “Hubble parameter”. Its current value of 73 km/s/Mpc is calculated assuming a homogeneous universe. However, Bruni and Bentivegna showed that on local scales there are wide variations, with voids expanding up to 28% faster than the average value of the Hubble parameter. This is counteracted by the slower expansion within dense galaxy clusters. Bruni cautions, though, that they must “be careful, as this value depends on the specific coordinate system that we have used”. Although the US team used the same system, it is possible that the choice introduces an observer bias, and a different coordinate system could lead to a different interpretation of the variation.
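
    In round numbers, the quoted 28% enhancement is easy to put in context (a back-of-the-envelope check on the figures in this article, not a result of the simulations themselves):

        H0 = 73.0          # km/s/Mpc, the average value quoted above
        void_boost = 0.28  # voids expand up to ~28% faster, per Bentivegna and Bruni

        print(f"Fastest voids: ~{H0 * (1 + void_boost):.0f} km/s/Mpc")
        # ~93 km/s/Mpc, versus 73 km/s/Mpc for the homogeneous average;
        # dense clusters expand more slowly, pulling the mean back towards 73.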

    The codes have also been used to test a phenomenon known as “back reaction” – the idea that large-scale structure can affect the expansion around it in such a way as to masquerade as dark energy. By running their codes, both teams have shown, within the limitations of the simulations, that the amount of back reaction is too small to account for dark energy.

    Einstein’s toolkit

    Although the US team’s code has not yet been publicly released, the code developed by Bentivegna is available. It makes use of a free software collection called the Einstein Toolkit, which includes software called Cactus. This allows code to be developed by downloading modules called “thorns” that each perform specific tasks, such as solving Einstein’s field equations or calculating gravitational waves. These modules are then integrated into the Cactus infrastructure to create new applications.

    “Cactus was already able to integrate Einstein’s equations before I started working on my modifications in 2010,” says Bentivegna. “What I had to supply was a module to prepare the initial conditions for a cosmological model where space is filled with matter that is inhomogeneous on smaller scales but homogeneous on larger ones.”

    Looking ahead

    The US team says it will be releasing its code to the scientific community soon and reports that it performs even better than the Cactus code. However, Giblin believes that both codes are likely to be used equally in the future, since they can provide independent verification for each other. “This is important since we’re starting to be able to make predictions about actual measurements that will be made in the future, and having two independent groups working with different tools is an important check,” he says.

    So are the days of numerical simulations with Newtonian gravity numbered? Not necessarily, says Bruni. Even though the general-relativity codes are highly accurate, the immense computing resources they require mean that matching the detail of Newtonian-gravity simulations will take a lot of extra code development.

    “However, these general relativity simulations should provide a benchmark for Newtonian simulations,” says Bruni, “which we can then use to determine to what point the Newtonian method is accurate. They’re a huge step forward in modelling the universe as a whole.”

    The teams’ work is published in Physical Review Letters (116 251301; 116 251302) and Physical Review D.

    See the full article here.


     
  • richardmitnick 10:15 am on May 13, 2016 Permalink | Reply
    Tags: , Brane theory and testing, , physicsworld.com   

    From physicsworld: “Parallel-universe search focuses on neutrons” 

    physicsworld
    physicsworld.com

    May 10, 2016
    Edwin Cartlidge

    No braner: there is no evidence that ILL neutrons venture into an adjacent universe. No image credit.

    The first results* from a detector designed to look for evidence of particles reaching us from a parallel universe have been unveiled by physicists in France and Belgium. Although they drew a blank, the researchers say that their experiment provides a simple, low-cost way of testing theories beyond the Standard Model of particle physics, and that the detector could be made significantly more sensitive in the future.

    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    A number of quantum theories of gravity predict the existence of dimensions beyond the three of space and one of time that we are familiar with. Those theories envisage our universe as a 4D surface or “brane” in a higher-dimensional space–time “bulk”, just as a 2D sheet of paper exists as a surface within our normal three spatial dimensions. The bulk could contain multiple branes separated from one another by a certain distance within the higher dimensions.

    Physicists have found no empirical evidence for the existence of other branes. However, in 2010, Michaël Sarrazin of the University of Namur in Belgium and Fabrice Petit of the Belgian Ceramic Research Centre put forward a model showing that particles normally trapped within one brane should occasionally be able to tunnel quantum mechanically into an adjacent brane. They said that neutrons should be affected more than charged particles, because for charged particles the tunnelling would be hindered by electromagnetic interactions.

    Nearest neighbour

    The researchers have now teamed up with physicists at the University of Grenoble in France and others at the University of Namur to put their model to the test. This involved setting up a helium-3 detector a few metres from the nuclear reactor at the Institut Laue-Langevin (ILL) in Grenoble and then recording how many neutrons it intercepted. The idea is that neutrons emitted by the reactor would exist in a quantum superposition of being in our brane and being in an adjacent brane (leaving aside the effect of more distant branes). The neutrons’ wavefunctions would then collapse into one or other of the two states when colliding with nuclei within the heavy-water moderator that surrounds the reactor core.

    Most neutrons would end up in our brane, but a small fraction would enter the adjacent one. Those neutrons, so the reasoning goes, would – unlike the neutrons in our brane – escape the reactor, because they would interact extremely weakly with the water and concrete shielding around it. However, because a tiny part of those neutrons’ wavefunction would still exist within our brane even after the initial collapse, they could return to our world by colliding with helium nuclei in the detector. In other words, there would be a small but finite chance that some neutrons emitted by the reactor would disappear into another universe before reappearing in our own – so registering events in the detector.

    Sarrazin says that the biggest challenge in carrying out the experiment was minimizing the considerable background flux of neutrons caused by leakage from neighbouring instruments within the reactor hall. He and his colleagues did this by enclosing the detector in a multilayer shield – a 20 cm-thick polyethylene box on the outside to convert fast neutrons into thermal ones and then a boron box on the inside to capture thermal neutrons. This shielding reduced the background by about a factor of a million.
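
    As a rough feel for what a million-fold suppression means, if one assumes (purely for illustration) that the shield attenuates the neutron background exponentially with thickness, the reduction factor corresponds to about 14 e-foldings:

        import math

        reduction_factor = 1e6  # background suppression quoted in the article
        print(f"~{math.log(reduction_factor):.1f} attenuation lengths")  # ~13.8
        # Whatever the effective attenuation length of the polyethylene-plus-boron
        # shield, a factor of a million is roughly 14 e-foldings of suppression.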

    Stringent upper limit

    Operating their detector over five days in July last year, Sarrazin and colleagues recorded a small but still significant number of events. The fact that these events could be residual background means they do not constitute evidence for hidden neutrons, say the researchers. But they do allow for a new upper limit on the probability that a neutron enters a parallel universe when colliding with a nucleus – one in two billion, which is about 15,000 times more stringent than a limit the researchers had previously arrived at by studying stored ultra-cold neutrons. This new limit, they say, implies that the distance between branes must be more than 87 times the Planck length (about 1.6 × 10⁻³⁵ m).
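
    For scale, here is a quick check using only the numbers quoted in this article:

        PLANCK_LENGTH_M = 1.6e-35  # m, as quoted above

        min_brane_separation_m = 87 * PLANCK_LENGTH_M
        swap_probability_limit = 1 / 2e9  # upper limit per neutron-nucleus collision

        print(f"Minimum brane separation: {min_brane_separation_m:.1e} m")  # ~1.4e-33 m
        print(f"Swap-probability limit:   {swap_probability_limit:.1e}")    # 5.0e-10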

    To try to establish whether any of the residual events could indeed be due to hidden neutrons, Sarrazin and colleagues plan to carry out further, and longer, tests at ILL in about a year’s time. Sarrazin points out that because their model doesn’t predict the strength of inter-brane coupling, these tests cannot be used to completely rule out the existence of hidden branes. Conversely, he says, they could provide “clear evidence” in support of branes, which, he adds, could probably not be obtained using the LHC at CERN. “If the brane energy scale corresponds to the Planck energy scale, there is no hope to observe this kind of new physics in a collider,” he says.

    Axel Lindner of DESY, who carries out similar “shining-particles-through-a-wall” experiments (but using photons rather than neutrons), supports the latest research. He believes it is “very important” to probe such “crazy” ideas experimentally, given presently limited indications about what might supersede the Standard Model. “It would be highly desirable to clarify whether the detected neutron signals can really be attributed to background or whether there is something else behind it,” he says.

    The research is described in Physics Letters B.

    *Science paper:
    Search for passing-through-walls neutrons constrains hidden braneworlds

    See the full article here.


     