Tagged: physicsworld.com

  • richardmitnick 1:44 pm on April 14, 2017 Permalink | Reply
    Tags: physicsworld.com, Ten superconducting qubits entangled by physicists in China   

    From physicsworld: “Ten superconducting qubits entangled by physicists in China” 

    physicsworld
    physicsworld.com

    Apr 13, 2017

    Top 10: the quantum device

    A group of physicists in China has taken the lead in the race to couple together increasing numbers of superconducting qubits. The researchers have shown that they can entangle 10 qubits connected to one another via a central resonator – so beating the previous record by one qubit – and say that their result paves the way to quantum simulators that can calculate the behaviour of small molecules and other quantum-mechanical systems much more efficiently than even the most powerful conventional computers.

    Superconducting circuits create qubits by superimposing two electrical currents, and hold the promise of being able to fabricate many qubits on a single chip by exploiting silicon-based manufacturing technology. In the latest work, a multi-institutional group led by Jian-Wei Pan of the University of Science and Technology of China in Hefei built a circuit consisting of 10 qubits, each half a millimetre across and made from slivers of aluminium laid onto a sapphire substrate. The qubits, which act as non-linear LC oscillators, are arranged in a circle around a component known as a bus resonator.

    Initially, the qubits are put into a superposition state of two oscillating currents with different amplitudes by supplying each of them with a very low-energy microwave pulse. To avoid interference at this stage, each qubit is set to a different oscillation frequency. However, for the qubits to interact with one another, they need to have the same frequency. This is where the bus comes in: it allows the qubits to exchange energy with one another, but does not absorb any of that energy itself.

    “Magical interaction”

    The end result of this process, says team member Haohua Wang of Zhejiang University, is entanglement, or, as he puts it, “some kind of magical interaction”. To establish just how entangled their qubits were, the researchers used what is known as quantum tomography to find out the probability of detecting each of the thousands of possible states that this entanglement could generate. The outcome: their measured probability distribution yielded the correct state on average about two thirds of the time. The fact that this “fidelity” was above 50%, says Wang, meant that their qubits were “entangled for sure”.
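
    The fidelity threshold mentioned here can be illustrated with a few lines of linear algebra: for an N-qubit GHZ-type state, a measured fidelity with the ideal state above 0.5 is a standard witness of genuine multipartite entanglement. The sketch below is a toy illustration with a hypothetical noise level, not the team's tomography pipeline.

```python
import numpy as np

def ghz_state(n):
    """Ideal n-qubit GHZ state (|00...0> + |11...1>)/sqrt(2) as a state vector."""
    psi = np.zeros(2**n)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi

def ghz_fidelity(rho, n):
    """Fidelity <GHZ|rho|GHZ> between a measured density matrix and the ideal GHZ state."""
    psi = ghz_state(n)
    return float(np.real(psi.conj() @ rho @ psi))

# Toy check: an ideal 10-qubit GHZ state mixed with white noise at an assumed level.
n = 10
psi = ghz_state(n)
rho = 0.66 * np.outer(psi, psi) + 0.34 * np.eye(2**n) / 2**n

F = ghz_fidelity(rho, n)
print(f"fidelity = {F:.3f}; GHZ entanglement witness passed: {F > 0.5}")
```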

    According to Shibiao Zheng of Fuzhou University, who designed the entangling protocol, the key ingredient in this set-up is the bus. This, he says, allows them to generate entanglement “very quickly”.

    The previous record of nine entangled qubits in a superconducting circuit was held by John Martinis and colleagues at the University of California, Santa Barbara and Google. That group uses a different architecture for its system: rather than linking qubits via a central hub, it lays them out in a row and connects each to its nearest neighbours. Doing so allows the team to use an error-correction scheme that it developed, known as the surface code.

    High fidelity

    Error correction will be vital for the functioning of any large-scale quantum computer in order to overcome decoherence – the destruction of delicate quantum states by outside interference. Error correction involves adding qubits to provide cross-checking, and it relies on each gate operation introducing very little error; otherwise, errors would simply spiral out of control. In 2015, Martinis and co-workers showed that superconducting quantum computers could in principle be scaled up when they built two-qubit gates with a fidelity above that required by the surface code – introducing errors less than 1% of the time.

    Martinis praises Pan and colleagues for their “nicely done experiment”, in particular for their speedy entangling and “good single-qubit operation”. But it is hard to know how much of an advance they have really made, he argues, until they fully measure the fidelity of their single-qubit gates or their entangling gate. “The hard thing is to scale up with good gate fidelity,” he says.

    Wang says that the Chinese collaboration is working on an error-correction scheme for their bus-centred architecture. But he argues that in addition to exceeding the error thresholds for individual gates, it is also important to demonstrate the precise operation of many highly entangled qubits. “We have a global coupling between qubits,” he says. “And that turns out to be very useful.”

    Quantum simulator

    Wang acknowledges that construction of a universal quantum computer – one that would perform any quantum algorithm far quicker than conventional computers could – is not realistic for the foreseeable future given the many millions of qubits such a device is likely to need. For the moment, Wang and his colleagues have a more modest aim in mind: the development of a “quantum simulator” consisting of perhaps 50 qubits, which could outperform classical computers when it comes to simulating the behaviour of small molecules and other quantum systems.

    Xiaobo Zhu of the University of Science and Technology of China, who was in charge of fabricating the 10-qubit device, says that the collaboration aims to build the simulator within the next “5–10 years”, noting that this is similar to the timescale quoted by other groups, including that of Martinis. “We are trying to catch up with the best groups in the world,” he says.

    The research is reported on the arXiv server.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 12:43 pm on March 21, 2017 Permalink | Reply
    Tags: , , physicsworld.com, Shanghai Synchrotron Radiation Facility (SSRF), Soft X-ray Free Electron Laser (SXFEL) facility   

    From physicsworld.com: “China outlines free-electron laser plans” 

    physicsworld
    physicsworld.com

    Mar 21, 2017
    Michael Banks

    Zhentang Zhao, director of the Shanghai Institute of Applied Physics.

    There was a noticeable step change in the weather today in Shanghai as the Sun finally emerged and the temperature rose somewhat.

    This time I braved the rush-hour metro system to head to the Zhangjiang Technology Park in the south of the city.

    The park is home to the Shanghai Synchrotron Radiation Facility (SSRF), which opened in 2007. The facility accelerates electrons to 3.5 GeV before making them produce X-rays that are then used by researchers to study a range of materials.

    The SSRF currently has 15 beamlines focusing on topics including energy, materials, bioscience and medicine. I was given a tour of the facility by Zhentang Zhao, director of the Shanghai Institute of Applied Physics, which operates the SSRF.

    As I found out this morning, the centre has big plans. Perhaps the sight of building materials and cranes near the SSRF should have given it away.

    Over the next six years there are plans to build a further 16 beamlines – some extending 100 m or so from the synchrotron – to bring the SSRF to full capacity.

    Neighbouring the SSRF, scientists are also building the Soft X-ray Free Electron Laser (SXFEL) facility. The SSRF used to have a test FEL beamline, but since 2014 this has been transformed into a fully fledged facility costing 8bn RMB.

    Currently, the 250 m, 150 MeV linac for the SXFEL has been built and is being commissioned. Over the next couple of years two undulator beamlines will be put in place to generate X-rays with a wavelength of 9 nm and at a repetition rate of 10 Hz. The X-rays will then be sent to five experimental stations that will open to users in 2019.

    There are also plans to upgrade the SXFEL so that it generates X-rays with a 2 nm wavelength (soft X-ray regime) at a frequency of 50 Hz.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 2:33 pm on February 24, 2017 Permalink | Reply
    Tags: Electrochemistry, Nuclear energy may come from the sea, physicsworld.com,   

    From physicsworld.com: “Nuclear energy may come from the sea” 

    physicsworld
    physicsworld.com

    Feb 23, 2017
    Sarah Tesh

    Seawater supplies: carbon–polymer electrodes can extract the sea’s uranium. No image credit.

    Uranium has been extracted from seawater using electrochemical methods. A team at Stanford University in California has removed the radioactive material from seawater by using a polymer–carbon electrode and applying a pulsed electric field.

    Uranium is a key component of nuclear fuel. On land, there are about 7.6 million tonnes of identified uranium deposits around the world. This ore is mined, processed and used for nuclear energy. In contrast, there are 4.5 billion tonnes of the heavy metal in seawater as a result of the natural weathering of undersea deposits. If uranium could be extracted from seawater, it could be used to fuel nuclear power stations for hundreds of years. As well as taking advantage of an untapped energy resource, seawater extraction would also avoid the negative environmental impacts of mining processes.
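
    To put those resource figures side by side, here is a back-of-envelope sketch; the global reactor demand used below is an assumed round number for illustration, not something stated in the article.

```python
# Back-of-envelope comparison of the land and seawater uranium figures quoted
# above. The ~60,000 tonnes-per-year global reactor demand is an assumed round
# number, not a figure from the article.
land_deposits_t = 7.6e6   # tonnes of identified terrestrial deposits
seawater_t      = 4.5e9   # tonnes dissolved in the oceans
annual_demand_t = 6.0e4   # assumed world demand in tonnes of uranium per year

print(f"seawater holds ~{seawater_t / land_deposits_t:.0f} times the identified land deposits")
print(f"land deposits would last ~{land_deposits_t / annual_demand_t:.0f} years at that demand")
print(f"seawater would last ~{seawater_t / annual_demand_t:.0f} years if it were fully recoverable")
```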

    Tiny concentrations

    Scientists are therefore working on methods to remove and recover uranium from the sea. However, the oceans are vast, and the concentration of uranium is only 3 μg/l, making the development of practical extraction techniques a significant challenge. “Concentrations are tiny, on the order of a single grain of salt dissolved in a litre of water,” says team member Yi Cui. Furthermore, the high salt content of seawater limits traditional extraction methods.

    In water, uranium typically exists as a positively charged uranium oxide, or uranyl, ion (UO₂²⁺). Most methods for extraction involve an adsorbent material where the uranyl ion attaches to the surface but does not chemically react with it. The current leading materials are amidoxime polymers. The performance of adsorbents is, however, limited by their surface area. As there are only a certain number of adsorption sites, and the concentration of uranium is extremely low compared with other positive ions like sodium and calcium, the uranium–adsorbent interaction is slow and sites are quickly taken up by other ions. Furthermore, the adsorbed ions still carry a positive charge and therefore repel other uranyl ions away from the material.

    Electrochemical answer

    Cui and his team turned to electrochemistry and deposition for a solution to this problem. In a basic electrochemical cell there is an electrolyte solution and two submerged electrodes connected to a power supply. Giving the electrodes opposite charges drives an electrical current through the liquid, forcing positive ions towards the negative electrode, and electrons and negative ions towards the positive electrode. At the negative electrode, called the cathode, the positive ions are reduced, meaning they gain electrons. For most metallic ions this causes the solid metal to precipitate, often as a deposit on the electrode surface.

    In their electrochemical cell, the team used a cathode made of carbon coated with amidoxime polymer, and an inert counter electrode. The electrolyte was seawater, which for some tests contained added uranium. By applying a short pulse of current, the positive uranyl, calcium and sodium ions were drawn to the carbon–polymer electrode. The amidoxime film encouraged the uranyl ions to be adsorbed preferentially over the other ions. The adsorbed uranyl ions were reduced to solid, charge-neutral uranium oxide (UO₂) and, once the current was switched off, the unwanted ions returned to the bulk of the electrolyte. By repeating the pulsed process, the researchers were able to build up the deposited uranium oxide on the electrode surface, whatever the initial concentration of the solution.

    Removal and recovery

    In tests comparing the new method to plain adsorptive amidoxime, the electrochemical cell significantly outperformed the more traditional material. Within the time it took the amidoxime surface to become saturated, the carbon–polymer electrode had extracted nine times the amount of uranium. Furthermore, the team demonstrated that 96.6% of the metal could be recovered from the surface by applying a reverse current and an acidic electrolyte. For an adsorption material, only 76.0% can be recovered with acid elution.

    Despite the researchers’ success, there is a long way to go before large-scale application. To be commercially viable, the benefits of the extracted uranium must outweigh the cost and power demands of the process. Furthermore, the process needs to be streamlined to treat large quantities of water. “We have a lot of work to do still but these are big steps toward practicality,” Cui concludes.

    The extraction method is described in Nature Energy.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 1:52 pm on January 5, 2017 Permalink | Reply
    Tags: , , , physicsworld.com, Semiconductor discs could boost night vision   

    From physicsworld.com: “Semiconductor discs could boost night vision” 

    physicsworld
    physicsworld.com

    Frequency double: Maria del Rocio Camacho-Morales studies the new optical material.

    A new method of fabricating nanoscale optical crystals capable of converting infrared to visible light has been developed by researchers in Australia, China and Italy. The new technique allows the crystals to be placed onto glass and could lead to improvements in holographic imaging – and even the development of improved night-vision goggles.

    Second-harmonic generation, or frequency doubling, is an optical process whereby two photons with the same frequency are combined within a nonlinear material to form a single photon with twice the frequency (and half the wavelength) of the original photons. The process is commonly used by the laser industry, in which green 532 nm laser light is produced from a 1064 nm infrared source. Recent developments in nanotechnology have opened up the potential for efficient frequency doubling using nanoscale crystals – potentially enabling a variety of novel applications.
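
    A minimal sketch of the energy bookkeeping behind second-harmonic generation, using the 1064 nm laser example from the text.

```python
# Frequency-doubling bookkeeping for the 1064 nm -> 532 nm example above:
# two pump photons at frequency f combine into one photon at 2f (half the wavelength).
C = 299_792_458.0  # speed of light in m/s

def second_harmonic(pump_wavelength_nm):
    f_pump = C / (pump_wavelength_nm * 1e-9)   # pump frequency in Hz
    return 2 * f_pump, pump_wavelength_nm / 2  # second-harmonic frequency and wavelength

f_shg, wl_shg = second_harmonic(1064.0)
print(f"1064 nm pump -> {wl_shg:.0f} nm second harmonic ({f_shg / 1e12:.0f} THz)")
```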

    Materials with second-order nonlinear susceptibilities – such as gallium arsenide (GaAs) and aluminium gallium arsenide (AlGaAs) – are of particular interest for these applications because this low-order nonlinearity makes them efficient at frequency conversion.

    Substrate mismatch

    To be able to exploit second-harmonic generation in a practical device, these nanostructures must be fabricated on a substrate with a relatively low refractive index (such as glass), so that light may pass through the optical device. This is challenging, however, because growing GaAs-based crystals – and III–V semiconductors in general – as a thin film requires a crystalline substrate.

    “This is why growing a layer of AlGaAs on top of a low-refractive-index substrate, like glass, leads to unmatched lattice parameters, which causes crystalline defects,” explains Dragomir Neshev, a physicist at the Australian National University (ANU). These defects, he adds, result in unwanted changes in the electronic, mechanical, optical and thermal properties of the films.

    Previous attempts to overcome this issue have led to poor results. One approach, for example, relies on placing a buffer layer under the AlGaAs films, which is then oxidized. However, these buffer layers tend to have higher refractive indices than regular glass substrates. Alternatively, AlGaAs films can be transferred to a glass surface prior to the fabrication of the nanostructures. In this case the result is poor-quality nanocrystals.

    Best of both

    The new study was done by Neshev and colleagues at ANU, Nankai University and the University of Brescia, who combined the advantages of the two different approaches to develop a new fabrication method. First, high-quality disc-shaped nanocrystals about 500 nm in diameter are fabricated using electron-beam lithography on a GaAs wafer, with a layer of AlAs acting as a buffer between the two. The buffer is then dissolved, and the discs are coated in a transparent layer of benzocyclobutene. This can then be attached to the glass substrate, and the GaAs wafer peeled off with minimal damage to the nanostructures.

    The development could have various applications. “The nanocrystals are so small they could be fitted as an ultrathin film to normal eye glasses to enable night vision,” says Neshev, explaining that, by combining frequency doubling with other nonlinear interactions, the film might be used to convert invisible, infrared light to the visible spectrum.

    If they could be made, such modified glasses would be an improvement on conventional night-vision binoculars, which tend to be large and cumbersome. To this end, the team is working to scale up the size of the nanocrystal films to cover the area of typical spectacle lenses, and expects to have a prototype device completed within the next five years.

    Security holograms

    Alongside frequency doubling, the team was also able to tune the nanodiscs to control the direction and polarization of the emitted light, which makes the film more efficient. “Next, maybe we can even engineer the light and make complex shapes such as nonlinear holograms for security markers,” says Neshev, adding: “Engineering of the exact polarization of the emission is also important for other applications such as microscopy, which allows light to be focused to a smaller volume.”

    “Vector beams with spatially arranged polarization distributions have attracted great interest for their applications in a variety of technical areas,” says Qiwen Zhan, an engineer at the University of Dayton in Ohio, who was not involved in this study. The novel fabrication technique, he adds, “opens a new avenue for generating vector fields at different frequencies through nonlinear optical processes”.

    With their initial study complete, Neshev and colleagues are now looking to refine their nanoantennas, both to increase the efficiency of the wavelength-conversion process and to extend the effects to other nonlinear interactions such as down-conversion.

    The research is described in the journal Nano Letters.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 9:41 am on October 10, 2016 Permalink | Reply
    Tags: Laser-scanning confocal microscopes, Mesolens, physicsworld.com, 'Radical' new microscope lens combines high resolution with large field of view

    From physicsworld.com: “‘Radical’ new microscope lens combines high resolution with large field of view” 

    physicsworld
    physicsworld.com

    Oct 10, 2016
    Michael Allen

    Zooming in: image of mouse embryo

    A new microscope lens that offers the unique combination of a large field of view with high resolution has been created by researchers in the UK. The new “mesolens” for confocal microscopes can create 3D images of much larger biological samples than was previously possible – while providing detail at the sub-cellular level. According to the researchers, the ability to view whole specimens in a single image could assist in the study of many biological processes and ensure that important details are not overlooked.

    Laser-scanning confocal microscopes are an important tool in modern biological sciences. They emerged in the 1980s as an improvement on fluorescence microscopes, which view specimens that have been dyed with a substance that emits light when illuminated. Standard fluorescence microscopes are not ideal because they pick up fluorescence from behind the focal point, creating images with blurry backgrounds. To eliminate the out-of-focus background, confocal microscopes use a small spot of illuminating laser light and a tiny aperture so that only light close to the focal plane is collected. The laser is scanned across the specimen and many images are taken to create the full picture. Because of their small depth of focus, confocal microscopes can also step the focal plane through a sample a few micrometres at a time to build up a 3D image.

    In microscopy there is a trade-off between resolution and the size of the specimen that can be imaged, or field-of-view – you either have a large field-of-view and low resolution or a small field-of-view and high resolution. Current confocal microscopes struggle to image large specimens, because low magnification produces poor resolution.

    Stitched together

    “Normally, when a large object is imaged with a low-magnification lens, rays of light are collected from only a small range of angles (i.e. the lens has a low numerical aperture),” explains Gail McConnell from the Centre for Biophotonics at the University of Strathclyde, in Glasgow. “This reduces the resolution of the image and has an even more serious effect in increasing the depth of focus, so all the cells in a tissue specimen are superimposed and you cannot see them individually.” Large objects can be imaged by stitching smaller images together. But variations in illumination and focus affect the quality of the final image.

    McConnell and colleagues set out to design a lens that could image larger samples, while retaining the detail produced by confocal microscopy. They focused on creating a lens that could be used to image an entire 12.5 day-old mouse embryo – a specimen that is typically about 5 mm across. This was to “facilitate the recognition of developmental abnormalities” in such embryos, which “are routinely used to screen human genes that are suspected of involvement in disease”, says McConnell.

    Dubbed a mesolens, their optical system is more than half a metre long and contains 15 optical elements. This is unlike most confocal lenses, which are only a few centimetres in length. The mesolens has a magnification of 4× and a numerical aperture of 0.47, which is a significant improvement over the 0.1–0.2 apertures currently available. The system is also able to obtain 3D images of objects 6 mm wide and long, and 3 mm thick.

    The high numerical aperture also provides a very good depth resolution. “This makes it possible to focus through tissue and see a completely different set of sub-cellular structures in focus every 1/500th of a millimetre through a depth of 3 mm,” explains McConnell. The distortion of the images is less than 0.7% at the periphery of the field and the lens works across the full visible spectrum of light, enabling imaging with multiple fluorescent labels.
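
    As a rough cross-check of the quoted numbers, standard textbook approximations for lateral and axial resolution can be evaluated from the numerical aperture; the formulas and the assumed wavelength below are generic estimates, not figures taken from the eLife paper.

```python
# Rough resolution estimates from the mesolens parameters quoted above
# (NA = 0.47, visible light assumed ~500 nm, imaging medium assumed to be air).
# These are generic textbook approximations, not numbers from the paper.
wavelength_um = 0.5
n_medium = 1.0
NA = 0.47

lateral_um = 0.61 * wavelength_um / NA       # Rayleigh criterion
axial_um = n_medium * wavelength_um / NA**2  # depth-of-field style estimate
sections = 3000.0 / axial_um                 # optical sections through 3 mm of tissue

print(f"lateral resolution ~ {lateral_um:.2f} um")
print(f"axial resolution   ~ {axial_um:.1f} um (the article quotes ~2 um optical sectioning)")
print(f"~{sections:.0f} distinct focal planes through a 3 mm-thick specimen")
```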

    Engineering and design

    The lens was made possible through a combination of skilled engineering and optical design, and the use of components with very small aberrations. “Making the new lens is very expensive and difficult: to achieve the required very low field curvature across the full 6 mm field of view and because we need chromatic correction through the entire visible spectrum, the lens fabrication and mounting must be unusually accurate and the glass must be selected very carefully and tested before use,” explains McConnell.

    The researchers used the lens in a customized confocal microscope to image 12.5 day-old mouse embryos. They were able to image single cells, heart muscle fibres and sub-cellular details, not just near the surface of the sample but throughout the depth of the embryo. Writing in the journal eLife, the researchers claim “no existing microscope can show all of these features simultaneously in an intact mouse embryo in a single image.”

    The researchers also write that their mesolens “represents the most radical change in microscope objective design for over a century” and “has the potential to transform optical microscopy through the acquisition of sub-cellular resolution 3D data sets from large tissue specimens”.

    Rafael Yuste, a neuroscientist at Columbia University in New York, saw an earlier prototype of the mesolens microscope. He told physicsworld.com that McConnell and colleagues “have completely redesigned the objective lens to achieve an impressive performance”. He adds that it could enable “wide-field imaging of neuronal circuits and tissues while preserving single-cell resolution”, which could help produce a dynamic picture of how cells and neural circuits in the brain interact.

    Video images taken by the mesolens can be viewed in the eLife paper describing the microscope.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 11:52 am on October 7, 2016 Permalink | Reply
    Tags: , , Correlation between galaxy rotation and visible matter puzzles astronomers, , , physicsworld.com   

    From physicsworld: “Correlation between galaxy rotation and visible matter puzzles astronomers” 

    physicsworld
    physicsworld.com

    Oct 7, 2016
    Keith Cooper

    Strange correlation: why is galaxy rotation defined by visible mass? No image credit.

    A new study of the rotational velocities of stars in galaxies has revealed a strong correlation between the motion of the stars and the amount of visible mass in the galaxies. This result comes as a surprise because it is not predicted by conventional models of dark matter.

    Stars on the outskirts of rotating galaxies orbit just as fast as those nearer the centre. This appears to be in violation of Newton’s laws, which predict that these outer stars would be flung away from their galaxies. The extra gravitational glue provided by dark matter is the conventional explanation for why these galaxies stay together. Today, our most cherished models of galaxy formation and cosmology rely entirely on the presence of dark matter, even though the substance has never been detected directly.

    These new findings, from Stacy McGaugh and Federico Lelli of Case Western Reserve University, and James Schombert of the University of Oregon, threaten to shake things up. They measured the gravitational acceleration of stars in 153 galaxies with varying sizes, rotations and brightness, and found that the measured accelerations can be expressed as a relatively simple function of the visible matter within the galaxies. Such a correlation does not emerge from conventional dark-matter models.
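
    The "relatively simple function" is not spelled out in this article; the fitting form commonly quoted from the team's preprint, with its single acceleration scale g† ≈ 1.2 × 10⁻¹⁰ m s⁻², is reproduced below from memory, so treat the exact expression as an assumption rather than something stated here.

```python
import numpy as np

# Radial acceleration relation as commonly quoted from the team's preprint
# (reproduced from memory, not from this article):
#   g_obs = g_bar / (1 - exp(-sqrt(g_bar / g_dagger))),  g_dagger ~ 1.2e-10 m/s^2
G_DAGGER = 1.2e-10  # m/s^2

def g_observed(g_bar):
    g_bar = np.asarray(g_bar, dtype=float)
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / G_DAGGER)))

# High accelerations recover Newton (g_obs ~ g_bar); low accelerations tend to
# sqrt(g_bar * g_dagger), the MOND-like regime discussed later in the article.
for g_bar in (1e-8, 1e-10, 1e-12):
    print(f"g_bar = {g_bar:.0e} m/s^2  ->  g_obs = {g_observed(g_bar):.2e} m/s^2")
```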

    Mass and light

    This correlation relies strongly on the calculation of the mass-to-light ratio of the galaxies, from which the distribution of their visible mass and gravity is then determined. McGaugh attempted this measurement in 2002 using visible light data. However, these results were skewed by hot, massive stars that are millions of times more luminous than the Sun. This latest study is based on near-infrared data from the Spitzer Space Telescope.

    NASA/Spitzer Telescope

    Since near-infrared light is emitted by the more common low-mass stars and red giants, it is a more accurate tracer for the overall stellar mass of a galaxy. Meanwhile, the mass of neutral hydrogen gas in the galaxies was provided by 21 cm radio-wavelength observations.

    McGaugh told physicsworld.com that the team was “amazed by what we saw when Federico Lelli plotted the data.”

    The result is confounding because galaxies are supposedly ensconced within dense haloes of dark matter.

    Spherical halo of dark matter. cerncourier.com

    Furthermore, the team found a systematic deviation from Newtonian predictions, implying that some other force is at work beyond simple Newtonian gravity.

    “It’s an impressive demonstration of something, but I don’t know what that something is,” admits James Binney, a theoretical physicist at the University of Oxford, who was not involved in the study.

    This systematic deviation from Newtonian mechanics was predicted more than 30 years ago by an alternate theory of gravity known as modified Newtonian dynamics (MOND). According to MOND’s inventor, Mordehai Milgrom of the Weizmann Institute in Israel, dark matter does not exist, and instead its effects can be explained by modifying how Newton’s laws of gravity operate over large distances.

    “This was predicted in the very first MOND paper of 1983,” says Milgrom. “The MOND prediction is exactly what McGaugh has found, to a tee.”

    However, Milgrom is unhappy that McGaugh hasn’t outright attributed his results to MOND, and suggests that there’s nothing intrinsically new in this latest study. “The data here are much better, which is very important, but this is really the only conceptual novelty in the paper,” says Milgrom.

    No tweaking required

    McGaugh disagrees with Milgrom’s assessment, saying that previous results had incorporated assumptions that tweak the data to get the desired result for MOND, whereas this time the mass-to-light ratio is accurate enough that no tweaking is required.

    Furthermore, McGaugh says he is “trying to be open-minded”, by pointing out that exotic forms of dark matter like superfluid dark matter or even complex galactic dynamics could be consistent with the data. However, he also feels that there is implicit bias against MOND among members of the astronomical community.

    “I have experienced time and again people dismissing the data because they think MOND is wrong, so I am very consciously drawing a red line between the theory and the data.”

    Much of our current understanding of cosmology relies on cold dark matter, so could the result threaten our models of galaxy formation and large-scale structure in the universe? McGaugh thinks it could, but not everyone agrees.

    Way too complex

    Binney points out that dark-matter simulations struggle on the scale of individual galaxies because “the physics of galaxy formation is way too complex to compute properly”. The implication is that it is currently impossible to say whether dark matter can explain these results or not. “It’s unfortunately beyond the powers of humankind at the moment to know.”

    That leaves the battle between dark matter and alternate models of gravitation at an impasse. However, Binney points out that dark matter has an advantage because it can also be studied through observations of galaxy mergers and collisions between galaxy clusters. Also, there are many experiments that are currently searching for evidence of dark-matter particles.

    McGaugh’s next step is to extend the study to elliptical and dwarf spheroidal galaxies, as well as to galaxies at greater distances from the Milky Way.

    The research is to be published in Physical Review Letters and a preprint is available on arXiv.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 10:09 am on August 29, 2016 Permalink | Reply
    Tags: , physicsworld.com,   

    From physicsworld.com: “Nonlinear optical quantum-computing scheme makes a comeback” 

    physicsworld
    physicsworld.com

    Aug 29, 2016
    Hamish Johnston

    A debate that has been raging for 20 years about whether a certain interaction between photons can be used in quantum computing has taken a new twist, thanks to two physicists in Canada. The researchers have shown that it should be possible to use “cross-Kerr nonlinearities” to create a cross-phase (CPHASE) quantum gate. Such a gate has two photons as its input and outputs them in an entangled state. CPHASE gates could play an important role in optical quantum computers of the future.

    Photons are very good carriers of quantum bits (qubits) of information because the particles can travel long distances without the information being disrupted by interactions with the environment. But photons are far from ideal qubits when it comes to creating quantum-logic gates because photons so rarely interact with each other.

    One way around this problem is to design quantum computers in which the photons do not interact with each other. Known as “linear optical quantum computing” (LOQC), it usually involves preparing photons in a specific quantum state and then sending them through a series of optical components, such as beam splitters. The result of the quantum computation is derived by measuring certain properties of the photons.

    Simpler quantum computers

    One big downside of LOQC is that you need lots of optical components to perform basic quantum-logic operations – and the number quickly becomes very large to make an integrated quantum computer that can perform useful calculations. In contrast, quantum computers made from logic gates in which photons interact with each other would be much simpler – at least in principle – which is why some physicists are keen on developing them.

    This recent work on cross-Kerr nonlinearities has been carried out by Daniel Brod and Joshua Combes at the Perimeter Institute for Theoretical Physics and Institute for Quantum Computing in Waterloo, Ontario. Brod explains that a cross-Kerr nonlinearity is a “superidealized” interaction between two photons that can be used to create a CPHASE quantum-logic gate.

    This gate takes zero, one or two photons as input. When the input is zero or one photon, the gate does nothing. But when two photons are present, the gate outputs both with a phase shift between them. One important use of such a gate is to entangle photons, which is vital for quantum computing.
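
    For readers unfamiliar with the gate, here is a minimal matrix-level sketch of an idealized CPHASE acting on two qubits; it shows how applying a conditional phase to a product of superpositions produces an entangled output, but it is not a model of the photonic implementation discussed here.

```python
import numpy as np

# Idealised CPHASE gate on two qubits: |00>, |01> and |10> are unchanged and
# |11> picks up a phase exp(i*phi); phi = pi gives the controlled-Z gate.
phi = np.pi
cphase = np.diag([1.0, 1.0, 1.0, np.exp(1j * phi)])

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
product_in = np.kron(plus, plus)           # separable two-qubit input state

out = cphase @ product_in

# A pure two-qubit state is entangled iff its 2x2 coefficient matrix has rank 2,
# i.e. a non-zero determinant.
det = np.linalg.det(out.reshape(2, 2))
print("output amplitudes:", np.round(out, 3))
print("entangled:", not np.isclose(det, 0.0))
```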

    The problem is that there is no known physical system – trapped atoms, for example – that behaves exactly like a cross-Kerr nonlinearity. Physicists have therefore instead looked for systems that are close enough to create a practical CPHASE. Until recently, it looked like no appropriate system would be found. But now Brod and Combes argue that physicists have been too pessimistic about cross-Kerr nonlinearities and have shown that it could be possible to create a CPHASE gate – at least in principle.

    From A to B via an atom

    Their model is a chain of interaction sites through which the two photons propagate in opposite directions. These sites could be pairs of atoms, in which the atoms themselves interact with each other. The idea is that one photon “A” will interact with one of the atoms in a pair, while the other photon “B” interacts with the other atom. Because the two atoms interact with each other, they will mediate an interaction between photons A and B.

    Unlike some previous designs that implemented quantum error correction to protect the integrity of the quantum information, this latest design is “passive” and therefore simpler.

    Brod and Combes reckon that a high-quality CPHASE gate could be made using five such atomic pairs. Brod told physicsworld.com that creating such a gate in the lab would be difficult, but if successful it could replace hundreds of components in a LOQC system.

    As well as pairs of atoms, Brod says that the gate could be built from other interaction sites such as individual three-level atoms or optical cavities. He and Combes are now hoping that experimentalists will be inspired to test their ideas in the lab. Brod points out that measurements on a system with two interaction sites would be enough to show that their design is valid.

    The work is described in Physical Review Letters. Brod and Combes have also teamed-up with Julio Gea-Banacloche of the University of Arkansas to write a related paper that appears in Physical Review A. This second work looks at their design in more detail.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 12:10 pm on August 12, 2016 Permalink | Reply
    Tags: , , physicsworld.com, X-ray pulsars   

    From physicsworld.com: “X-ray pulsars plot the way for deep-space GPS” 

    physicsworld
    physicsworld.com

    Aug 11, 2016
    Keith Cooper

    Pulsar phone home: X-ray pulsars could be great for interstellar navigation. No image credit.

    An interstellar navigation technique that taps into the highly periodic signals from X-ray pulsars is being developed by a team of scientists from the National Physical Laboratory (NPL) and the University of Leicester. Using a small X-ray telescope on board a craft, it should be possible to determine its position in deep space to an accuracy of 2 km, according to the researchers.

    Referred to as XNAV, the system would use careful timing of pulsars – which are highly magnetized spinning neutron stars – to triangulate a spacecraft’s position relative to a standardized location, such as the centre of mass of the solar system, which lies within the Sun’s corona. As pulsars spin, they emit beams of electromagnetic radiation, including strong radio emission, from their magnetic poles. If these beams point towards Earth, they appear to “pulse” with each rapid rotation.

    Some pulsars in binary systems also accrete gas from their companion star, which can gather over the pulsar’s poles and grow hot enough to emit X-rays. It is these X-ray pulsars that can be used for stellar navigation – radio antennas are big and bulky, whereas X-ray detectors are smaller, often armed with just a single-pixel sensor, and are easier to include within a spacecraft’s payload.

    X-ray payload

    By 2013, theoretical work describing XNAV techniques had developed to the point where the European Space Agency commissioned a team, led by Setnam Shemar at NPL, to conduct a feasibility study, with an eye to one day using it on their spacecraft.

    Shemar’s team analysed two techniques. The simplest is called “delta correction”, and works by timing incoming X-ray pulses – from a single pulsar – using an on-board atomic clock and comparing them to their expected time-of-arrival at the standardized location. The offset between these two timings, taken together with an initial estimated spacecraft position from ground tracking, can be used to obtain a more precise spacecraft position. This method is designed to be used in conjunction with ground-based tracking by NASA’s Deep Space Network or the European Space Tracking Network to provide more positional accuracy. Simulations indicated an accuracy of 2 km when locked onto a pulsar for 10 hours, or 5 km with just one hour of observation.

    The benefits of this method would be most apparent in missions to the outer solar system, says Shemar, where the distance means that ground tracking is less accurate than within the inner solar system, where the XNAV system could be calibrated. However, Werner Becker of the Max Planck Institute for Extraterrestrial Physics, who was not involved in the current work, points out that such a system would not be automated and would still rely on communication with Earth.

    Shemar agrees, which is why his team also considered a second technique, known as “absolute navigation”. To determine a location in 3D space, one must have the x, y and z co-ordinates, plus a time co-ordinate. If a spacecraft has an atomic clock on board, then this could be achieved by monitoring a minimum of three pulsars – if there is no atomic clock, a fourth pulsar would be required. The team’s simulations indicate that at the distance of Neptune, a spacecraft could autonomously measure its position to within 30 km in 3D space using the four-pulsar system.
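
    A toy version of the absolute-navigation geometry: each pulsar, with a known unit direction, constrains the projection of the spacecraft's position error (plus a clock-offset term) onto its line of sight, and four such constraints pin down x, y, z and the clock offset. The directions, offsets and least-squares solve below are purely illustrative assumptions, not the team's method.

```python
import numpy as np

# Toy absolute-navigation solve: for pulsar i with unit direction n_i,
#   n_i . r + c*dt = c * residual_i,
# so four pulsars give four equations for the position r and clock offset dt.
# All directions and offsets below are made up for the example.
C = 299_792_458.0  # m/s

directions = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.577, 0.577, 0.577],
])

true_r = np.array([12e3, -7e3, 20e3])   # metres from the reference point
true_dt = 5e-6                          # seconds of clock offset

residuals = (directions @ true_r) / C + true_dt   # simulated timing residuals
A = np.hstack([directions, np.ones((4, 1))])      # unknowns: (r_x, r_y, r_z, c*dt)
solution, *_ = np.linalg.lstsq(A, C * residuals, rcond=None)

print("recovered position (km):", np.round(solution[:3] / 1e3, 3))
print("recovered clock offset (microseconds):", round(solution[3] / C * 1e6, 3))
```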

    Limits to technology

    The downside to absolute navigation is that either more X-ray detectors are required – one for each pulsar – or a mechanism to allow the X-ray detector to slew to each pulsar in turn would need to be implemented. It’s a trade-off, points out Shemar, between accuracy and the practical limits of technology and cost. Becker, for instance, advocates using up to 10 pulsars to provide the highest accuracy, but implementing this on a spacecraft may be more difficult.

    While the engineering behind such a steering mechanism is complex, “it’s not miles out of the scope of existing technology,” says Adrian Martindale of the University of Leicester, who participated in the feasibility study. In terms of the cost, complexity and size of X-ray detector required for XNAV, the team cites the example of the Mercury Imaging and X-ray Spectrometer (MIXS) instrument that will launch to the innermost planet on the upcoming BepiColombo mission in 2018.

    MIXS: Mercury Imaging and X-ray Spectrometer
    ESA/BepiColombo

    “We’ve shown that we think it is feasible to achieve,” Shemar told physicsworld.com, adding the caveat that some of the technology needs to catch up with the theoretical work. “Reducing the mass of the detector as far as possible, reducing the observation time for each pulsar and having a suitable steering mechanism are all significant challenges to be overcome.”

    In February 2017, NASA plans to launch the Neutron star Interior Composition Explorer (NICER) to the International Space Station. Although primarily for X-ray astronomy, NICER will also perform a demonstration of XNAV. As this idea of pulsar-based navigation continues to grow, “space agencies may begin to take a more proactive role and start developing strategies for how an XNAV system could be implemented on a space mission,” says Shemar.

    Becker is a little more sceptical about how soon XNAV will be ushered in for use on spacecraft. “The technology will become available when there is a need for it,” he says. “Autonomous pulsar navigation becomes attractive for deep-space missions but there are none planned for many years.”

    The research is published in the journal Experimental Astronomy.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 11:36 am on August 7, 2016 Permalink | Reply
    Tags: , , , , physicsworld.com,   

    From physicsworld.com: “And so to bed for the 750 GeV bump” 

    physicsworld
    physicsworld.com

    Aug 5, 2016
    Tushna Commissariat

    No bumps: ATLAS diphoton data – the solid black line shows the 2015 and 2016 data combined. (Courtesy: ATLAS Experiment/CERN)

    Smooth dips: CMS diphoton data – blue lines show 2015 data, red are 2016 data and black are the combined result. (Courtesy: CMS collaboration/CERN)

    After months of rumours, speculation and some 500 papers posted to the arXiv in an attempt to explain it, the ATLAS and CMS collaborations have confirmed that the small excess of diphoton events, or “bump”, at 750 GeV detected in their preliminary data is a mere statistical fluctuation that has disappeared in the light of more data. Most folks in the particle-physics community will have been unsurprised if a bit disappointed by today’s announcement at the International Conference on High Energy Physics (ICHEP) 2016, currently taking place in Chicago.

    The story began around this time last year, soon after the LHC was rebooted and began its impressive 13 TeV run, when the ATLAS collaboration saw more events than expected around the 750 GeV mass window. This bump immediately caught the interest of physicists the world over, simply because there was a sniff of “new physics” around it, meaning that the Standard Model of particle physics did not predict the existence of a particle at that energy. It was also the first interesting data to emerge from the LHC after its momentous discovery of the Higgs boson in 2012 and, had it held, it would have been one of the most exciting discoveries in modern particle physics.

    According to ATLAS, “Last year’s result triggered lively discussions in the scientific communities about possible explanations in terms of new physics and the possible production of a new, beyond-Standard-Model particle decaying to two photons. However, with the modest statistical significance from 2015, only more data could give a conclusive answer.”

    And that is precisely what both ATLAS and CMS did, by analysing the 2016 dataset that is nearly four times larger than that of last year. Sadly, both years’ data taken together reveal that the excess is not large enough to be an actual particle. “The compatibility of the 2015 and 2016 datasets, assuming a signal with mass and width given by the largest 2015 excess, is on the level of 2.7 sigma. This suggests that the observation in the 2015 data was an upward statistical fluctuation.” The CMS statement is succinctly similar: “No significant excess is observed over the Standard Model predictions.”
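
    For readers who want to relate the quoted "sigma" figures to probabilities, the conversion for a one-sided Gaussian tail is sketched below; the experiments' own numbers involve more careful statistics (for example the look-elsewhere effect), so this is only illustrative.

```python
from scipy.stats import norm

# One-sided Gaussian tail probability for a significance quoted in sigma.
def one_sided_p(sigma):
    return norm.sf(sigma)   # survival function of the standard normal

# 2.7 sigma is the 2015-vs-2016 tension quoted by ATLAS above;
# 5 sigma is the conventional particle-physics discovery threshold.
for s in (2.7, 5.0):
    print(f"{s} sigma  ->  p ~ {one_sided_p(s):.1e}")
```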

    Tommaso Dorigo, blogger and CMS collaboration member, tells me that it is wisest to “never completely believe in a new physics signal until the data are confirmed over a long time” – preferably by multiple experiments. More interestingly, he tells me that the 750 GeV bump data seemed to be a “similar signal” to the early Higgs-to-gamma-gamma data the LHC physicists saw in 2011, when they were still chasing the particle. In much the same way, more data were obtained and the Higgs “bump” went on to become an official discovery. With the 750 GeV bump, the opposite is true. “Any new physics requires really really strong evidence to be believed because your belief in the Standard Model is so high and you have seen so many fluctuations go away,” says Dorigo.

    And this is precisely what Columbia University’s Peter Woit – who blogs at Not Even Wrong – told me in March this year when I asked him how he thought the bump would play out. Woit pointed out that particle physics has a long history of “bumps” that may look intriguing at first glance, but will most likely be nothing. “If I had to guess, this will disappear,” he said, adding that the real surprise for him was that “there aren’t more bumps” considering how good the LHC team is at analysing its data and teasing out any possibilities.

    It may be fair to wonder just why so many theorists decided to work with the unconfirmed data from last year and look for a possible explanation of what kind of particle it may have been and indeed, Dorigo says that “theorists should have known better”. But on the flip-side, the Standard Model predicted many a particle long before it was eventually discovered and so it is easy to see why many were keen to come up with the perfect new model.

    Despite the hype and the eventual letdown, Dorigo is glad that this bump has got folks talking about high-energy physics. “It doesn’t matter even if it fizzles out; it’s important to keep asking ourselves these questions,” he says. The main reason for this, Dorigo explains, is that “we are at a very special junction in particle physics as we decide what new machine to build” and that some input from current colliders is necessary. “Right now there is no clear direction,” he says. In light of the fact that there has been no new physics (or any hint of supersymmetry) from the LHC to date, the most likely future devices would be an electron–positron collider or, in the long term, a muon collider. But a much clearer indication is necessary before these choices are made and, for now, much more data are needed.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 9:37 am on August 1, 2016 Permalink | Reply
    Tags: , LiFi, physicsworld.com   

    From physicsworld.com: “A light-connected world” 

    physicsworld
    physicsworld.com

    Aug 1, 2016
    Harald Haas
    h.haas@ed.ac.uk

    The humble household light bulb – once a simple source of illumination – could soon be transformed into the backbone of a revolutionary new wireless communications network based on visible light. Harald Haas explains how this “LiFi” system works and how it could shape our increasingly data-driven world.


    Over the past year the world’s computers, mobile phones and other devices generated an estimated 12 zettabytes (10²¹ bytes) of information. By 2020 this data deluge is predicted to increase to 44 zettabytes – nearly as many bits as there are stars in the universe. There will also be a corresponding increase in the amount of data transmitted over communications networks, from 1 to 2.3 zettabytes. The total mobile traffic including smartphones will be 30 exabytes (10¹⁸ bytes). A vast amount of this increase will come from previously uncommunicative devices such as home appliances, cars, wearable electronics and street furniture as they become part of the so-called “Internet of Things”, transmitting some 335 petabytes (10¹⁵ bytes) of status information, maintenance data and video to their owners and users for services such as augmented reality.

    In some fields, this data-intensive future is already here. A wind turbine, for example, creates 10 terabytes of data per day for operational and maintenance purposes and to ensure optimum performance. But by 2020 there could be as many as 80 billion data-generating devices all trying to communicate with us and with each other – often across large distances, and usually without a wired connection.

    1 A crowded field. No image credit

    So far, the resources required to achieve this wireless connectivity have been taken almost entirely from the radio frequency (RF) part of the electromagnetic spectrum (up to 300 GHz). However, the anticipated exponential increase in data volumes during the next decade will make it increasingly hard to accomplish this with RF alone. The RF spectrum “map” of the US is already very crowded (figure 1), with large chunks of frequency space allocated to services such as satellite communication, military and defence, aeronautical communication, terrestrial wireless communication and broadcast. In many cases, the same frequency band is used for multiple services. So how are we going to accommodate perhaps 70 billion additional communication devices?

    At this point it is helpful to remember that RF is only one small part of the electromagnetic spectrum. The visible-light portion of the spectrum stretches from about 430 to 770 THz, more than 1000 times the bandwidth of the RF portion. These frequencies are seldom used for communication, even though visible-light-based data transmission has been successfully demonstrated for decades in the fibre-optics industry. The difference, of course, is that the coherent laser light used in fibre optics is confined to cables rather than being transmitted in free space. But might it be possible to exploit the communication potential of the visible-light region of the spectrum while also benefitting from the convenience and reach of wireless RF?

    With the advent of high-brightness light-emitting diodes (LEDs), I believe the logical answer is “yes”. Using this new “LiFi” system (a term I coined in a TED talk in 2011), it will be possible to achieve high-speed, secure, bi-directional and fully networked wireless communications with data encoded in visible light. In a LiFi network, every light source – a light bulb, a street lamp, the head and/or tail light of a car, a reading light in a train or an aircraft – can become a wireless access point or wireless router like our WiFi routers at home. However, instead of using RF signals, a LiFi network modulates the intensity of visible light to send and receive data at high speeds – 10 gigabits per second (Gbps) per light source are technically feasible. Thus, our lighting networks can be transformed into high-speed wireless communications networks where illumination is only a small part of what they do.

    The ubiquitous nature of light sources means that LiFi would guarantee seamless and mobile wireless services (figure 2). A single LiFi access point will be able to communicate with multiple terminals in a bi-directional fashion, providing access for multiple users. If a terminal moves (for example, if someone walks around while using their phone) the wireless connection will not be interrupted, because the next-best-placed light source will take over – a process referred to as “handover”. And because there are so many light sources, each of them acting as an independent wireless access point, the effective data rate that a mobile user experiences could be orders of magnitude higher than is achievable with current wireless networks. Specifically, the average data rate delivered to a user terminal by current WiFi networks is about 10 megabits per second; with a future LiFi network this can be increased to 1 Gbps.
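
    A hypothetical sketch of the handover logic might look like the following: the terminal keeps track of the optical access points it can currently receive and associates with the strongest one, switching only when a neighbouring light beats the serving one by a margin (to avoid ping-ponging between adjacent luminaires). All names and numbers here are my own illustrative assumptions, not part of any LiFi specification.

```python
# Sketch: strongest-signal handover between optical access points (illustrative only).
from dataclasses import dataclass

@dataclass
class AccessPoint:
    name: str
    received_power_dbm: float   # optical signal strength seen by the terminal

def select_access_point(visible_aps, current=None, hysteresis_db=3.0):
    """Pick the strongest access point, switching away from the current one
    only if a neighbour beats it by a hysteresis margin."""
    best = max(visible_aps, key=lambda ap: ap.received_power_dbm)
    if current is None or best.received_power_dbm > current.received_power_dbm + hysteresis_db:
        return best
    return current

aps = [AccessPoint("ceiling-lamp-1", -32.0), AccessPoint("ceiling-lamp-2", -27.5)]
serving = select_access_point(aps)          # -> ceiling-lamp-2
```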

    2 Data delights. No image credit.

    This radically new type of wireless network also offers other advantages. One is security. The next time you walk around in an urban environment, note how many WiFi networks appear in a network search on your smartphone – radio signals leak well beyond the walls of the homes and offices they serve. In contrast, because light does not propagate through opaque objects such as plastered walls, LiFi coverage can be much more tightly controlled, significantly enhancing the security of wireless networks. LiFi networks are also more energy efficient, thanks to the relatively short distance between a light source and the user terminal (in the region of metres) and the relatively small coverage area of a single light source (10 m² or less). Moreover, because LiFi piggybacks on existing lighting systems, the energy efficiency of this new type of wireless network can be improved by three orders of magnitude compared with WiFi networks. A final advantage is that because LiFi systems don’t use an antenna to receive signals, they can be used in environments that need to be intrinsically safe, such as petrochemical plants and oil-drilling platforms, where a spark to or from an antenna could cause an explosion.

    LiFi misconceptions

    A number of misconceptions commonly arise when I talk to people about LiFi. Perhaps the biggest of these is that LiFi must be a “line-of-sight” technology – in other words, that the receiver needs to be directly in line with the light source for the data connection to work. In fact, this is not the case. My colleagues and I have shown that, for a particular light-modulation technology, the data rate scales with the signal-to-noise ratio (SNR), and that it is possible to transmit data at SNRs as low as –6 dB. This means LiFi can tolerate signal blockages of between 46 and 66 dB (signal attenuation factors of roughly 40,000 to 4 million). This is important because in a typical office environment, where the lights are on the ceiling and the minimum level of illumination for reading purposes is 500 lux, the SNR at table height is between 40 and 60 dB, as shown by Jelena Grubor and colleagues at the Fraunhofer Institute for Telecommunications in Berlin, Germany (2008 Proceedings of the 6th International Symposium on Communication Systems, Networks and Digital Signal Processing 165). In our own tests we transmitted video to a laptop over a distance of about 3 m. The LED light fixture was pointed at a white wall, facing away from the receiver, so there was no direct line-of-sight component reaching the receiver – yet the video was successfully received via reflected light.
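
    As a quick sanity check on the numbers above, the quoted attenuation factors follow directly from converting the decibel link margins to linear ratios. The short Python sketch below does that conversion; the –6 dB minimum SNR and the 40–60 dB desk-level SNRs are taken from the text, while the helper name is my own.

```python
# Sketch: convert the dB margins quoted in the article into linear attenuation factors.

def db_to_linear(db):
    """Convert a power ratio expressed in decibels to a linear factor."""
    return 10 ** (db / 10)

min_usable_snr_db = -6               # lowest SNR at which data can still be decoded
for desk_snr_db in (40, 60):         # SNR at table height under 500 lux office lighting
    margin_db = desk_snr_db - min_usable_snr_db   # blockage the link can tolerate
    print(f"{margin_db} dB margin -> attenuation factor ~{db_to_linear(margin_db):,.0f}")

# 46 dB -> ~39,811 (roughly 40,000); 66 dB -> ~3,981,072 (roughly 4 million)
```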

    Another misconception is that LiFi does not work when it is sunny. If true, this would be a serious limitation, but in fact, the interference from sunlight falls outside the bandwidth used for data modulation. The LiFi signal is modulated at frequencies typically greater than 1 MHz, so sunlight (even flickering sunlight) can simply be filtered out, and has negligible impact on the performance as long as the receiver is not saturated (saturation can be avoided by using algorithms that automatically control the gain at the receiver). Indeed, my colleagues and I argue that sunlight is hugely beneficial for LiFi, as it is possible to create solar-cell-based LiFi receivers where the solar cell acts as a data receiver device at the same time as it converts sunlight into electricity.
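
    To make the filtering argument concrete, here is a minimal, purely illustrative sketch of the idea: ambient light (a large DC level plus slow flicker) sits far below the megahertz modulation band, so a high-pass filter on the photodiode signal removes it. The sample rate, cut-off frequency and toy signal model are my own assumptions, not measured LiFi parameters.

```python
# Sketch: separating a >1 MHz LiFi signal from slow ambient light with a high-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50e6                                             # photodiode sampling rate (assumed)
t = np.arange(0, 1e-3, 1 / fs)                        # 1 ms of received signal
sunlight = 2.0 + 0.3 * np.sin(2 * np.pi * 100 * t)    # DC level plus 100 Hz flicker
lifi = 0.05 * np.sign(np.sin(2 * np.pi * 2e6 * t))    # toy 2 MHz on-off data signal
received = sunlight + lifi

b, a = butter(4, 1e6, btype="highpass", fs=fs)        # 1 MHz cut-off
recovered = filtfilt(b, a, received)                  # ambient component removed, data remains
```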

    A third misconception relates to the behaviour of the light sources. Some have suggested that the light sources used in LiFi cannot be dimmed, but in fact sophisticated modulation techniques make it possible for LiFi to operate very close to the “turn-on voltage” of the LEDs. This means the lights can be operated at very low output levels while maintaining high data rates. A related concern is that the modulation of LiFi lights might be visible as “flicker”. In reality, the lowest frequency at which the lights are modulated, 1 MHz, is 10,000 times higher than the refresh rate of computer screens (100 Hz). This means the “flicker” of a LiFi light bulb is far too fast for human or animal eyes to perceive.

    A final misconception is that LiFi is a one-way street, good for transmitting data but not for receiving it. Again, this is not true. The fact that LiFi can be combined with LED illumination does not mean that both functions always have to be used together. The two functions – illumination and data – can easily be separated (note my previous comment on dimming), so LiFi can also be used very effectively in situations where lighting is not required. In these circumstances, the infrared output of an LED light on the data-generating device would be very suitable for the “uplink” (i.e. for sending data). Because infrared sensors are already incorporated into many LED lights (as motion sensors, for example), no new technology would be necessary, and sending a signal with infrared requires very little power: my colleagues and I have conducted an experiment where we sent data at a speed of 1.1 Gbps over a distance of 10 m using an LED with an optical output power of just 4.5 mW. Using infrared for the uplink has the added advantage of spectrally separating uplink and downlink transmissions, avoiding interference.

    Nuts and bolts

    Now that we know what LiFi can and cannot do, let’s examine how it works. At the most basic level, you can think of LiFi as a network of point-to-point wireless communication links between LED light sources and receivers equipped with some form of light-detection device, such as a photodiode. The data rate achievable with such a network depends on both the light source and the technology used to encode digital information into the light itself.
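
    As a rough, back-of-the-envelope way to see how these two factors trade off, the Shannon limit C = B log₂(1 + SNR) links the achievable bit rate to the usable modulation bandwidth of the light source and the SNR at the photodiode. The sketch below is my own illustration with assumed numbers, not figures from the article.

```python
# Sketch: Shannon capacity of an intensity-modulated optical link.
import math

def capacity_bps(bandwidth_hz, snr_db):
    """Upper bound on the data rate of a channel with the given bandwidth and SNR."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example (assumed values): a 20 MHz usable bandwidth at 30 dB SNR
print(f"~{capacity_bps(20e6, 30) / 1e6:.0f} Mbps")   # ~199 Mbps
```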

    First, let’s consider the available light sources. Most commercial white LEDs pair a blue high-brightness LED with a phosphor coating that converts part of the blue light into yellow; the remaining blue light and the yellow light then combine to produce white light. This is the most cost-efficient way to produce white light today, but the colour-converting phosphor responds slowly to intensity modulation, so higher modulation frequencies are heavily attenuated. Consequently, the light intensity from this type of LED can only be modulated at a fairly low rate, about 2 MHz. It is also not possible to modulate the individual spectral components (red, green and blue) of the resulting white light; all you can do is vary the intensity of the composite light spectrum. Even so, one can achieve data rates of about 100 Mbps with these devices by placing a blue filter at the receiver to remove the slow yellow spectral components.

    More advanced red, green and blue (RGB) LEDs produce white light by mixing these base colours instead of using a colour-converting chemical. This eases the restrictions on modulation rates, making it possible to achieve data rates of up to 5 Gbps. In addition, one can encode different data onto each wavelength (a technique known as wavelength division multiplexing), meaning that for an RGB LED there are effectively three independent data channels available. However, because they require three separate light sources, these devices are more expensive than single blue LEDs.

    3 Faster, brighter, longer. No image credit

    A third alternative is the gallium-nitride micro-LED – a small device that can sustain very high current densities and offers a modulation bandwidth of up to 1 GHz. Data rates of up to 10 Gbps have recently been demonstrated with these devices by Hyunchae Chun and colleagues (2016 Journal of Lightwave Technology, in press). This type of LED is currently a relatively poor source of illumination compared with phosphor-coated white LEDs or RGB LEDs, but it would be ideal for uplink communications – for example, in an Internet of Things where an indicator light on an oven sends data to a light bulb in the ceiling – and, given the rapid pace of technology improvements, we may also see these devices inside light bulbs in the future.

    Lastly, white light can also be generated by combining multiple coloured laser diodes with a diffuser. Because lasers are highly efficient, this technology may be used for lighting in the future, but at present its cost is excessive and technical issues such as speckle have to be overcome. Nevertheless, my University of Edinburgh colleagues Dobroslav Tsonev, Stefan Videv and I have recently demonstrated a white light beam of 1000 lux covering 1 m² at a distance of 3 m, and the achievable data rate for this scenario is 100 Gbps (2015 Opt. Express 23 1627).

    As for the modulation, my group at Edinburgh has been pioneering a digital modulation technique called orthogonal frequency division multiplexing (OFDM) for the past 10 years. The principle of OFDM is to divide the entire modulation spectrum (that is, the range of frequencies used to change the light intensity into modulated data) into many smaller frequency bins. Some of these frequencies are less attenuated than others (due to the nature of the propagation channel and the characteristics of the LED and photodetector), and information theory tells us that the less-attenuated frequency bins can carry more information bits than those that are more attenuated. Dividing the spectrum into many smaller bins therefore allows us to “load” each individual bin with the optimum number of information bits, making it possible to achieve higher data rates than with more traditional modulation techniques, such as on–off keying.
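
    The bit-loading idea can be captured in a few lines of code. The sketch below is my own illustration: it assigns each frequency bin the largest number of bits its SNR can support, given an implementation margin (the “SNR gap”). The SNR profile, gap and symbol rate are assumed values, not measurements from a real LiFi link.

```python
# Sketch: adaptive bit loading across OFDM frequency bins (illustrative values only).
import numpy as np

n_bins = 16
# Toy channel: SNR (in dB) falls off towards higher-frequency bins,
# mimicking the low-pass behaviour of an LED front end.
snr_db = np.linspace(30, 5, n_bins)
snr = 10 ** (snr_db / 10)

gap_db = 6                       # implementation margin ("SNR gap")
gap = 10 ** (gap_db / 10)

# Bits per bin: largest constellation the bin can support, capped at 10 bits (1024-QAM).
bits_per_bin = np.clip(np.floor(np.log2(1 + snr / gap)), 0, 10).astype(int)

symbol_rate = 1e6                # OFDM symbols per second per bin (assumed)
total_rate = bits_per_bin.sum() * symbol_rate
print(bits_per_bin)              # [7 7 6 6 5 5 4 4 3 3 2 2 1 1 1 0]
print(f"aggregate rate ~{total_rate / 1e6:.0f} Mbps")   # ~57 Mbps with these assumptions
```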

    This adaptive bit loading also makes it easier to cope with varying propagation channels, where the attenuation of each frequency bin changes with location – something that is important for a wireless communications system. The whole process can be compared to an audio equalizer that individually adjusts low frequencies (bass), middle frequencies and high frequencies (treble) to achieve a particular optimum sound profile, independent of where the listener is in the room. My former students Mostafa Afgani and Hany Elgala, my colleague Dietmar Knipp and I demonstrated what is, to the best of our knowledge, the first OFDM implementation for visible light communication (2006 IEEE Tridentcom 129).

    The bright future

    LiFi is a disruptive technology that is poised to affect a large number of industries. Most importantly, I expect it to catalyse the merger of wireless communications and lighting, which at the moment are entirely separate businesses. Within the lighting industry, the concept of light as a service, rather than a physical object you buy and replace, will become a dominant theme, requiring the industry to develop new business models to succeed in a world where individual LED lamps can last more than 20 years. In combination with LiFi, therefore, light-as-a-service will pull the lighting industry into what has traditionally been the wireless communications market.

    In terms of how it affects daily life, I believe LiFi will contribute to the fifth generation of mobile telephony systems (5G) and beyond. As the Internet of Things grows, LiFi will unlock its potential, making it possible to create “smart” cities and homes. In the transport sector, it will enable new intelligent transport systems and enhance road safety as more and more driverless cars begin operating. It will create new cyber-secure wireless networks and enable new ways of health monitoring in ageing societies. Perhaps most importantly, it will offer new ways of closing the “digital divide”; despite considerable advances, there are still about four billion people in the world who cannot access the Internet. The bottom line, though, is that we need to stop thinking of light bulbs as little heaters that also provide light. In 25 years, my colleagues and I believe that the LED light bulb will serve thousands of purposes, not just illumination.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Physics World is a publication of the Institute of Physics, a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     