Tagged: physicsworld.com

  • richardmitnick 10:09 am on August 29, 2016
    Tags: physicsworld.com

    From physicsworld.com: “Nonlinear optical quantum-computing scheme makes a comeback” 


    Aug 29, 2016
    Hamish Johnston

    A debate that has been raging for 20 years about whether a certain interaction between photons can be used in quantum computing has taken a new twist, thanks to two physicists in Canada. The researchers have shown that it should be possible to use “cross-Kerr nonlinearities” to create a cross-phase (CPHASE) quantum gate. Such a gate has two photons as its input and outputs them in an entangled state. CPHASE gates could play an important role in optical quantum computers of the future.

    Photons are very good carriers of quantum bits (qubits) of information because the particles can travel long distances without the information being disrupted by interactions with the environment. But photons are far from ideal qubits when it comes to creating quantum-logic gates because photons so rarely interact with each other.

    One way around this problem is to design quantum computers in which the photons do not interact with each other. Known as “linear optical quantum computing” (LOQC), it usually involves preparing photons in a specific quantum state and then sending them through a series of optical components, such as beam splitters. The result of the quantum computation is derived by measuring certain properties of the photons.

    Simpler quantum computers

    One big downside of LOQC is that you need lots of optical components to perform basic quantum-logic operations – and the number needed to build an integrated quantum computer capable of useful calculations quickly becomes very large. In contrast, quantum computers made from logic gates in which photons interact with each other would be much simpler – at least in principle – which is why some physicists are keen on developing them.

    This recent work on cross-Kerr nonlinearities has been carried out by Daniel Brod and Joshua Combes at the Perimeter Institute for Theoretical Physics and Institute for Quantum Computing in Waterloo, Ontario. Brod explains that a cross-Kerr nonlinearity is a “superidealized” interaction between two photons that can be used to create a CPHASE quantum-logic gate.

    This gate takes zero, one or two photons as input. When the input is zero or one photon, the gate does nothing. But when two photons are present, the gate outputs both with a phase shift between them. One important use of such a gate is to entangle photons, which is vital for quantum computing.

    The problem is that there is no known physical system – trapped atoms, for example – that behaves exactly like a cross-Kerr nonlinearity. Physicists have therefore instead looked for systems that are close enough to create a practical CPHASE. Until recently, it looked like no appropriate system would be found. But now Brod and Combes argue that physicists have been too pessimistic about cross-Kerr nonlinearities and have shown that it could be possible to create a CPHASE gate – at least in principle.

    From A to B via an atom

    Their model is a chain of interaction sites through which the two photons propagate in opposite directions. These sites could be pairs of atoms, in which the atoms themselves interact with each other. The idea is that one photon “A” will interact with one of the atoms in a pair, while the other photon “B” interacts with the other atom. Because the two atoms interact with each other, they will mediate an interaction between photons A and B.

    Unlike some previous designs that implemented quantum error correction to protect the integrity of the quantum information, this latest design is “passive” and therefore simpler.

    Brod and Combes reckon that a high-quality CPHASE gate could be made using five such atomic pairs. Brod told physicsworld.com that creating such a gate in the lab would be difficult, but if successful it could replace hundreds of components in a LOQC system.

    As well as pairs of atoms, Brod says that the gate could be built from other interaction sites such as individual three-level atoms or optical cavities. He and Combes are now hoping that experimentalists will be inspired to test their ideas in the lab. Brod points out that measurements on a system with two interaction sites would be enough to show that their design is valid.

    The work is described in Physical Review Letters. Brod and Combes have also teamed up with Julio Gea-Banacloche of the University of Arkansas to write a related paper that appears in Physical Review A. This second work looks at their design in more detail.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Physics World is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.

     
  • richardmitnick 12:10 pm on August 12, 2016
    Tags: physicsworld.com, X-ray pulsars

    From physicsworld.com: “X-ray pulsars plot the way for deep-space GPS” 


    Aug 11, 2016
    Keith Cooper

    Pulsar phone home: X-ray pulsars could be great for interstellar navigation.

    An interstellar navigation technique that taps into the highly periodic signals from X-ray pulsars is being developed by a team of scientists from the National Physical Laboratory (NPL) and the University of Leicester. With a small X-ray telescope on board, a spacecraft should be able to determine its position in deep space to an accuracy of 2 km, according to the researchers.

    Referred to as XNAV, the system would use careful timing of pulsars – which are highly magnetized spinning neutron stars – to triangulate a spacecraft’s position relative to a standardized location, such as the centre of mass of the solar system, which lies within the Sun’s corona. As pulsars spin, they emit beams of electromagnetic radiation, including strong radio emission, from their magnetic poles. If these beams point towards Earth, they appear to “pulse” with each rapid rotation.

    Some pulsars in binary systems also accrete gas from their companion star, which can gather over the pulsar’s poles and grow hot enough to emit X-rays. It is these X-ray pulsars that can be used for stellar navigation – radio antennas are big and bulky, whereas X-ray detectors are smaller, often armed with just a single-pixel sensor, and are easier to include within a spacecraft’s payload.

    X-ray payload

    By 2013, theoretical work describing XNAV techniques had developed to the point where the European Space Agency commissioned a team, led by Setnam Shemar at NPL, to conduct a feasibility study, with an eye to one day using it on their spacecraft.

    Shemar’s team analysed two techniques. The simplest is called “delta correction”, and works by timing incoming X-ray pulses – from a single pulsar – using an on-board atomic clock and comparing them to their expected time-of-arrival at the standardized location. The offset between these two timings, taken together with an initial estimated spacecraft position from ground tracking, can be used to obtain a more precise spacecraft position. This method is designed to be used in conjunction with ground-based tracking by NASA’s Deep Space Network or the European Space Tracking Network to provide more positional accuracy. Simulations indicated an accuracy of 2 km when locked onto a pulsar for 10 hours, or 5 km with just one hour of observation.

    The benefits of this method would be most apparent in missions to the outer solar system, says Shemar, where the distance means that ground tracking is less accurate than within the inner solar system, where the XNAV system could be calibrated. However, Werner Becker of the Max Planck Institute for Extraterrestrial Physics, who was not involved in the current work, points out that such a system would not be automated and would still rely on communication with Earth.

    Shemar agrees, which is why his team also considered a second technique, known as “absolute navigation”. To determine a location in 3D space, one must have the x, y and z co-ordinates, plus a time co-ordinate. If a spacecraft has an atomic clock on board, then this could be achieved by monitoring a minimum of three pulsars – if there is no atomic clock, a fourth pulsar would be required. The team’s simulations indicate that at the distance of Neptune, a spacecraft could autonomously measure its position to within 30 km in 3D space using the four-pulsar system.
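
    To make the geometry concrete, here is a minimal numerical sketch of the “absolute navigation” idea – my own illustration with made-up numbers, not the NPL/Leicester algorithm. To first order, the pulse from pulsar i arrives at the spacecraft offset from its predicted arrival time at the reference location by Δt_i = n_i·r/c + b, where n_i is the unit vector towards the pulsar, r is the spacecraft position and b is the on-board clock bias; four or more pulsars therefore give a linear system that can be solved by least squares (in Python):

    import numpy as np

    C = 299_792_458.0  # speed of light (m/s)

    # Hypothetical unit vectors towards four pulsars (illustrative directions only)
    n = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 1.0, 1.0]])
    n = n / np.linalg.norm(n, axis=1, keepdims=True)

    # "True" spacecraft position (m) and clock bias (s), used only to simulate the measurements
    r_true = np.array([4.0e12, -1.5e12, 2.0e11])   # roughly Neptune-like distances
    b_true = 3.0e-4

    # Simulated time-of-arrival offsets relative to the reference location, with 1 microsecond of timing noise
    rng = np.random.default_rng(0)
    dt = n @ r_true / C + b_true + rng.normal(0.0, 1e-6, size=4)

    # Solve [n | 1] [r; c*b] = c*dt for position and clock bias by least squares
    A = np.hstack([n, np.ones((4, 1))])
    sol, *_ = np.linalg.lstsq(A, C * dt, rcond=None)
    r_est, b_est = sol[:3], sol[3] / C
    print("position error (km):", np.linalg.norm(r_est - r_true) / 1e3)

    In a real system the measured quantities are pulse phases folded over many rotations, and the accuracy quoted above (around 30 km at Neptune) comes from photon-counting statistics and pulsar timing models; the toy numbers here only illustrate the geometry.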

    Limits to technology

    The downside to absolute navigation is that either more X-ray detectors are required – one for each pulsar – or a mechanism to allow the X-ray detector to slew to each pulsar in turn would need to be implemented. It’s a trade-off, points out Shemar, between accuracy and the practical limits of technology and cost. Becker, for instance, advocates using up to 10 pulsars to provide the highest accuracy, but implementing this on a spacecraft may be more difficult.

    While the engineering behind such a steering mechanism is complex, “it’s not miles out of the scope of existing technology,” says Adrian Martindale of the University of Leicester, who participated in the feasibility study. In terms of the cost, complexity and size of X-ray detector required for XNAV, the team cites the example of the Mercury Imaging X-ray Spectrometer (MIXS) instrument that will launch to the innermost planet on the upcoming BepiColombo mission in 2018.

    MIXS: Mercury Imaging X-ray Spectrometer

    ESA/BepiColombo

    “We’ve shown that we think it is feasible to achieve,” Shemar told physicsworld.com, adding the caveat that some of the technology needs to catch up with the theoretical work. “Reducing the mass of the detector as far as possible, reducing the observation time for each pulsar and having a suitable steering mechanism are all significant challenges to be overcome.”

    In February 2017, NASA plans to launch the Neutron star Interior Composition Explorer (NICER) to the International Space Station. Although primarily for X-ray astronomy, NICER will also perform a demonstration of XNAV. As this idea of pulsar-based navigation continues to grow, “space agencies may begin to take a more proactive role and start developing strategies for how an XNAV system could be implemented on a space mission,” says Shemar.

    Becker is a little more sceptical about how soon XNAV will be ushered in for use on spacecraft. “The technology will become available when there is a need for it,” he says. “Autonomous pulsar navigation becomes attractive for deep-space missions but there are none planned for many years.”

    The research is published in the journal Experimental Astronomy.

    See the full article here.

     
  • richardmitnick 11:36 am on August 7, 2016
    Tags: physicsworld.com

    From physicsworld.com: “And so to bed for the 750 GeV bump” 


    Aug 5, 2016
    Tushna Commissariat

    No bumps: ATLAS diphoton data – the solid black line shows the 2015 and 2016 data combined. (Courtesy: ATLAS Experiment/CERN)

    Smooth dips: CMS diphoton data – blue lines show 2015 data, red are 2016 data and black are the combined result. (Courtesy: CMS collaboration/CERN)

    After months of rumours, speculation and some 500 papers posted to the arXiv in an attempt to explain it, the ATLAS and CMS collaborations have confirmed that the small excess of diphoton events, or “bump”, at 750 GeV detected in their preliminary data is a mere statistical fluctuation that has disappeared in the light of more data. Most folks in the particle-physics community will have been unsurprised if a bit disappointed by today’s announcement at the International Conference on High Energy Physics (ICHEP) 2016, currently taking place in Chicago.

    The story began around this time last year, soon after the LHC was rebooted and began its impressive 13 TeV run, when the ATLAS collaboration saw more events than expected around the 750 GeV mass window. This bump immediately caught the interest of physicists the world over, simply because there was a sniff of “new physics” about it: the Standard Model of particle physics does not predict the existence of a particle at that energy. It was also the first interesting data to emerge from the LHC since its momentous discovery of the Higgs boson in 2012 and, had it held up, it would have been one of the most exciting discoveries in modern particle physics.

    According to ATLAS, “Last year’s result triggered lively discussions in the scientific communities about possible explanations in terms of new physics and the possible production of a new, beyond-Standard-Model particle decaying to two photons. However, with the modest statistical significance from 2015, only more data could give a conclusive answer.”

    And that is precisely what both ATLAS and CMS did, analysing a 2016 dataset nearly four times larger than last year’s. Sadly, the two years’ data taken together reveal that the excess is not large enough to be an actual particle. “The compatibility of the 2015 and 2016 datasets, assuming a signal with mass and width given by the largest 2015 excess, is on the level of 2.7 sigma. This suggests that the observation in the 2015 data was an upward statistical fluctuation.” The CMS statement is succinctly similar: “No significant excess is observed over the Standard Model predictions.”

    Tommaso Dorigo, blogger and CMS collaboration member, tells me that it is wisest to “never completely believe in a new physics signal until the data are confirmed over a long time” – preferably by multiple experiments. More interestingly, he tells me that the 750 GeV bump data seemed to be a “similar signal” to the early Higgs-to-gamma-gamma data the LHC physicists saw in 2011, when they were still chasing the particle. In much the same way, more data were obtained and the Higgs “bump” went on to be an official discovery. With the 750 GeV bump, the opposite is true. “Any new physics requires really really strong evidence to be believed because your belief in the Standard Model is so high and you have seen so many fluctuations go away,” says Dorigo.

    And this is precisely what Columbia University’s Peter Woit – who blogs at Not Even Wrong – told me in March this year when I asked him how he thought the bump would play out. Woit pointed out that particle physics has a long history of “bumps” that may look intriguing at first glance, but will most likely be nothing. “If I had to guess, this will disappear,” he said, adding that the real surprise for him was that “there aren’t more bumps” considering how good the LHC team is at analysing its data and teasing out any possibilities.

    It may be fair to wonder just why so many theorists decided to work with the unconfirmed data from last year and look for a possible explanation of what kind of particle it may have been – and indeed, Dorigo says that “theorists should have known better”. But on the flip side, the Standard Model predicted many a particle long before it was eventually discovered, so it is easy to see why many were keen to come up with the perfect new model.

    Despite the hype and the eventual letdown, Dorigo is glad that this bump has got folks talking about high-energy physics. “It doesn’t matter even if it fizzles out; it’s important to keep asking ourselves these questions,” he says. The main reason for this, Dorigo explains, is that “we are at a very special junction in particle physics as we decide what new machine to build” and some input from current colliders is necessary. “Right now there is no clear direction,” he says. In light of the fact that there has been no new physics (or any hint of supersymmetry) from the LHC to date, the most likely future devices would be an electron–positron collider or, in the long term, a muon collider. But a much clearer indication is necessary before these choices are made and for now, much more data are needed.

    See the full article here.

     
  • richardmitnick 9:37 am on August 1, 2016
    Tags: LiFi, physicsworld.com

    From physicsworld.com: “A light-connected world” 


    Aug 1, 2016
    Harald Haas
    h.haas@ed.ac.uk

    The humble household light bulb – once a simple source of illumination – could soon be transformed into the backbone of a revolutionary new wireless communications network based on visible light. Harald Haas explains how this “LiFi” system works and how it could shape our increasingly data-driven world.


    Over the past year the world’s computers, mobile phones and other devices generated an estimated 12 zettabytes (10^21 bytes) of information. By 2020 this data deluge is predicted to increase to 44 zettabytes – nearly as many bits as there are stars in the universe. There will also be a corresponding increase in the amount of data transmitted over communications networks, from 1 to 2.3 zettabytes. The total mobile traffic including smartphones will be 30 exabytes (10^18 bytes). A vast amount of this increase will come from previously uncommunicative devices such as home appliances, cars, wearable electronics and street furniture as they become part of the so-called “Internet of Things”, transmitting some 335 petabytes (10^15 bytes) of status information, maintenance data and video to their owners and users for services such as augmented reality.

    In some fields, this data-intensive future is already here. A wind turbine, for example, creates 10 terabytes of data per day for operational and maintenance purposes and to ensure optimum performance. But by 2020 there could be as many as 80 billion data-generating devices all trying to communicate with us and with each other – often across large distances, and usually without a wired connection.

    1 A crowded field.

    So far, the resources required to achieve this wireless connectivity have been taken almost entirely from the radio frequency (RF) part of the electromagnetic spectrum (up to 300 GHz). However, the anticipated exponential increase in data volumes during the next decade will make it increasingly hard to accomplish this with RF alone. The RF spectrum “map” of the US is already very crowded (figure 1), with large chunks of frequency space allocated to services such as satellite communication, military and defence, aeronautical communication, terrestrial wireless communication and broadcast. In many cases, the same frequency band is used for multiple services. So how are we going to accommodate perhaps 70 billion additional communication devices?

    At this point it is helpful to remember that RF is only one small part of the electromagnetic spectrum. The visible-light portion of the spectrum stretches from about 430 to 770 THz, more than 1000 times the bandwidth of the RF portion. These frequencies are seldom used for communication, even though visible-light-based data transmission has been successfully demonstrated for decades in the fibre-optics industry. The difference, of course, is that the coherent laser light used in fibre optics is confined to cables rather than being transmitted in free space. But might it be possible to exploit the communication potential of the visible-light region of the spectrum while also benefitting from the convenience and reach of wireless RF?

    With the advent of high-brightness light-emitting diodes (LEDs), I believe the logical answer is “yes”. Using this new “LiFi” system (a term I coined in a TED talk in 2011), it will be possible to achieve high-speed, secure, bi-directional and fully networked wireless communications with data encoded in visible light. In a LiFi network, every light source – a light bulb, a street lamp, the head and/or tail light of a car, a reading light in a train or an aircraft – can become a wireless access point or wireless router like our WiFi routers at home. However, instead of using RF signals, a LiFi network modulates the intensity of visible light to send and receive data at high speeds – 10 gigabits per second (Gbps) per light source are technically feasible. Thus, our lighting networks can be transformed into high-speed wireless communications networks where illumination is only a small part of what they do.

    The ubiquitous nature of light sources means that LiFi would guarantee seamless and mobile wireless services (figure 2). A single LiFi access point will be able to communicate to multiple terminals in a bi-directional fashion, providing access for multiple users. If the terminals move (for example, if someone walks around while using their phone) the wireless connection will not be interrupted, as the next-best-placed light source will take over – a phenomenon referred to as “handover”. And because there are so many light sources, each of them acting as an independent wireless access point, the effective data rate that a mobile user will experience could be orders of magnitude higher than is achievable with current wireless networks. Specifically, the average data rate that is delivered to a user terminal by current WiFi networks is about 10 megabits per second; with a future LiFi network this can be increased to 1 Gbps.

    2 Data delights.

    This radically new type of wireless network also offers other advantages. One is security. The next time you walk around in an urban environment, note how many WiFi networks appear in a network search on your smartphone. In contrast, because light does not propagate through opaque objects such as plastered walls, LiFi can be much more tightly controlled, significantly enhancing the security of wireless networks. LiFi networks are also more energy efficient, thanks to the relatively short distance between a light source and the user terminal (in the region of metres) and the relatively small coverage area of a single light source (10 m^2 or less). Moreover, because LiFi piggybacks on existing lighting systems, the energy efficiency of this new type of wireless network can be improved by three orders of magnitude compared with WiFi networks. A final advantage is that because LiFi systems don’t use an antenna to receive signals, they can be used in environments that need to be intrinsically safe, such as petrochemical plants and oil-drilling platforms, where a spark to or from an antenna can cause an explosion.

    LiFi misconceptions

    A number of misconceptions commonly arise when I talk to people about LiFi. Perhaps the biggest of these is that LiFi must be a “line-of-sight” technology. In other words, people assume that the receiver needs to be directly in line with the light source for the data connection to work. In fact, this is not the case. My colleagues and I have shown that for a particular light-modulation technology, the data rate scales with the signal-to-noise ratio (SNR), and that it is possible to transmit data at SNRs as low as -6 dB. This means LiFi can tolerate signal blockages of between 46 and 66 dB (signal attenuation factors of 40,000 to 4 million). This is important because in a typical office environment where the lights are on the ceiling and the minimum level of illumination for reading purposes is 500 lux, the SNR at table height is between 40 and 60 dB, as shown by Jelena Grubor and colleagues at the Fraunhofer Institute for Telecommunications in Berlin, Germany (2008 Proceedings of the 6th International Symposium Communication Systems, Networks and Digital Signal Processing 165). In our own tests we transmitted video to a laptop over a distance of about 3 m. The LED light fixture was pointed at a white wall, in the opposite direction to the receiver; there was therefore no direct line-of-sight component reaching the receiver, yet the video was successfully received via reflected light.
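
    As a quick check of those figures (my own arithmetic, not from the article), a decibel value converts to a linear power ratio via 10^(dB/10):

    # Converting the quoted blockage tolerances into linear attenuation factors
    for db in (46, 66):
        print(f"{db} dB -> attenuation factor of about {10 ** (db / 10):,.0f}")
    # prints roughly 39,811 (~40,000) for 46 dB and 3,981,072 (~4 million) for 66 dB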

    Another misconception is that LiFi does not work when it is sunny. If true, this would be a serious limitation, but in fact, the interference from sunlight falls outside the bandwidth used for data modulation. The LiFi signal is modulated at frequencies typically greater than 1 MHz, so sunlight (even flickering sunlight) can simply be filtered out, and has negligible impact on the performance as long as the receiver is not saturated (saturation can be avoided by using algorithms that automatically control the gain at the receiver). Indeed, my colleagues and I argue that sunlight is hugely beneficial for LiFi, as it is possible to create solar-cell-based LiFi receivers where the solar cell acts as a data receiver device at the same time as it converts sunlight into electricity.

    A third misconception relates to the behaviour of the light sources. Some have suggested that the light sources used in LiFi cannot be dimmed, but in fact, sophisticated modulation techniques make it possible for LiFi to operate very close to the “turn on voltage” of the LEDs. This means that the lights can be operated at very low light output levels while maintaining high data rates. Another, related concern is that the modulation of LiFi lights might be visible as “flicker”. In reality, the lowest frequency at which the lights are modulated, 1 MHz, is 10,000 times higher than the refresh rate of computer screens (100 Hz). This means the “flicker-rate” of a LiFi light bulb is far too quick for human or animal eyes to perceive.

    A final misconception is that LiFi is a one-way street, good for transmitting data but not for receiving it. Again, this is not true. The fact that LiFi can be combined with LED illumination does not mean that both functions always have to be used together. The two functions – illumination and data – can easily be separated (note my previous comment on dimming), so LiFi can also be used very effectively in situations where lighting is not required. In these circumstances, the infrared output of an LED light on the data-generating device would be very suitable for the “uplink” (i.e. for sending data). Because infrared sensors are already incorporated into many LED lights (as motion sensors, for example), no new technology would be necessary, and sending a signal with infrared requires very little power: my colleagues and I have conducted an experiment where we sent data at a speed of 1.1 Gbps over a distance of 10 m using an LED with an optical output power of just 4.5 mW. Using infrared for the uplink has the added advantage of spectrally separating uplink and downlink transmissions, avoiding interference.

    Nuts and bolts

    Now that we know what LiFi can and cannot do, let’s examine how it works. At the most basic level, you can think of LiFi as a network of point-to-point wireless communication links between LED light sources and receivers equipped with some form of light-detection device, such as a photodiode. The data rate achievable with such a network depends on both the light source and the technology used to encode digital information into the light itself.

    First, let’s consider the available light sources. Most commercial LEDs consist of a blue high-brightness LED with a phosphor coating that converts blue light into yellow; the blue light and yellow light then combine to produce white light. This is the most cost-efficient way to produce white light today, but the colour-converting material slows down the light’s response to intensity modulation, meaning that higher frequencies (blue light) are heavily attenuated. Consequently, the light intensity from this type of LED can only be modulated at a fairly low rate, about 2 MHz. It is also not possible to modulate the individual spectral components (red, green and blue) of the resulting white light; all you can do is vary the intensity of the composite light spectrum. Even so, one can achieve data rates of about 100 Mbps with these devices by placing a blue filter at the receiver to remove the slow yellow spectral components.

    More advanced red, green and blue (RGB) LEDs produce white light by mixing these base colours instead of using a colour-converting chemical. This eases the restrictions on modulation rates, making it possible to achieve data rates of up to 5 Gbps. In addition, one can encode different data onto each wavelength (a technique known as wavelength division multiplexing), meaning that for an RGB LED there are effectively three independent data channels available. However, because they require three separate light sources, these devices are more expensive than single blue LEDs.

    3 Faster, brighter, longer.

    A third alternative, gallium-nitride micro-LEDs, consists of small devices that achieve very high current densities, with a bandwidth of up to 1 GHz. Data rates of up to 10 Gbps have recently been demonstrated with these devices by Hyunchae Chun and colleagues (2016 Journal of Lightwave Technology, in press). This type of LED is currently a relatively poor source of illumination compared with phosphor-coated white LEDs or RGB LEDs, but it would be ideal for uplink communications – for example, in an Internet of Things where an indicator light on an oven is capable of sending data to a light bulb in the ceiling – and in the future we may also see these devices in light bulbs thanks to rapid technology enhancements.

    Lastly, white light can also be generated with multiple colour laser diodes combined with a diffuser. This technology may be used in the future for lighting due to the very high efficiency of lasers, but currently its cost is excessive and technical issues such as speckle have to be overcome. However, my University of Edinburgh colleagues Dobroslav Tsonev, Stefan Videv and I have recently demonstrated a white light beam of 1000 lux covering 1 m^2 at a distance of 3 m, and the achievable data rate for this scenario is 100 Gbps (2015 Opt. Express 23 1627).

    As for the modulation, my group at Edinburgh has been pioneering a digital modulation technique called orthogonal frequency division multiplexing (OFDM) for the past 10 years. The principle of OFDM is to divide the entire modulation spectrum (that is, the range of frequencies used to change the light intensity into modulated data) into many smaller frequency bins. Some of these frequencies are less attenuated than others (due to the nature of the propagation channel and the characteristics of the LED and photodetector devices), and information theory tells us that the less-attenuated frequency bins are able to carry more information bits than those that are more attenuated. Hence, dividing the spectrum into many smaller bins allows us to “load” each individual bin with the optimum number of information bits. This makes it possible to achieve higher data rates than one gets with more traditional modulation techniques, such as on–off keying.
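
    The bit-loading idea can be sketched in a few lines of Python – an illustrative toy with assumed per-bin SNRs, not the group’s actual modem code. Each subcarrier is assigned roughly log2(1 + SNR) bits, so the less-attenuated bins carry more of the payload:

    import numpy as np

    # Assumed signal-to-noise ratio in each OFDM frequency bin (dB), best to worst
    snr_db = np.array([30, 28, 25, 22, 18, 14, 9, 5])
    snr_lin = 10 ** (snr_db / 10)

    # Shannon-style bit loading: floor(log2(1 + SNR)) bits per bin
    bits_per_bin = np.floor(np.log2(1 + snr_lin)).astype(int)
    print("bits per bin:", bits_per_bin)
    print("bits per OFDM symbol:", bits_per_bin.sum())

    Real systems add implementation margins and per-bin power allocation, but this per-bin adaptation is what the sound-equalizer analogy in the next paragraph refers to.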

    These high data rates make it easier to adapt to varying propagation channels, where the frequency bin attenuation changes with location – something that is important for a wireless communications system. The whole process can be compared to an audio sound equalizer system that individually adjusts low frequencies (bass), middle frequencies and high frequencies (treble) to suit a particular optimum sound profile, independent of where the listener is in the room. My former students Mostafa Afgani and Hany Elgala, together with me and my colleague Dietmar Knipp, have demonstrated what is, to the best of our knowledge, the first OFDM implementation for visible light communication (2006 IEEE Tridentcom 129).

    The bright future

    LiFi is a disruptive technology that is poised to affect a large number of industries. Most importantly, I expect it to catalyse the merger of wireless communications and lighting, which are at the moment entirely separate businesses. Within the lighting industry, the concept of light as a service, rather than a physical object you buy and replace, will become a dominant theme, requiring industry to develop new business models to succeed in a world where individual LED lamps can last more than 20 years. In combination with LiFi, therefore, light-as-a-service will pull the lighting industry into what has traditionally been the wireless communications market.

    In terms of how it affects daily life, I believe LiFi will contribute to the fifth generation of mobile telephony systems (5G) and beyond. As the Internet of Things grows, LiFi will unlock its potential, making it possible to create “smart” cities and homes. In the transport sector, it will enable new intelligent transport systems and enhance road safety as more and more driverless cars begin operating. It will create new cyber-secure wireless networks and enable new ways of health monitoring in ageing societies. Perhaps most importantly, it will offer new ways of closing the “digital divide”; despite considerable advances, there are still about four billion people in the world who cannot access the Internet. The bottom line, though, is that we need to stop thinking of light bulbs as little heaters that also provide light. In 25 years, my colleagues and I believe that the LED light bulb will serve thousands of purposes, not just illumination.

    See the full article here.

     
  • richardmitnick 4:37 pm on July 14, 2016
    Tags: Blue supernovae, physicsworld.com

    From physicsworld: “Blue is the colour of the universe’s first supernovae”


    Jul 14, 2016
    Tushna Commissariat
    tushna.commissariat@iop.org

    Rich and poor: the evolution of old and young supernovae

    Astronomers hoping to spot “first-generation” supernova explosions from the oldest and most distant stars in our universe should look out for the colour blue. So says an international team of researchers, which has discovered that the colour of the light from a supernova during a specific phase of its evolution is an indicator of its progenitor star’s elemental content. The work will help astronomers to directly detect the oldest stars, and their eventual supernova explosions, in our universe.

    Early days

    Following the Big Bang, the universe mainly consisted of light elements such as hydrogen, helium and trace amounts of lithium. It was only 200 million years later, after the formation of the first massive stars, that heavier elements such as oxygen, nitrogen, carbon and iron – all of which astronomers call “metals” – were forged in their extremely high-pressure centres. The first stars – called “population III” – are thought to have been so massive and unstable that they would have quickly burnt out and exploded in supernovae, which would have scattered the metals across the cosmos. Indeed, these first explosions most likely sowed the seeds of the next-generation “population II” stars, which are still “metal poor” compared with “population I” stars like the Sun.

    Unfortunately, astronomers have yet to detect a true first population-III star or spot a first-generation supernova. Astronomers have been hunting for old stars, and the best evidence for them was found last year in an extremely bright and distant galaxy in the early universe. There are also some candidate stars in our own galaxy.

    Old timers

    The constituents and properties of the first generation of stars and their supernova explosions are still a mystery, thanks to the lack of actual observations, especially when it comes to the supernovae. Studying first-generation supernovae would provide rare insights into the early universe, but astronomers have struggled to distinguish these early explosions from the ordinary supernovae we detect today.

    Now though, Alexey Tolstov and Ken’ichi Nomoto from the Kavli Institute for the Physics and Mathematics of the Universe, together with colleagues, have identified characteristic differences between new and old supernovae, after experimenting with supernovae models based on stars with virtually no metals. Such stars make good candidates because they preserve their chemical abundance at the time of their formation.

    “The explosions of first-generation stars have a great impact on subsequent star and galaxy formation. But first, we need a better understanding of how these explosions look like to discover this phenomenon in the near future,” says Tolstov, adding that the “most difficult thing here is the construction of reliable models based on our current studies and observations. Finding the photometric characteristics of metal-poor supernovae, I am very happy to make one more step to our understanding of the early universe.”

    Blue hue

    Just like ordinary supernovae, the light or luminosity of a first-generation supernova should also show the characteristic rise to a peak in brightness, followed by a steady decline – which astronomers call a “light curve”. Indeed, a bright flash would signal the shock waves that emerge from the star’s surface as its core collapses. This “shock breakout” is followed by a several-month-long “plateau” phase, where the luminosity remains relatively constant, before the slow exponential decay.

    Nomoto’s team calculated the light curves of metal-poor supernovae, produced by blue supergiant stars, and of “metal-rich” supernovae from red supergiant stars. They found that both the shock-breakout and plateau phases are shorter, bluer and fainter for metal-poor supernovae than for metal-rich ones. The researchers conclude that the blue light curve could be used as an indicator of a low-metallicity progenitor star.

    Unfortunately, the expansion of our universe makes it difficult to detect the radiation from the first stars and supernovae, which is redshifted to near-infrared wavelengths. But the team says that upcoming large telescopes such as the James Webb Space Telescope, currently scheduled for launch in 2018, should be able to detect the distant light from the first supernovae, and their method could be used to identify them. Their findings could also help to pick out low-metallicity supernovae in the nearby universe.

    The work is published in the Astrophysical Journal.

    See the full article here.

     
  • richardmitnick 8:44 pm on July 7, 2016
    Tags: physicsworld.com, Relativistic codes reveal a clumpy universe

    From physicsworld: “Relativistic codes reveal a clumpy universe” 


    Jun 28, 2016
    Keith Cooper

    General universe: visualization of the large-scale structure of the universe.

    Two international teams of physicists have independently developed new computer codes that, for the first time, apply Einstein’s complete general theory of relativity to simulate how our universe evolved. The codes pave the way for cosmologists to confirm whether our interpretations of observations of large-scale structure and cosmic expansion are telling us the true story.

    The impetus to develop codes designed to apply general relativity to cosmology stems from the limitations of traditional numerical simulations of the universe. Currently, such models invoke Newtonian gravity and assume a homogeneous universe when describing cosmic expansion, for reasons of simplicity and computing power. On the largest scales the universe is homogeneous and isotropic, meaning that matter is distributed evenly in all directions; but on smaller scales the universe is clearly inhomogeneous, with matter clumped into chains of galaxies and filaments of dark matter assembled around vast voids.

    Uneven expansion?

    However, the expansion of the universe could be proceeding at different rates in different regions, depending on the density of matter in those areas. Where matter is densely clumped together, its gravity slows the expansion, whereas in the relatively empty voids the universe can expand unhindered. This could affect how light propagates through such regions, manifesting itself in the relationship between the distance to objects of known intrinsic luminosity (the “standard candles” whose distances astronomers infer from how bright they appear to us) and their cosmological redshift.
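
    For reference, the smooth-universe relationship being tested is the standard textbook one (this equation is my addition for context, not something quoted in the article): in a spatially flat, homogeneous Friedmann–Lemaître–Robertson–Walker universe the luminosity distance to a standard candle at redshift z is

    d_L(z) = (1 + z) \, c \int_0^z \frac{\mathrm{d}z'}{H(z')}, \qquad H(z) = H_0 \sqrt{\Omega_m (1 + z)^3 + \Omega_\Lambda},

    so any systematic difference between light propagation through a clumpy universe and this homogeneous relation feeds directly into inferences about the expansion rate and dark energy.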

    Now, James Mertens and Glenn Starkman of Case Western Reserve University in Ohio, together with John T Giblin at Kenyon College, have written one such code; while Eloisa Bentivegna of the University of Catania in Italy and Marco Bruni at the Institute of Cosmology and Gravitation at the University of Portsmouth have independently developed a second similar code.

    Fast voids and slow clumps

    The distances to supernovae and their cosmological redshifts are related to one another in a specific way in a homogeneous universe, but the question is, according to Starkman: “Are they related in the same way in a lumpy universe?” The answer to this will have obvious repercussions for the universe’s expansion rate and the strength of dark energy, which can be measured using standard candles such as supernovae.

    The rate of expansion of our universe is described by the “Hubble parameter”. Its current value of 73 km/s/Mpc is calculated assuming a homogeneous universe. However, Bruni and Bentivegna showed that on local scales there are wide variations, with voids expanding up to 28% faster than the average value for the Hubble parameter. This is counteracted by the slowdown of the expansion in dense galaxy clusters. However, Bruni cautions that they must “be careful, as this value depends on the specific coordinate system that we have used”. While the US team used the same system, it is feasible that it creates observer bias and that a different system could lead to a different interpretation of the variation.

    The codes have also been used to test a phenomenon known as “back reaction” – the idea that large-scale structure can affect the universe around it in such a way as to masquerade as dark energy. By running their codes, both teams have shown, within the limitations of the simulations, that the amount of back reaction is small enough not to account for dark energy.

    Einstein’s toolkit

    Although the US team’s code has not yet been publicly released, the code developed by Bentivegna is available. It makes use of a free software collection called the Einstein Toolkit, which includes software called Cactus. This allows code to be developed by downloading modules called “thorns” that each perform specific tasks, such as solving Einstein’s field equations or calculating gravitational waves. These modules are then integrated into the Cactus infrastructure to create new applications.

    “Cactus was already able to integrate Einstein’s equations before I started working on my modifications in 2010,” says Bentivegna. “What I had to supply was a module to prepare the initial conditions for a cosmological model where space is filled with matter that is inhomogeneous on smaller scales but homogeneous on larger ones.”

    Looking ahead

    The US team says it will be releasing its code to the scientific community soon and reports that it performs even better than the Cactus code. However, Giblin believes that both codes are likely to be used equally in the future, since they can provide independent verification for each other. “This is important since we’re starting to be able to make predictions about actual measurements that will be made in the future and having two independent groups working with different tools is an important check,” he says.

    So are the days of numerical simulations with Newtonian gravity numbered? Not necessarily, says Bruni. Even though the general-relativity codes are highly accurate, the immense computing resources they require mean that achieving the detail of Newtonian-gravity simulations will require a lot of extra code development.

    “However, these general relativity simulations should provide a benchmark for Newtonian simulations,” says Bruni, “which we can then use to determine to what point the Newtonian method is accurate. They’re a huge step forward in modelling the universe as a whole.”

    The teams’ work is published in Physical Review Letters (116 251301; 116 251302) and Physical Review D.

    See the full article here.

     
  • richardmitnick 10:15 am on May 13, 2016
    Tags: Brane theory and testing, physicsworld.com

    From physicsworld: “Parallel-universe search focuses on neutrons” 


    May 10, 2016
    Edwin Cartlidge

    No braner: there is no evidence that ILL neutrons venture into an adjacent universe.

    The first results* from a detector designed to look for evidence of particles reaching us from a parallel universe have been unveiled by physicists in France and Belgium. Although they drew a blank, the researchers say that their experiment provides a simple, low-cost way of testing theories beyond the Standard Model of particle physics, and that the detector could be made significantly more sensitive in the future.

    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    A number of quantum theories of gravity predict the existence of dimensions beyond the three of space and one of time that we are familiar with. Those theories envisage our universe as a 4D surface or “brane” in a higher-dimensional space–time “bulk”, just as a 2D sheet of paper exists as a surface within our normal three spatial dimensions. The bulk could contain multiple branes separated from one another by a certain distance within the higher dimensions.

    Physicists have found no empirical evidence for the existence of other branes. However, in 2010, Michaël Sarrazin of the University of Namur in Belgium and Fabrice Petit of the Belgian Ceramic Research Centre put forward a model showing that particles normally trapped within one brane should occasionally be able to tunnel quantum mechanically into an adjacent brane. They said that neutrons should be more affected than charged particles because the tunnelling would be hindered by electromagnetic interactions.

    Nearest neighbour

    The researchers have now teamed up with physicists at the University of Grenoble in France and others at the University of Namur to put their model to the test. This involved setting up a helium-3 detector a few metres from the nuclear reactor at the Institut Laue-Langevin (ILL) in Grenoble and then recording how many neutrons it intercepted. The idea is that neutrons emitted by the reactor would exist in a quantum superposition of being in our brane and being in an adjacent brane (leaving aside the effect of more distant branes). The neutrons’ wavefunctions would then collapse into one or other of the two states when colliding with nuclei within the heavy-water moderator that surrounds the reactor core.

    Most neutrons would end up in our brane, but a small fraction would enter the adjacent one. Those neutrons, so the reasoning goes, would – unlike the neutrons in our brane – escape the reactor, because they would interact extremely weakly with the water and concrete shielding around it. However, because a tiny part of those neutrons’ wavefunction would still exist within our brane even after the initial collapse, they could return to our world by colliding with helium nuclei in the detector. In other words, there would be a small but finite chance that some neutrons emitted by the reactor would disappear into another universe before reappearing in our own – so registering events in the detector.

    Sarrazin says that the biggest challenge in carrying out the experiment was minimizing the considerable background flux of neutrons caused by leakage from neighbouring instruments within the reactor hall. He and his colleagues did this by enclosing the detector in a multilayer shield – a 20 cm-thick polyethylene box on the outside to convert fast neutrons into thermal ones and then a boron box on the inside to capture thermal neutrons. This shielding reduced the background by about a factor of a million.

    Stringent upper limit

    Operating their detector over five days in July last year, Sarrazin and colleagues recorded a small but still significant number of events. The fact that these events could be residual background means they do not constitute evidence for hidden neutrons, say the researchers. But they do allow for a new upper limit on the probability that a neutron enters a parallel universe when colliding with a nucleus – one in two billion, which is about 15,000 times more stringent than a limit the researchers had previously arrived at by studying stored ultra-cold neutrons. This new limit, they say, implies that the distance between branes must be more than 87 times the Planck length (about 1.6 × 10^-35 m).

    To try and establish whether any of the residual events could indeed be due to hidden neutrons, Sarrazin and colleagues plan to carry out further, and longer, tests at ILL in about a year’s time. Sarrazin points out that because their model doesn’t predict the strength of inter-brane coupling, these tests cannot be used to completely rule out the existence of hidden branes. Conversely, he says, they could provide “clear evidence” in support of branes, which, he adds, could probably not be obtained using the LHC at CERN. “If the brane energy scale corresponds to the Planck energy scale, there is no hope to observe this kind of new physics in a collider,” he says.

    Axel Lindner of DESY, who carries out similar “shining-particles-through-a-wall” experiments (but using photons rather than neutrons), supports the latest research. He believes it is “very important” to probe such “crazy” ideas experimentally, given presently limited indications about what might supersede the Standard Model. “It would be highly desirable to clarify whether the detected neutron signals can really be attributed to background or whether there is something else behind it,” he says.

    The research is described in Physics Letters B.

    *Science paper:
    Search for passing-through-walls neutrons constrains hidden braneworlds

    See the full article here.

     
  • richardmitnick 12:24 pm on April 30, 2016
    Tags: physicsworld.com

    From physicsworld.com: “Are wormholes or ‘gravastars’ mimicking gravitational-wave signals from black holes?” 


    Apr 29, 2016
    Tushna Commissariat

    Into a wormhole: characteristic modes are light-ring potential wells.

    Earlier this year, researchers working on the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) made the first ever detection of gravitational waves.

    Caltech/MIT Advanced LIGO detector in Livingston, Louisiana

    The waves are believed to have been created by the merger of two black holes in a binary system, in an event dubbed GW150914.

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    Black holes merging (Swinburne Astronomy Productions)

    Now, however, new theoretical work done by an international team of researchers suggests that other hypothetical exotic stellar objects – such as wormholes or “gravastars” – could produce a very similar gravitational-wave signal. While it is theoretically possible to differentiate between the different sources, it is impossible to tell whether GW150914 had a more exotic origin than merging black holes because the signal was not strong enough to be resolved.

    The researchers point out that, in the future, the detection of stronger gravitational-wave signals could reveal more information about their sources – especially once the sensitivity of aLIGO is increased to its ultimate design level. In addition, future space-based detectors, such as the European Space Agency’s Evolved Laser Interferometer Space Antenna (eLISA), could reveal tiny discrepancies between detected and predicted signals, if they exist.

    ESA/LISA Pathfinder

    Ringing frequencies

    Einstein’s general theory of relativity provides a very clear theoretical framework for the type of gravitational-wave signal that would be produced during the collision and subsequent merger of massive, compact bodies such as black holes. Gravitational waves are produced constantly before, during and just after a merger. The waves’ frequencies tell us when the black holes’ orbit begins to shrink and they begin their inward spiral, or “inspiral”. The closer the two objects get, the more radiation is emitted as the black holes plunge into one another. This produces a characteristic “chirp” waveform, in which both the frequency and the amplitude of the waves increase until they peak at the merger.
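    To get a feel for what this chirp looks like, here is a minimal Python sketch of the leading-order (“Newtonian”) inspiral waveform. The chirp mass, distance and coalescence time are purely illustrative assumptions, and the waveform templates used in real LIGO analyses are far more sophisticated.

```python
# A minimal sketch (not a LIGO template): leading-order inspiral "chirp"
# for a compact binary. Chirp mass, distance and coalescence time are
# illustrative assumptions only.
import numpy as np

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

M_chirp = 30 * M_SUN      # illustrative chirp mass (kg)
distance = 1.3e25         # illustrative distance, roughly 400 Mpc (m)
t_c = 0.2                 # time of coalescence (s)

t = np.linspace(0.0, t_c - 1e-3, 4000)
tau = t_c - t             # time remaining until coalescence

# Leading-order frequency evolution: f grows as tau^(-3/8)
f = (1.0 / np.pi) * (5.0 / (256.0 * tau))**(3.0 / 8.0) \
    * (G * M_chirp / c**3)**(-5.0 / 8.0)

# Leading-order strain amplitude grows as f^(2/3)
h0 = (4.0 / distance) * (G * M_chirp / c**2)**(5.0 / 3.0) \
     * (np.pi * f / c)**(2.0 / 3.0)

phase = 2.0 * np.pi * np.cumsum(f) * (t[1] - t[0])   # crude phase integral
h = h0 * np.cos(phase)                               # the chirp waveform

print(f"frequency sweeps from {f[0]:.0f} Hz to {f[-1]:.0f} Hz")
print(f"strain amplitude near merger ~ {h0[-1]:.1e}")
```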

    But such a cataclysmic merger initially gives birth to a highly distorted black hole, which rids itself of its deformity almost instantly by ringing like a bell and producing further gravitational radiation. The system quickly loses energy and the strength of the waves decays exponentially to form a “ringdown” signal – all of which was picked up by aLIGO for GW150914.
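    In its simplest form, that ringdown is just a damped sinusoid – the fundamental quasinormal mode. The sketch below uses illustrative numbers of roughly the right order for GW150914 (a frequency of a few hundred hertz and a damping time of a few milliseconds).

```python
# A minimal sketch of a ringdown: an exponentially damped sinusoid standing in
# for the fundamental quasinormal mode. The numbers are illustrative only.
import numpy as np

f_qnm = 250.0     # ringdown frequency (Hz), illustrative
tau   = 0.004     # damping time (s), illustrative
A     = 1.0e-21   # initial strain amplitude, illustrative

t = np.linspace(0.0, 0.03, 3000)
h = A * np.exp(-t / tau) * np.cos(2.0 * np.pi * f_qnm * t)

print(f"after {t[-1]*1e3:.0f} ms the amplitude has fallen to "
      f"{np.exp(-t[-1]/tau):.2%} of its initial value")
```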

    Ringing chirp: the waveform of event GW150914. No image credit.

    The chirp and the ringdown signal are of immense interest as these carry crucial information about the mass and spin of both the initial black holes, and of the newly formed one. “This ringdown phase is very important: just as a Stradivarius violin vibrates in a characteristic way, so too do black holes. Thus, by studying carefully how it rings, you hope to know the black hole itself,” says physicist Vitor Cardoso from the University of Lisbon, Portugal.

    These vibrational modes of a nascent black hole – known as quasinormal modes – must be detected within the signal if we are to be absolutely certain that the gravitational waves have arisen from coalescing black holes. Our current understanding suggests that these vibrational modes are inherently linked to a black hole’s key feature – its event horizon, the boundary past which nothing, not even light, can escape its gravitational pull.

    Light rings

    But new simulations and analysis – carried out by Cardoso together with team members Paolo Pani and Edgardo Franzin – have shown that a virtually indistinguishable ringdown signal can be produced by a “black-hole mimicker”, thereby potentially allowing us to detect these exotic objects. These mimickers are hypothetical objects that could be as compact as black holes but do not have an event horizon. They could be gravastars – celestial objects whose interiors are made of dark energy – or wormholes – tunnels through space–time connecting two distant regions of the universe.

    These exotic objects possess “light rings”, which are yet another artefact of general relativity – a circular photon orbit that is predicted to exist around very compact objects. “A light ring is very different from an event horizon, because signals can escape from regions within the light ring – although they would be highly red-shifted – whereas nothing can escape from the event horizon,” explains Cardoso. Any sufficiently compact object would in theory possess a light ring. Indeed, black holes have one that is associated with the border of their silhouette – the so-called “black-hole shadow” that lies just outside the event horizon. Neutron stars, on the other hand, while very compact, are not compact enough to develop a light ring.
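    For a sense of scale, the short sketch below evaluates the standard Schwarzschild formulae – event horizon at 2GM/c², light ring at 3GM/c² and shadow (critical impact parameter) at 3√3 GM/c² – for an illustrative, non-spinning 60-solar-mass black hole. A real remnant such as GW150914’s would be spinning, which shifts these radii somewhat.

```python
# A minimal sketch: characteristic radii of a non-spinning (Schwarzschild)
# black hole. The 60-solar-mass value is an illustrative assumption.
G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

M = 60 * M_SUN
r_g = G * M / c**2                 # gravitational radius GM/c^2

r_horizon = 2.0 * r_g              # event horizon
r_light_ring = 3.0 * r_g           # circular photon orbit (the light ring)
b_shadow = 3.0 * 3.0**0.5 * r_g    # critical impact parameter (shadow edge)

print(f"event horizon : {r_horizon / 1e3:.0f} km")
print(f"light ring    : {r_light_ring / 1e3:.0f} km")
print(f"shadow radius : {b_shadow / 1e3:.0f} km")
```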

    Cardoso and colleagues looked at objects that possess a light ring but no event horizon, and found that “if an object is compact enough to possess a light ring, then the ringdown would be almost identical to that of a black hole. The more compact the object, the more similar the ringdown”. Indeed, the team’s simulations showed that the ringdown signal is mostly associated with the light ring: it is the light ring itself that is vibrating, not the event horizon.

    Mimicking wormholes

    The team’s simulations calculated this explicitly for a wormhole, but Pani told physicsworld.com that the “same result is valid for gravastars and, as we claim, for all ultracompact black-hole mimickers”. But the researchers’ analysis also showed that these mimickers eventually leave an imprint in the gravitational-wave signal in the form of “echoes”, which are reflections of the waves from the surface of these objects. “These echoes may take a long time to reach our detectors, so it is important to scrutinize the data even long after the main pulse has arrived,” says Cardoso. More precisely, the mimicker signal will ultimately deviate from that predicted for a black hole, but only at late times.
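    To see qualitatively what such echoes might look like, here is a toy model – not the authors’ calculation – in which the main ringdown is followed by delayed, progressively weaker copies, as if the waves were bouncing back from the object’s surface. The delay, reflectivity and number of echoes are arbitrary assumptions.

```python
# A toy "echo" waveform: a ringdown plus delayed, damped repetitions.
# Delay, reflectivity and echo count are illustrative assumptions only.
import numpy as np

f_qnm, tau = 250.0, 0.004        # ringdown frequency (Hz), damping time (s)
delay, reflectivity = 0.03, 0.5  # echo spacing (s), damping per bounce
n_echoes = 4

t = np.linspace(0.0, 0.2, 20000)

def ringdown(time):
    # damped sinusoid switched on at time = 0
    return np.where(time >= 0.0,
                    np.exp(-np.clip(time, 0.0, None) / tau)
                    * np.cos(2.0 * np.pi * f_qnm * time),
                    0.0)

h = ringdown(t)
for n in range(1, n_echoes + 1):
    h += (reflectivity ** n) * ringdown(t - n * delay)

print(f"echo {n_echoes} arrives {n_echoes * delay * 1e3:.0f} ms after the "
      f"main pulse, suppressed by a factor {reflectivity ** n_echoes:.2f}")
```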

    LIGO scientist Amber Stuver, who is based at the LIGO Livingston Observatory in Louisiana, US, is “thrilled” by the possibility that aLIGO may have detected an exotic object, but she confirms that “there is nothing in our observations that is inconsistent with this being a normal stellar mass black-hole system possessing an event horizon. Until we have evidence otherwise, we can’t claim that this was anything but a stellar mass black hole binary merger.” She tells physicsworld.com that “advanced detectors such as aLIGO, aVirgo, and KAGRA will need to increase their sensitivity” to pick up such signals. She also points out that the GW150914 event “was detected with aLIGO at about 30% of its design sensitivity. The potential is real that, if these exotic horizonless objects are out there mimicking black holes, we may very well find them in the near future”.

    B S Sathyaprakash from Cardiff University in the UK, who is also a part of the LIGO team, agrees with the theorists’ work, saying that “Our signal is consistent with both the formation of a black hole and a horizonless object – we just can’t tell.” He further explains that, although Einstein’s equations predict how slightly deformed black holes vibrate, our understanding is incomplete when their deformation is large. “That’s why we need a signal in which the post-merger oscillations of the merged object are large, and this can happen if we detect even more massive objects than GW150914, or if GW150914 is at least two to four times closer.” Then, it would be possible to distinguish the signals, he says.

    Cardoso acknowledges that “black-hole mimickers are very exotic objects and by far black holes remain the most natural hypothesis”. But he adds: “It is important to understand whether these exotic objects can be formed (for example in a stellar collapse) and if they are stable. Most importantly, we only focused on the ringdown part, but it is equally relevant to explain the entire gravitational-wave signal, including the inspiral and the merger phases. This would require performing numerical simulations with supercomputers to understand whether this picture is viable or not. We are currently working on this.”

    The research* is published in Physical Review Letters.

    *Science paper:
    Is the Gravitational-Wave Ringdown a Probe of the Event Horizon?

    See the full article here .

  • richardmitnick 12:14 pm on March 11, 2016 Permalink | Reply
    Tags: , , , physicsworld.com   

    From physicsworld.com: “Einstein meets the dark sector in a new numerical code that simulates the universe” 

    physicsworld
    physicsworld.com

    Mar 10, 2016
    Keith Cooper

    A powerful numerical code that uses [Albert] Einstein’s general theory of relativity to describe how large-scale structures form in the universe has been created by physicists in Switzerland and South Africa. The program promises to help researchers to better incorporate dark matter and dark energy into huge computer simulations of how the universe has evolved over time.

    At the largest length scales, the dynamics of the universe are dominated by gravity. The force binds galaxies together into giant clusters and, in turn, holds these clusters tight within the grasp of immense haloes of dark matter. The cold dark matter (CDM) model assumes that dark matter comprises slow-moving particles. This means that non-relativistic Newtonian physics should be sufficient to describe the effects of gravity on the assembly of large-scale structure in the universe.
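    As a concrete illustration of what the Newtonian approach involves, the sketch below computes direct-summation gravitational accelerations for a handful of particles. Production cosmology codes use far more efficient tree or particle-mesh schemes, and the masses, positions and softening length here are made-up values.

```python
# A minimal sketch of Newtonian gravity as used in N-body simulations:
# direct-summation accelerations with a softening length. All numbers are
# illustrative assumptions, not a real cosmological setup.
import numpy as np

G = 6.674e-11
rng = np.random.default_rng(0)

n = 100
pos = rng.uniform(-1e22, 1e22, size=(n, 3))   # particle positions (m)
mass = np.full(n, 1e40)                       # particle masses (kg)
soft = 1e20                                   # softening length (m)

def accelerations(pos, mass, soft):
    dr = pos[None, :, :] - pos[:, None, :]          # r_j - r_i for all pairs
    dist2 = np.sum(dr**2, axis=-1) + soft**2        # softened squared distance
    inv_d3 = dist2**-1.5
    np.fill_diagonal(inv_d3, 0.0)                   # no self-force
    # a_i = G * sum_j m_j (r_j - r_i) / (|r_j - r_i|^2 + soft^2)^(3/2)
    return G * np.einsum('ij,ijk,j->ik', inv_d3, dr, mass)

acc = accelerations(pos, mass, soft)
print("median acceleration:", np.median(np.linalg.norm(acc, axis=1)), "m/s^2")
```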

    Universe map 2MASS Extended Source Catalog XSC

    However, if dark matter moves at speeds approaching that of light, the Newtonian description breaks down and Einstein’s general theory of relativity must be incorporated into the simulation – something that has proven difficult to do.

    Upcoming galaxy surveys, such as those to be performed by the Large Synoptic Survey Telescope in Chile or the European Space Agency’s Euclid mission, will observe the universe on a wider scale and to a higher level of precision than ever before.

    The LSST camera, built at SLAC, and the building in Chile that will house the telescope

    ESA/Euclid spacecraft

    Computer simulations based on Newtonian assumptions may not be able to reproduce this level of precision, making observational results difficult to interpret. More importantly, we don’t know enough about what dark matter and dark energy are to be able to say conclusively which treatment of gravity is most appropriate for them.

    Evolving geometry

    Now, Julian Adamek of the Observatoire de Paris and colleagues have developed a numerical code called “gevolution”, which provides a framework for introducing the effects of general relativity into complex simulations of the cosmos. “We wanted to provide a tool that describes the evolution of the geometry of space–time,” Adamek told physicsworld.com.

    General relativity describes gravity as the warp created in space–time by the mass of an object. This gives the cosmos a complex geometry, rather than the linear space described by Newtonian gravity. The gevolution code solves Einstein’s field equations for a perturbed Friedmann–Lemaître–Robertson–Walker metric, describing [space–time’s] complex geometry and how particles move through that geometry. The downside is that it consumes a lot of resources: 115,000 central-processing-unit (CPU) hours, compared with 25,000 CPU hours – roughly 4.6 times as much – for a similarly sized Newtonian simulation.
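    For orientation only – this is emphatically not the gevolution code – the sketch below integrates the homogeneous Friedmann equation for an assumed flat ΛCDM (cold dark matter plus cosmological constant) cosmology. It gives the smooth background that a relativistic simulation such as gevolution perturbs around; evolving the metric perturbations themselves is the hard, expensive part.

```python
# A minimal sketch of the homogeneous FLRW background: cosmic time elapsed
# between scale factor a = 0.001 and a = 1, from the Friedmann equation.
# Cosmological parameters are assumed round numbers.
import numpy as np

H0 = 67.0e3 / 3.086e22           # Hubble constant ~67 km/s/Mpc, in 1/s
Omega_m, Omega_L = 0.31, 0.69    # assumed matter and dark-energy fractions

def hubble(a):
    # flat-universe Friedmann equation: H(a) = H0 * sqrt(Om/a^3 + OL)
    return H0 * np.sqrt(Omega_m / a**3 + Omega_L)

# t = integral of da / (a * H(a)), evaluated with the trapezoidal rule
a = np.linspace(1e-3, 1.0, 200001)
integrand = 1.0 / (a * hubble(a))
t = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a))

print(f"elapsed cosmic time: {t / 3.156e16:.1f} Gyr")
```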

    Other uncertainties

    Not everyone is convinced that the code is urgently required, and Joachim Harnois-Déraps of the Institute for Astronomy at the Royal Observatory in Edinburgh points out that there are other challenges facing physicists running cosmological simulations. “There are many places where things could go wrong in simulations.”

    Harnois-Déraps cites inaccuracies in modelling the nonlinear clustering of matter in the universe, as well as feedback from supermassive black holes in active galaxies blowing matter out from galaxies and redistributing it. A recent study led by Markus Haider of the University of Innsbruck in Austria, for example, showed that jets from black holes could be sufficient to blow gas all the way into the voids within the cosmic web of matter that spans the universe.

    “Central and shining”

    “In my opinion, the bulk of our effort should instead go into improving our knowledge about these dominant sources of uncertainty,” says Harnois-Déraps who, despite his scepticism, hails gevolution as a great achievement in coding. “If suddenly a scenario arises where general relativity is needed, the gevolution numerical code would be central and shining.”

    Indeed, Adamek views the gevolution code as a tool, ready and waiting should it be required. Newtonian physics works surprisingly well for the current standard model, in which dark matter is cold and dark energy is a cosmological constant. However, should dark matter prove to have relativistic properties, or if dark energy is a dynamic, changing field rather than a constant, then Newtonian approximations will have to make way for the more precise predictions of general relativity.

    “The Newtonian approach works well in some cases,” says Adamek, “but there might be other situations where we’re better off using the correct gravitational field.”

    The research is described in Nature Physics.

    See the full article here .

  • richardmitnick 8:58 pm on February 19, 2016 Permalink | Reply
    Tags: , , , physicsworld.com   

    From physicsworld.com: “How LIGO will change our view of the universe” 

    physicsworld
    physicsworld.com

    Feb 19, 2016
    Tushna Commissariat

    Gravitational waves, Werner Benger, Zuse-Institut Berlin and Max-Planck-Institut für Gravitationsphysik

    Results and data from the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) collaboration – which revealed last week that it had observed a gravitational wave for the first time – are already providing astronomers and cosmologists the world over with previously unknown information about our universe. While the current results have posed intriguing questions for astronomers regarding binary black-hole systems, gravitational-wave astronomy will also revolutionize our understanding of the universe during its infancy, according to cosmologist and Perimeter Institute director Neil Turok.

    Many scientists, such as LIGO veteran Kip Thorne, have pointed out that the collaboration’s results have opened a new window onto the universe. Each time this has happened in the past, unexpected phenomena have come to light – the advent of radio astronomy, for example, revealed entirely new classes of objects such as quasars and pulsars.

    NRAO/Very Large Array

    Pristine objects

    Turok told physicsworld.com that black holes – some of the most prolific producers of these ripples – are some of the simplest objects in the universe. He points out that when it comes to these “perfectly pristine objects”, there are “not too many parameters that need to be determined” because a black hole’s dynamics are determined almost entirely by its mass and spin. Turok also points out that gravitational waves will provide even deeper insights, as they involve the fundamental force of gravity, which itself is still something of a puzzle.

    Indeed, for Turok, this is what is most exciting about aLIGO’s discovery, which he says “may mark a bit of a transition as gravitational-wave observatories become the high-energy colliders of the future as we probe gravity and other extremely basic physics”. Gravitational waves can go to a time/place that, currently, we have very little information about – the early universe, which is opaque to all electromagnetic radiation.

    Looking back in time

    Thankfully, gravitational waves can travel freely through the hot plasma of the early universe and could be used “to look back to a trillionth of a second after the Big Bang”, according to Turok. For him, the discovery is very timely, as he is currently working with colleagues on a new theoretical proposal for “shockwaves” produced a millionth of a second after the Big Bang, which would have been present across all scales in the early universe. If these shockwaves exist, they would have an effect on the measured density variation that is seen in the cosmic microwave background, and could only be detected by gravitational radiation. Once they have a more complete theoretical description, Turok is convinced that LIGO and its successors such as the LISA Pathfinder and other space-based experiments could pick up the shockwave signal, if it exists.

    ESA/LISA Pathfinder

    Ultimately, Turok is delighted by LIGO’s discovery, and although he says that it is “much more important than any prize”, he is sure that it will win not only a Nobel prize, but also a slew of others, such as the Breakthrough prize.

    A preprint of Turok’s paper on shockwaves is available on the arXiv server.

    See the full article here .
