Tagged: physicsworld.com

  • richardmitnick 10:17 am on May 13, 2017 Permalink | Reply
    Tags: Laser-plasma accelerator, Optocouplers, physicsworld.com, Space radiation brought down to Earth

    From physicsworld.com: “Space radiation brought down to Earth” 

    physicsworld
    physicsworld.com

    May 12, 2017
    Sarah Tesh

    Space on Earth: scientists mimic the radiation of space

    Space radiation has been reproduced in a lab on Earth. Scientists have used a laser-plasma accelerator to replicate the high-energy particle radiation that surrounds our planet. The research could help study the effects of space exploration on humans and lead to more resilient satellite and rocket equipment.

    The radiation in space is a major obstacle for our ambitions to explore the solar system. Highly energetic ionizing particles from the Sun and deep space are extremely dangerous for human health because they can pass right through the skin and deposit energy, irreversibly damaging cells and DNA. On top of that, the radiation can also wreak havoc on satellites and equipment.

    While the most obvious way to study these effects is to take experiments into space, this is very expensive and impractical. Yet doing the reverse – producing space-like radiation on Earth – is surprisingly difficult. Scientists have tried using conventional cyclotrons and linear particle accelerators. However, these can only produce monoenergetic particles that do not accurately represent the broad range of particle energies found in space radiation.

    Now, researchers led by Bernhard Hidding from the University of Strathclyde in the UK have found a solution. The team used laser-plasma accelerators at the University of Düsseldorf and the Rutherford Appleton Laboratory to produce broadband electrons and protons typical of those found in the Van Allen belts – zones of energetic charged particles trapped by Earth’s magnetic field.

    Laser-plasma accelerator. LBNL

    Laser to plasma

    The accelerator works by firing a high-energy, high-intensity laser at a tiny spot – just a few μm² – on a thin-metal-foil target. “The sheer intensity of the laser pulse means that the electric fields involved are orders of magnitude larger than the intra-atomic Coulomb forces,” explains Hidding. “The metal-foil target is therefore instantly converted into a plasma.” The plasma particles – electrons and protons – are accelerated by the intense electromagnetic fields of the laser and by the collective fields of the other plasma particles. The extent to which this happens depends on a particle’s initial position, resulting in a huge range of energies.

    The team characterized the plasma particles using image plates sensitive to electrons, radiochromic films for protons and scintillating phosphor screens. Then, to show that the lab-made radiation was comparable to space radiation, the team compared it against simulations from NASA. “The NASA codes are based on models as well as a few measurements, so they represent the best knowledge we have,” says Hidding.

    Monitoring the damage

    The next task was to show that the system could be used to test the effects of space radiation, by subjecting optocouplers to the particle radiation. Optocouplers are common devices that transfer electrical signals between isolated circuits, and their performance is characterized by the current transfer ratio – the ratio of output current to input current. Hidding and his team were therefore able to monitor radiation-induced degradation by tracking this ratio.
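
    The quantity being tracked is simply the ratio of output to input current (a standard definition, quoted here for reference; the article itself does not spell it out):

        \mathrm{CTR} = \frac{I_\text{out}}{I_\text{in}} \times 100\%,

    so radiation damage shows up as a steady fall in CTR at a fixed drive current on the input side.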

    The proof-of-concept experiment, described in Scientific Reports, could represent a major breakthrough towards understanding the effects of space radiation without the need to leave Earth. The next step will be to develop a testing standard that can be used to test electronics and biological samples – “After all, radiation in space is one of the key showstoppers for human spaceflight,” Hidding remarks.

    Strathclyde’s newly installed laser will also play a key role in future research – “[It is] the highest-average-power laser system in the world today,” says Hidding. Housed in three radiation-shielded bunkers at the Scottish Centre for the Application of Plasma-based Accelerators (SCAPA), the system will power up to seven beamlines. “The vision is to develop a dedicated beamline for space-radiation reproduction and testing, and to put this to use for the growing space industry in the UK and beyond.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 2:38 pm on May 6, 2017 Permalink | Reply
    Tags: physicsworld.com, Simulating the universe

    From physicsworld.com: “Simulating the universe” 

    physicsworld
    physicsworld.com

    May 4, 2017
    Tom Giblin
    James Mertens
    Glenn Starkman

    Powerful computers are now allowing cosmologists to solve Einstein’s frighteningly complex equations of general relativity in a cosmological setting for the first time. Tom Giblin, James Mertens and Glenn Starkman describe how this new era of simulations could transform our understanding of the universe.

    A visualization of the curved space–time “sea” No image credit.

    From the Genesis story in the Old Testament to the Greek tale of Gaia (Mother Earth) emerging from chaos and giving birth to Uranus (the god of the sky), people have always wondered about the universe and woven creation myths to explain why it looks the way it does. One hundred years ago, however, Albert Einstein gave us a different way to ask that question. Newton’s law of universal gravitation, which was until then our best theory of gravity, describes how objects in the universe interact. But in Einstein’s general theory of relativity, spacetime (the marriage of space and time) itself evolves together with its contents. And so cosmology, which studies the universe and its evolution, became at least in principle a modern science – amenable to precise description by mathematical equations, able to make firm predictions, and open to observational tests that could falsify those predictions.

    Our understanding of the mathematics of the universe has advanced alongside observations of ever-increasing precision, leading us to an astonishing contemporary picture. We live in an expanding universe in which the ordinary material of our everyday lives – protons, neutrons and electrons – makes up only about 5% of the contents of the universe. Roughly 25% is in the form of “dark matter” – material that behaves like ordinary matter as far as gravity is concerned, but is so far invisible except through its gravitational pull. The other 70% of the universe is something completely different, whose gravity pushes things apart rather than pulling them together, causing the expansion of the universe to accelerate over the last few billion years. Naming this unknown substance “dark energy” teaches us nothing about its true nature.

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    Now, a century into its work, cosmology is brimming with existential questions. If there is dark matter, what is it and how can we find it? Is dark energy the energy of empty space, also known as vacuum energy, or is it the cosmological constant, Λ, as first suggested by Einstein in 1917? He introduced the constant after mistakenly thinking it would stop the universe from expanding or contracting, and so – in what he later called his “greatest blunder” – failed to predict the expansion of the universe, which was discovered a dozen years later. Or is one or both of these invisible substances a figment of the cosmologist’s imagination and it is general relativity that must be changed?

    At the same time as being faced with these fundamental questions, cosmologists are testing their currently accepted model of the universe – dubbed ΛCDM – to greater and greater precision observationally.

    Lambda Cold Dark Matter. No image credit

    (CDM indicates that the dark-matter particles are cold: they must move slowly, like the molecules in a cold drink, so as not to evaporate from the galaxies they help bind together.) And yet, while we can use general relativity to describe how the universe has expanded throughout its history, we are only just starting to use the full theory to model specific details and observations of how galaxies, clusters of galaxies and superclusters form. Why this is so is simple – the equations of general relativity aren’t.

    Horribly complex

    While they fit neatly onto a T-shirt or a coffee mug, Einstein’s field equations are horrible to solve even using a computer. The equations involve 10 separate functions of the four dimensions of space and time, which characterize the curvature of space–time in each location, along with 40 functions describing how those 10 functions change, as well as 100 further functions describing how those 40 changes change, all multiplied and added together in complicated ways. Exact solutions exist only in highly simplified approximations to the real universe. So for decades cosmologists have used those idealized solutions and taken the departures from them to be small perturbations – reckoning, in particular, that any departures from homogeneity can be treated independently from the homogeneous part and from one another.
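
    For reference, the compact form that fits on the T-shirt is

        G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu},

    where G_{\mu\nu} is built from the 10 metric functions g_{\mu\nu} and their first and second derivatives, Λ is the cosmological constant and T_{\mu\nu} describes the matter and energy content. Written out component by component, these 10 coupled, nonlinear partial differential equations become the unwieldy system described above.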

    Not at your leisure. No image credit.

    This “first-order perturbation theory” has taught us a lot about the early development of cosmic structures – galaxies, clusters of galaxies and superclusters – from barely perceptible concentrations of matter and dark matter in the early universe. The theory also has the advantage that we can do much of the analysis by hand, and follow the rest on computer. But to track the development of galaxies and other structures from after they were formed to the present day, we’ve mostly reverted to Newton’s theory of gravity, which is probably a good approximation.

    To make progress, we will need to improve on first-order perturbation theory, which treats cosmic structures as independent entities that are affected by the average expansion of the universe, but neither alter the average expansion themselves, nor influence one another. Unfortunately, higher-order perturbation theory is much more complicated – everything affects everything else. Indeed, it’s not clear there is anything to gain from using these higher-order approximations rather than “just solving” the full equations of general relativity instead.

    Improving the precision of our calculations – how well we think we know the answer – is one thing, as discussed above. But the complexity of Einstein’s equations has made us wonder just how accurate the perturbative description really is. In other words, it might give us answers, but are they the right ones? Nonlinear equations, after all, can have surprising features that appear unexpectedly when you solve them in their full glory, and it is hard to predict surprises. Some leading cosmologists, for example, claim that the accelerating expansion of the universe, which dark energy was invented to explain, is caused instead by the collective effects of cosmic structures in the universe acting through the magic of general relativity. Other cosmologists argue this is nonsense.

    The only way to be sure is to use the full equations of general relativity. And the good news is that computers are finally becoming fast enough that modelling the universe using the full power of general relativity – without the traditional approximations – is not such a crazy prospect. With some hard work, it may finally be feasible over the next decade.

    Computers to the rescue

    Numerical general relativity itself is not new. As far back as the late 1950s, Richard Arnowitt, Stanley Deser and Charles Misner – together known as ADM – laid out a basic framework in which space–time could be carefully separated into space and time – a vital first step in solving general relativity with a computer. Other researchers also got in on the act, including Thomas Baumgarte, Stuart Shapiro, Masaru Shibata and Takashi Nakamura, who made important improvements to the numerical properties of the ADM system in the 1980s and 1990s so that the dynamics of systems could be followed accurately over long enough times to be interesting.
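
    The heart of the ADM split, quoted here for context, is to write the space–time interval in terms of a spatial metric γ_ij that evolves in time, together with a lapse N and shift N^i that describe how successive spatial slices are stacked and shifted:

        ds^2 = -N^2\, dt^2 + \gamma_{ij}\,(dx^i + N^i dt)(dx^j + N^j dt).

    Einstein’s equations then split into evolution equations for γ_ij (and its conjugate momentum) plus constraint equations – exactly the kind of initial-value problem a computer can march forward in time.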

    Beam on. No image credit.

    Other techniques for obtaining such long-time stability were also developed, including one imported from fluid mechanics. Known as adaptive mesh refinement, it allowed scarce computer memory resources to be focused only on those parts of problems where they were needed most. Such advances have allowed numerical relativists to simulate with great precision what happens when two black holes merge and create gravitational waves – ripples in space–time. The resulting images are more than eye candy; they were essential in allowing members of the US-based Laser Interferometer Gravitational-Wave Observatory (LIGO) collaboration to announce last year that they had directly detected gravitational waves for the first time.


    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA


    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    By modelling many different possible configurations of pairs of black holes – different masses, different spins and different orbits – LIGO’s numerical relativists produced a template of the gravitational-wave signal that would result in each case. Other researchers then compared those simulations over and over again to what the experiment had been measuring, until the moment came when a signal was found that matched one of the templates. The signal in question was coming to us from a pair of black holes a billion light-years away spiralling into one another and merging to form a single larger black hole.

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Using numerical relativity to model cosmology has its own challenges compared to simulating black-hole mergers, which are just single astrophysical events. Some qualitative cosmological questions can be answered by reasonably small-scale simulations, and there are state-of-the-art “N-body” simulations that use Newtonian gravity to follow trillions of independent masses over billions of years to see where gravity takes them. But general relativity offers at least one big advantage over Newtonian gravity – it is local.

    The difficulty with calculating the gravity experienced by any particular mass in a Newtonian simulation is that you need to add up the effects of all the other masses. Even Isaac Newton himself regarded this “action at a distance” as a failing of his model, since it means that information travels from one side of the simulated universe to the other instantly, violating the speed-of-light limit. In general relativity, however, all the equations are “local”, which means that to determine the gravity at any time or location you only need to know what the gravity and matter distribution were nearby just moments before. This should, in other words, simplify the numerical calculations.
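
    To make the contrast concrete, here is a minimal sketch of the all-pairs sum a direct-summation Newtonian N-body code has to perform at every step (illustrative only – not the code used by any of the simulations discussed here):

    import numpy as np

    def newtonian_accelerations(pos, mass, G=1.0, eps=1e-3):
        """Direct-summation Newtonian gravity: every body feels every other body.

        pos  -- (N, 3) array of positions
        mass -- (N,) array of masses
        eps  -- softening length, avoids divergences at tiny separations
        """
        acc = np.zeros_like(pos)
        for i in range(len(mass)):
            d = pos - pos[i]                     # separation vectors to all bodies
            r2 = np.sum(d * d, axis=1) + eps**2  # softened squared distances
            r2[i] = np.inf                       # exclude self-interaction
            # a_i = G * sum_j m_j (r_j - r_i) / |r_j - r_i|^3
            acc[i] = G * np.sum(mass[:, None] * d / r2[:, None]**1.5, axis=0)
        return acc

    The cost of this global sum grows as N², which is why Newtonian codes lean on clever approximations such as trees and meshes – whereas the general-relativistic update at each grid point needs only nearby values from a moment earlier.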

    Recently, the three of us at Kenyon College and Case Western Reserve University showed that the cosmological problem is finally becoming tractable (Phys. Rev. Lett. 116 251301 and Phys. Rev. D 93 124059). Just days after our paper appeared, Eloisa Bentivegna at the University of Catania in Italy and Marco Bruni at the University of Portsmouth, UK, had similar success (Phys. Rev. Lett. 116 251302). The two groups each presented the results of low-resolution simulations, where grid points are separated by 40 million light-years, with only long-wavelength perturbations. The simulations followed the universe for only a short time by cosmic standards – long enough only for the universe to somewhat more than double in size – but both tracked the evolution of these perturbations in full general relativity with no simplifications or approximations whatsoever. As the eminent Italian cosmologist Sabino Matarese wrote in Nature Physics, “the era of general relativistic numerical simulations in cosmology ha[s] begun”.

    These preliminary studies are still a long way from competing with modern N-body simulations for resolution, duration or dynamic range. To do so will require advances in the software so that the code can run on much larger computer clusters. We will also need to make the code more stable numerically so that it can model much longer periods of cosmic expansion. The long-term goal is for our numerical simulations to match as far as possible the actual evolution of the universe and its contents, which means using the full theory of general relativity. But given that our existing simulations using full general relativity have revealed no fluctuations driving the accelerated expansion of the universe, it appears instead that accelerated expansion will need new physics – whether dark energy or a modified gravitational theory.

    Both groups also observe what appear to be small corrections to the dynamics of space–time when compared with simple perturbation theory. Bentivegna and Bruni studied the collapse of structures in the early universe and suggested that they appear to coalesce somewhat more quickly than in the standard simplified theory.

    Future perfect

    Drawing specific conclusions about simulations is a subtle matter in general relativity. At the mathematical heart of the theory is the principle of “co-ordinate invariance”, which essentially says that the laws of physics should be the same no matter what set of labels you use for the locations and times of events. We are all familiar with milder versions of this symmetry: we wouldn’t expect the equations governing basic scientific laws to depend on whether we measure our positions in, say, New York or London, and we don’t need new versions of science textbooks whenever we switch from standard time to daylight savings time and back. Co-ordinate invariance in the context of general relativity is just a more extreme version of that, but it means we must ensure that any information we extract from our simulations does not depend on how we label the points in our simulations.

    Our Ohio group has taken particular care with this subtlety by sending simulated beams of light from distant points in the distant past at the speed of light through space–time to arrive at the here and now. We then use those beams to simulate observations of the expansion history of our universe. The universe that emerges exhibits an average behaviour that agrees with a corresponding smooth, homogeneous model, but with inhomogeneous structures on top. These additional structures contribute to deviations in observable quantities across the simulated observer’s sky that should soon be accessible to real observers.

    This work is therefore just the start of a journey. Creating codes that are accurate and sensitive enough to make realistic predictions for future observational programmes – such as the all-sky surveys to be carried out by the Large Synoptic Survey Telescope (LSST) or the Euclid satellite – will require us to study larger volumes of space.


    LSST Camera, built at SLAC



    LSST, currently under construction on Cerro Pachón, a 2,682-metre-high mountain in the Coquimbo Region of northern Chile, alongside the existing Gemini South and Southern Astrophysical Research telescopes.

    ESA/Euclid spacecraft

    These studies will also have to incorporate ultra-large-scale structures some hundreds of millions of light-years across as well as much smaller-scale structures, such as galaxies and clusters of galaxies. They will also have to follow these volumes for longer stretches of time than is currently possible.

    All this will require us to introduce some of the same refinements that made it possible to predict the gravitational-wave ripples produced by a merging black hole, such as adaptive mesh refinement to resolve the smaller structures like galaxies, and N-body simulations to allow matter to flow naturally across these structures. These refinements will let us characterize more precisely and more accurately the statistical properties of galaxies and clusters of galaxies – as well as the observations we make of them – taking general relativity fully into account. Doing so will, however, require clusters of computers with millions of cores, rather than the hundreds we use now.

    These improvements to code will take time, effort and collaboration. Groups around the world – in addition to the two mentioned – are likely to make important contributions. Numerical general-relativistic cosmology is still in its infancy, but the next decade will see huge strides to make the best use of the new generation of cosmological surveys that are being designed and built today. This work will either give us increased confidence in our own scientific genesis story – ΛCDM – or teach us that we still have a lot more thinking to do about how the universe got itself to where it is today.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 1:07 pm on May 5, 2017 Permalink | Reply
    Tags: physicsworld.com

    From physicsworld.com: “Flash Physics: Matter-wave tractor beams” 

    physicsworld
    physicsworld.com

    May 5, 2017
    Sarah Tesh

    Flash Physics is our daily pick of the latest need-to-know developments from the global physics community selected by Physics World’s team of editors and reporters

    Tractor beams could be made from matter waves

    Grabbing hold: a matter-wave tractor beam

    It should be possible to create a matter-wave tractor beam that grabs hold of an object by firing particles at it – according to calculations by an international team of physicists. Tractor beams work by firing cone-like “Bessel beams” of light or sound at an object. Under the right conditions, the light or sound waves will bounce off the object in such a way that the object experiences a force in the opposite direction to that of the beam. If this force is greater than the outward pressure of the beam, the object will be pulled inwards. Now, Andrey Novitsky and colleagues at Belarusian State University, ITMO University in St Petersburg and the Technical University of Denmark have done calculations that show that beams of particles can also function as tractor beams. Quantum mechanics dictates that these particles also behave as waves and the team found that cone-like beams of matter waves should also be able to grab hold of objects. There is, however, an important difference regarding the nature of the interaction between the particles and the object. Novitsky and colleagues found that if the scattering is defined by the Coulomb interaction between charged particles, then it is not possible to create a matter-wave tractor beam. However, tractor beams are possible if the scattering is defined by a Yukawa potential, which is used to describe interactions between some subatomic particles. The calculations are described in Physical Review Letters.
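
    For context, the two kinds of interaction being contrasted are the long-range Coulomb potential and the exponentially screened Yukawa potential (standard textbook forms, not expressions taken from the paper):

        V_\text{Coulomb}(r) = \frac{q_1 q_2}{4\pi\varepsilon_0\, r},
        \qquad
        V_\text{Yukawa}(r) = -g^2\, \frac{e^{-r/\lambda}}{r},

    where λ sets the finite range of the Yukawa interaction. According to the calculations, scattering governed by the first rules a matter-wave tractor beam out, while scattering governed by the second allows one.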

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 5:12 pm on May 4, 2017 Permalink | Reply
    Tags: Nanoscopy - super-resolution microscopy, physicsworld.com

    From physicsworld.com: “Optical chip gives microscopes nanoscale resolution” 

    physicsworld
    physicsworld.com

    May 3, 2017
    Michael Allen

    Super resolution: image taken using the new chip. No image credit.

    A photonic chip that allows a conventional microscope to work at nanoscale resolution has been developed by a team of physicists in Germany and Norway. The researchers claim that as well as opening up nanoscopy to many more people, the mass-producible optical chip also offers a much larger field of view than current nanoscopy techniques, which rely on complex microscopes.

    Nanoscopy, which is also known as super-resolution microscopy, allows scientists to see features smaller than the diffraction limit – about half the wavelength of visible light. It can be used to produce images with resolutions as high as 20–30 nm – approximately 10 times better than a normal microscope. Such techniques have important implications for biological and medical research, with the potential to provide new insights into disease and improve medical diagnostics.

    “The resolution of the standard optical microscope is basically limited by the diffraction barrier of light, which restricts the resolution to 200–300 nm for visible light,” explains Mark Schüttpelz, a physicist at Bielefeld University in Germany. “But many structures, especially biological structures like compartments of cells, are well below the diffraction limit. Here, super-resolution will open up new insights into cells, visualizing proteins ‘at work’ in the cell in order to understand structures and dynamics of cells.”
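
    The barrier Schüttpelz refers to is the Abbe diffraction limit (quoted here for reference, with illustrative numbers):

        d \approx \frac{\lambda}{2\,\mathrm{NA}},

    so for green light (λ ≈ 550 nm) and a high-numerical-aperture objective (NA ≈ 1.4) the smallest resolvable feature is d ≈ 550 nm / 2.8 ≈ 200 nm – the 200–300 nm regime quoted above, and far larger than many sub-cellular structures.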

    Expensive and complex

    There are a number of different nanoscopy techniques that rely on fluorescent dyes to label molecules within the specimen being imaged. A special microscope illuminates and determines the position of individual fluorescent molecules with nanometre precision to build up an image. The problem with these techniques, however, is that they use expensive and complex equipment. “It is not very straightforward to acquire super-resolved images,” says Schüttpelz. “Although there are some rather expensive nanoscopes on the market, trained and experienced operators are required to obtain high-quality images with nanometer resolution.”

    To tackle this, Schüttpelz and his colleagues turned current techniques on their head. Instead of using a complex microscope with a simple glass slide to hold the sample, their method uses a simple microscope for imaging combined with a complex, but mass-producible, optical chip to hold and illuminate the sample.

    “Our photonic chip technology can be retrofitted to any standard microscope to convert it into an optical nanoscope,” explains Balpreet Ahluwalia, a physicist at The Arctic University of Norway, who was also involved in the research.

    Etched channels

    The chip is essentially a waveguide that completely removes the need for the microscope to contain a light source that excites the fluorescent molecules. It consists of five 25–500 μm-wide channels etched into a combination of materials that causes total internal reflection of light.

    The chip is illuminated by two solid-state lasers that are coupled to the chip by a lens or lensed fibres. Light with two different wavelengths is tightly confined within the channels and illuminates the sample, which sits on top of the chip. A lens and camera on the microscope record the resulting fluorescent signal, and the data obtained are used to construct a high-resolution image of the sample.

    To test the effectiveness of the chip, the researchers imaged liver cells. They demonstrated that a field of view of 0.5 × 0.5 mm² can be achieved at a resolution of around 340 nm in less than half a minute. In principle, this is fast enough to capture live events in cells. For imaging times of up to 30 min, a similar field of view at a resolution better than 140 nm is possible. Resolutions of less than 50 nm are also achievable with the chip, but require higher-magnification lenses, which limit the field of view to around 150 μm.

    Many cells

    Ahluwalia told Physics World that the advantage of using the photonic chip for nanoscopy is that it “decouples illumination and detection light paths” and the “waveguide generates illumination over large fields of view”. He adds that this has enabled the team to acquire super-resolved images over an area 100 times larger than with other techniques. This makes single images of as many as 50 living cells possible.

    According to Schüttpelz, the technique represents “a paradigm shift in optical nanoscopy”. “Not only highly specialized laboratories will have access to super-resolution imaging, but many scientists all over the world can convert their standard microscope into a super-resolution microscope just by retrofitting the microscope in order to use waveguide chips,” he says. “Nanoscopy will then be available to everyone at low costs in the near future.”

    The chip is described in Nature Photonics.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 1:44 pm on April 14, 2017 Permalink | Reply
    Tags: physicsworld.com, Ten superconducting qubits entangled by physicists in China   

    From physicsworld: “Ten superconducting qubits entangled by physicists in China” 

    physicsworld
    physicsworld.com

    Apr 13, 2017

    Top 10: the quantum device

    A group of physicists in China has taken the lead in the race to couple together increasing numbers of superconducting qubits. The researchers have shown that they can entangle 10 qubits connected to one another via a central resonator – so beating the previous record by one qubit – and say that their result paves the way to quantum simulators that can calculate the behaviour of small molecules and other quantum-mechanical systems much more efficiently than even the most powerful conventional computers.

    Superconducting circuits create qubits by superimposing two electrical currents, and hold the promise of being able to fabricate many qubits on a single chip by exploiting silicon-based manufacturing technology. In the latest work, a multi-institutional group led by Jian-Wei Pan of the University of Science and Technology of China in Hefei built a circuit consisting of 10 qubits, each half a millimetre across and made from slivers of aluminium laid onto a sapphire substrate. The qubits, which act as non-linear LC oscillators, are arranged in a circle around a component known as a bus resonator.

    Initially, the qubits are put into a superposition state of two oscillating currents with different amplitudes by supplying each of them with a very low-energy microwave pulse. To avoid interference at this stage, each qubit is set to a different oscillation frequency. However, for the qubits to interact with one another, they need to have the same frequency. This is where the bus comes in: it allows the qubits to exchange energy with one another, but does not absorb any of that energy itself.

    “Magical interaction”

    The end result of this process, says team member Haohua Wang of Zhejiang University, is entanglement, or, as he puts it, “some kind of magical interaction”. To establish just how entangled their qubits were, the researchers used what is known as quantum tomography to find out the probability of detecting each of the thousands of possible states that this entanglement could generate. The outcome: their measured probability distribution yielded the correct state on average about two thirds of the time. The fact that this “fidelity” was above 50%, says Wang, meant that their qubits were “entangled for sure”.
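
    For context, the target of such a bus-mediated protocol is a GHZ-type state of all 10 qubits, and a measured fidelity above one half is the standard witness of genuine multiqubit entanglement (standard notation, not taken verbatim from the paper):

        |\mathrm{GHZ}_{10}\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle^{\otimes 10} + |1\rangle^{\otimes 10}\big),
        \qquad
        F = \langle \mathrm{GHZ}_{10}|\,\rho\,|\mathrm{GHZ}_{10}\rangle > 0.5,

    which is why a fidelity of roughly two thirds is enough to conclude the qubits are “entangled for sure”.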

    According to Shibiao Zheng of Fuzhou University, who designed the entangling protocol, the key ingredient in this set-up is the bus. This, he says, allows them to generate entanglement “very quickly”.

    The previous record of nine for the number of entangled qubits in a superconducting circuit was held by John Martinis and colleagues at the University of California, Santa Barbara and Google. That group uses a different architecture for their system; rather than linking qubits via a central hub they instead lay them out in a row and connect each to its nearest neighbour. Doing so allows them to use an error-correction scheme that they developed known as surface code.

    High fidelity

    Error correction will be vital for the functioning of any large-scale quantum computer in order to overcome decoherence – the destruction of delicate quantum states by outside interference. Involving the addition of qubits to provide cross-checking, error correction relies on each gate operation introducing very little error. Otherwise, errors would simply spiral out of control. In 2015, Martinis and co-workers showed that superconducting quantum computers could in principle be scaled up, when they built two-qubit gates with a fidelity above that required by surface code – introducing errors less than 1% of the time.

    Martinis praises Pan and colleagues for their “nicely done experiment”, in particular for their speedy entangling and “good single-qubit operation”. But it is hard to know how much of an advance they have really made, he argues, until they fully measure the fidelity of their single-qubit gates or their entangling gate. “The hard thing is to scale up with good gate fidelity,” he says.

    Wang says that the Chinese collaboration is working on an error-correction scheme for their bus-centred architecture. But he argues that in addition to exceeding the error thresholds for individual gates, it is also important to demonstrate the precise operation of many highly entangled qubits. “We have a global coupling between qubits,” he says. “And that turns out to be very useful.”

    Quantum simulator

    Wang acknowledges that construction of a universal quantum computer – one that would perform any quantum algorithm far quicker than conventional computers could – is not realistic for the foreseeable future given the many millions of qubits such a device is likely to need. For the moment, Wang and his colleagues have a more modest aim in mind: the development of a “quantum simulator” consisting of perhaps 50 qubits, which could outperform classical computers when it comes to simulating the behaviour of small molecules and other quantum systems.

    Xiaobo Zhu of the University of Science and Technology of China, who was in charge of fabricating the 10-qubit device, says that the collaboration aims to build the simulator within the next “5–10 years”, noting that this is similar to the timescale quoted by other groups, including that of Martinis. “We are trying to catch up with the best groups in the world,” he says.

    The research is reported on the arXiv server.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 12:43 pm on March 21, 2017 Permalink | Reply
    Tags: physicsworld.com, Shanghai Synchrotron Radiation Facility (SSRF), Soft X-ray Free Electron Laser (SXFEL) facility

    From physicsworld.com: “China outlines free-electron laser plans” 

    physicsworld
    physicsworld.com

    Mar 21, 2017
    Michael Banks

    Zhentang Zhao, director of the Shanghai Institute of Applied Physics.

    There was a noticeable step change in the weather today in Shanghai as the Sun finally emerged and the temperature rose somewhat.

    This time I braved the rush-hour metro system to head to the Zhangjiang Technology Park in the south of the city.

    The park is home to the Shanghai Synchrotron Radiation Facility (SSRF), which opened in 2007. The facility accelerates electrons to 3.5 GeV before making them produce X-rays that are then used by researchers to study a range of materials.

    The SSRF currently has 15 beamlines focusing on topics including energy, materials, bioscience and medicine. I was given a tour of the facility by Zhentang Zhao, director of the Shanghai Institute of Applied Physics, which operates the SSRF.

    As I found out this morning, the centre has big plans. Perhaps the sight of building materials and cranes near the SSRF should have given it away.

    Over the next six years there are plans to build a further 16 beamlines to put the SSRF at full capacity, some of which will extend 100 m or so from the synchrotron.

    Neighbouring the SSRF, scientists are also building the Soft X-ray Free Electron Laser (SXFEL) facility. The SSRF used to have a test FEL beamline, but since 2014 this has been transformed into a fully fledged centre costing 8bn RMB.

    Currently, the 250 m, 150 MeV linac for the SXFEL has been built and is being commissioned. Over the next couple of years two undulator beamlines will be put in place to generate X-rays with a wavelength of 9 nm and at a repetition rate of 10 Hz. The X-rays will then be sent to five experimental stations that will open to users in 2019.

    There are also plans to upgrade the SXFEL so that it generates X-rays with a 2 nm wavelength (soft X-ray regime) at a frequency of 50 Hz.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 2:33 pm on February 24, 2017 Permalink | Reply
    Tags: Electrochemistry, Nuclear energy may come from the sea, physicsworld.com

    From physicsworld.com: “Nuclear energy may come from the sea” 

    physicsworld
    physicsworld.com

    Feb 23, 2017
    Sarah Tesh

    Seawater supplies: carbon–polymer electrodes can extract the sea’s uranium. No image credit.

    Uranium has been extracted from seawater using electrochemical methods. A team at Stanford University in California has removed the radioactive material from seawater by using a polymer–carbon electrode and applying a pulsed electric field.

    Uranium is a key component of nuclear fuel. On land, there are about 7.6 million tonnes of identified uranium deposits around the world. This ore is mined, processed and used for nuclear energy. In contrast, there are some 4.5 billion tonnes of the heavy metal in seawater as a result of the natural weathering of undersea deposits. If uranium could be extracted from seawater, it could be used to fuel nuclear power stations for hundreds of years. As well as taking advantage of an untapped energy resource, seawater extraction would also avoid the negative environmental impacts of mining processes.

    Tiny concentrations

    Scientists are therefore working on methods to remove and recover uranium from the sea. However, the oceans are vast, and the concentration of uranium is only 3 μg/l, making the development of practical extraction techniques a significant challenge. “Concentrations are tiny, on the order of a single grain of salt dissolved in a litre of water,” says team member Yi Cui. Furthermore, the high salt content of seawater limits traditional extraction methods.

    In water, uranium typically exists as a positively charged uranium-oxide ion, known as uranyl (UO₂²⁺). Most extraction methods involve an adsorbent material: the uranyl ion attaches to its surface but does not chemically react with it. The current leading materials are amidoxime polymers. The performance of adsorbents is, however, limited by their surface area. Because there are only so many adsorption sites, and the concentration of uranium is extremely low compared with that of other positive ions such as sodium and calcium, uranium adsorption is slow and the sites are quickly taken up by other ions. Furthermore, the adsorbed ions still carry a positive charge and therefore repel other uranyl ions from the material.

    Electrochemical answer

    Cui and his team turned to electrochemistry and deposition for a solution to this problem. In a basic electrochemical cell, two electrodes connected to a power supply are submerged in an electrolyte solution. Giving the electrodes opposite charges drives an electrical current through the liquid, pushing positive ions towards the negative electrode, and electrons and negative ions towards the positive electrode. At the negative electrode – the cathode – the positive ions are reduced, meaning they gain electrons. For most metal ions this precipitates the solid metal, which is often deposited on the electrode surface.

    In their electrochemical cell, the team used a working electrode made of carbon coated with amidoxime polymer, paired with an inert counter electrode. The electrolyte was seawater, which for some tests contained added uranium. By applying a short pulse of current, the positive uranyl, calcium and sodium ions were drawn to the carbon–polymer electrode. The amidoxime film encouraged the uranyl ions to be adsorbed preferentially over the other ions. The adsorbed uranyl ions were reduced to solid, charge-neutral uranium oxide (UO₂) and, once the current was switched off, the unwanted ions returned to the bulk of the electrolyte. By repeating the pulsed process, the researchers were able to build up the deposited uranium oxide on the electrode surface, whatever the initial concentration of the solution.
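
    The key step at the carbon–polymer electrode is thus the two-electron reduction of the adsorbed uranyl ion to insoluble uranium dioxide (the standard half-reaction, written here for reference):

        \mathrm{UO_2^{2+} + 2e^- \rightarrow UO_2(s)}.

    Because the product is solid and charge-neutral, it no longer repels incoming uranyl ions – which is what allows the deposit to keep growing over repeated pulses.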

    Removal and recovery

    In tests comparing the new method to plain adsorptive amidoxime, the electrochemical cell significantly outperformed the more traditional material. Within the time it took the amidoxime surface to become saturated, the carbon–polymer electrode had extracted nine times the amount of uranium. Furthermore, the team demonstrated that 96.6% of the metal could be recovered from the surface by applying a reverse current and an acidic electrolyte. For an adsorption material, only 76.0% can be recovered with acid elution.

    Despite the researchers’ success, there is a long way to go before large-scale application. To be commercially viable, the benefits of the extracted uranium must outweigh the cost and power demands of the process. Furthermore, the process needs to be streamlined to treat large quantities of water. “We have a lot of work to do still but these are big steps toward practicality,” Cui concludes.

    The extraction method is described in Nature Energy.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 1:52 pm on January 5, 2017 Permalink | Reply
    Tags: physicsworld.com, Semiconductor discs could boost night vision

    From physicsworld.com: “Semiconductor discs could boost night vision” 

    physicsworld
    physicsworld.com

    Frequency double: Maria del Rocio Camacho-Morales studies the new optical material.

    A new method of fabricating nanoscale optical crystals capable of converting infrared to visible light has been developed by researchers in Australia, China and Italy. The new technique allows the crystals to be placed onto glass and could lead to improvements in holographic imaging – and even the development of improved night-vision goggles.

    Second-harmonic generation, or frequency doubling, is an optical process whereby two photons with the same frequency are combined within a nonlinear material to form a single photon with twice the frequency (and half the wavelength) of the original photons. The process is commonly used by the laser industry, in which green 532 nm laser light is produced from a 1064 nm infrared source. Recent developments in nanotechnology have opened up the potential for efficient frequency doubling using nanoscale crystals – potentially enabling a variety of novel applications.
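
    In symbols, using the laser-industry example from the text:

        \omega_\text{out} = 2\,\omega_\text{in}
        \quad\Longleftrightarrow\quad
        \lambda_\text{out} = \frac{\lambda_\text{in}}{2},
        \qquad 1064~\text{nm} \;\rightarrow\; 532~\text{nm},

    with two input photons combining in the nonlinear material to produce one output photon of twice the energy.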

    Materials with second-order nonlinear susceptibilities – such as gallium arsenide (GaAs) and aluminium gallium arsenide (AlGaAs) – are of particular interest for these applications because this lowest-order nonlinearity makes frequency conversion efficient.

    Substrate mismatch

    To be able to exploit second-harmonic generation in a practical device, these nanostructures must be fabricated on a substrate with a relatively low refractive index (such as glass), so that light may pass through the optical device. This is challenging, however, because growing GaAs-based crystals as a thin film – and III–V semiconductors in general – requires a crystalline substrate.

    “This is why growing a layer of AlGaAs on top of a low-refractive-index substrate, like glass, leads to unmatched lattice parameters, which causes crystalline defects,” explains Dragomir Neshev, a physicist at the Australian National University (ANU). These defects, he adds, result in unwanted changes in the electronic, mechanical, optical and thermal properties of the films.

    Previous attempts to overcome this issue have led to poor results. One approach, for example, relies on placing a buffer layer under the AlGaAs films, which is then oxidized. However, these buffer layers tend to have higher refractive indices than regular glass substrates. Alternatively, AlGaAs films can be transferred to a glass surface prior to the fabrication of the nanostructures. In this case the result is poor-quality nanocrystals.

    Best of both

    The new study was done by Neshev and colleagues at ANU, Nankai University and the University of Brescia, who combined the advantages of the two different approaches to develop a new fabrication method. First, high-quality disc-shaped nanocrystals about 500 nm in diameter are fabricated using electron-beam lithography on a GaAs wafer, with a layer of AlAs acting as a buffer between the two. The buffer is then dissolved, and the discs are coated in a transparent layer of benzocyclobutene. This can then be attached to the glass substrate, and the GaAs wafer peeled off with minimal damage to the nanostructures.

    The development could have various applications. “The nanocrystals are so small they could be fitted as an ultrathin film to normal eye glasses to enable night vision,” says Neshev, explaining that, by combining frequency doubling with other nonlinear interactions, the film might be used to convert invisible, infrared light to the visible spectrum.

    If they could be made, such modified glasses would be an improvement on conventional night-vision binoculars, which tend to be large and cumbersome. To this end, the team is working to scale up the size of the nanocrystal films to cover the area of typical spectacle lenses, and expects to have a prototype device completed within the next five years.

    Security holograms

    Alongside frequency doubling, the team was also able to tune the nanodiscs to control the direction and polarization of the emitted light, which makes the film more efficient. “Next, maybe we can even engineer the light and make complex shapes such as nonlinear holograms for security markers,” says Neshev, adding: “Engineering of the exact polarization of the emission is also important for other applications such as microscopy, which allows light to be focused to a smaller volume.”

    “Vector beams with spatially arranged polarization distributions have attracted great interest for their applications in a variety of technical areas,” says Qiwen Zhan, an engineer at the University of Dayton in Ohio, who was not involved in this study. The novel fabrication technique, he adds, “opens a new avenue for generating vector fields at different frequencies through nonlinear optical processes”.

    With their initial study complete, Neshev and colleagues are now looking to refine their nanoantennas, both to increase the efficiency of the wavelength-conversion process and to extend the approach to other nonlinear interactions such as down-conversion.

    The research is described in the journal Nano Letters.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 9:41 am on October 10, 2016 Permalink | Reply
    Tags: Laser-scanning confocal microscopes, Mesolens, physicsworld.com, 'Radical' new microscope lens combines high resolution with large field of view

    From physicsworld.com: “‘Radical’ new microscope lens combines high resolution with large field of view” 

    physicsworld
    physicsworld.com

    Oct 10, 2016
    Michael Allen

    Zooming in: image of mouse embryo

    A new microscope lens that offers the unique combination of a large field of view with high resolution has been created by researchers in the UK. The new “mesolens” for confocal microscopes can create 3D images of much larger biological samples than was previously possible – while providing detail at the sub-cellular level. According to the researchers, the ability to view whole specimens in a single image could assist in the study of many biological processes and ensure that important details are not overlooked.

    Laser-scanning confocal microscopes are an important tool in modern biological sciences. They emerged in the 1980s as an improvement on fluorescence microscopes, which view specimens that have been dyed with a substance that emits light when illuminated. Standard fluorescence microscopes are not ideal because they pick up fluorescence from behind the focal point, creating images with blurry backgrounds. To eliminate the out-of-focus background, confocal microscopes use a small spot of illuminating laser light and a tiny aperture so that only light close to the focal plane is collected. The laser is scanned across the specimen and many images are taken to create the full picture. Due to the small depth of focus, confocal microscopes are also able to focus a few micrometres through samples to build up a 3D image.

    In microscopy there is a trade-off between resolution and the size of the specimen that can be imaged – the field of view: you either have a large field of view and low resolution, or a small field of view and high resolution. Current confocal microscopes struggle to image large specimens because low magnification produces poor resolution.

    Stitched together

    “Normally, when a large object is imaged with a low-magnification lens, rays of light are collected from only a small range of angles (i.e. the lens has a low numerical aperture),” explains Gail McConnell from the Centre for Biophotonics at the University of Strathclyde, in Glasgow. “This reduces the resolution of the image and has an even more serious effect in increasing the depth of focus, so all the cells in a tissue specimen are superimposed and you cannot see them individually.” Large objects can be imaged by stitching smaller images together. But variations in illumination and focus affect the quality of the final image.

    McConnell and colleagues set out to design a lens that could image larger samples, while retaining the detail produced by confocal microscopy. They focused on creating a lens that could be used to image an entire 12.5 day-old mouse embryo – a specimen that is typically about 5 mm across. This was to “facilitate the recognition of developmental abnormalities” in such embryos, which “are routinely used to screen human genes that are suspected of involvement in disease”, says McConnell.

    Dubbed a mesolens, their optical system is more than half a metre long and contains 15 optical elements. This is unlike most confocal lenses, which are only a few centimetres in length. The mesolens has a magnification of 4× and a numerical aperture of 0.47, which is a significant improvement over the 0.1–0.2 apertures currently available. The system is also able to obtain 3D images of objects 6 mm wide and long, and 3 mm thick.

    The high numerical aperture also provides a very good depth resolution. “This makes it possible to focus through tissue and see a completely different set of sub-cellular structures in focus every 1/500th of a millimetre through a depth of 3 mm,” explains McConnell. The distortion of the images is less than 0.7% at the periphery of the field and the lens works across the full visible spectrum of light, enabling imaging with multiple fluorescent labels.
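
    To put that numerical aperture in context, here is an illustrative estimate using the standard Rayleigh criterion (these figures are not quoted by the authors): at λ ≈ 550 nm, a lens with NA = 0.47 resolves lateral features of roughly

        d \approx \frac{0.61\,\lambda}{\mathrm{NA}} \approx \frac{0.61 \times 550~\text{nm}}{0.47} \approx 0.7~\mu\text{m},

    which is comfortably sub-cellular, whereas a conventional low-magnification objective with NA of 0.1–0.2 would be limited to roughly 1.7–3.4 μm.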

    Engineering and design

    The lens was made possible through a combination of skilled engineering and optical design, and the use of components with very small aberrations. “Making the new lens is very expensive and difficult: to achieve the required very low field curvature across the full 6 mm field of view and because we need chromatic correction through the entire visible spectrum, the lens fabrication and mounting must be unusually accurate and the glass must be selected very carefully and tested before use,” explains McConnell.

    The researchers used the lens in a customized confocal microscope to image 12.5 day-old mouse embryos. They were able to image single cells, heart muscle fibres and sub-cellular details, not just near the surface of the sample but throughout the depth of the embryo. Writing in the journal eLife, the researchers claim “no existing microscope can show all of these features simultaneously in an intact mouse embryo in a single image.”

    The researchers also write that their mesolens “represents the most radical change in microscope objective design for over a century” and “has the potential to transform optical microscopy through the acquisition of sub-cellular resolution 3D data sets from large tissue specimens”.

    Rafael Yuste, a neuroscientist at Columbia University in New York, saw an earlier prototype of the mesolens microscope. He told physicsworld.com that McConnell and colleagues “have completely redesigned the objective lens to achieve an impressive performance”. He adds that it could enable “wide-field imaging of neuronal circuits and tissues while preserving single-cell resolution”, which could help produce a dynamic picture of how cells and neural circuits in the brain interact.

    Video images taken by the mesolens can be viewed in the eLife paper describing the microscope.

     
  • richardmitnick 11:52 am on October 7, 2016 Permalink | Reply
    Tags: Correlation between galaxy rotation and visible matter puzzles astronomers, physicsworld.com

    From physicsworld: “Correlation between galaxy rotation and visible matter puzzles astronomers” 

    physicsworld
    physicsworld.com

    Oct 7, 2016
    Keith Cooper

    Strange correlation: why is galaxy rotation defined by visible mass? No image credit.

    A new study of the rotational velocities of stars in galaxies has revealed a strong correlation between the motion of the stars and the amount of visible mass in the galaxies. This result comes as a surprise because it is not predicted by conventional models of dark matter.

    Stars on the outskirts of rotating galaxies orbit just as fast as those nearer the centre. This appears to violate Newton’s laws, which predict that stars moving that fast, held only by the gravity of the visible matter, would be flung away from their galaxies. The extra gravitational glue provided by dark matter is the conventional explanation for why these galaxies stay together. Today, our most cherished models of galaxy formation and cosmology rely entirely on the presence of dark matter, even though the substance has never been detected directly.
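    The tension is easy to state quantitatively: if only the visible mass supplied the gravity, the circular speed well outside most of that mass should fall off roughly as the square root of GM/r, whereas measured rotation curves stay roughly flat. The sketch below (Python) makes the comparison for an assumed visible mass of 6 × 10¹⁰ solar masses, treated as a point mass, and a typical flat rotation speed of 200 km/s; these numbers are illustrative, not taken from the study.

# Newtonian circular speed from the visible mass alone, compared with a flat rotation curve.
# The mass and radii below are illustrative assumptions, not data from the study.
from math import sqrt

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30           # solar mass, kg
KPC = 3.086e19             # one kiloparsec, m

M_visible = 6e10 * M_SUN   # assumed visible (stars + gas) mass of a spiral galaxy
v_observed_kms = 200       # typical flat outer rotation speed, km/s

for r_kpc in (5, 10, 20, 40):
    r = r_kpc * KPC
    # Speed if only the visible mass acted, in the simple point-mass approximation
    v_newton_kms = sqrt(G * M_visible / r) / 1e3
    print(f"r = {r_kpc:2d} kpc: Newtonian v ~ {v_newton_kms:3.0f} km/s, observed ~ {v_observed_kms} km/s")

    Beyond about 10 kpc the Newtonian speed from the visible matter alone keeps dropping, while real galaxies keep rotating at roughly the same speed; dark matter is the conventional way of closing that gap.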

    These new findings, from Stacy McGaugh and Federico Lelli of Case Western Reserve University, and James Schombert of the University of Oregon, threaten to shake things up. They measured the gravitational acceleration of stars in 153 galaxies with varying sizes, rotations and brightness, and found that the measured accelerations can be expressed as a relatively simple function of the visible matter within the galaxies. Such a correlation does not emerge from conventional dark-matter models.
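    The correlation can be summarized by a single-parameter fitting function relating the observed acceleration, g_obs, to the acceleration expected from the visible (baryonic) matter alone, g_bar, with a characteristic acceleration scale of about 1.2 × 10⁻¹⁰ m/s². The sketch below (Python) evaluates a function of the form reported by the authors; it is meant to illustrate the shape of the relation, not to reproduce their fit.

# The radial-acceleration relation: observed acceleration g_obs versus the acceleration
# g_bar expected from visible (baryonic) matter alone. The fitting function below has the
# form reported by McGaugh, Lelli and Schombert, with acceleration scale ~1.2e-10 m/s^2.
from math import exp, sqrt

G_DAGGER = 1.2e-10   # m/s^2, characteristic acceleration scale of the fit

def g_observed(g_bar):
    # Observed acceleration predicted from the visible-matter acceleration g_bar
    return g_bar / (1.0 - exp(-sqrt(g_bar / G_DAGGER)))

for g_bar in (1e-12, 1e-11, 1e-10, 1e-9):
    g_obs = g_observed(g_bar)
    print(f"g_bar = {g_bar:.0e} m/s^2  ->  g_obs ~ {g_obs:.2e} m/s^2  (ratio {g_obs / g_bar:.1f})")

    At high accelerations the observed value simply tracks the visible-matter prediction, while at low accelerations it is several times larger – exactly the regime in which dark matter is normally invoked.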

    Mass and light

    This correlation relies strongly on the calculation of the mass-to-light ratio of the galaxies, from which the distribution of their visible mass and gravity is then determined. McGaugh attempted this measurement in 2002 using visible light data. However, these results were skewed by hot, massive stars that are millions of times more luminous than the Sun. This latest study is based on near-infrared data from the Spitzer Space Telescope.

    NASA/Spitzer Telescope

    Since near-infrared light is emitted by the more common low-mass stars and red giants, it is a more accurate tracer for the overall stellar mass of a galaxy. Meanwhile, the mass of neutral hydrogen gas in the galaxies was provided by 21 cm radio-wavelength observations.
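    Converting those observations into a mass budget involves an assumed stellar mass-to-light ratio at near-infrared wavelengths, plus a correction to the 21 cm hydrogen mass to account for helium. The sketch below (Python) shows the arithmetic; the values of roughly 0.5 solar masses per solar luminosity at 3.6 µm and a helium factor of 1.33 are typical of such analyses and are assumptions here, not numbers quoted in the article.

# Sketch of how a galaxy's visible (baryonic) mass budget is assembled from observations.
# The mass-to-light ratio and helium correction are typical assumed values, not numbers
# quoted in the article.

UPSILON_STAR = 0.5    # assumed stellar mass-to-light ratio at 3.6 um, solar masses per solar luminosity
HELIUM_FACTOR = 1.33  # scales the atomic-hydrogen mass up to the total gas mass (hydrogen + helium)

def baryonic_mass_msun(L_3p6um_lsun, M_HI_msun):
    # Stellar mass from the near-infrared luminosity plus gas mass from the 21 cm HI measurement
    return UPSILON_STAR * L_3p6um_lsun + HELIUM_FACTOR * M_HI_msun

# Hypothetical galaxy: 5e10 solar luminosities at 3.6 um and 1e10 solar masses of HI gas
print(f"Baryonic mass ~ {baryonic_mass_msun(5e10, 1e10):.1e} solar masses")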

    McGaugh told physicsworld.com that the team was “amazed by what we saw when Federico Lelli plotted the data.”

    The result is confounding because galaxies are supposedly ensconced within dense haloes of dark matter. If that unseen matter dominated the gravitational pull on the stars, there would be no obvious reason for their accelerations to be set so tightly by the visible matter alone.

    Spherical halo of dark matter. cerncourier.com

    Furthermore, the team found a systematic deviation from Newtonian predictions, implying that some other force is at work beyond simple Newtonian gravity.

    “It’s an impressive demonstration of something, but I don’t know what that something is,” admits James Binney, a theoretical physicist at the University of Oxford, who was not involved in the study.

    This systematic deviation from Newtonian mechanics was predicted more than 30 years ago by an alternative theory of gravity known as modified Newtonian dynamics (MOND). According to MOND’s inventor, Mordehai Milgrom of the Weizmann Institute in Israel, dark matter does not exist; instead, its apparent effects arise because Newton’s laws of gravity are modified at the extremely low accelerations found in the outskirts of galaxies.

    “This was predicted in the very first MOND paper of 1983,” says Milgrom. “The MOND prediction is exactly what McGaugh has found, to a tee.”
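    The connection lies in the low-acceleration limit. When the Newtonian acceleration from visible matter falls far below the scale a0 ≈ 1.2 × 10⁻¹⁰ m/s², MOND predicts a true acceleration of roughly the square root of (g_N × a0), which yields flat rotation curves, and the fitting function sketched above reduces to the same expression in that limit. A quick numerical check (Python, using the same assumed acceleration scale as before) is shown below.

# Deep-MOND limit check: at accelerations far below a0, MOND predicts g ~ sqrt(g_N * a0),
# which is also the low-acceleration limit of the fitting function sketched earlier.
# a0 is set equal to the acceleration scale used above (an assumption for illustration).
from math import exp, sqrt

A0 = 1.2e-10   # m/s^2

def g_mond_deep(g_newton):
    # Deep-MOND (low-acceleration) prediction for the true acceleration
    return sqrt(g_newton * A0)

def g_fit(g_bar):
    # Same fitting-function form as in the earlier sketch
    return g_bar / (1.0 - exp(-sqrt(g_bar / A0)))

for g in (1e-13, 1e-12):   # accelerations well below a0
    print(f"g_N = {g:.0e} m/s^2: deep-MOND {g_mond_deep(g):.2e}, fitting function {g_fit(g):.2e}")

    At accelerations a hundred times smaller than a0, the two expressions agree to within a few per cent.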

    However, Milgrom is unhappy that McGaugh hasn’t outright attributed his results to MOND, and suggests that there’s nothing intrinsically new in this latest study. “The data here are much better, which is very important, but this is really the only conceptual novelty in the paper,” says Milgrom.

    No tweaking required

    McGaugh disagrees with Milgrom’s assessment, saying that previous results had incorporated assumptions that tweaked the data to give the desired result for MOND, whereas this time the mass-to-light ratio is accurate enough that no tweaking is required.

    Furthermore, McGaugh says he is “trying to be open-minded”, by pointing out that exotic forms of dark matter like superfluid dark matter or even complex galactic dynamics could be consistent with the data. However, he also feels that there is implicit bias against MOND among members of the astronomical community.

    “I have experienced time and again people dismissing the data because they think MOND is wrong, so I am very consciously drawing a red line between the theory and the data.”

    Much of our current understanding of cosmology relies on cold dark matter, so could the result threaten our models of galaxy formation and large-scale structure in the universe? McGaugh thinks it could, but not everyone agrees.

    Way too complex

    Binney points out that dark-matter simulations struggle on the scale of individual galaxies because “the physics of galaxy formation is way too complex to compute properly”. The implication is that it is currently impossible to say whether dark matter can explain these results or not. “It’s unfortunately beyond the powers of humankind at the moment to know,” he says.

    That leaves the battle between dark matter and alternative models of gravity at an impasse. However, Binney points out that dark matter has an advantage because it can also be studied through observations of galaxy mergers and collisions between galaxy clusters. There are also many experiments currently searching for evidence of dark-matter particles.

    McGaugh’s next step is to extend the study to elliptical and dwarf spheroidal galaxies, as well as to galaxies at greater distances from the Milky Way.

    The research is to be published in Physical Review Letters and a preprint is available on arXiv.


     