Tagged: PhysicsWorld

  • richardmitnick 12:03 pm on August 14, 2015 Permalink | Reply
    Tags: Borexino detector, PhysicsWorld

    From IOP PhysicsWorld: “Physicists isolate neutrinos from Earth’s mantle for first time” 

    physicsworld.com

    Aug 14, 2015
    Hamish Johnston

    Seeing the light: some of Borexino’s many light detectors

    The first confirmed sightings of antineutrinos produced by radioactive decay in the Earth’s mantle have been made by researchers at the Borexino detector in Italy. While such “geoneutrinos” have been detected before, it is the first time that physicists can say with confidence that about half of the geoneutrinos they measured came from the Earth’s mantle, with the rest coming from the crust. The Borexino team has also been able to make a new calculation of how much heat is produced in the Earth by radioactive decay, finding it to be greater than previously thought. The researchers say that in the future, the experiment should be able to measure the quantities of radioactive elements in the mantle as well.

    According to the bulk silicate Earth (BSE) model, most of the radioactive uranium, thorium and potassium in our planet’s interior lies in the crust and mantle. Accounting for about 84% of our planet’s total volume, the mantle is the large rocky layer sandwiched between the crust and the Earth’s core. Heat flows from the interior of the Earth into space at a rate of about 47 TW, but one of the big mysteries of geophysics is how much of this heat is left over from when the Earth formed, and how much comes from the radioactive decay chains of uranium-238, thorium-232 and potassium-40.

    Peering deep underground

    One way to settle the question is to measure the antineutrinos produced by these decay chains. These tiny particles travel easily through the Earth, which means that detectors located near the surface could give geophysicists a way of measuring the abundance of radioactive elements deep within the Earth – and thus the heat produced deep underground.

    Back in 2005, physicists working on the KamLAND neutrino detector in Japan announced that they had detected 22 geoneutrinos, while Borexino, which has been running since 2007, reported in 2010 that it had seen 10 such particles.

    KamLAND

    Both detectors have since spotted more geoneutrinos and, taken together, their measurements suggest that about one half of the heat flowing out of the Earth is generated by radioactive decay, although there is large uncertainty in this value.

    Italian adventure

    The Borexino detector is made up of 300 tonnes of an organic liquid, and is located deep beneath a mountain at Italy’s Gran Sasso National Laboratory to shield the experiment from unwanted cosmic rays that would otherwise drown out the neutrino signal.

    Gran Sasso National Laboratory (Laboratori Nazionali del Gran Sasso)

    Antineutrinos are detected when they interact with protons in the liquid (inverse beta decay), each such event creating a characteristic flash of light. In the latest work, Borexino physicists have analysed a total of 77 detector events, with the team calculating – using data from the International Atomic Energy Agency – that about 53 of these antineutrinos were produced by nuclear reactors.

    The remaining 24 geoneutrinos could have come from either the Earth’s crust or its mantle. However, scientists have a pretty good idea of how much uranium and thorium are in the crust, allowing the Borexino physicists to say that half of these geoneutrinos were produced in the mantle and the other half in the crust. Furthermore, the physicists can say with 98% confidence that they have detected mantle neutrinos – a much greater level of confidence than achieved in previous studies.

    The team also calculated the heat generated by radioactive decay in the Earth and found it to be in the 23–36 TW range. This is larger than estimates based on assumptions about the amount of radioactive elements in the Earth, which are in the 12–30 TW range, and also larger than an estimate based on previous antineutrino measurements.
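
    The arithmetic behind the last two paragraphs is simple enough to check directly. Below is a minimal Python sketch reproducing the event budget and the implied radiogenic share of the 47 TW surface heat flow; the numbers are those quoted above, and treating the ranges this way is an illustration, not the collaboration’s statistical analysis.

```python
# Event budget and radiogenic heat fraction, using the figures quoted above.
total_events = 77        # candidate antineutrino events at Borexino
reactor_events = 53      # attributed to nuclear reactors (IAEA data)
geo_events = total_events - reactor_events    # 24 geoneutrinos
mantle_events = geo_events / 2                # ~half from the mantle
crust_events = geo_events - mantle_events     # ~half from the crust

surface_heat_tw = 47.0                        # total heat flow out of the Earth
radiogenic_low_tw, radiogenic_high_tw = 23.0, 36.0  # Borexino's inferred range

print(f"geoneutrinos: {geo_events} (mantle ~{mantle_events:.0f}, crust ~{crust_events:.0f})")
print(f"radiogenic share of surface heat: "
      f"{radiogenic_low_tw / surface_heat_tw:.0%}-{radiogenic_high_tw / surface_heat_tw:.0%}")
```

    Run as-is, this gives 24 geoneutrinos and a radiogenic share of roughly 49–77%, consistent with the earlier statement that about half of the heat flowing out of the Earth comes from radioactive decay.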

    The Borexino team also tried to work out what proportion of the geoneutrinos came from the uranium decay chain and what proportion from the thorium chain. Potassium decays were not considered because they are not expected to make a significant contribution to the numbers detected. The data suggest that the currently accepted ratio of thorium to uranium in the Earth is correct, but that the uncertainty in the Borexino values is very large. More data, the Borexino physicists say, should let them make more precise measurements of the contributions of uranium and thorium to the heating of the Earth.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 9:15 am on April 17, 2015 Permalink | Reply
    Tags: PhysicsWorld

    From physicsworld: “How to efficiently capture carbon dioxide out of thin air” 

    physicsworld.com

    Apr 16, 2015
    Tamela Maciel

    Captive gas: prototype carbon-collection system

    A novel synthetic material that is a thousand times more efficient than trees at capturing carbon dioxide from the atmosphere was presented by Klaus Lackner, director of Arizona State University’s new Center for Negative Carbon Emissions, at a meeting of the American Physical Society in Maryland last Sunday. According to Lackner, the amount of carbon dioxide in the atmosphere has reached the point where simply reducing emissions will not be enough to tackle climate change. Referring to recent environmental reports, Lackner emphasized the need for prolonged periods of carbon capture and storage – also known as “negative carbon emission”.

    Trees and other biological matter are natural sinks of carbon dioxide, but they do not trap it permanently and the amount of land required is prohibitive. “There is no practical solution that doesn’t include large periods of negative emission,” says Lackner, adding that “we need means that are faster than just growing a tree.” During the past few years, Lackner and his colleagues have developed a synthetic membrane that can capture carbon dioxide from the air passing through it. The membrane consists of an “ion-exchange” resin – positively charged sites in the resin attract carbon dioxide, with a maximum load of one carbon-dioxide molecule for every positive charge. The process is moisture-sensitive: the resin absorbs carbon dioxide in dry air and releases it again in humid air. As a result, the material works best in warm, dry climates.

    Show and tell

    Lackner plans to install corrugated collecting panels incorporating the membrane material on the roof of the Center for Negative Carbon Emissions this summer. The researchers hope that this public installation will demonstrate the economic feasibility and efficiency of a new technology that can address the issue of climate change, and help shift the debate from reduced carbon emissions to negative carbon emissions.

    To keep costs low, the first step – capturing the carbon from the air – consumes no energy. “We made it cheap by being passive. We can’t afford to be blowing air around,” says Lackner. The resin itself is readily available and can be mass-produced, because it is already widely used to soften and purify water. The collectors trap between 10 and 50% of the total carbon dioxide that passes through them. Compared with a typical tree over the course of its lifetime, the panels capture carbon dioxide a thousand times more efficiently.

    2
    Able membrane: panels of carbon-capture resin

    “I believe we have reached a point where it is really paramount for substantive public research and development of direct air capture,” says Lackner. “The Center for Negative Carbon Emissions cannot do it alone.”

    Post trappings

    Lackner estimates that about a hundred-million shipping-container-sized collectors would be needed to deal with the world’s current level of carbon emissions. As these collectors would typically become saturated within an hour, Lackner envisions a possible “ski-lift” approach where saturated panels are taken away to a humid environment to release their carbon dioxide and then recycled back to the dry air for more carbon capture.
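
    To get a feel for the scale, the sketch below spreads an assumed global emission rate across Lackner’s hundred million collectors. The 36 Gt/yr figure for world carbon-dioxide emissions is an outside assumption (roughly the mid-2010s value), not a number from the article.

```python
# Rough scale check: how much must each collector capture?
GLOBAL_EMISSIONS_T_PER_YR = 36e9  # assumed ~36 Gt CO2 per year (mid-2010s estimate)
N_COLLECTORS = 100e6              # Lackner's "hundred million" container-sized units

per_collector_t_yr = GLOBAL_EMISSIONS_T_PER_YR / N_COLLECTORS
print(f"{per_collector_t_yr:.0f} t CO2 per collector per year "
      f"(~{per_collector_t_yr / 365:.1f} t per day)")  # ~360 t/yr, about 1 t/day
```

    That works out at roughly a tonne of carbon dioxide per collector per day, which makes the hourly saturate-and-regenerate cycle of the proposed “ski-lift” scheme easier to picture.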

    The question also remains of what to do with the carbon dioxide once it is trapped. Burying it is one option, which is something Lackner says is likely, given the sheer quantity of carbon that must be captured. His centre is also testing ways to recycle the carbon dioxide and sell it to industries that could use it to make products such as fire extinguishers, fizzy drinks and carbon-dioxide-enhanced greenhouses, and even synthetic fuel oil.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 7:24 am on February 17, 2015 Permalink | Reply
    Tags: PhysicsWorld

    From physicsworld: “Smaller fusion reactors could deliver big gains” 

    physicsworld.com

    Feb 16, 2015
    Michael Banks

    Hot topic: size may not be everything in tokamak design

    Researchers from the UK firm Tokamak Energy say that future fusion reactors could be made much smaller than previously envisaged – yet still deliver the same energy output. That claim is based on calculations showing that the fusion power gain – a measure of the ratio of the power from a fusion reactor to the power required to maintain the plasma in steady state – does not depend strongly on the size of the reactor. The company’s finding goes against conventional thinking, which says that a large power output is only possible by building bigger fusion reactors.

    The largest fusion reactor currently under construction is the €16bn ITER facility in Cadarache, France.

    ITER Tokamak

    This will weigh about 23,000 tonnes when completed in the coming decade and consist of a deuterium–tritium plasma held in a 60 m-tall, doughnut-shaped “tokamak”. ITER aims to produce a fusion power gain (Q) of 10, meaning that, in theory, the reactor will emit 10 times the power it expends by producing 500 MW from 50 MW of input power. While ITER has a “major” plasma radius of 6.21 m, it is thought that an actual future fusion power plant delivering power to the grid would need a 9 m radius to generate 1 GW.
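
    In symbols, the power gain quoted here is simply the ratio of fusion output power to the heating power supplied; the worked number below uses only values given in the article.

```latex
Q = \frac{P_{\mathrm{fusion}}}{P_{\mathrm{heating}}},
\qquad
Q_{\mathrm{ITER}} = \frac{500~\mathrm{MW}}{50~\mathrm{MW}} = 10 .
```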

    Low power brings high performance

    The new study, led by Alan Costley from Tokamak Energy, which builds compact tokamaks, shows that smaller, lower-power, and therefore lower-cost reactors could still deliver a value of Q similar to ITER. The work focused on a key parameter in determining plasma performance called the plasma “beta”, which is the ratio of the plasma pressure to the magnetic pressure. By using scaling expressions consistent with existing experiments, the researchers show that the power needed for high fusion performance can be three or four times lower than previously thought.
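
    The plasma beta at the heart of the study has a standard definition, comparing the plasma’s thermal pressure with the confining magnetic pressure (here p is the plasma pressure, B the magnetic field strength and \mu_0 the permeability of free space):

```latex
\beta = \frac{p}{B^{2}/2\mu_{0}} .
```

    Writing the performance scalings in terms of beta rather than machine size is what leads to the conclusion that the power gain need not depend strongly on how big the reactor is.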

    Combined with the finding on the size-dependence of Q, these results imply the possibility of building lower-power, smaller and cheaper pilot plants and reactors. “The consequence of beta-independent scaling is that tokamaks could be much smaller, but still have a high power gain,” David Kingham, Tokamak Energy chief executive, told Physics World.

    The researchers propose that a reactor with a radius of just 1.35 m would be able to generate 180 MW, with a Q of 5. This would result in a reactor just 1/20th of the size of ITER. “Although there are still engineering challenges to overcome, this result is underpinned by good science,” says Kingham. “We hope that this work will attract further investment in fusion energy.”

    Many challenges remain

    Howard Wilson, director of the York Plasma Institute at the University of York in the UK, points out, however, that the result relies on being able to achieve a very high magnetic field. “We have long been aware that a high magnetic field enables compact fusion devices – the breakthrough would be in discovering how to create such high magnetic fields in the tokamak,” he says. “A compact fusion device may indeed be possible, provided one can achieve high confinement of the fuel, demonstrate efficient current drive in the plasma, exhaust the heat and particles effectively without damaging material surfaces, and create the necessary high magnetic fields.”

    The work by Tokamak Energy follows an announcement late last year that the US firm Lockheed Martin plans to build a “truck-sized” compact fusion reactor by 2019 that would be capable of delivering 100 MW. However, the latest results from Tokamak Energy might not be such bad news for ITER. Kingham adds that his firm’s work means that, in principle, ITER is actually being built much larger than necessary – and so should outperform its Q target of 10.

    The research is published in Nuclear Fusion.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 2:23 pm on February 14, 2015 Permalink | Reply
    Tags: Laser cooling, PhysicsWorld

    From physicsworld: “Physicists reveal new way of cooling large objects with light” 

    physicsworld.com

    Feb 13, 2015
    Hamish Johnston

    Cold light: Dispersive and dissipative coupling join forces

    A new technique for cooling a macroscopic object with laser light has been demonstrated by a team of physicists in Germany and Russia. Making clever use of the noise in an optical cavity, which normally heats an object up, the technique could lead to the development of “stable optical springs” that would boost the sensitivity of gravitational-wave detectors. It could also be used to create large quantum-mechanical oscillators for studying the quantum properties of macroscopic objects or to create components for quantum computers.

    Physicists already have ways of cooling tiny mirrors by placing them in an optical cavity containing laser light. When the mirror is warm, it vibrates – creating a series of “sidebands” that resonate with light at certain frequencies. The first lower sideband has a frequency equal to the difference between the resonant frequency of the cavity and the vibrational frequency of the mirror. So when a photon at that frequency enters the cavity, it can be absorbed and re-emitted with an extra quantum of vibrational energy. As a result of this “dispersive coupling” process, the mirror cools because energy is removed from it.

    Dispersive coupling works best when the bandwidth of the cavity is much smaller than the vibrational frequency of the mirror. This is possible for relatively small mirrors with vibrational frequencies in the hundreds of megahertz. However, for more massive mirrors with vibrational frequencies in the hundreds of kilohertz, optical cavities with sufficiently narrow bandwidths are simply not available.
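
    Put symbolically – a standard summary of sideband cooling, with notation not used in the article – the laser is tuned to the lower sideband, and efficient dispersive cooling requires the “resolved-sideband” condition that the cavity bandwidth \kappa be small compared with the mirror’s vibrational frequency \Omega_m (with \omega_{\mathrm{cav}} the cavity resonance):

```latex
\omega_{\mathrm{laser}} = \omega_{\mathrm{cav}} - \Omega_{m},
\qquad
\kappa \ll \Omega_{m} \quad \text{(resolved-sideband condition)} .
```

    For mirrors vibrating at hundreds of kilohertz rather than hundreds of megahertz, it is this second condition that available cavities cannot satisfy.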

    Cooling with noise

    In this latest work, a large object was cooled using a new technique that involves “dissipative coupling” as well as dispersive coupling. Dissipative coupling was first proposed in 2009 by Florian Elste and Aashish Clerk of McGill University and Steven Girvin at Yale University. It makes clever use of quantum “shot noise” in laser light, which would normally be absorbed by the mirror and cause it to heat up.

    However, if the mirror is in an optical cavity and its motion couples to the mirror’s reflectivity in just the right way, then there are two ways that the noise can reach the mirror: it can travel directly from the laser to the mirror or it can bounce around the cavity before driving the mirror. Just as in an interferometer, noise taking these two paths can interfere destructively or constructively.

    Clerk and colleagues realized that the system can be set up so that destructive interference stops this quantum noise from heating the mirror but does not prevent the mirror from losing energy to the noise. The net effect is a strong cooling of the mirror’s motion, which could in principle take the mirror to its quantum ground state. “Unlike standard cavity cooling schemes, this interference doesn’t rely on having a very large mechanical frequency,” explains Clerk – meaning that it can be used to cool large mirrors that have low vibrational frequencies.

    Couplings working together

    In the latest work, Roman Schnabel and colleagues at the Max Planck Institute for Gravitational Physics in Hannover, Moscow State University and the Leibniz University of Hannover have now shown that dissipative and dispersive coupling can work together to cool relatively large mirrors. Based on an idea first proposed by the researchers in 2013, the technique uses a cavity created by a Michelson–Sagnac interferometer (see figure).

    What they have done is to fire laser light at a beamsplitter to create two beams that go off at right angles to each other. These beams then bounce off two mirrors, making their paths form a right-angled triangle. Light from the output port of the interferometer is sent to a “signal-recycling mirror”, or SRM, where some of the light is reflected back into the interferometer and some is sent to a detector. The optical cavity is fine-tuned by adjusting the position of the SRM, while the cavity properties are monitored using a frequency analyser connected to the detector.

    The object to be cooled is a silicon-nitride mirror just 40 nm thick, which is placed at the centre of the cavity. The mirror is about 1.2 mm across, weighs 80 ng and has a fundamental vibrational frequency of 136 kHz. The vibrational motion of the mirror changes not only the resonant frequency of the cavity – leading to the emergence of sidebands and dispersive cooling – but also the bandwidth of the cavity. When the rate of change of the bandwidth is large, energy can be exchanged between the cavity and the mirror. By carefully adjusting the phase between the vibrating mirror and the light, energy flows only from the mirror to the cavity, thereby cooling the mirror.

    Sub-kelvin cooling

    The researchers monitored the temperature of the mirror by using the laser light to measure its motion. They found that by using a combination of dispersive and dissipative cooling, they could cool the mirror from room temperature to 126 mK. Commenting on the experiment, Clerk told physicsworld.com that “Schnabel’s is the first experimental system where you have the special kind of dissipative optomechanical coupling that can let you do something truly new”.
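
    How far is 126 mK from the quantum ground state of a 136 kHz oscillator? A quick estimate of the mean thermal phonon number – an illustration using the figures quoted above, not a calculation from the paper – shows there is still a long way to go.

```python
# Mean thermal phonon occupancy n = 1/(exp(hbar*omega/(kB*T)) - 1) for the cooled mirror.
import math

hbar = 1.054571817e-34        # reduced Planck constant, J s
kB = 1.380649e-23             # Boltzmann constant, J/K
omega = 2 * math.pi * 136e3   # fundamental mode at 136 kHz, rad/s
T = 0.126                     # achieved temperature, 126 mK

n = 1.0 / math.expm1(hbar * omega / (kB * T))
print(f"mean phonon number ~ {n:.1e}")  # ~1.9e4 phonons
```

    Ground-state operation (a phonon number below one) would require cooling several orders of magnitude further, which is why the quantum applications below are described as future prospects.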

    One possible application of the technique is to use it to cool relatively large objects to their quantum ground states of vibration. Such quantum oscillators would comprise billions of atoms and could be used as “Schrödinger’s cats” to study quantum mechanics on a macroscopic scale. Other applications include using such quantum oscillators as components in quantum computers and other quantum-information systems.

    Stabilizing an optical spring

    However, it is not the cooling power of the technique that most interests Schnabel and colleagues. Schnabel told physicsworld.com that the demonstration is a proof-of-principle of their model of how light interacts with an oscillating mirror within a gravitational-wave detector. Their goal is to create a “stable optical spring” whereby a mirror in a huge interferometer undergoes a stable oscillation when laser light is shone on it. A gravitational wave travelling through the mirror would cause a tiny disruption in the oscillation, which would be detected by the interferometer. The problem is that noise in the system heats the mirror and causes it to vibrate erratically. This makes the measurement extremely difficult in existing set-ups.

    “Our goal is to avoid uncontrolled heating of the mirror,” explains Schnabel, who says that the team will now use the model to try to create a stable optical spring using a 100 g pendulum as a mirror in a small interferometer. The ultimate goal of the research is to use a mirror of about 40 kg in gravitational-wave detectors of the future.

    The research is reported in Physical Review Letters.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 8:01 am on February 12, 2015 Permalink | Reply
    Tags: PhysicsWorld

    From physicsworld: “Geophysicists blast their way to the bottom of tectonic plates” 

    physicsworld.com

    Feb 11, 2015
    Ian Randall

    A thin, low-viscosity layer at the base of a tectonic plate has been imaged at a depth of 100 km beneath North Island, New Zealand, by an international team of researchers. The high-resolution seismic imaging, which used underground dynamite explosions as a source and has revealed previously unknown information about what happens underneath tectonic plates, may help to explain how tectonic plates are able to slide.

    While the movement of tectonic plates across the Earth’s surface has long been studied, some underlying features of the process remain poorly understood, and an improved understanding of the lithosphere–asthenosphere boundary – the base of a plate – is essential. Previous approaches to studying these depths have relied on recording seismic waves from distant earthquakes that are reflected from the boundary up to the surface. As the waves travel, they split into longitudinal and transverse waves, which travel at different speeds: the relative arrival times reveal the depth of reflective boundaries, while the shape of the converted waveforms offers information on each boundary’s sharpness. With a wavelength of 10–40 km, however, such waves offer only low-resolution imaging.

    Slipping plates

    Sending down seismographs: deploying the portable seismographs that will "image" the tectonic plate

    “The idea that the Earth’s surface consists of a mosaic of moving plates is a well-established scientific paradigm, but it had never been clear what actually moves the plates around,” says Tim Stern, a geophysicist at Victoria University of Wellington. To obtain a clearer picture, Stern and colleagues used man-made, higher-frequency seismic waves, with a wavelength of around 0.5 km, generated by setting off explosions at the bottom of boreholes 50 m deep. A similar technique is commonly used for exploration in the petroleum industry.
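
    The resolution figures follow from the basic relation between wavelength, wave speed and frequency. As an illustration (the 6 km/s wave speed is an assumed typical crustal P-wave value, not a number from the article):

```latex
\lambda = \frac{v}{f},
\qquad
\lambda \approx \frac{6~\mathrm{km/s}}{12~\mathrm{Hz}} = 0.5~\mathrm{km} .
```

    Borehole explosions rich in energy around 10 Hz can therefore resolve structure more than an order of magnitude finer than teleseismic waves with wavelengths of 10–40 km.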

    To measure the reflected waves, the team laid out 877 portable seismographs along an 85 km line at the southern end of New Zealand’s North Island, in a region where the 120 million-year-old Pacific Plate and Hikurangi Plateau are subducting under continental New Zealand. This region was chosen because it meets a combination of useful criteria – an oceanic plate, with a shallow-enough dip to be seismically imaged, under a continental landmass on which sizable dynamite explosions can be detonated.

    Boundary reflections

    Originally, the researchers had only set out to image the boundary between the subducting Pacific plate and the Australian plate that lies over it. Their aim was to learn more about the plate interface, which lies at a depth of 15–30 km, and the potential risk it poses to the nearby Wellington region. “The big surprise was getting the coherent reflections from the lithosphere–asthenosphere boundary in the first place,” says Stern. “We just happened to run long records after each explosion, and were surprised to see these much deeper (~100 km deep) reflections emerge.”

    At this depth, the researchers’ analysis revealed not only a sharp boundary (less than 1 km thick) between the plate and the underlying mantle – contrasting with previous models of a gradual thermal transition – but also a 10 km-thick sheared channel underneath. From the decrease in seismic velocity, the team inferred it to be a low-viscosity layer, possibly rock with a small percentage of melt or water content, pooled by plate motion through a process referred to as "strain localization".

    Push or pull?

    With similar layers having also been proposed elsewhere – including a thicker channel beneath a younger section of the Pacific plate and a possible channel at the base of a continental plate off the Norwegian coast – the researchers suggest that such channels could be a universal feature of the lithosphere–asthenosphere boundary. Such a finding would help support the proposed "slab-pull mechanism" of plate tectonic movement – allowing the plates to glide with little resistance over the asthenosphere with subduction driven by their own weight. The layer would also serve to decouple the plates from the underlying mantle, making the convection-driven theory of plate tectonics – wherein the driving force is thought to be large-scale convection currents in the upper mantle that are transmitted through the asthenosphere – less probable.

    "The results are striking. Changes in seismic velocity have to occur more rapidly (over distances less than 1 km) than previously suggested in order to generate the reflection," says Stewart Fishwick, a geophysicist at the University of Leicester in the UK who was not involved in this study. He adds that "further interpretations suggest a very thin (less than 10 km) low-viscosity channel, which has implications for the dynamics of the mantle".

    The researchers are now looking at the possibility of reproducing their study perpendicular to the current line, along the strike of the eastern North Island. In addition, the team will also be exploring how such a low-viscosity channel might form.

    The research is described in Nature.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 3:56 pm on January 21, 2015 Permalink | Reply
    Tags: PhysicsWorld

    From physicsworld: “Cellular model of tissue growth could shed light on metastasis” 

    physicsworld.com

    Jan 21, 2015
    Tim Wogan

    Moving out: why do cancer cells spread throughout the body?

    A simple yet potentially very useful model of how living cells interact to create tissue has been created by Anatolij Gelimson and Ramin Golestanian of the University of Oxford in the UK. The simulation considers how individual cells in a colony are simultaneously drawn together by chemical signalling and driven apart by cell division and death. The research suggests that below a certain rate of division and death, the colony tends towards a compact and tissue-like steady state. Above this optimum rate, however, the cells spread increasingly far apart. Although the researchers stress that the model is highly schematic, they hope it could one day provide insights into what causes cancer cells to spread around the body.

    As they grow, living cells communicate with each other by excreting chemicals that attract or repel neighbouring cells – a process called chemotaxis. While some bacteria and other single-celled organisms can respond to these chemicals by swimming, the cells that make up the tissues in our bodies move by complex mechanisms that are not well understood. The rate at which the cells move depends on the concentration of the chemical signal in the surrounding environment. Cell movement is also affected by the local density of cells, with cells moving from regions containing lots of cells – such as the bulk of a tissue – to regions with fewer cells, such as the surface of a tissue. This means that the faster the cells divide and die, the faster the cells will expand outwards from the surface of the tissue.

    Opposing attraction

    Gelimson and Golestanian set out to understand how these two phenomena work together to control the density of tissue. They considered cases in which chemotaxis is attractive – that is, it draws cells together. In this scenario, the amount of chemical attractant increases as the density of cells increases, and this encourages cells to stick together. This also counteracts the tendency of cell division and death to push the cells apart. By calculating the equation of motion for a particular cell, the researchers found that when the rate of cell division and death is below a certain threshold value, these opposing tendencies keep each other in check, and the density of the tissue remains constant as the tissue grows. This is analogous to the situation seen in healthy organs, the researchers suggest, and also in benign tumours.
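
    One schematic way to write down this competition – a sketch in the spirit of the model, not the authors’ exact equations – is a Keller–Segel-type system for the cell density \rho and the chemoattractant concentration c, with a net division-and-death term of rate \alpha:

```latex
\partial_{t}\rho = D_{\rho}\nabla^{2}\rho
  - \nabla \cdot \left( \chi\,\rho\,\nabla c \right) + \alpha\,\rho ,
\qquad
\partial_{t}c = D_{c}\nabla^{2}c + \beta\,\rho - \gamma\,c .
```

    Diffusion and the growth term \alpha\rho push cells apart, while the chemotactic drift term pulls them together; a compact, tissue-like state survives only while \alpha stays below a threshold set by the strength \chi of the attraction.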

    Above a specific rate of cell division and death, however, the cells move away from each other so rapidly that chemotaxis cannot draw the colony back together. In this regime, the area occupied by the tissue diverges, causing the cells to spread all over the body. The researchers suggest that this discrete cut-off between a stable tissue and the cells spreading apart might provide insight into how a previously benign tumour can suddenly become metastatic and spread throughout the body.

    Unexpected competition

    The nature of the interplay between density and chemotaxis surprised Gelimson and Golestanian. “Population density is a local effect, whereas chemotaxis is non-local – cells send and receive these signals as chemicals that will travel all the way across the system,” says Golestanian. “But in our equations they appeared in very similar ways and thus they could actually compete with each other, which was not something we expected.”

    The biophysicist Herbert Levine of Rice University in the US told physicsworld.com that “There is a new idea here, and there is a methodology to try to figure out what that new idea might lead to.” However, he adds that “The open issue is: is there really a match between these assumptions and some actual experimental realization right now? That’s less clear to me.” In particular, he is sceptical about any direct connection with cancer: cancer cells often stick to each other through interactions between their protein coats, he says, and cancers whose cells stick together are often more likely to spread effectively because they survive transport through the bloodstream better. “To me, that’s an important part of the problem, and I think the work needs to be extended in that direction,” he concludes.

    The research is published in Physical Review Letters.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 2:33 pm on September 12, 2014 Permalink | Reply
    Tags: PhysicsWorld

    From physicsworld: “Synchrotron X-rays track fluids in the lungs” 

    physicsworld.com

    Sep 12, 2014
    Ian Randall

    A new method of soft-tissue imaging could allow doctors to monitor respiratory treatments of cystic-fibrosis patients, reports an international research team. The technique – which measures the refraction of a grid pattern of X-rays passing through the lungs – has been successfully demonstrated in live mice, and could eventually find application in visualizing other soft tissues, such as the brain and heart.

    Synchrotron source: phase-contrast imaging in the lab

    Cystic fibrosis is a life-threatening genetic disorder that affects the exocrine glands, resulting in unusually thick secretions of mucus. In the lungs, mucus is supposed to keep the airways moist and to form a conveyor belt – moved by beating cilia – that carries away foreign particles and pathogens. In cystic-fibrosis patients, however, the thicker mucus flows less easily, resulting in a build-up that can cause inflammation, breathing difficulties and increased susceptibility to bacterial infection.

    Respiratory therapies for cystic-fibrosis patients typically focus on increasing hydration of the airways to improve mucus flow. Tracking the progress of these treatments, however, is challenging. “At the moment, we typically need to wait for a cystic-fibrosis treatment to have an effect on lung health, measured by either a lung CT scan or breath measurement, to see how effective it is,” explains lead researcher Kaye Morgan from Monash University in Australia. With successful medications often taking months to have a measurable impact, progress in developing new treatments is correspondingly slow.

    Fast yet sensitive

    The challenge lies in imaging the surface layers of liquid in the airways. These are usually only a few tens of microns across, bear a close resemblance to the underlying tissue and – given the passage of air in and out of lungs – constantly move around. Consequently, any technique for imaging this interface needs to be high-resolution, as well as sufficiently fast and sensitive.

    Sharper image: single-grid-based phase-contrast X-ray imaging reveals a liquid surface layer in the lungs of a live mouse

    Morgan and colleagues have developed an imaging method that they call single-grid-based phase-contrast X-ray imaging. Unlike conventional radiography, which measures the absorption of X-rays, the new approach measures the refraction of a grid pattern of radiation as it passes through the soft tissues.

    “A good analogy is the patterns we see on the bottom of swimming pools,” explains Morgan. At the detector, the X-ray grid will appear distorted in accordance with the properties of the tissues that the rays have passed through – much in the same way that tiles in a pool appear distorted when seen through water. “By tracking the distortions in the grid pattern, we can reconstruct the airway structures.”
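
    As a concrete picture of what tracking those distortions can involve, here is a minimal block-matching sketch: it compares a reference exposure of the undistorted grid with a sample exposure and estimates a local displacement field by maximizing the normalized cross-correlation, tile by tile. It illustrates the general idea under the simplifying assumption of integer-pixel shifts and is not the team’s actual analysis code.

```python
import numpy as np

def track_grid_shifts(ref, img, tile=16, search=4):
    """Estimate the local displacement of a grid pattern between a
    reference image `ref` (grid only) and a sample image `img` (grid
    plus object), by finding, for each tile, the integer shift within
    +/- `search` pixels that maximizes normalized cross-correlation."""
    ny, nx = ref.shape
    shifts = np.zeros((ny // tile, nx // tile, 2))
    for i in range(ny // tile):
        for j in range(nx // tile):
            y0, x0 = i * tile, j * tile
            t = ref[y0:y0 + tile, x0:x0 + tile]
            t = (t - t.mean()) / (t.std() + 1e-12)   # normalize the template
            best, best_dy, best_dx = -np.inf, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ys, xs = y0 + dy, x0 + dx
                    if ys < 0 or xs < 0 or ys + tile > ny or xs + tile > nx:
                        continue                      # skip out-of-bounds windows
                    w = img[ys:ys + tile, xs:xs + tile]
                    w = (w - w.mean()) / (w.std() + 1e-12)
                    score = (t * w).mean()            # normalized cross-correlation
                    if score > best:
                        best, best_dy, best_dx = score, dy, dx
            shifts[i, j] = (best_dy, best_dx)
    # The displacement field tracks the refraction of the grid, from which
    # phase (and hence structural) information can be reconstructed.
    return shifts
```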

    Anaesthetized mice

    To test their method, the researchers imaged the airways of eight anaesthetized mice. Using a nebulizer, each mouse was treated first with a saline control solution, and then with a treatment designed to block the dehydrating effect of the cells lining the airway. X-rays from a synchrotron travelled through a 25.4 µm grid to create the desired pattern; this produced images at the detector with 0.18 µm-sized pixels. Images were recorded at three-minute intervals for 15 minutes after each treatment.

    The method successfully imaged the airway, surface liquids and underlying tissues. A noticeable increase in the surface hydration depth was observed after treatment in comparison with the control. “The new imaging method allows us, for the first time, to non-invasively see how the treatment is working, ‘live’ on the airway surface,” Morgan says.

    “This is a novel and interesting biomedical application,” says Mark Anastasio, a biomedical engineer from Washington University in St Louis. With existing solutions unable to reveal such subtle soft-tissue interfaces, he adds, this result “motivates the further development of X-ray phase-contrast imaging technologies”.

    Practical issues

    Ke Li, a medical physicist from the University of Wisconsin-Madison, points out that making measurements on live mice “is a huge step along the course of applying phase-contrast X-ray projection imaging to medical imaging”. However, Li questions the practicality of using a synchrotron X-ray source in a clinical environment, especially given the high radiation dose necessary for such an ultra-fine pixel size.

    Some of these concerns could soon be addressed, with Morgan and colleagues now exploring how their work might translate into a clinical setting. At the same time, the team is investigating other possible medical applications, looking both at lungs and other soft tissues, such as the brain and heart.

    See the full article here.

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 2:25 pm on April 14, 2014 Permalink | Reply
    Tags: PhysicsWorld

    From PhysicsWorld.com: “Interferometry tips the scales on antimatter” 

    physicsworld.com

    Apr 7, 2014

    Tushna Commissariat

    A new technique for measuring how antimatter falls under gravity has been proposed by researchers in the US. The team says that its device – based on cooling atoms of antimatter and making them interfere – could also help to test Einstein’s equivalence principle with antihydrogen – something that could have far-reaching consequences for cosmology. Finding even the smallest of differences between the behaviour of matter and antimatter could shine a light on why there is more matter than antimatter in the universe today, as well as help us to better understand the nature of the dark universe.

    Trapping potential: The ALPHA experiment at CERN

    Up or down?

    Antihydrogen was first created at CERN in 1995, and physicists have long wondered how antimatter is affected by gravity – does it fall up or down? Most theoretical and experimental work suggests that gravity probably acts in exactly the same way on antimatter as it does on matter. The problem is that antimatter is difficult to produce and study, meaning that no direct experimental measurements of its behaviour under gravity have been made to date.

    One big step forward took place last year, when researchers at the ALPHA experiment at CERN measured how long it takes atoms of antihydrogen – made up of a positron surrounding an antiproton – to reach the edges of a magnetic trap after the trap is switched off. Although ALPHA did not find any evidence of the antihydrogen responding differently to gravity, the team was able to rule out the possibility that antimatter responds much more strongly to gravity than matter.

    Waving matter

    Such experiments are hard to carry out, however – antimatter is difficult to produce on a large scale and it annihilates when it comes into contact with regular matter, making it difficult to trap and hold. The new interferometry technique – proposed by Holger Müller and colleagues at the University of California, Berkeley, and Auburn University in Alabama – exploits the fact that a beam of antimatter atoms can, like light, be split, sent along two paths and made to interfere, with the amount of interference depending on the phase shift between the two beams. The researchers say the light-pulse atom interferometer, which they plan to install at the ALPHA experiment, could work not only with almost any type of atom or anti-atom, but also with electrons and protons.

    In the proposed interferometer, the matter waves would be split and recombined using pulses of laser light. If an atom interacts with the laser beam, it will receive a “kick” from the momentum of a pair of photons, creating the split, explains Müller. By tuning the laser to the correct pulse energy, this process can be made to happen with a probability of 50%, sending the matter waves along either of the two arms of the interferometer. When the paths join again, the probability of detecting the anti-atom depends on the amplitude of the matter wave, which becomes a function of the phase shift.

    Annihilation danger?

    Müller adds that the phase shift depends on the acceleration due to gravity (g), the momentum of the photons (and so the magnitude of the kick) and the time interval between each laser pulse. Measuring the phase shift is therefore a way of measuring g, because the momentum and the time interval are both known. The biggest advantage of the technique is that the anti-atoms will not be in danger of annihilating because they will never come close to any mechanical objects, being moved with light and magnetic fields only.
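
    For a light-pulse atom interferometer of this kind, the dependence Müller describes has a well-known closed form – quoted here as the textbook Mach–Zehnder result, not as a formula from the proposal – where k_{\mathrm{eff}} is the effective wavevector of the two-photon kick and T the time between pulses:

```latex
\Delta\phi = k_{\mathrm{eff}}\, g\, T^{2}
\quad \Longrightarrow \quad
g = \frac{\Delta\phi}{k_{\mathrm{eff}}\, T^{2}} .
```

    The quadratic gain with pulse separation T is one reason slow, trapped anti-atoms are such attractive candidates, despite the difficulty of producing them.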

    Müller’s idea is to combine two proven technologies: light-pulse atom interferometry and ALPHA’s method of producing antihydrogen using its Penning trap. He points out that the team’s proposed method does not assume the availability of a laser resonant with the Lyman-alpha transition in hydrogen, which can be very difficult to build. To make the whole experiment even more efficient, the team has also developed what Müller describes as an “atom recycling method”, which allows the researchers to work with “realistic” atom numbers. “The atom is enclosed inside magnetic fields that prevent it from going away. Thus, an atom that hasn’t been hit by the laser on our first attempt has a chance to get hit later. This way, we can use almost every single atom – a crucial feat at a production rate of one every 15 minutes,” he explains. This would let ALPHA measure the gravitational acceleration of antihydrogen to a precision of 1%.

    Precise and accurate

    The team plans to build a demonstration set-up at Berkeley, which will work with regular hydrogen, and hopes to secure funding for this soon. Müller and colleagues are now also part of the ALPHA collaboration. “The work at CERN will proceed in several steps,” he says. “The first is an up/down measurement telling [us] whether the antimatter will go up or down.” This will be followed by a measurement of per-cent-level accuracy. Müller’s long-term aim is to reach a precision of 10⁻⁶, which would be vastly superior to last year’s ALPHA measurement, with its error bar of about 10². ALPHA can currently trap and hold atoms at the rate of four each hour but, thanks to recent upgrades at its source of antiprotons – the ELENA ring – CERN could theoretically produce nearly 3000 atoms per month. In addition to ALPHA, the GBAR and AEgIS collaborations are also planning to measure gravity’s effects on antimatter.

    While Müller agrees that the gravitational behaviour of antimatter can be studied indirectly through experiments with normal matter, he maintains that a direct observation is essential – and that is what he, the ALPHA collaboration and the other teams at CERN are keen to accomplish in the near future. “No matter how sound one’s theory, there is no substitute in science for a direct observation,” he says.

    See the full article here.

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.

    IOP Institute of Physics


    ScienceSprings is powered by MAINGEAR computers

     