Tagged: COSMOS

  • richardmitnick 12:33 pm on July 27, 2017 Permalink | Reply
    Tags: Atoms in your body may come from distant galaxies, COSMOS, supercomputer simulations

    From COSMOS: “Atoms in your body may come from distant galaxies” 

    Cosmos Magazine bloc

    COSMOS Magazine

    27 July 2017

    Previously covered at https://sciencesprings.wordpress.com/2017/01/24/from-jpl-caltech-nustar-finds-new-clues-to-chameleon-supernova/, but without the science paper, “The cosmic baryon cycle and galaxy mass assembly in the FIRE simulations”, published in MNRAS.

    It seems natural to assume that the matter from which the Milky Way is made was formed within the galaxy itself, but a series of new supercomputer simulations suggests that up to half of this material could actually be derived from any number of other distant galaxies.

    From the previous report of this study:

    This visible-light image from the Sloan Digital Sky Survey shows spiral galaxy NGC 7331, center, where astronomers observed the unusual supernova SN 2014C.

    SDSS Telescope at Apache Point Observatory, NM, USA

    This phenomenon, described in a paper by a group of astrophysicists from Northwestern University in the US who refer to it as “intergalactic transfer”, is expected to open up a new line of research into the scientific understanding of galaxy formation.

    Led by Daniel Anglés-Alcázar, the astrophysicists reached this intriguing conclusion by implementing sophisticated numerical simulations which produced realistic 3D models of galaxies and followed their formation from shortly after the Big Bang to the present day.

    The researchers then employed state-of-the-art algorithms to mine this sea of data for information related to the matter acquisition patterns of galaxies.

    Through their analysis of the simulated flows of matter, Anglés-Alcázar and his colleagues found that supernova explosions eject large amounts of gas from galaxies, which causes atoms to be conveyed from one system to the next via galactic winds.

    In addition, the researchers note that this flow of material tends to move from smaller systems to larger ones and can contribute up to 50 percent of the matter in some galaxies.

    From the previous report of this study:

    In the new study, NASA’s NuSTAR (Nuclear Spectroscopic Telescope Array) satellite, with its unique ability to observe radiation in the hard X-ray energy range — the highest-energy X-rays — allowed scientists to watch how the temperature of electrons accelerated by the supernova shock changed over time. They used this measurement to estimate how fast the supernova expanded and how much material is in the external shell.

    NASA/NuSTAR

    Anglés-Alcázar and his colleagues use this evidence, which is published in Monthly Notices of the Royal Astronomical Society [See above], to suggest that the origin of matter in our own galaxy – including the matter that makes up the Sun, the Earth, and even the people who live on it – may be far less local than traditionally believed.

    “It is likely that much of the Milky Way’s matter was in other galaxies before it was kicked out by a powerful wind, traveled across intergalactic space and eventually found its new home in the Milky Way,” Anglés-Alcázar says.

    The team of astrophysicists now hopes to test the predictions made by their simulations using real-world evidence collected by the Hubble Space Telescope and other ground-based observatories.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 7:57 am on July 24, 2017 Permalink | Reply
    Tags: COSMOS, Gamma ray telescopes, How non-optical telescopes see the universe, Infrared telescopes, Optical telescopes, Pair production telescope, Ultraviolet telescopes, X-ray telescopes

    From COSMOS: “How non-optical telescopes see the universe” 

    Cosmos Magazine bloc

    COSMOS Magazine

    24 July 2017
    Jake Port

    The human eye can only see a tiny band of the electromagnetic spectrum. That tiny band is enough for most day-to-day things you might want to do on Earth, but stars and other celestial objects radiate energy at wavelengths from the shortest (high-energy, high-frequency gamma rays) to the longest (low-energy, low-frequency radio waves).

    1
    The electromagnetic spectrum is made up of radiation of all frequencies and wavelengths. Only a tiny range is visible to the human eye. NASA.

    Beyond the visible spectrum

    To see what’s happening in the distant reaches of the spectrum, astronomers use non-optical telescopes. There are several varieties, each specialised to catch radiation of particular wavelengths.

    Non-optical telescopes utilise many of the techniques found in regular telescopes, but also employ a variety of techniques to convert invisible light into spectacular imagery. In all cases, a detector is used to capture the image rather than an eyepiece, with a computer then processing the data and constructing the final image.

    There are also more exotic ways of looking at the universe that don’t use electromagnetic radiation at all, like neutrino telescopes and the cutting-edge gravitational wave telescopes, but they’re a separate subject of their own.

    To start off, let’s go straight to the top with the highest-energy radiation, gamma rays.

    Gamma ray telescopes

    Gamma radiation is generally defined as radiation of wavelengths less than 10^-11 m, or a hundredth of a nanometre.

    Gamma-ray telescopes focus on the highest-energy phenomena in the universe, such as black holes and exploding stars. High-energy gamma rays may carry a billion times as much energy as photons of visible light, which can make them difficult to study.
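
    A quick way to appreciate that energy gap is the relation E = hc/λ: the shorter the wavelength, the more energetic the photon. Below is a minimal sketch (not from the article; the 550 nm “visible” wavelength is an assumed example) comparing a visible photon with a gamma ray at the 10^-11 m boundary.

    # A minimal sketch (not from the article): photon energy from wavelength, E = h*c/wavelength.
    # Standard physical constants; the 550 nm visible wavelength is an assumed example.
    h = 6.626e-34      # Planck constant, J s
    c = 2.998e8        # speed of light, m/s
    eV = 1.602e-19     # joules per electronvolt

    def photon_energy_eV(wavelength_m):
        """Photon energy in electronvolts for a given wavelength in metres."""
        return h * c / wavelength_m / eV

    visible = photon_energy_eV(550e-9)   # ~2.3 eV for green light
    gamma = photon_energy_eV(1e-11)      # ~124,000 eV at the 10^-11 m boundary
    print(visible, gamma, gamma / visible)
    # A gamma ray carrying a billion times the energy of a visible photon (~2.3 GeV)
    # sits well within the range the Fermi telescope was built to detect.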

    Unlike photons of visible light, which can be redirected using mirrors and reflectors, gamma rays simply pass through most materials. This means that gamma-ray telescopes must use sophisticated techniques that track the movement of individual gamma rays to construct an image.

    One technology that does this, in use in the Fermi Gamma-ray Space Telescope among other places, is called a pair production telescope.

    NASA/Fermi Telescope

    It uses a multi-layer sandwich of converter and detector materials. When a gamma ray enters the front of the detector it hits a converter layer, made of dense material such as lead, which causes the gamma-ray to produce an electron and a positron (known as a particle-antiparticle pair).

    The electron and the positron then continue to traverse the telescope, passing through layers of detector material. These layers track the movement of each particle by recording slight bursts of electrical charge along the layer. This trail of bursts allows astronomers to reconstruct the energy and direction of the original gamma ray. Tracing back along that path points to the source of the ray out in space. This data can then be used to create an image.
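
    In rough terms, the reconstruction boils down to adding up what the two tracked particles carry. The sketch below is illustrative only: the energies and track directions are assumed numbers, and the tiny recoil of the converter nucleus is ignored. The gamma ray’s energy is approximately the sum of the pair’s energies, and its direction is the direction of their combined momentum.

    # A minimal sketch (assumed, illustrative numbers): reconstructing a gamma ray from a
    # tracked electron-positron pair. Energies in MeV, track directions as 3-vectors; the
    # small momentum taken by the nucleus in the converter layer is ignored.
    import numpy as np

    e_electron, e_positron = 820.0, 610.0        # assumed measured energies, MeV
    d_electron = np.array([0.02, -0.01, 0.999])  # assumed measured track directions
    d_positron = np.array([-0.015, 0.02, 0.999])

    # For such relativistic particles, momentum magnitude is close to energy (natural units),
    # so the photon's momentum is roughly the energy-weighted sum of the two track directions.
    p_total = (e_electron * d_electron / np.linalg.norm(d_electron)
               + e_positron * d_positron / np.linalg.norm(d_positron))

    gamma_energy = e_electron + e_positron       # ~1,430 MeV
    gamma_direction = p_total / np.linalg.norm(p_total)
    print(gamma_energy, gamma_direction)         # tracing back along this direction points to the source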

    The video below shows how this works in the space-based Fermi Large Area Telescope.

    NASA/Fermi LAT

    X-ray telescopes

    X-rays are radiation with wavelengths between 10 nanometres and 0.01 nanometres. They are used every day to image broken bones and scan suitcases in airports and can also be used to image hot gases floating in space. Celestial gas clouds and remnants of the explosive deaths of large stars, known as supernovas, are the focus of X-ray telescopes.

    Like gamma rays, X-rays are a high-energy form of radiation that can pass straight through most materials. To catch X-rays you need to use materials that are very dense.

    X-ray telescopes often use highly reflective mirrors that are coated with dense metals such as gold, nickel or iridium. Unlike optical mirrors, which can bounce light in any direction, these mirrors can only slightly deflect the path of the X-ray. The mirror is orientated almost parallel to the direction of the incoming X-rays. The X-rays lightly graze the mirror before moving on, a little like a stone skipping on a pond. By using lots of mirrors, each changing the direction of the radiation by a small amount, enough X-rays can be collected at the detector to produce an image.

    To maximise image quality the mirrors are nested one inside another, creating an internal structure resembling the layers of an onion.

    Diagram showing how ‘grazing incidence’ mirrors are used in X-ray telescopes. NASA.

    NASA/Chandra X-ray Telescope

    ESA/XMM Newton X-ray telescope

    NASA NuSTAR X-ray telescope


    Ultraviolet telescopes

    Ultraviolet light is radiation with wavelengths just too short to be visible to human eyes, between 400 and about 10 nanometres. It has less energy than X-rays and gamma rays, and ultraviolet telescopes are more like optical ones.

    Mirrors coated in materials that reflect UV radiation, such as silicon carbide, can be used to redirect and focus incoming light. The Hopkins Ultraviolet Telescope, which flew two short missions aboard the space shuttle in the 1990s, used a parabolic mirror coated with this material.

    A schematic of the Hopkins Ultraviolet Telescope. NASA.

    NASA Hopkins Ultraviolet Telescope, which flew aboard the space shuttle

    As the redirected light reaches the focal point, a central point where all the light beams converge, it is detected using a spectrograph. This specialised device can separate the UV light into individual wavelength bands in a way akin to splitting visible light into a rainbow.

    Analysis of the resulting spectrum can indicate what the observation target is made of. This allows astronomers to analyse the composition of interstellar gas clouds, galactic centres and planets in our solar system. This can be particularly useful when looking for elements essential to carbon-based life such as oxygen and carbon.

    Optical telescopes

    Optical telescopes are used to view the visible spectrum: wavelengths roughly between 400 and 700 nanometres. See separate article here.


    Keck Observatory, Maunakea, Hawaii, USA

    ESO/VLT at Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level

    Gran Telescopio Canarias at the Roque de los Muchachos Observatory on the island of La Palma, in the Canaries, Spain, sited on a volcanic peak 2,267 metres (7,438 ft) above sea level

    Gemini/North telescope at Maunakea, Hawaii, USA

    Gemini South telescope, Cerro Tololo Inter-American Observatory (CTIO) campus near La Serena, Chile

    Infrared telescopes

    Sitting just below visible light on the electromagnetic spectrum is infrared light, with wavelengths between 700 nanometres and 1 millimetre.

    It’s used in night vision goggles, heaters and tracking devices as found in heat-seeking missiles. Any object or material that is hotter than absolute zero will emit some amount of infrared radiation, so the infrared band is a useful window to look at the universe through.

    Much infrared radiation is absorbed by water vapour in the atmosphere, so infrared telescopes are usually at high altitudes in dry places or even in space, like the Spitzer Space Telescope.

    Infrared telescopes are often very similar to optical ones. Mirrors and reflectors are used to direct the infrared light to a detector at the focal point. The detector registers the incoming radiation, which a computer then converts into a digital image.

    NASA/Spitzer Infrared Telescope

    Radio telescopes

    At the far end of the electromagnetic spectrum we find the radio waves, with frequencies less than about 1,000 megahertz and wavelengths from around 30 centimetres up to many metres. Radio waves penetrate the atmosphere easily, unlike higher-frequency radiation, so ground-based observatories can catch them.

    Radio telescopes feature three main components that each play an important role in capturing and processing incoming radio signals.

    The first is the massive antenna or ‘dish’ that faces the sky. The Parkes radio telescope in New South Wales, Australia, for instance, has a dish with a diameter of 64 metres, while the Five-hundred-metre Aperture Spherical Telescope (FAST) in southwest China has a whopping 500-metre diameter.

    The great size allows for the collection of long wavelengths and very faint signals. The dish is parabolic, focusing radio waves collected over a large area onto a receiver sitting in front of the dish. The larger the antenna, the weaker the radio source that can be detected, allowing larger telescopes to see more distant and fainter objects billions of light years away.
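
    To put rough numbers on why size matters: collecting area grows with the square of the dish diameter, and the diffraction limit (about 1.22 times the wavelength divided by the diameter) sets how fine a detail the dish can resolve. The sketch below compares Parkes and FAST at the 21 cm hydrogen line; the choice of wavelength is an assumed example, not from the article.

    # A minimal sketch (assumed example): how dish diameter affects sensitivity and resolution.
    import math

    def collecting_area_m2(diameter_m):
        """Geometric collecting area of a circular dish."""
        return math.pi * (diameter_m / 2) ** 2

    def resolution_arcmin(wavelength_m, diameter_m):
        """Approximate diffraction-limited resolution, 1.22 * wavelength / diameter."""
        return math.degrees(1.22 * wavelength_m / diameter_m) * 60

    for name, diameter in [("Parkes (64 m)", 64), ("FAST (500 m)", 500)]:
        print(name, round(collecting_area_m2(diameter)), "m^2,",
              round(resolution_arcmin(0.21, diameter), 1), "arcmin")
    # FAST collects roughly 60 times more signal than Parkes and resolves about 8 times finer detail.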

    The receiver works with an amplifier to boost the very weak radio signal to make it strong enough for measurement. Receivers today are so sensitive that they use powerful coolers to minimise thermal noise generated by the movement of atoms in the metal of the structure.

    Finally, a recorder stores the radio signal for later processing and analysis.

    Radio telescopes are used to observe a wide array of subjects, including energetic pulsar and quasar systems, galaxies and nebulae – and, of course, to listen out for potential alien signals.

    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia



    GBO radio telescope, Green Bank, West Virginia, USA

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 3:06 pm on July 18, 2017 Permalink | Reply
    Tags: COSMOS

    From COSMOS: “How giant atoms may help catch gravitational waves from the Big Bang” 

    Cosmos Magazine bloc

    COSMOS

    18 July 2017
    Diego A. Quiñones, U Leeds

    Huge, highly excited atoms may give off flashes of light when hit by a gravitational wave.

    Some of the earliest known galaxies in the universe, seen by the Hubble Space Telescope. NASA/ESA

    NASA/ESA Hubble Telescope

    There was a lot of excitement last year when the LIGO collaboration detected gravitational waves, which are ripples in the fabric of space itself.


    Caltech/MIT Advanced aLIGO Hanford, WA, USA installation


    Caltech/MIT Advanced aLIGO detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project


    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    And it’s no wonder – it was one of the most important discoveries of the century. By measuring gravitational waves from intense astrophysical processes like merging black holes, the experiment opens up a completely new way of observing and understanding the universe.

    But there are limits to what LIGO can do. While gravitational waves exist with a big variety of frequencies, LIGO can only detect those within a certain range. In particular, there’s no way of measuring the type of high frequency gravitational waves that were generated in the Big Bang itself. Catching such waves would revolutionise cosmology, giving us crucial information about how the universe came to be. Our research presents a model that may one day enable this.

    In the theory of general relativity developed by Einstein, the mass of an object curves space and time – the more mass, the more curvature. This is similar to how a person stretches the fabric of a trampoline when stepping on it. If the person starts moving up and down, this would generate undulations in the fabric that will move outwards from the position of the person. The speed at which the person is jumping will determine the frequency of the generated ripples in the fabric.

    An important trace of the Big Bang is the Cosmic Microwave Background.

    CMB per ESA/Planck

    ESA/Planck

    This is the radiation left over from the birth of the universe, created about 300,000 years after the Big Bang. But the birth of our universe also created gravitational waves – and these would have originated just a fraction of a second after the event. Because these gravitational waves contain invaluable information about the origin of the universe, there is a lot of interest in detecting them. The waves with the highest frequencies may have originated during phase transitions of the primitive universe or by vibrations and snapping of cosmic strings.

    An instant flash of brightness

    Our research team, from the universities of Aberdeen and Leeds, think that atoms may have an edge in detecting elusive, high-frequency gravitational waves. We have calculated that a group of “highly excited” atoms (called Rydberg atoms) – in which the electrons have been pushed out far away from the atom’s nucleus, making it huge – will emit a bright pulse of light when hit by a gravitational wave.

    To make the atoms excited, we shine a light on them. Each of these enlarged atoms is usually very fragile, and the slightest perturbation will make it collapse, releasing the absorbed light. However, the interaction with a gravitational wave may be too weak, and its effect will be masked by the many other interactions, such as collisions with other atoms or particles.

    Rather than analysing the interaction with individual atoms, we model the collective behaviour of a big group of atoms packed together. If the group of atoms is exposed to a common field, like our oscillating gravitational field, this will induce the excited atoms to decay all at the same time. The atoms will then release a large number of photons (light particles), generating an intense pulse of light, dubbed “superradiance”.

    Because Rydberg atoms subjected to a gravitational wave will superradiate as a result of the interaction, we can infer that a gravitational wave has passed through the atomic ensemble whenever we see a light pulse.

    By changing the size of the atoms, we can make them respond to different frequencies of the gravitational wave. This can be useful for detection in different ranges. Using the proper kind of atoms, and under ideal conditions, it could be possible to use this technique to measure relic gravitational waves from the birth of the universe. By analysing the signal of the atoms it is possible to determine the properties, and therefore the origin, of the gravitational waves.

    There may be some challenges for this experimental technique: the main one is getting the atoms into a highly excited state. Another is having enough atoms, as they are so big that they become very hard to contain.

    A theory of everything?

    Beyond the possibility of studying gravitational waves from the birth of the universe, the ultimate goal of the research is to detect gravitational fluctuations of empty space itself – the vacuum. These are extremely faint gravitational variations that occur spontaneously at the smallest scale, popping up out of nothing.

    Discovering such waves could lead to the unification of general relativity and quantum mechanics, one of the greatest challenges in modern physics. General relativity is unparalleled when it comes to describing the world on a large scale, such as planets and galaxies, while quantum mechanics perfectly describes physics on the smallest scale, such as the atom or even parts of the atom. Working out the gravitational impact of the tiniest of particles would therefore help bridge this divide.

    But discovering the waves associated with such quantum fluctuations would require a great number of atoms prepared with an enormous amount of energy, which may not be possible to do in the laboratory. Rather than doing this, it might be possible to use Rydberg atoms in outer space. Enormous clouds of these atoms exist around white dwarfs – stars which have run out of fuel – and inside nebulas with sizes more than four times larger than anything that can be created on Earth. Radiation coming from these sources could contain the signature of the vacuum gravitational fluctuations, waiting to be unveiled.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 1:01 pm on July 17, 2017 Permalink | Reply
    Tags: COSMOS, How big is the universe?, NASA/COBE

    From COSMOS: “How big is the universe?” 

    Cosmos Magazine bloc

    COSMOS

    17 July 2017
    Cathal O’Connell

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    “Space is big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.” – Douglas Adams, The Hitchhiker’s Guide to the Galaxy.

    In one sense the edge of the universe is easy to mark out: it’s the distance a beam of light could have travelled since the beginning of time. Anything beyond is impossible for us to observe, and so outside our so-called ‘observable universe’. You might guess that the distance from the centre of the universe to the edge is simply the age of the universe (13.8 billion years) multiplied by the speed of light: 13.8 billion light years.

    But space has been stretching all this time; and just as an airport walkway extends the stride of a walking passenger, the moving walkway of space extends the stride of light beams. It turns out that in the 13.8 billion years since the beginning of time, a light beam could have travelled 46.3 billion light years from its point of origin in the Big Bang. If you imagine this beam tracing a radius, the observable universe is a sphere whose diameter is double that: 92.6 billion light years.
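
    For readers who want to check the arithmetic, the 46-billion-light-year figure comes from adding up how far light travels while space keeps stretching underneath it. The sketch below is a minimal calculation, not from the article, and assumes Planck-like parameters for a flat universe (H0 of about 67.7 km/s/Mpc, with roughly 31% matter, 69% dark energy and a trace of radiation).

    # A minimal sketch (assumed Planck-like parameters): the radius of the observable universe
    # is the comoving distance light could have covered since the Big Bang, c * integral of dz/H(z).
    import numpy as np
    from scipy.integrate import quad

    H0 = 67.7                                       # Hubble constant, km/s/Mpc (assumed)
    omega_m, omega_r, omega_l = 0.31, 9.2e-5, 0.69  # matter, radiation, dark energy (assumed)
    c = 299792.458                                  # speed of light, km/s

    def E(z):
        """Dimensionless expansion rate H(z)/H0 for a flat universe."""
        return np.sqrt(omega_r * (1 + z)**4 + omega_m * (1 + z)**3 + omega_l)

    integral, _ = quad(lambda z: 1.0 / E(z), 0, np.inf)
    hubble_distance_gly = (c / H0) * 3.2616e6 / 1e9   # c/H0 in Mpc, converted to billions of light years
    radius_gly = hubble_distance_gly * integral
    print(round(radius_gly, 1), round(2 * radius_gly, 1))   # roughly 46 and 93 billion light years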

    “Since nothing is faster than light, absolutely anything could in principle happen outside the observable universe,” says Andrew Liddle, an astronomer at the University of Edinburgh. “It could end and we’d have no way of knowing.”

    But we have good reasons to suspect the entire Universe (capitalised now to distinguish from the merely observable universe) goes on a lot further than the part we can observe – and that it is possibly infinite. So how can we know what goes on beyond the observable universe?

    Imagine a bacterium swimming in a fishbowl. How could it know the true extent of its seemingly infinite world? Well, distortions of light from the curvature of the glass might give it a clue. In the same way, the curvature of the universe tells us about its ultimate size.

    “The geometry of the universe can be of three different kinds,” says Robert Trotta, an astrophysicist at Imperial College London. It could be closed (like a sphere), open (like a saddle) or flat (like a table).

    Universal geometry: the universe could be closed like a sphere, open like a saddle or flat like a table. The first option would make it finite; the other two, infinite.
    Cosmos Magazine.

    The key to measuring its curvature is the cosmic microwave background (CMB) radiation – a wash of light given out by the fireball of plasma that pervaded the universe 400,000 years after the Big Bang. It’s our snapshot of the universe when it was very young and about 1,000 times smaller than it is today.

    Cosmic Infrared Background, Credit: Michael Hauser (Space Telescope Science Institute), the COBE/DIRBE Science Team, and NASA

    NASA/COBE

    Cosmic Microwave Background NASA/WMAP

    NASA/WMAP satellite

    CMB per ESA/Planck

    ESA/Planck

    Just as ancient geographers once used the curviness of the Earth’s horizon to work out the size of our planet, astronomers are using the curviness of the CMB at our cosmic horizon to estimate the size of the universe.

    The key is to use satellites to measure the temperature of different features in the CMB. The way these features distort across the CMB landscape is used to calculate its geometry. “So determining the size and geometry of the Universe helps us determine what happened right after its birth,” Trotta says.

    Since the late 1980s, three generations of satellites have mapped the CMB with ever improving resolution, generating better and better estimates of the universe’s curvature. The latest data, released in March 2013, came from the European Space Agency’s Planck telescope. It found the curvature to be consistent with perfectly flat, to within a measurement uncertainty of plus or minus 0.4%.

    The extreme flatness of the universe supports the theory of cosmic inflation. This theory holds that in a fraction of a second (10^-36 of a second, to be precise) just after its birth, the universe inflated like a balloon, expanding many orders of magnitude while stretching and flattening its surface features.

    Perfect flatness would mean the universe is infinite, though the plus or minus 0.4% margin of error means we can’t be sure. It might still be finite but very big. Using the Planck data, Trotta and his colleagues worked out that the actual Universe would have to be at least 250 times larger than the observable universe.
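
    A rough way to see how a curvature bound turns into a minimum size – a back-of-the-envelope illustration, not the Bayesian calculation Trotta’s team actually performed – is through the radius of curvature of a closed universe:

    R_{\mathrm{curv}} \;=\; \frac{c/H_0}{\sqrt{|\Omega_k|}} \;\gtrsim\; \frac{14.4\ \text{billion light years}}{\sqrt{0.004}} \;\approx\; 230\ \text{billion light years},

    which is already several times the 46-billion-light-year radius of the observable universe, even before the more careful statistical treatment used in the study.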

    The next generation of telescopes should improve on the data from the Planck telescope. Whether they will give us a definitive answer about the size of the universe remains to be seen. “I imagine that we will still treat the universe as very nearly flat and still not know well enough to rule out open or closed for a long time to come,” says Charles Bennett, head of the new CLASS array of microwave telescopes in Chile.

    As it turns out, owing to background noise there are fundamental limits to how well we can ever measure the curvature, no matter how good the telescopes get. In July 2016, physicists at Oxford worked out we cannot possibly measure a curvature below about 0.01%. So we still have a ways to go, though the measurements so far, and the evidence from inflation theory, have most physicists leaning toward the view that the universe is probably infinite. An impassioned minority, however, have a serious problem with that.

    Getting rid of infinity, the great British physicist Paul Dirac said, is the most important challenge in physics. “No infinity has ever been observed in nature,” notes Columbia University astrophysicist Janna Levin in her 2001 memoir How the Universe Got Its Spots. “Nor is infinity tolerated in a scientific theory.”

    So how come physicists keep allowing that the universe itself may be infinite? The idea goes back to the founding fathers of physics. Newton, for example, reasoned that the universe must be infinite based on his law of gravitation. It held that everything in the universe attracted everything else. But if that were so, eventually the universe would be pulled towards a single point, in the way that a star eventually collapses under its own weight. This was at odds with his firm belief the universe had always existed. So, he figured, the only explanation was infinity – the equal pull in all directions would keep the universe static, and eternal.

    Albert Einstein, more than two centuries later at the start of the 20th century, similarly envisioned an eternal and infinite universe. General relativity, his theory of the universe on the grandest scales, plays out on an infinite landscape of spacetime.

    Mathematically speaking, it is easier to propose a universe that goes on forever than to have to deal with the edges. Yet to be infinite is to be unreal – a hyperbole, an absurdity.

    In his short story The Library of Babel, Argentinian writer Jorge Luis Borges imagines an infinite library containing every possible book of exactly 410 pages: “…for every sensible line of straightforward statement, there are leagues of senseless cacophonies, verbal jumbles and incoherences.” Because there are only so many possible arrangements of letters, the possible number of books is limited, and so the library is destined to repeat itself.

    An infinite Universe leads to similar conclusions. Because there are only so many ways that atoms can be arranged in space (even within a region 93 billion light years across), an infinite Universe requires that there must be, out there, another huge region of space identical to ours in every respect. That means another Milky Way, another Earth, another version of you and another of me.

    Physicist Max Tegmark, of the Massachusetts Institute of Technology, has run the numbers. He estimates that, in an infinite Universe, patches of space identical to ours would tend to come along about every 10^(10^115) metres (an insanely huge number, one with more zeroes after it than there are atoms in the observable universe). So no danger of bumping into your twin self down at the shops; but still Levin does not accept it: “Is it arrogance or logic that makes me believe this is wrong? There’s just one me, one you. The universe can’t be infinite.”

    Levin was one of the first theorists to approach general relativity from a new perspective. Rather than thinking about geometry, which describes the shape of space, she looked at its topology: the way it was connected.

    All those assumptions about flat, closed or open universes were only valid for huge, spherical universes, she argued. Other shapes could be topologically ‘flat’ and still finite.

    “Your idea of a donut-shaped universe is intriguing, Homer,” says Stephen Hawking in a 1999 episode of The Simpsons. “I may have to steal it.” Actually, the show’s writers had already stolen the idea from Levin—who published her analysis of a donut-shaped universe in 1998.

    A donut, she noted, actually had – “topologically speaking” – zero curvature because the negative curvature on the inside is balanced by the positive curvature on the outside. The (near) zero curvature measured in the CMB was therefore as consistent with a donut as with a flat surface.

    One ring theory to rule them all: CMB data doesn’t rule out a donut-shape, but it would be an awfully big one. Mehau Kulyk / Getty Images.

    In such a universe, Levin realised, you might cross the cosmos in a spaceship, the way sailors crossed the globe, and find yourself back where you started. This idea inspired Australian physicist Neil Cornish, now based at Montana State University, to think about how the very oldest light, from the CMB, might have circumnavigated the cosmos. If the donut universe were below a threshold size, that would create a telltale signature, which Cornish called “circles in the sky”. Alas, when CMB data came back from the Wilkinson Microwave Anisotropy Probe (WMAP) in 2001, no such signatures were found. That doesn’t rule out the donut theory entirely; but it does mean that the universe, if it is a donut, is an awfully big one.

    Attempts to directly prove or disprove the infinity of the universe seem to lead us to a dead-end, at least with current technology. But we might do it by inference, Cornish believes. Inflation theory does a compelling job of explaining the key features of our universe; and one of the offshoots of inflation is the multiverse theory.

    It’s the kind of theory that, when you first hear it, seems to have sprung from the mind of a science-fiction author indulging in mind-expanding substances. Actually it was first proposed by influential Stanford physicist Andrei Linde in the 1980s. Linde – together with Alan Guth at MIT and Alexei Starobinsky at Russia’s Landau Institute for Theoretical Physics – was one of the architects of inflation theory.

    Guth and Starobinsky’s original ideas had inflation petering out in the first split second after the big bang; Linde, however, had it going on and on, with new universes sprouting off like an everlasting ginger root.

    Linde has since shown that “eternal inflation” is probably an inevitable part of any inflation model. This eternal inflation, or multiverse, model is attractive to Linde because it solves the greatest mystery of all: why the laws of physics seem fine-tuned to allow our existence.

    The strength of gravity is just enough to allow stable stars to form and burn; the electromagnetic and nuclear forces are just the right strength to allow atoms to form, complex molecules to evolve, and us to come to be.

    In each newly sprouted universe these constants get assigned randomly. In some, gravity might be so strong that the universe recollapses immediately after its big bang. In others, gravity would be so weak that atoms of hydrogen would never condense into stars or galaxies. With an infinite number of new universes sprouting into and out of existence, by chance one will pop up that is fit for life to evolve.

    Infinite variety: in the eternal inflation model, new universes sprout off like an everlasting ginger root. Andrei Linde.

    The multiverse theory has its critics, notably another co-founder of inflation theory, Paul Steinhardt, who told Scientific American in 2014: “Scientific ideas should be simple, explanatory, predictive. The inflationary multiverse as currently understood appears to have none of those properties.” Meanwhile Paul Davies of Arizona State University wrote in The New York Times that “invoking an infinity of unseen universes to explain the unusual features of the one we do see is just as ad hoc as invoking an unseen creator”.

    But in another sense the multiverse is the simpler of the two inflation models. In a few lines of equations, or just a few sentences of speech, the multiverse gives us a mechanism to explain the origin of our universe, just as Charles Darwin’s theory of natural selection explained the origin of species. As Max Tegmark puts it: “Our judgment therefore comes down to which we find more wasteful and inelegant: many worlds or many words.”

    To settle the issue, we will need to know more about what went down in the first split-second of the universe. Perhaps gravitational waves will be the answer, a way to ‘hear’ the vibrations of the big bang itself.

    Whether infinite or finite, stand-alone or one of an endless multitude, the universe is surely a mindbending place. Which brings us back to The Hitchhiker’s Guide to the Galaxy: “If there’s any real truth, it’s that the entire multidimensional infinity of the Universe is almost certainly being run by a bunch of maniacs.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 11:01 am on July 13, 2017 Permalink | Reply
    Tags: COSMOS, The calving of a massive iceberg in Antarctica is not a sign of climate doom but it may weaken the remainder of the Larsen C ice shelf, What the trillion-tonne Larsen C iceberg means

    From COSMOS: “What the trillion-tonne Larsen C iceberg means” 

    Cosmos Magazine bloc

    COSMOS

    13 July 2017
    Adrian Luckman

    The calving of a massive iceberg in Antarctica is not a sign of climate doom, but it may weaken the remainder of the Larsen C ice shelf.

    One of the largest icebergs ever recorded has just broken away from the Larsen C Ice Shelf in Antarctica. Over the past few years I’ve led a team that has been studying this ice shelf and monitoring change. We spent many weeks camped on the ice investigating melt ponds and their impact – and struggling to avoid sunburn thanks to the thin ozone layer. Our main approach, however, is to use satellites to keep an eye on things.

    ESA/Sentinel-1


    The SENTINEL-1 mission comprises a constellation of two polar-orbiting satellites, operating day and night performing C-band synthetic aperture radar imaging, enabling them to acquire imagery regardless of the weather.

    We’ve been surprised by the level of interest in what may simply be a rare but natural occurrence. Because, despite the media and public fascination, the Larsen C rift and iceberg “calving” is not a warning of imminent sea level rise, and any link to climate change is far from straightforward. This event is, however, a spectacular episode in the recent history of Antarctica’s ice shelves, involving forces beyond the human scale, in a place where few of us have been, and one which will fundamentally change the geography of this region.

    The iceberg would barely fit inside Wales. Adrian Luckman / MIDAS, Author provided

    Ice shelves are found where glaciers meet the ocean and the climate is cold enough to sustain the ice as it goes afloat. Located mostly around Antarctica, these floating platforms of ice a few hundred meters thick form natural barriers which slow the flow of glaciers into the ocean and thereby regulate sea level rise. In a warming world, ice shelves are of particular scientific interest because they are susceptible both to atmospheric warming from above and ocean warming from below.

    The ice shelves of the Antarctic peninsula. Note Larsen A and B have largely disappeared. AJ Cook & DG Vaughan, 2014, CC BY-SA

    Back in the 1890s, a Norwegian explorer named Carl Anton Larsen sailed south down the Antarctic Peninsula, a 1,000km long branch of the continent that points towards South America. Along the east coast he discovered the huge ice shelf which took his name.

    For the following century, the shelf, or what we now know to be a set of distinct shelves – Larsen A, B, C and D – remained fairly stable. However the sudden disintegrations [Science] of Larsen A and B in 1995 and 2002 respectively, and the ongoing speed-up [Geophysical Research Letters] of glaciers which fed them, focused scientific interest on their much larger neighbour, Larsen C, the fourth biggest ice shelf in Antarctica.

    This is why colleagues and I set out in 2014 to study the role of surface melt [Cambridge Core] on the stability of this ice shelf. Not long into the project, the discovery by our colleague, Daniela Jansen, of a rift [The Cryosphere] growing rapidly through Larsen C immediately gave us something equally significant to investigate.

    Nature at work

    The development of rifts and the calving of icebergs is part of the natural cycle of an ice shelf. What makes this iceberg unusual is its size – at around 5,800 km² it’s the size of a small US state. There is also the concern that what remains of Larsen C will be susceptible to the same fate as Larsen B, and collapse almost entirely.

    Larsen B once extended hundreds of kilometres over the ocean. Today, one of its glaciers runs straight into the sea. Armin Rose / shutterstock

    Our work has highlighted significant similarities [Nature Communications] between the previous behaviour of Larsen B and current developments at Larsen C, and we have shown that stability may be compromised. Others, however, are confident that Larsen C will remain stable [Nature Climate Change].

    What is not disputed by scientists is that it will take many years to know what will happen to the remainder of Larsen C as it begins to adapt to its new shape, and as the iceberg gradually drifts away and breaks up [The Conversation]. There will certainly be no imminent collapse, and unquestionably no direct effect on sea level because the iceberg is already afloat and displacing its own weight in seawater.

    This means that, despite much speculation [On The Verge], we would have to look years into the future for ice from Larsen C to contribute significantly to sea level rise. In 1995 Larsen B underwent a similar calving event [Nature Communications]. However, it took a further seven years of gradual erosion of the ice-front before the ice shelf became unstable enough to collapse, and glaciers held back by it were able to speed up [Geophysical Research Letters], and even then the collapse process may have depended on the presence of surface melt ponds [Geophysical Research Letters].

    Even if the remaining part of Larsen C were to eventually collapse, many years into the future, the potential sea level rise is quite modest [Journal of Geophysical Research]. Taking into account only the catchments of glaciers flowing into Larsen C, the total, even after decades, will probably be less than a centimetre.

    Is this a climate change signal?

    This event has also been widely but over-simplistically linked to climate change [The Guardian]. This is not surprising because notable changes in the earth’s glaciers and ice sheets are normally associated with rising environmental temperatures. The collapses of Larsen A and B have previously been linked to regional warming [Letters to Nature], and the iceberg calving will leave Larsen C at its most retreated position in records going back over a hundred years.

    However, in satellite images from the 1980s, the rift was already clearly a long-established feature, and there is no direct evidence to link its recent growth to either atmospheric warming, which is not felt deep enough within the ice shelf, or ocean warming, which is an unlikely source of change given that most of Larsen C has recently been thickening [Science]. It is probably too early to blame this event directly on human-generated climate change.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 7:07 am on July 10, 2017 Permalink | Reply
    Tags: COSMOS, Explainer: what is cancer radiotherapy and why do we need proton beam therapy?, Proton Beam Therapy, Radiation cancer therapy

    From COSMOS via U Sydney: “Explainer: what is cancer radiotherapy and why do we need proton beam therapy?” 

    U Sydney bloc

    University of Sydney

    COSMOS

    10 July 2017
    Paul Keall

    Proton beam therapy is radiation therapy that uses heavier particles instead of the X-rays used in conventional radiotherapy.

    Both above from New Jersey’s ProCure Proton Therapy Center

    In the 2017 federal budget, the government dedicated up to A$68 million to help set up Australia’s first proton beam therapy facility in South Australia. The government says this will help Australian researchers develop the next generation of cancer treatments, including for complex children’s cancers.

    Proton beam therapy is radiation therapy that uses heavier particles (protons) instead of the X-rays used in conventional radiotherapy. These particles can more accurately target tumours closer to vital organs, which can be especially beneficial to patients suffering from brain cancer and children whose organs are still developing and are more vulnerable to damage.

    So, the facility will also be an alternative to conventional radiotherapy for treating certain cancers. But what is traditional radiotherapy, and how will access to proton beam therapy improve how we manage cancer?

    What is radiotherapy?

    Radiotherapy, together with surgery, chemotherapy and palliative care, is one of the cornerstones of cancer treatment. Radiotherapy is recommended for half of cancer patients.

    It is mostly used when the cancer is localised to one or more areas. Depending on the cancer site and stage, radiotherapy can be used alone or in combination with surgery and chemotherapy. It can be used before or after other treatments to make them more effective by, for example, shrinking the tumour before chemotherapy or treating cancer that remains after surgery.

    Most radiotherapy treats cancer by directing beams of high energy X-rays at the tumour (although other radiation beams, such as gamma rays, electron beams or proton/heavy particle beams can also be used).

    The X-rays interact with tumour cells, damaging their DNA and restricting their ability to reproduce. But because X-rays don’t differentiate between cancerous and healthy cells, normal tissues can be damaged. Damaged healthy tissue can lead to minor symptoms such as fatigue, or, in rare cases, more serious outcomes such as hospitalisation and death.

    Getting the right amount of radiation is a fine balance between therapy and harm. A common way to improve the benefit-to-harm ratio is to fire multiple beams at the tumour from different directions. Where the beams overlap, at the tumour, their doses add up, maximising the damage to the tumour while minimising damage to healthy tissue.
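
    As a toy illustration of that geometry (the numbers are assumed, not clinical values), consider three beams that each deposit the same dose along their own path but all pass through the tumour:

    # A toy sketch (assumed numbers, not clinical values): overlapping beams concentrate dose
    # at the tumour while any single stretch of healthy tissue sees only one beam.
    beams = 3
    dose_per_beam = 20.0                     # gray (Gy) deposited along each beam path (assumed)

    tumour_dose = beams * dose_per_beam      # 60 Gy where all three beams overlap
    healthy_path_dose = dose_per_beam        # 20 Gy along any single entry path
    print(tumour_dose, healthy_path_dose)    # the tumour receives three times the healthy-tissue dose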

    How it works

    A drawing of the X-ray machine used by Wilhelm Röntgen to produce images of the hand. Golan Levin/Flickr, CC BY-SA

    Wilhelm Röntgen discovered X-rays in 1895 and within a year, the link between exposure to too much radiation and skin burns led scientists and doctors to pursue radiation in cancer treatment.

    There are three key stages in the radiotherapy process. The patient is first imaged – using such machines as computed tomography (CT) or magnetic resonance imaging (MRI) scanners. This estimates the extent of the tumour and helps to understand where it is with respect to healthy tissues and other critical structures.

    In the second stage, the doctor and treatment team will use these images and the patient’s case history to plan where the radiation beams should be placed – to maximise the damage to the tumour while minimising it to healthy tissues. Complex computer simulations model the interactions of the radiation beams with the patient to give a best estimate of what will happen during treatment.

    A single radiotherapy treatment takes 15 to 30 minutes. IAEA Imagebank/Flickr, CC BY

    During the third, treatment-delivery stage, the patient lies still while the treatment beam rotates, delivering radiation from multiple angles.

    Each treatment generally takes 15 to 30 minutes. Depending on the cancer and stage, there are between one and 40 individual treatments, typically one treatment a day. The patient cannot feel the radiation being delivered.

    Benefits and side effects

    Radiotherapy’s targeting technology has made a significant difference to many cancers, in particular early-stage lung and prostate cancers. It is now possible to have effective, low toxicity treatments for these with one to five radiotherapy sessions.

    For early-stage lung cancer, studies estimate that with radiotherapy, survival three years after diagnosis is about 95%. For prostate cancer, one study estimates survival at the five-year mark is about 93%.

    Side effects for radiotherapy vary markedly between treatment sites, cancer stages and individual patients. They are typically moderate but can be severe. A general side effect of radiotherapy is fatigue.

    Radiotherapy is often used to treat brain tumours. Eric Lewis/Flickr, CC BY

    Other side effects include diarrhoea, appetite loss, dry mouth and difficulty swallowing for head and neck cancer radiotherapy, as well as incontinence and reduction in sexual function for pelvic radiotherapy.

    Long-term effects of radiotherapy are a concern, particularly for children. For instance, radiation to treat childhood brain tumours can have long-lasting cognitive effects that can affect relationships and academic achievement.

    Again doctors will need to weigh up the risks and benefits of treatment for individual patients. Proton beam therapy is arguably most beneficial in these cases.

    Other radiotherapy challenges

    There are several challenges to current radiotherapy. It is often difficult to differentiate the tumour from healthy tissue, and even experts do not always agree on where exactly the tumour is.

    Radiotherapy can’t easily adapt to the complex changes in patients’ anatomy when a patient moves – for instance, when they breathe, swallow, their heart beats or as they digest food. As a result, radiation beams can be off-target, missing the tumour and striking healthy tissue.

    Also, we currently treat all parts of the tumour equally, despite knowing some of the tumour’s regions are more aggressive, resistant to radiation and likely to spread to other parts of the body.

    The tumour itself also changes in response to the treatment, further confounding the problem. An ideal radiotherapy solution would image and adapt the treatment continuously based on these changes.

    Improvements in technology, including in imaging systems that can better find the tumour, can help overcome these challenges.


    Proton therapy requires large accelerators to give protons enough energy to penetrate deep into patients. No video credit.

    Proton beam therapy and other innovations

    Proton beam therapy will help maximise benefits for many patients, including those with cancers near the spinal cord and pelvis. It requires large accelerators to give protons enough energy to penetrate deep into patients. The energetic protons are transported into the treatment room using complex steering magnets and directed to the tumour inside the patient.

    Protons slow down and lose energy inside the patient, with most of the energy loss planned to occur in the tumour. This reduces energy loss in healthy tissues and reduces side effects.
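
    A rough feel for how beam energy sets that stopping depth comes from the empirical Bragg-Kleeman rule for protons in water, R ≈ αE^p. The sketch below uses approximate textbook constants (assumed, not from the article); real treatment planning relies on full dose calculations.

    # A rough sketch (assumed empirical constants): the Bragg-Kleeman rule estimates the depth in
    # water at which a proton beam stops and deposits most of its energy (the Bragg peak).
    ALPHA = 0.0022   # cm per MeV^P, approximate value for water (assumed)
    P = 1.77         # dimensionless exponent, approximate value for water (assumed)

    def proton_range_cm(energy_mev):
        """Approximate stopping depth in water, in centimetres, for a proton of the given energy."""
        return ALPHA * energy_mev ** P

    for energy in (70, 150, 230):
        print(energy, "MeV ->", round(proton_range_cm(energy), 1), "cm")
    # roughly 4 cm, 16 cm and 33 cm: choosing the beam energy places the Bragg peak at the tumour
    # depth, which is why clinical machines deliver beams of about 70-250 MeV.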

    The problems of changing patient anatomy and physiology in other forms of radiotherapy are also challenges for proton beam therapy.

    Australia has a number of research teams tackling such challenges, including developing new radiation treatment devices, breathing aids for cancer patients, radiation measurement devices, shorter and more convenient treatment schedules and the optimal combination of radiotherapy with other treatments, such as chemotherapy and immunotherapy.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    U Sydney campus

    Our founding principle as Australia’s first university was that we would be a modern and progressive institution. It’s an ideal we still hold dear today.

    When William Charles Wentworth proposed the idea of Australia’s first university in 1850, he imagined “the opportunity for the child of every class to become great and useful in the destinies of this country”.

    We’ve stayed true to that original value and purpose by promoting inclusion and diversity for the past 160 years.

    It’s the reason that, as early as 1881, we admitted women on an equal footing to male students. Oxford University didn’t follow suit until 30 years later, and Jesus College at Cambridge University did not begin admitting female students until 1974.

    It’s also why, from the very start, talented students of all backgrounds were given the chance to access further education through bursaries and scholarships.

    Today we offer hundreds of scholarships to support and encourage talented students, and a range of grants and bursaries to those who need a financial helping hand.

     
  • richardmitnick 12:38 pm on July 6, 2017 Permalink | Reply
    Tags: 100 billion brown dwarfs may populate our galaxy, COSMOS

    From COSMOS: “100 billion brown dwarfs may populate our galaxy” 

    Cosmos Magazine bloc

    COSMOS

    06 July 2017
    Michael Lucy

    An artist’s impression of a brown dwarf. NASA / JPL-Caltech.

    The Milky Way may contain as many as 100 billion brown dwarfs, according to new research to be published in the Monthly Notices of the Royal Astronomical Society.

    Brown dwarfs are failed stars that were not quite heavy enough to sustain the hydrogen fusion reactions that make real stars shine. They weigh about 10 to 100 times as much as Jupiter, which means their internal gravitational pressure is enough to fuse deuterium (a kind of hydrogen that contains an extra neutron in each atom) and sometimes also lithium. This means they glow only dimly. Most of the radiation they do give off is in the infrared spectrum and hence invisible to the human eye, though some would appear faintly purple or red.

    All of this makes brown dwarfs very hard for astronomers to spot. While scientists have speculated since the 1960s that they might exist, the first definite sightings did not occur until 1995.

    The new research, by an international team of astronomers led by Koraljka Muzic from the University of Lisbon and Aleks Scholz from the University of St Andrews, used the European Southern Observatory’s Very Large Telescope to make infrared observations of distant dense star clusters where many new stars were being formed.

    ESO/VLT at Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level

    They counted as many brown dwarfs as they could, and came to the conclusion there were about half as many brown dwarfs as stars.

    Earlier studies of brown dwarfs – which focused mainly on regions nearer to Earth where stars are less dense, simply because they are easier to see if they are closer – concluded the substellar objects were much less common.

    The researchers estimate the number of brown dwarfs in the Milky Way is at least 25 billion and could be as high as 100 billion – but they note even this upper figure may be an underestimate, given the probability of there being many more failed stars too faint to see at all.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 7:06 am on July 3, 2017 Permalink | Reply
    Tags: Can faster-than-light particles explain dark matter, dark energy, and the Big Bang?, COSMOS, Tachyons

    From COSMOS: “Can faster-than-light particles explain dark matter, dark energy, and the Big Bang?” 

    Cosmos Magazine bloc

    COSMOS

    30 June 2017
    Robyn Arianrhod

    Tachyons may explain dark matter, dark energy and the black holes at the core of many galaxies. Andrzej Wojcicki / Science Photo Library / Getty.

    Here are six big questions about our universe that current physics can’t answer:

    What is dark energy, the mysterious energy that appears to be accelerating the expansion of the universe?
    What is dark matter, the invisible substance we can only detect by its gravitational effect on stars and galaxies?
    What caused inflation, the blindingly fast expansion of the universe immediately after the Big Bang?
    For that matter, what caused the Big Bang?
    Are there many possible Big Bangs or universes?
    Is there a telltale characteristic associated with the death of a universe?

    Despite the efforts of some of the world’s brightest brains, the Standard Model of particle physics – our current best theory of how the universe works at a fundamental level – has no solution to these stumpers.

    A compelling new theory claims to solve all six in a single sweep. The answer, according to a paper published in European Physical Journal C by Herb Fried from Brown University and Yves Gabellini from INLN-Université de Nice, may be a kind of particle called a tachyon.

    Tachyons are hypothetical particles that travel faster than light. According to Einstein’s special theory of relativity – and according to experiment so far – in our ‘real’ world, particles can never travel faster than light. Which is just as well: if they did, our ideas about cause and effect would be thrown out the window, because it would be possible to see an effect manifest before its cause.

    Although it is elegantly simple in conception, Fried and Gabellini’s model is controversial because it requires the existence of these tachyons: specifically electrically charged, fermionic tachyons and anti-tachyons, fluctuating as virtual particles in the quantum vacuum (QV). (The idea of virtual particles per se is nothing new: in the Standard Model, forces like electromagnetism are regarded as fields of virtual particles constantly ducking in and out of existence. Taken together, all these virtual particles make up the quantum vacuum.)

    But special relativity, though it bars faster-than-light travel for ordinary matter and photons, does not entirely preclude the existence of tachyons. As Fried explains, “In the presence of a huge-energy event, such as a supernova explosion or the Big Bang itself, perhaps these virtual tachyons can be torn out of the QV and sent flying into the real vacuum (RV) of our everyday world, as real particles that have yet to be measured.”

    If these tachyons do cross the speed-of-light boundary, the researchers believe that their high masses and small distances of interaction would introduce into our world an immeasurably small amount of ‘a-causality’.

    Fried and Gabellini arrived at their tachyon-based model while trying to find an explanation for the dark energy throughout space that appears to fuel the accelerating expansion of the universe. They first proposed that dark energy is produced by fluctuations of virtual pairs of electrons and positrons.

    However, this model ran into mathematical difficulties with unexpected imaginary numbers. In special relativity, the rest mass of a tachyon is an imaginary number, unlike the rest mass of ordinary particles. While the equations and imaginary numbers in the new model involve far more than simple masses, the idea is suggestive: Gabellini realized that by including fluctuating pairs of tachyons and anti-tachyons he and Fried could cancel and remove the unwanted imaginary numbers from their calculations. What is more, a huge bonus followed from this creative response to mathematical necessity: Gabellini and Fried realized that by adding their tachyons to the model, they could explain inflation too.
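
    The “imaginary rest mass” follows directly from the ordinary relativistic energy formula; the lines below are a textbook illustration, not a calculation from the Fried and Gabellini paper:

    E \;=\; \frac{m c^{2}}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
    v > c \;\Rightarrow\; \sqrt{1 - v^{2}/c^{2}} \;=\; i\,\sqrt{v^{2}/c^{2} - 1},

    so for the energy E to stay real and measurable when v exceeds c, the rest mass m must itself be imaginary, the two imaginary factors cancelling.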

    “This assumption [of fluctuating tachyon-anti-tachyon pairs] cannot be negated by any experimental test,” says Fried – and the model fits beautifully with existing experimental data on dark energy and inflation energy.

    Of course, both Fried and Gabellini recognize that many physicists are wary of theories based on such radical assumptions.

    But, taken as a whole, their model suggests the possibility of a unifying mechanism that gives rise not only to inflation and dark energy, but also to dark matter. Calculations suggest that these high-energy tachyons would re-absorb almost all of the photons they emit and hence be invisible.

    And there is more: as Fried explains, “If a very high-energy tachyon flung into the real vacuum (RV) were then to meet and annihilate with an anti-tachyon of the same species, this tiny quantum ‘explosion’ of energy could be the seed of another Big Bang, giving rise to a new universe. That ‘seed’ would be an energy density, at that spot of annihilation, which is so great that a ‘tear’ occurs in the surface separating the Quantum Vacuum from the RV, and the huge energies stored in the QV are able to blast their way into the RV, producing the Big Bang of a new universe. And over the course of multiple eons, this situation could happen multiple times.”

    This model – like any model of such non-replicable phenomena as the creation of the universe – may be simply characterized as a tantalizing set of speculations. Nevertheless, it not only fits with data on inflation and dark energy, but also offers a possible solution to yet another observed mystery.

    Within the last few years, astronomers have realized that the black hole at the centre of our Milky Way galaxy is ‘supermassive’, containing the mass of a million or more suns. And the same sort of supermassive black hole (SMBH) may be seen at the centres of many other galaxies in our current universe.

    Exactly how such objects form is still an open question. The energy stored in the QV is normally large enough to counteract the gravitational tendency of galaxies to collapse in on themselves. In the theory of Fried and Gabellini, however, when a new universe forms, a huge amount of the QV energy from the old universe escapes through the ‘tear’ made by the tachyon-anti-tachyon annihilation (the new Big Bang). Eventually, even faraway parts of the old universe will be affected, as the old universe’s QV energy leaks into the new universe like air escaping through a hole in a balloon. The decrease in this QV-energy buffer against gravity in the old universe suggests that as the old universe dies, many of its galaxies will form SMBHs in the new universe, each containing the mass of the old galaxy’s former suns and planets. Some of these new SMBHs may form the centres of new galaxies in the new universe.

    “This may not be a very pleasant picture,” says Fried, speaking of the possible fate of our own universe. “But it is at least scientifically consistent.”

    And in the weird, untestable world of Big Bangs and multiple universes, consistency may be the best we can hope for.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 8:59 am on June 29, 2017 Permalink | Reply
    Tags: A 100-dimensional quantum system from the entanglement of two subatomic particles, , COSMOS, Multi-coloured photons in 100 dimensions may make quantum computers stronger, , , Qutrits   

    From COSMOS: “Multi-coloured photons in 100 dimensions may make quantum computers stronger” 

    Cosmos Magazine bloc

    COSMOS

    29 June 2017
    Andrew Masterson

    1
    An illustration showing high-dimensional color-entangled photon states from a photonic chip, manipulated and transmitted via telecommunications systems.
    Michael Kues.

    By manipulating the frequency of entangled photons, researchers have found a way to make more stable tools for quantum computing from off-the-shelf equipment.

    Researchers using off-the-shelf telecommunications equipment have created a 100-dimensional quantum system from the entanglement of two subatomic particles.

    The system can be controlled and manipulated to perform high-level quantum gate operations – a critical capability for any viable quantum computer – the scientists report in the journal Nature.

    The team, led by Michael Kues of the University of Glasgow, effectively created a quantum photon generator on a chip. The tiny device uses a micro-ring resonator to generate entangled pairs of photons from a laser input.

    The entanglement is far from simple. Each photon is composed of a superposition of several different colours, all expressed simultaneously, giving the photon several dimensions. The expression of any individual colour – or frequency, if you like – is mirrored across the two entangled photons, regardless of the distance between them.
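
    In standard quantum notation (a generic textbook form, not an equation taken from the Nature paper), such a frequency-entangled pair of d-level photons is a maximally entangled state:

    |\psi\rangle = \frac{1}{\sqrt{d}} \sum_{k=1}^{d} |k\rangle_A \, |k\rangle_B

    Here |k\rangle_A and |k\rangle_B label the k-th colour, or frequency mode, of photons A and B: measuring photon A in colour k guarantees that photon B will be found in the same colour, however far apart the pair may be.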

    The complexity of the photon pairs represents a major step forward in manipulating quantum entities.

    Almost all research into quantum states, for the purpose of developing quantum computing, has to date focussed on qubits: artificially created subatomic particles that exist in a superposition of two possible states. (They are the quantum equivalent of standard computing ‘bits’, basic units that are capable only of being switched between 1 and 0, or yes/no, or on/off.)

    Kues and colleagues are instead working with qudits, which are essentially qubits with superpositions comprising three or more states.

    In 2016, Russian researchers showed that qudit-based quantum computing systems were inherently more stable than their two-dimensional predecessors.

    The Russians, however, were working with a subset of qudits called qutrits, which comprise a superposition of three possible states. Kues and his team upped the ante considerably, fashioning qudits comprising 10 possible states – one for each of the colours, or frequencies, of the photon – giving an entangled pair a minimum of 100 dimensions.
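
    As a quick check of that arithmetic, the following minimal Python sketch (an illustration only, not the researchers’ code) builds the maximally entangled state of two 10-level photons with NumPy and confirms that the joint state lives in a 100-dimensional space:

    import numpy as np

    def max_entangled_state(d):
        """Return the maximally entangled state vector of two d-level systems (qudits)."""
        state = np.zeros(d * d)
        basis = np.eye(d)                  # rows are the d basis states |0>, ..., |d-1>
        for k in range(d):
            # Add the |k>_A |k>_B term: both photons share the same colour k.
            state += np.kron(basis[k], basis[k])
        return state / np.sqrt(d)          # normalise so total probability is 1

    psi = max_entangled_state(10)                 # 10 colours per photon, as in the experiment
    print(psi.size)                               # 100: dimension of the joint two-photon space
    print(np.isclose(np.linalg.norm(psi), 1.0))   # True: the state is properly normalised

    The same arithmetic shows why the numbers grow so quickly: 95 colours per photon would already give a joint space of more than 9,000 dimensions.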

    And that’s just the beginning. Team member Roberto Morandotti of the University of Electronic Science and Technology of China, in Chengdu, suggests that further refinement will produce entangled two-qudit systems containing as many as 9000 dimensions, bringing a robustness and complexity to quantum computers that is at present unreachable.

    Kues adds that perhaps the most attractive feature of his team’s achievement is that it was done using commercially available components. This means that the strategy can be quickly and easily adapted by other researchers in the field, potentially ushering in a period of very rapid development.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 10:27 am on June 22, 2017 Permalink | Reply
    Tags: Aneuploidy, , , COSMOS, Immune system response points way to beating cancer,   

    From COSMOS: “Immune system response points way to beating cancer” 

    Cosmos Magazine bloc

    COSMOS

    22 June 2017
    Ariella Heffernan-Marks

    1
    A scanning electron micrograph of a cancer cell in the process of dividing.
    Steve Gschmeissner / Getty

    After so many decades of searching for a cure for cancer, new research suggests a solution might have been within our own natural immune system the whole time.

    Angelika Amon, a biologist at the Massachusetts Institute of Technology in Cambridge, Massachusetts, and her colleagues suggest exactly this in a study published in Developmental Cell. They report finding that cells with a high level of ‘chromosome mis-segregation’ – also known as ‘aneuploidy’ – elicit an innate immune response that results in their own cell-specific death.

    If this response could be replicated in cancer cells, the researchers say, it might provide a mechanism for their successful elimination.

    Aneuploidy occurs when chromosomes do not separate evenly during cellular division. This results in a chromosomal – and therefore genetic – imbalance in the cell. DNA damage, cellular stress, metabolic defects and alterations in gene dosage can also occur.
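
    To make the idea concrete, here is a purely illustrative Python sketch (a toy model with an arbitrary error rate, not anything taken from the study) in which each of a cell’s 23 chromosome pairs has a small chance of mis-segregating during division, leaving one daughter cell with an extra copy and the other missing one:

    import random

    def divide_cell(n_pairs=23, p_missegregation=0.01):
        """Simulate one cell division; return the chromosome counts of the two daughters."""
        daughter1 = daughter2 = 0
        for _ in range(n_pairs):
            daughter1 += 1                        # normally each daughter receives one copy
            daughter2 += 1
            if random.random() < p_missegregation:
                daughter1 += 1                    # mis-segregation: one daughter gains a copy...
                daughter2 -= 1                    # ...and the other loses it
        return daughter1, daughter2

    random.seed(1)
    divisions = [divide_cell() for _ in range(10_000)]
    aneuploid = sum(1 for d1, d2 in divisions if d1 != 23 or d2 != 23)
    print(f"{aneuploid / len(divisions):.1%} of divisions produced aneuploid daughters")

    With the arbitrary 1% per-pair error rate used here, roughly one division in five yields chromosomally unbalanced daughters; in real tissue the rate is far lower in healthy cells than in chromosomally unstable tumour cells.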

    Many diseases and disorders have consequently been associated with aneuploidy – including 70–90% of cancer tumours. It has been suggested that alterations in gene dosage can lead to changes in cancer-driver genes, resulting in the erratic proliferation patterns seen in cancer cells.

    Despite aneuploidy being confirmed as a hallmark of cancer, however, there is still debate over the exact link. Not all tumours show the same aneuploidy phenotype, and people with non-cancerous aneuploidy conditions such as Down syndrome tend to have a lower chance of developing cancer, according to the Koch Institute for Integrative Cancer Research, with which Amon is also associated.

    Most normal tissues do not demonstrate aneuploidy. Even mutations in chromosome-alignment proteins do not result in high numbers of aneuploid cells, according to research published in Molecules and Cells.

    Thus the question is: what happens to the aneuploid cells?

    A popular explanation has been a “p53-activated mechanism”, whereby the complex karyotype (or chromosomal arrangement) of an aneuploid cell activates the protein p53, which stimulates mitotic arrest and cell death.

    Amon and her team, however, discovered this was not the case; rather, arrest and death were the result of an innate immune system response. They induced chromosome mis-segregation by mutating chromosome-alignment proteins, tracked the cells with live-cell imaging and immunofluorescence, and recorded the time to mitotic arrest. The p53 protein was activated regardless of whether chromosomes had been mis-segregated.

    Amon and her colleagues investigated the level of DNA damage due to aneuploidy by analysing the protein gamma-H2AX, which is found only at sites of double-strand DNA breaks. Elevated levels were found in aneuploid cells, indicating significant DNA stress and damage due to chromosome mis-segregation. Immunofluorescence confirmed this was generating complex karyotypes. “These cells are in a downward spiral where they start out with a little bit of genomic mess,” Amon explains, “and it just gets worse.”

    Additional gene analysis also indicated these cells expressed innate immune-response factors at higher levels than normal cells. Exposing both normal and aneuploid cells to immune effectors confirmed that the aneuploid cells were selectively destroyed – most effectively by the natural killer cell line NK92. It is believed this could be in response to signals from DNA damage, cellular stress or irregularities in protein levels.

    So what does this mean with regards to finding the “cure to cancer”?

    Cancer cells have found a way to evade this cellular culling strategy. If researchers can find a way to re-activate the mechanism in aneuploid cancer cells, treatment could rely on NK92-mediated elimination instead of toxic and expensive radiotherapy.

    “We have really no understanding of how that works,” Amon concedes. “If we can figure this out, that probably has tremendous therapeutic implications, given the fact that virtually all cancers are aneuploid.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     