Tagged: The Conversation

  • richardmitnick 1:58 pm on January 16, 2018
    Tags: QUB-Queens University Belfast, The Conversation

    From QUB via The Conversation: “How we created a mini ‘gamma ray burst’ in the lab for the first time” 


    Queens University Belfast (QUB)

    The Conversation

    January 15, 2018
    GIANLUCA SARRI

    Gamma ray bursts, intense explosions of light, are the brightest events ever observed in the universe – lasting no longer than seconds or minutes. Some are so luminous that they can be observed with the naked eye, such as the burst “GRB 080319B” discovered by NASA’s Swift GRB Explorer mission on March 19, 2008.

    NASA Neil Gehrels Swift Observatory

    But despite the fact that they are so intense, scientists don’t really know what causes gamma ray bursts. There are even people who believe some of them might be messages sent from advanced alien civilisations. Now we have for the first time managed to recreate a mini version of a gamma ray burst in the laboratory – opening up a whole new way to investigate their properties. Our research is published in Physical Review Letters.

    One idea for the origin of gamma ray bursts [Science] is that they are somehow produced during the emission of jets of particles released by massive astrophysical objects, such as black holes. This makes gamma ray bursts extremely interesting to astrophysicists – their detailed study can unveil some key properties of the black holes they originate from.

    The beams released by the black holes would be mostly composed of electrons and their “antimatter” companions, the positrons – all particles have antimatter counterparts that are identical to them except for an opposite charge. These beams must have strong, self-generated magnetic fields. The rotation of these particles around the fields gives off powerful bursts of gamma ray radiation. Or, at least, this is what our theories predict [MNRAS]. But we don’t actually know how the fields would be generated.

    Unfortunately, there are a couple of problems in studying these bursts. Not only do they last for short periods of time but, most problematically, they originate in distant galaxies, sometimes even a billion light years from Earth (imagine a one followed by 25 zeroes – that is roughly what one billion light years is in metres).
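    A quick back-of-envelope check of that figure (an illustrative sketch added here, using standard constants; not from the original article):

```python
# Back-of-envelope check: one billion light years expressed in metres.
SPEED_OF_LIGHT = 299_792_458            # m/s, exact by definition
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # seconds in a Julian year

light_year_m = SPEED_OF_LIGHT * SECONDS_PER_YEAR   # ~9.46e15 m
billion_ly_m = 1e9 * light_year_m                  # ~9.46e24 m

print(f"1 billion light years ≈ {billion_ly_m:.2e} m")
# -> about 9.46e24 m, i.e. roughly a one followed by 25 zeroes
```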

    That means you rely on looking at something unbelievably far away that happens at random and lasts only for a few seconds. It is a bit like trying to understand what a candle is made of when all you get are occasional glimpses of candles being lit thousands of kilometres away.

    World’s most powerful laser

    It has recently been proposed that the best way to work out how gamma ray bursts are produced would be to mimic them in small-scale reproductions in the laboratory – creating a little source of these electron-positron beams and looking at how they evolve when left on their own. Our group and our collaborators from the US, France, UK and Sweden recently succeeded in creating the first small-scale replica of this phenomenon by using one of the most intense lasers on Earth, the Gemini laser, hosted by the Rutherford Appleton Laboratory in the UK.

    The Gemini laser, hosted by the Rutherford Appleton Laboratory in the UK.

    How intense is the most intense laser on Earth? Take all the solar power that hits the whole Earth and squeeze it into a spot a few microns across (a fraction of the thickness of a human hair) and you have got the intensity of a typical laser shot in Gemini. Shooting this laser onto a complex target, we were able to release ultra-fast and dense copies of these astrophysical jets and make ultra-fast movies of how they behave. The scaling down of these experiments is dramatic: take a real jet that can extend for thousands of light years and compress it down to a few millimetres.
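    The analogy is easy to put into rough numbers (an order-of-magnitude sketch; the solar constant is standard, the few-micron spot size is an assumption, and the analogy itself is only meant loosely):

```python
import math

# Order-of-magnitude version of "all the sunlight on Earth, squeezed into microns".
SOLAR_CONSTANT = 1361.0    # W/m^2 reaching Earth from the sun
EARTH_RADIUS = 6.371e6     # m
SPOT_RADIUS = 2.5e-6       # m, an assumed "few micron" focal spot (illustrative)

total_solar_power = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2   # ~1.7e17 W
spot_area = math.pi * SPOT_RADIUS**2

intensity = total_solar_power / spot_area   # W/m^2
print(f"intensity ≈ {intensity:.1e} W/m^2 ≈ {intensity / 1e4:.1e} W/cm^2")
# An enormous number, illustrating why such focused laser shots are so extreme.
```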

    In our experiment, we were able to observe, for the first time, some of the key phenomena that play a major role in the generation of gamma ray bursts, such as the self-generation of long-lived magnetic fields. The observations confirm some major theoretical predictions of the strength and distribution of these fields. In short, our experiment independently confirms that the models currently used to understand gamma ray bursts are on the right track.

    The experiment is not only important for studying gamma ray bursts. Matter made only of electrons and positrons is an extremely peculiar state of matter. Normal matter on Earth is predominantly made of atoms: a heavy positive nucleus surrounded by clouds of light and negative electrons.

    Artist’s impression of a gamma ray burst. NASA [no additional credit given for facility or artist].

    Due to the incredible difference in weight between these two components (the lightest nucleus weighs 1,836 times as much as the electron), almost all the phenomena we experience in our everyday life come from the dynamics of electrons, which are much quicker in responding to any external input (light, other particles, magnetic fields, you name it) than nuclei. But in an electron-positron beam, both particles have exactly the same mass, meaning that this disparity in reaction times is completely obliterated. This has a number of fascinating consequences. For example, sound would not exist in an electron-positron world.

    So far so good, but why should we care so much about events that are so distant? There are several reasons. First, understanding how gamma ray bursts are formed will allow us to understand a lot more about black holes, and thus open a big window on how our universe was born and how it will evolve.

    But there is a more subtle reason. SETI – Search for Extra-Terrestrial Intelligence – looks for messages from alien civilisations by trying to capture electromagnetic signals from space that cannot be explained naturally (it focuses mainly on radio waves, but gamma ray bursts are associated with such radiation too).

    Breakthrough Listen Project


    Lick Automated Planet Finder telescope, Mount Hamilton, CA, USA



    GBO radio telescope, Green Bank, West Virginia, USA


    CSIRO/Parkes Observatory, located 20 kilometres north of the town of Parkes, New South Wales, Australia

    U Manchester Jodrell Bank Lovell Telescope


    SETI@home, BOINC project at UC Berkeley Space Science Lab

    Laser SETI, the future of SETI Institute research

    Of course, if you point your detector at the sky to look for emissions from space, you do get an awful lot of different signals. If you really want to isolate intelligent transmissions, you first need to make sure all the natural emissions are perfectly known so that they can be excluded. Our study helps towards understanding black hole and pulsar emissions, so that, whenever we detect anything similar, we know that it is not coming from an alien civilisation.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    QUB campus

    An international institution

    Queen’s is in the top one per cent of global universities.

    With more than 23,000 students and 3,700 staff, it is a dynamic and diverse institution, a magnet for inward investment, a major employer and investor, a patron of the arts and a global player in areas ranging from cancer studies to sustainability, and from pharmaceuticals to creative writing.
    World-leading research

    Queen’s is a member of the Russell Group of 24 leading UK research-intensive universities, alongside Oxford, Cambridge and Imperial College London.

    In the UK top ten for research intensity

    The Research Excellence Framework (REF) 2014 results placed Queen’s joint 8th in the UK for research intensity, with over 75 per cent of Queen’s researchers undertaking world-class or internationally leading research.

    The University also has 14 subject areas ranked within the UK’s top 20 and 76 per cent of its research classified in the top two categories of world leading and internationally excellent.

    This validates Queen’s as a University with world-class researchers carrying out world-class or internationally leading research.

    Globally recognised education

    The University has won the Queen’s Anniversary Prize for Higher and Further Education on five occasions – for Northern Ireland’s Comprehensive Cancer Services programme and for world-class achievement in green chemistry, environmental research, palaeoecology and law.

     
  • richardmitnick 5:09 pm on January 8, 2018
    Tags: Piercing the mystery of the cosmic origins of gold, The Conversation

    From The Conversation: “Piercing the mystery of the cosmic origins of gold” 

    The Conversation

    December 17, 2017
    Jérôme Margueron

    Where does gold, the precious metal coveted by mortals through the ages, come from? How, where and when was it produced? Last August, a single astrophysical observation finally gave us the key to answer these questions. The results of this research were published on October 16, 2017 [Physical Review Letters, The Astrophysical Journal Letters and Nature].

    Gold pre-exists the formation of Earth: this is what differentiates it from, for example, diamond. However valuable it may be, this precious stone is born out of mere carbon, whose atomic structure is modified by enormous pressure deep inside the Earth. Gold is totally different – the strongest forces in the Earth’s mantle are unable to change the composition of its atomic nucleus. Too bad for the alchemists who dreamed of transforming lead into gold.

    Yet there is gold on Earth, both in its deep core, where it has migrated together with heavy elements such as lead or silver, and in the planet’s crust, which is where we extract this precious metal. While the gold in the core was already there at the formation of our planet, that in the crust is mostly extraterrestrial and arrived after the formation of Earth. It was brought by a gigantic meteor shower that bombarded the Earth (and the Moon) about 3.8 billion years ago.

    Formation of heavy elements

    How is gold produced in the universe? The elements heavier than iron, including gold, are partially produced by the s process during the late evolutionary phases of stars. It is a slow process (s stands for slow) that operates in the core of what are referred to as AGB stars – those of low and intermediate mass (less than 10 solar masses) – and can produce chemical elements up to polonium. The other half of the heavy elements is produced by the r process (r stands for rapid). But the site where this nucleosynthesis process takes place has long remained a mystery.

    To understand the discovery enabled by the August 17, 2017, observation, we need to understand the scientific status quo that existed beforehand. For about 50 years, the dominant assumption among the scientific community was that the r process took place during the final explosion of massive stars (specialists speak of a core collapse supernova). Indeed, the formation of light elements (those up to iron) involves nuclear reactions that ensure the stability of the stars by counteracting the contraction induced by gravity. For heavier elements – those from iron and beyond – it is necessary to add energy or to take very specific paths, such as the s and r processes. Researchers believed that the r process could occur in matter ejected by the explosion of massive stars, capturing a part of the released energy and participating in the dissemination of material in the interstellar medium.

    Despite the simplicity of this explanation, numerical modelling of supernovae has proved extremely complicated. After 50 years of effort, researchers have only just begun to understand the mechanism. Unfortunately, most of these simulations do not produce the physical conditions required for the r process.

    These conditions are, however, quite simple: you need a lot of neutrons and a really hot environment – more precisely, nuclei must capture neutrons faster than they can beta-decay.

    Fusion of neutron stars

    In the last decade or so, some researchers have begun to seriously investigate an alternative scenario for the heavy-element production site. They focused their attention on neutron stars. As befits their name, they constitute a gigantic reservoir of neutrons, which are released occasionally. The strongest of these releases occurs when two neutron stars merge in a binary system – an event also called a kilonova. There are several signatures of this phenomenon that were luckily seen on August 17: a gravitational-wave emission culminating a fraction of a second before the final fusion of the stars, and a burst of highly energetic light (known as a gamma-ray burst) emitted by a jet of matter approaching the speed of light. Although these bursts have been observed regularly for several decades, it is only since 2015 that gravitational waves have been detectable on Earth, thanks to the Virgo and LIGO interferometers.

    August 17 will remain a major date for the scientific community. Indeed, it marks the first simultaneous detection of the arrival of gravitational waves – whose origin in the sky was fairly well identified – and a gamma-ray burst, whose origin was also fairly well localized and coincided with the first one. Gamma-ray burst emissions are focused in a narrow cone, and the astronomers’ lucky break was that this one was emitted in the Earth’s direction.

    In the following days, telescopes continuously analysed the light from this kilonova and found confirmation of the production of elements heavier than iron. They were also able to estimate the frequency of the phenomenon and the amount of material ejected. These estimates are consistent with the average abundance of the elements observed in our galaxy.

    From UCSC:

    UC Santa Cruz

    A UC Santa Cruz special report

    Tim Stephens

    Astronomer Ryan Foley says “observing the explosion of two colliding neutron stars” [see https://sciencesprings.wordpress.com/2017/10/17/from-ucsc-first-observations-of-merging-neutron-stars-mark-a-new-era-in-astronomy ]–the first visible event ever linked to gravitational waves–is probably the biggest discovery he’ll make in his lifetime. That’s saying a lot for a young assistant professor who presumably has a long career still ahead of him.

    The first optical image of a gravitational wave source was taken by a team led by Ryan Foley of UC Santa Cruz using the Swope Telescope at the Carnegie Institution’s Las Campanas Observatory in Chile. This image of Swope Supernova Survey 2017a (SSS17a, indicated by arrow) shows the light emitted from the cataclysmic merger of two neutron stars. (Image credit: 1M2H Team/UC Santa Cruz & Carnegie Observatories/Ryan Foley)

    Carnegie Institution Swope telescope at Las Campanas Observatory, Chile, about 100 kilometres (62 mi) northeast of the city of La Serena, near the north end of a 7 km (4.3 mi) long mountain ridge at over 2,500 m (8,200 ft) elevation.

    A neutron star forms when a massive star runs out of fuel and explodes as a supernova, throwing off its outer layers and leaving behind a collapsed core composed almost entirely of neutrons. Neutrons are the uncharged particles in the nucleus of an atom, where they are bound together with positively charged protons. In a neutron star, they are packed together just as densely as in the nucleus of an atom, resulting in an object with one to three times the mass of our sun but only about 12 miles wide.

    “Basically, a neutron star is a gigantic atom with the mass of the sun and the size of a city like San Francisco or Manhattan,” said Foley, an assistant professor of astronomy and astrophysics at UC Santa Cruz.

    These objects are so dense that a cup of neutron star material would weigh as much as Mount Everest, and a teaspoon would weigh a billion tons. It’s as dense as matter can get without collapsing into a black hole.
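    Both comparisons follow from simple arithmetic (a sketch with assumed round numbers for a typical neutron star; these are not figures quoted in the article):

```python
import math

# Rough density of a neutron star, and the mass of a cup of its material.
SOLAR_MASS = 1.989e30          # kg
mass = 1.4 * SOLAR_MASS        # kg, an assumed typical neutron star mass
radius = 6 * 1609.34           # m, half of "about 12 miles wide"

volume = (4.0 / 3.0) * math.pi * radius**3
density = mass / volume                      # ~7e17 kg/m^3

cup, teaspoon = 2.5e-4, 5e-6                 # m^3 (250 ml and 5 ml)
print(f"density  ≈ {density:.1e} kg/m^3")
print(f"cup      ≈ {density * cup:.1e} kg   (Everest-scale)")
print(f"teaspoon ≈ {density * teaspoon:.1e} kg  (billions of tons)")
```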

    THE MERGER

    Like other stars, neutron stars sometimes occur in pairs, orbiting each other and gradually spiraling inward. Eventually, they come together in a catastrophic merger that distorts space and time (creating gravitational waves) and emits a brilliant flare of electromagnetic radiation, including visible, infrared, and ultraviolet light, x-rays, gamma rays, and radio waves. Merging black holes also create gravitational waves, but there’s nothing to be seen because no light can escape from a black hole.

    Foley’s team was the first to observe the light from a neutron star merger that took place on August 17, 2017, and was detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO).


    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    Skymap showing how adding Virgo to LIGO helps reduce the size of the likely source region in the sky. Credit: Giuseppe Greco (Virgo Urbino group)

    Now, for the first time, scientists can study both the gravitational waves (ripples in the fabric of space-time), and the radiation emitted from the violent merger of the densest objects in the universe.

    The UC Santa Cruz team found SSS17a by comparing a new image of the galaxy NGC 4993 (right) with images taken four months earlier by the Hubble Space Telescope (left). The arrows indicate where SSS17a was absent from the Hubble image and visible in the new image from the Swope Telescope. (Image credits: Left, Hubble/STScI; Right, 1M2H Team/UC Santa Cruz & Carnegie Observatories/Ryan Foley)

    It’s that combination of data, and all that can be learned from it, that has astronomers and physicists so excited. The observations of this one event are keeping hundreds of scientists busy exploring its implications for everything from fundamental physics and cosmology to the origins of gold and other heavy elements.


    A small team of UC Santa Cruz astronomers was the first to observe light from two neutron stars merging in August. The implications are huge.

    ALL THE GOLD IN THE UNIVERSE

    It turns out that the origins of the heaviest elements, such as gold, platinum and uranium—pretty much everything heavier than iron—have been an enduring conundrum. All the lighter elements have well-explained origins in the nuclear fusion reactions that make stars shine or in the explosions of stars (supernovae). Initially, astrophysicists thought supernovae could account for the heavy elements, too, but there have always been problems with that theory, says Enrico Ramirez-Ruiz, professor and chair of astronomy and astrophysics at UC Santa Cruz.

    The violent merger of two neutron stars is thought to involve three main energy-transfer processes, shown in this diagram, that give rise to the different types of radiation seen by astronomers, including a gamma-ray burst and a kilonova explosion seen in visible light. (Image credit: Murguia-Berthier et al., Science)

    A theoretical astrophysicist, Ramirez-Ruiz has been a leading proponent of the idea that neutron star mergers are the source of the heavy elements. Building a heavy atomic nucleus means adding a lot of neutrons to it. This process is called rapid neutron capture, or the r-process, and it requires some of the most extreme conditions in the universe: extreme temperatures, extreme densities, and a massive flow of neutrons. A neutron star merger fits the bill.

    Ramirez-Ruiz and other theoretical astrophysicists use supercomputers to simulate the physics of extreme events like supernovae and neutron star mergers. This work always goes hand in hand with observational astronomy. Theoretical predictions tell observers what signatures to look for to identify these events, and observations tell theorists if they got the physics right or if they need to tweak their models. The observations by Foley and others of the neutron star merger now known as SSS17a are giving theorists, for the first time, a full set of observational data to compare with their theoretical models.

    According to Ramirez-Ruiz, the observations support the theory that neutron star mergers can account for all the gold in the universe, as well as about half of all the other elements heavier than iron.

    RIPPLES IN THE FABRIC OF SPACE-TIME

    Einstein predicted the existence of gravitational waves in 1916 in his general theory of relativity, but until recently they were impossible to observe. LIGO’s extraordinarily sensitive detectors achieved the first direct detection of gravitational waves, from the collision of two black holes, in 2015. Gravitational waves are created by any massive accelerating object, but the strongest waves (and the only ones we have any chance of detecting) are produced by the most extreme phenomena.

    Two massive compact objects—such as black holes, neutron stars, or white dwarfs—orbiting around each other faster and faster as they draw closer together are just the kind of system that should radiate strong gravitational waves. Like ripples spreading in a pond, the waves get smaller as they spread outward from the source. By the time they reached Earth, the ripples detected by LIGO caused distortions of space-time thousands of times smaller than the nucleus of an atom.
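    To see just how small that is, here is an illustrative calculation (the strain value is a typical published order of magnitude for detected events, not a number from this article):

```python
# How small is a LIGO-scale distortion compared to an atomic nucleus?
strain = 1e-21          # dimensionless strain h, typical detected order of magnitude
arm_length = 4.0e3      # m, length of a LIGO interferometer arm

delta_L = strain * arm_length    # ~4e-18 m change in arm length
heavy_nucleus = 1.4e-14          # m, rough diameter of a heavy atomic nucleus

print(f"arm stretch ≈ {delta_L:.1e} m")
print(f"~{heavy_nucleus / delta_L:.0f}x smaller than a heavy nucleus")  # thousands
```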

    The rarefied signals recorded by LIGO’s detectors not only prove the existence of gravitational waves, they also provide crucial information about the events that produced them. Combined with the telescope observations of the neutron star merger, it’s an incredibly rich set of data.

    LIGO can tell scientists the masses of the merging objects and the mass of the new object created in the merger, which reveals whether the merger produced another neutron star or a more massive object that collapsed into a black hole. To calculate how much mass was ejected in the explosion, and how much mass was converted to energy, scientists also need the optical observations from telescopes. That’s especially important for quantifying the nucleosynthesis of heavy elements during the merger.

    LIGO can also provide a measure of the distance to the merging neutron stars, which can now be compared with the distance measurement based on the light from the merger. That’s important to cosmologists studying the expansion of the universe, because the two measurements are based on different fundamental forces (gravity and electromagnetism), giving completely independent results.
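    At its simplest, that “standard siren” comparison boils down to a single division (toy numbers of the kind reported for this event; the published analysis is far more careful):

```python
# Toy "standard siren": Hubble constant from GW distance plus host-galaxy velocity.
distance_mpc = 40.0        # Mpc, illustrative GW-inferred distance
velocity_km_s = 3000.0     # km/s, illustrative recession velocity of the host galaxy

H0 = velocity_km_s / distance_mpc    # km/s/Mpc
print(f"H0 ≈ {H0:.0f} km/s/Mpc")     # ~75, the right ballpark for this method
```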

    “This is a huge step forward in astronomy,” Foley said. “Having done it once, we now know we can do it again, and it opens up a whole new world of what we call ‘multi-messenger’ astronomy, viewing the universe through different fundamental forces.”

    IN THIS REPORT

    Neutron stars
    A team from UC Santa Cruz was the first to observe the light from a neutron star merger that took place on August 17, 2017 and was detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO).

    Graduate students and postdoctoral scholars at UC Santa Cruz played key roles in the dramatic discovery and analysis of colliding neutron stars. Astronomer Ryan Foley leads a team of young graduate students and postdoctoral scholars who have pulled off an extraordinary coup. Following up on the detection of gravitational waves from the violent merger of two neutron stars, Foley’s team was the first to find the source with a telescope and take images of the light from this cataclysmic event. In so doing, they beat much larger and more senior teams with much more powerful telescopes at their disposal.

    “We’re sort of the scrappy young upstarts who worked hard and got the job done,” said Foley, an untenured assistant professor of astronomy and astrophysics at UC Santa Cruz.

    David Coulter, graduate student

    The discovery on August 17, 2017, has been a scientific bonanza, yielding over 100 scientific papers from numerous teams investigating the new observations. Foley’s team is publishing seven papers, each of which has a graduate student or postdoc as the first author.

    “I think it speaks to Ryan’s generosity and how seriously he takes his role as a mentor that he is not putting himself front and center, but has gone out of his way to highlight the roles played by his students and postdocs,” said Enrico Ramirez-Ruiz, professor and chair of astronomy and astrophysics at UC Santa Cruz and the most senior member of Foley’s team.

    “Our team is by far the youngest and most diverse of all of the teams involved in the follow-up observations of this neutron star merger,” Ramirez-Ruiz added.

    Charles Kilpatrick, postdoctoral scholar

    Charles Kilpatrick, a 29-year-old postdoctoral scholar, was the first person in the world to see an image of the light from colliding neutron stars. He was sitting in an office at UC Santa Cruz, working with first-year graduate student Cesar Rojas-Bravo to process image data as it came in from the Swope Telescope in Chile. To see if the Swope images showed anything new, he had also downloaded “template” images taken in the past of the same galaxies the team was searching.

    Ariadna Murguia-Berthier, graduate student

    “In one image I saw something there that was not in the template image,” Kilpatrick said. “It took me a while to realize the ramifications of what I was seeing. This opens up so much new science, it really marks the beginning of something that will continue to be studied for years down the road.”

    At the time, Foley and most of the others in his team were at a meeting in Copenhagen. When they found out about the gravitational wave detection, they quickly got together to plan their search strategy. From Copenhagen, the team sent instructions to the telescope operators in Chile telling them where to point the telescope. Graduate student David Coulter played a key role in prioritizing the galaxies they would search to find the source, and he is the first author of the discovery paper published in Science.

    Matthew Siebert, graduate student

    “It’s still a little unreal when I think about what we’ve accomplished,” Coulter said. “For me, despite the euphoria of recognizing what we were seeing at the moment, we were all incredibly focused on the task at hand. Only afterward did the significance really sink in.”

    Just as Coulter finished writing his paper about the discovery, his wife went into labor, giving birth to a baby girl on September 30. “I was doing revisions to the paper at the hospital,” he said.

    It’s been a wild ride for the whole team, first in the rush to find the source, and then under pressure to quickly analyze the data and write up their findings for publication. “It was really an all-hands-on-deck moment when we all had to pull together and work quickly to exploit this opportunity,” said Kilpatrick, who is first author of a paper comparing the observations with theoretical models.

    César Rojas-Bravo, graduate student

    Graduate student Matthew Siebert led a paper analyzing the unusual properties of the light emitted by the merger. Astronomers have observed thousands of supernovae (exploding stars) and other “transients” that appear suddenly in the sky and then fade away, but never before have they observed anything that looks like this neutron star merger. Siebert’s paper concluded that there is only a one in 100,000 chance that the transient they observed is not related to the gravitational waves.

    Ariadna Murguia-Berthier, a graduate student working with Ramirez-Ruiz, is first author of a paper synthesizing data from a range of sources to provide a coherent theoretical framework for understanding the observations.

    Another aspect of the discovery of great interest to astronomers is the nature of the galaxy and the galactic environment in which the merger occurred. Postdoctoral scholar Yen-Chen Pan led a paper analyzing the properties of the host galaxy. Enia Xhakaj, a new graduate student who had just joined the group in August, got the opportunity to help with the analysis and be a coauthor on the paper.

    Yen-Chen Pan, postdoctoral scholar

    “There are so many interesting things to learn from this,” Foley said. “It’s a great experience for all of us to be part of such an important discovery.”

    Enia Xhakaj, graduate student

    IN THIS REPORT

    Scientific Papers from the 1M2H Collaboration

    Coulter et al., Science, Swope Supernova Survey 2017a (SSS17a), the Optical Counterpart to a Gravitational Wave Source

    Drout et al., Science, Light Curves of the Neutron Star Merger GW170817/SSS17a: Implications for R-Process Nucleosynthesis

    Shappee et al., Science, Early Spectra of the Gravitational Wave Source GW170817: Evolution of a Neutron Star Merger

    Kilpatrick et al., Science, Electromagnetic Evidence that SSS17a is the Result of a Binary Neutron Star Merger

    Siebert et al., ApJL, The Unprecedented Properties of the First Electromagnetic Counterpart to a Gravitational-wave Source

    Pan et al., ApJL, The Old Host-galaxy Environment of SSS17a, the First Electromagnetic Counterpart to a Gravitational-wave Source

    Murguia-Berthier et al., ApJL, A Neutron Star Binary Merger Model for GW170817/GRB170817a/SSS17a

    Kasen et al., Nature, Origin of the heavy elements in binary neutron star mergers from a gravitational wave event

    Abbott et al., Nature, A gravitational-wave standard siren measurement of the Hubble constant (The LIGO Scientific Collaboration and The Virgo Collaboration, The 1M2H Collaboration, The Dark Energy Camera GW-EM Collaboration and the DES Collaboration, The DLT40 Collaboration, The Las Cumbres Observatory Collaboration, The VINROUGE Collaboration & The MASTER Collaboration)

    Abbott et al., ApJL, Multi-messenger Observations of a Binary Neutron Star Merger

    PRESS RELEASES AND MEDIA COVERAGE


    Watch Ryan Foley tell the story of how his team found the neutron star merger in the video below (2.5 hours).

    Press releases:

    UC Santa Cruz Press Release

    UC Berkeley Press Release

    Carnegie Institution of Science Press Release

    LIGO Collaboration Press Release

    National Science Foundation Press Release

    Media coverage:

    The Atlantic – The Slack Chat That Changed Astronomy

    Washington Post – Scientists detect gravitational waves from a new kind of nova, sparking a new era in astronomy

    New York Times – LIGO Detects Fierce Collision of Neutron Stars for the First Time

    Science – Merging neutron stars generate gravitational waves and a celestial light show

    CBS News – Gravitational waves – and light – seen in neutron star collision

    CBC News – Astronomers see source of gravitational waves for 1st time

    San Jose Mercury News – A bright light seen across the universe, proving Einstein right

    Popular Science – Gravitational waves just showed us something even cooler than black holes

    Scientific American – Gravitational Wave Astronomers Hit Mother Lode

    Nature – Colliding stars spark rush to solve cosmic mysteries

    National Geographic – In a First, Gravitational Waves Linked to Neutron Star Crash

    Associated Press – Astronomers witness huge cosmic crash, find origins of gold

    Science News – Neutron star collision showers the universe with a wealth of discoveries

    UCSC press release
    First observations of merging neutron stars mark a new era in astronomy

    Credits

    Writing: Tim Stephens
    Video: Nick Gonzales
    Photos: Carolyn Lagattuta
    Header image: Illustration by Robin Dienel courtesy of the Carnegie Institution for Science
    Design and development: Rob Knight
    Project managers: Sherry Main, Scott Hernandez-Jason, Tim Stephens

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M. Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7,200 feet

    Gemini South telescope on Cerro Pachón near La Serena, Chile

    Noted in the video but not in the article:

    NASA/Chandra Telescope

    NASA/SWIFT Telescope

    NRAO/Karl V Jansky VLA, on the Plains of San Agustin fifty miles west of Socorro, NM, USA

    Prompt telescope CTIO Chile

    NASA NuSTAR X-ray telescope

    See the full article here.

    In a single observation, the hypothesis that prevailed until now – of an r process occurring exclusively during supernovae – has been seriously called into question, and it is now certain that the r process also takes place in kilonovae. The respective contributions of supernovae and kilonovae to the nucleosynthesis of heavy elements remain to be determined, and this will be done as data accumulate from future observations. The August 17 observation alone has already produced a great scientific advance in the global understanding of the origin of heavy elements, including gold.


    This NASA animation is an artist’s view, in accelerated time, of the first nine days of a kilonova (the merging of two neutron stars) similar to that observed on August 17, 2017 (GW170817). In the approach phase of the two stars, the gravitational waves emitted are coloured pale blue; after the merger, a jet near the speed of light is emitted (in orange), itself generating a gamma-ray burst (in magenta). The material ejected from the kilonova produces light that is initially ultraviolet (violet), then white in the optical, and finally infrared (red). The jet continues its expansion, emitting light in the X-ray range (blue).

    A new window on the Universe

    A new window on the universe has just been opened, like the day Galileo first pointed a telescope at the sky. The Virgo and LIGO interferometers now make it possible to “hear” the most violent phenomena of the universe, and immense perspectives have opened up for astronomers, astrophysicists, particle physicists and nuclear physicists. This scientific achievement was only possible thanks to the fruitful collaboration between highly supportive nations, in particular the United States, Germany, France and Italy. As an example, there is only one laboratory in the world capable of reaching the required precision for the mirrors reflecting the lasers: LMA in Lyon, France. New interferometers are under development in Japan and India, and this list will surely grow given the huge discoveries expected in the future.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 3:49 pm on October 26, 2017
    Tags: The Conversation

    From The Conversation: “Dark matter: The mystery substance physics still can’t identify that makes up the majority of our universe” 


    Fermilab

    The Conversation

    10.25.17
    Dan Hooper

    Astronomers map dark matter indirectly, via its gravitational pull on other objects. NASA, ESA, and D. Coe (NASA JPL/Caltech and STScI), CC BY

    The past few decades have ushered in an amazing era in the science of cosmology. A diverse array of high-precision measurements has allowed us to reconstruct our universe’s history in remarkable detail.

    And when we compare different measurements – of the expansion rate of the universe, the patterns of light released in the formation of the first atoms, the distributions in space of galaxies and galaxy clusters and the abundances of various chemical species – we find that they all tell the same story, and all support the same series of events.

    This line of research has, frankly, been more successful than I think we had any right to have hoped. We know more about the origin and history of our universe today than almost anyone a few decades ago would have guessed that we would learn in such a short time.

    But despite these very considerable successes, there remains much more to be learned. And in some ways, the discoveries made in recent decades have raised as many new questions as they have answered.

    One of the most vexing gets at the heart of what our universe is actually made of. Cosmological observations have determined the average density of matter in our universe to very high precision. But this density turns out to be much greater than can be accounted for with ordinary atoms.

    After decades of measurements and debate, we are now confident that the overwhelming majority of our universe’s matter – about 84 percent – is not made up of atoms, or of any other known substance. Although we can feel the gravitational pull of this other matter, and clearly tell that it’s there, we simply do not know what it is. This mysterious stuff is invisible, or at least nearly so. For lack of a better name, we call it “dark matter.” But naming something is very different from understanding it.
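    That 84 percent figure follows directly from the measured cosmic matter densities (a quick check using rounded Planck-era density parameters; the numbers below are standard values, not the author’s):

```python
# Fraction of all matter that is dark, from rounded cosmological density parameters.
omega_b_h2 = 0.022   # baryonic (ordinary) matter density parameter times h^2
omega_c_h2 = 0.120   # cold dark matter density parameter times h^2

dark_fraction = omega_c_h2 / (omega_b_h2 + omega_c_h2)
print(f"dark matter fraction of all matter ≈ {dark_fraction:.2f}")
# ≈ 0.84-0.85, consistent with the figure quoted above
```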

    For almost as long as we’ve known that dark matter exists, physicists and astronomers have been devising ways to try to learn what it’s made of. They’ve built ultra-sensitive detectors, deployed in deep underground mines, in an effort to measure the gentle impacts of individual dark matter particles colliding with atoms.

    They’ve built exotic telescopes – sensitive not to optical light but to less familiar gamma rays, cosmic rays and neutrinos – to search for the high-energy radiation that is thought to be generated through the interactions of dark matter particles.

    And we have searched for signs of dark matter using incredible machines which accelerate beams of particles – typically protons or electrons – up to the highest speeds possible, and then smash them into one another in an effort to convert their energy into matter. The idea is that these collisions could create new and exotic substances, perhaps including the kinds of particles that make up the dark matter of our universe.

    As recently as a decade ago, most cosmologists – including myself – were reasonably confident that we would soon begin to solve the puzzle of dark matter. After all, there was an ambitious experimental program on the horizon, which we anticipated would enable us to identify the nature of this substance and to begin to measure its properties. This program included the world’s most powerful particle accelerator – the Large Hadron Collider – as well as an array of other new experiments and powerful telescopes.

    LHC

    CERN/LHC Map

    CERN LHC Tunnel

    CERN LHC particles

    Experiments at CERN are trying to zero in on dark matter – but so far no dice. CERN, CC BY-ND

    But things did not play out the way that we expected them to. Although these experiments and observations have been carried out as well as or better than we could have hoped, the discoveries did not come.

    Over the past 15 years, for example, experiments designed to detect individual particles of dark matter have become a million times more sensitive, and yet no signs of these elusive particles have appeared. And although the Large Hadron Collider has, by all technical standards, performed beautifully, no new particles or other phenomena have been discovered, with the exception of the Higgs boson.

    At Fermilab, the Cryogenic Dark Matter Search uses towers of disks made from silicon and germanium to search for particle interactions from dark matter. Reidar Hahn/Fermilab, CC BY

    The stubborn elusiveness of dark matter has left many scientists both surprised and confused. We had what seemed like very good reasons to expect particles of dark matter to be discovered by now. And yet the hunt continues, and the mystery deepens.

    In many ways, we have more open questions now than we did a decade or two ago. And at times, it can seem that the more precisely we measure our universe, the less we understand it. Throughout the second half of the 20th century, theoretical particle physicists were often very successful at predicting the kinds of particles that would be discovered as accelerators became increasingly powerful. It was a truly impressive run.

    But our prescience seems to have come to an end – the long-predicted particles associated with our favorite and most well-motivated theories have stubbornly refused to appear. Perhaps the discoveries of such particles are right around the corner, and our confidence will soon be restored. But right now, there seems to be little support for such optimism.

    In response, droves of physicists are going back to their chalkboards, revisiting and revising their assumptions. With bruised egos and a bit more humility, we are desperately attempting to find a new way to make sense of our world.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 4:14 pm on July 16, 2017
    Tags: Comparing competitiveness, Is America’s digital leadership on the wane?, Just five companies – Comcast Spectrum Verizon CenturyLink and AT&T – serve more than 80 percent of wired-internet customers in the US, The Conversation, The US is stalling out, We looked not only at current conditions but also at how fast those conditions are changing

    From The Conversation: “Is America’s digital leadership on the wane?” Simple answer: yes.

    The Conversation

    July 16, 2017
    Bhaskar Chakravorti

    American leadership in technology innovation and economic competitiveness is at risk if U.S. policymakers don’t take crucial steps to protect the country’s digital future. The country that gave the world the internet and the very concept of the disruptive startup could find its role in the global innovation economy slipping from reigning incumbent to a disrupted has-been.

    My research, conducted with Ravi Shankar Chaturvedi, investigates our increasingly digital global society, in which physical interactions – in communications, social and political exchange, commerce, media and entertainment – are being displaced by electronically mediated ones. Our most recent report, “Digital Planet 2017: How Competitiveness and Trust in Digital Economies Vary Across the World,” confirms that the U.S. is on the brink of losing its long-held global advantage in digital innovation.

    Our yearlong study examined factors that influence innovation, such as economic conditions, governmental backing, startup funding, research and development spending and entrepreneurial talent across 60 countries. We found that while the U.S. has a very advanced digital environment, the pace of American investment and innovation is slowing. Other countries – not just major powers like China, but also smaller nations like New Zealand, Singapore and the United Arab Emirates – are building significant public and private efforts that we expect to become foundations for future generations of innovation and successful startup businesses.

    Based on our findings, I believe that rolling back net neutrality rules [NYT] will jeopardize the digital startup ecosystem that has created value for customers, wealth for investors and globally recognized leadership for American technology companies and entrepreneurs. The digital economy in the U.S. is already on the verge of stalling; failing to protect an open internet [freepress] would further erode the United States’ digital competitiveness, making a troubling situation even worse.

    Comparing 60 countries’ digital economies. Harvard Business Review, used and reproducible by permission, CC BY-ND.

    Comparing competitiveness

    In the U.S., the reins of internet connectivity are tightly controlled. Just five companies – Comcast, Spectrum, Verizon, CenturyLink and AT&T – serve more than 80 percent of wired-internet customers. What those companies provide is both slower and more expensive than in many countries around the world. Ending net neutrality, as the Trump administration has proposed, would give internet providers even more power, letting them decide which companies’ innovations can reach the public, and at what costs and speeds.

    However, our research shows that the U.S. doesn’t need more limits on startups. Rather, it should work to revive the creative energy that has been America’s gift to the digital planet. For each of the 60 countries we examined, we combined 170 factors – including elements that measure technological infrastructure, government policies and economic activity – into a ranking we call the Digital Evolution Index.

    To evaluate a country’s competitiveness, we looked not only at current conditions, but also at how fast those conditions are changing. For example, we noted not only how many people have broadband internet service, but also how quickly access is becoming available to more of a country’s population. And we observed not just how many consumers are prepared to buy and sell online, but whether this readiness to transact online is increasing each year and by how much.

    The countries formed four major groups (a toy sketch of this two-axis classification follows the list):

    “Stand Out” countries can be considered the digital elite; they are both highly digitally evolved and advancing quickly.
    “Stall Out” countries have reached a high level of digital evolution, but risk falling behind due to a slower pace of progress and would benefit from a heightened focus on innovation.
    “Break Out” countries score relatively low for overall digital evolution, but are evolving quickly enough to suggest they have the potential to become strong digital economies.
    “Watch Out” countries are neither well advanced nor improving rapidly. They have a lot of work to do, both in terms of infrastructure development and innovation.
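
    Here is that toy sketch of the two-axis classification (all scores, momentum values and thresholds are invented for illustration; the real index combines 170 weighted indicators):

```python
# Toy two-axis classification: digital-evolution score vs. momentum (rate of change).
# Every number and threshold below is invented purely for illustration.
countries = {
    "Singapore":   (85, 3.2),
    "USA":         (80, 1.1),
    "Netherlands": (79, 0.9),
    "China":       (55, 4.0),
    "India":       (40, 3.5),
    "Nigeria":     (30, 0.8),
}

SCORE_CUT, MOMENTUM_CUT = 60, 1.5   # illustrative thresholds

def zone(score: float, momentum: float) -> str:
    """Assign a country to one of the four Digital Evolution Index zones."""
    if score >= SCORE_CUT:
        return "Stand Out" if momentum >= MOMENTUM_CUT else "Stall Out"
    return "Break Out" if momentum >= MOMENTUM_CUT else "Watch Out"

for name, (score, momentum) in countries.items():
    print(f"{name:12s} -> {zone(score, momentum)}")
```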

    The US is stalling out

    The picture that emerges for the U.S. is not a pretty one. Although the U.S. is the 10th-most digitally advanced country today, its progress is slowing. It is close to joining the major EU countries and the Nordic nations in a club of nations that are, digitally speaking, stalling out.

    The “Stand Out” countries are setting new global standards of high states of evolution and high rates of change, and exploring various innovations such as self-driving cars or robot policemen. New Zealand, for example, is investing in a superior telecommunications system and adopting forward-looking policies that create incentives for entrepreneurs. Singapore plans to invest more than US$13 billion in high-tech industries by 2020. The United Arab Emirates has created free-trade zones and is transforming the city of Dubai into a “smart city,” linking sensors and government offices with residents and visitors to create an interconnected web of transportation, utilities and government services.

    India’s smartphone market – a key element of internet connectivity there – is growing rapidly. Shailesh Andrade/Reuters.

    The “Break Out” countries, many in Asia, are typically not as advanced as others at present, but are catching up quickly, and are on pace to surpass some of today’s “Stand Out” nations in the near future. For example, China – the world’s largest retail and e-commerce market, with the world’s largest number of people using the internet – has the fastest-changing digital economy. Another “Break Out” country is India, which is already the world’s second-largest smartphone market. Though only one-fifth of its 1.3 billion people have online access today, by 2030, some estimates suggest, 1 billion Indians will be online.

    By contrast, the U.S. is on the edge between “Stand Out” and “Stall Out.” One reason is that the American startup economy is slowing down: Private startups are attracting huge investments, but those investments aren’t paying off when the startups are either acquired by bigger companies or go public.

    Investors, business leaders and policymakers need to take a more realistic look at the best way to profit from innovation, balancing efforts toward both huge results and modest ones. They may need to recall the lesson from the founding of the internet itself: If government invests in key aspects of digital infrastructure, either directly or by creating subsidies and tax incentives, that lays the groundwork for massive private investment and innovation that can transform the economy.

    In addition, investments in Asian digital startups have exceeded those in the U.S. for the first time. According to CB Insights and PwC, US$19.3 billion in venture capital from sources around the world was invested in Asian tech startups in the second quarter of 2017, while the U.S. had $18.4 billion in new investment over the same period.

    This is consistent with our findings that Asian high-momentum countries are the ones in the “Break Out” zone; these countries are the ones most exciting for investors. Over time, the U.S.-Asia gap could widen; both money and talent could migrate to digital hot spots elsewhere, such as China and India, or smaller destinations, such as Singapore and New Zealand.

    For the country that gave the world the foundations of the digital economy and a president who seems perpetually plugged in, falling behind would, indeed, be a disgrace.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 10:29 am on May 8, 2017
    Tags: Cycling to work: major new study suggests health benefits are staggering, The Conversation

    From The Conversation: “Cycling to work: major new study suggests health benefits are staggering” 

    The Conversation

    April 19, 2017 [Not exactly prompt.]
    Jason Gill
    Reader, Institute of Cardiovascular and Medical Sciences, University of Glasgow

    Carlos Celis-Morales
    Research Associate, Institute of Cardiovascular and Medical Sciences, University of Glasgow

    Pump action. Csaba Peterdi

    Research has consistently shown that people who are less physically active are both more likely to develop health problems like heart disease and type 2 diabetes, and to die younger. Yet there is increasing evidence that physical activity levels are on the decline.

    The problem is that when there are many demands on our time, many people find prioritising exercise difficult. One answer is to multi-task by cycling or walking to work. We’ve just completed the largest ever study into how this affects your health.

    Published in the British Medical Journal today, the results for cycling in particular have important implications. They suggest that councils and governments need to make it a top priority to encourage as many commuters to get on their bikes as possible.

    The findings

    Cycling or walking to work, sometimes referred to as active commuting, is not very common in the UK. Only 3% of commuters cycle to work and 11% walk – among the lowest rates in Europe. At the other end of the scale, 43% of the Dutch and 30% of Danes cycle daily.

    To get a better understanding of what the UK could be missing, we looked at 263,450 people with an average age of 53 who were either in paid employment or self-employed, and didn’t always work at home. Participants were asked whether they usually travelled to work by car, public transport, walking, cycling or a combination.

    We then grouped our commuters into five categories: non-active (car/public transport); walking only; cycling (including some who also walked); mixed-mode walking (walking plus non-active); and mixed-mode cycling (cycling plus non-active, including some who also walked).

    We followed people for around five years, counting the incidences of heart disease, cancers and death. Importantly, we adjusted for other health influences including sex, age, deprivation, ethnicity, smoking, body mass index, other types of physical activity, time spent sitting down and diet. Any potential differences in risk associated with road accidents are also accounted for in our analysis, and we excluded participants who already had heart disease or cancer.
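    Adjustments of this kind are typically made with a proportional-hazards survival model. A minimal sketch of such an analysis (using the third-party lifelines library, a tiny invented dataset and invented column names; this is not the authors’ actual code) might look like this:

```python
import pandas as pd
from lifelines import CoxPHFitter  # third-party survival-analysis library

# Tiny invented dataset, one row per participant:
# 'years' = follow-up time, 'died' = 1 if the event occurred during follow-up,
# 'cycles' = 1 for cycle commuters, 'age' stands in for the many confounders.
df = pd.DataFrame({
    "years":  [5.0, 4.2, 5.0, 3.1, 5.0, 2.7, 5.0, 4.5],
    "died":   [0,   1,   0,   1,   0,   1,   0,   0],
    "cycles": [1,   1,   0,   0,   1,   0,   1,   0],
    "age":    [45,  62,  51,  58,  40,  66,  38,  49],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")
cph.print_summary()  # exp(coef) for 'cycles' is the age-adjusted hazard ratio
```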

    We found that cycling to work was associated with a 41% lower risk of dying overall compared to commuting by car or public transport. Cycle commuters had a 52% lower risk of dying from heart disease and a 40% lower risk of dying from cancer. They also had a 46% lower risk of developing heart disease and a 45% lower risk of developing cancer in the first place.

    Walking to work was not associated with a lower risk of dying from all causes. Walkers did, however, have a 27% lower risk of heart disease and a 36% lower risk of dying from it.

    The mixed-mode cyclists enjoyed a 24% lower risk of death from all causes, a 32% lower risk of developing cancer and a 36% lower risk of dying from cancer. They did not have a significantly lower risk of heart disease, however, while mixed-mode walkers did not have a significantly lower risk of any of the health outcomes we analysed.

    For both cyclists and walkers, there was a trend for a greater lowering of risk in those who commuted longer distances. In addition, those who cycled part of the way to work still saw benefits – this is important as many people live too far from work to cycle the entire distance.

    As for walkers, the fact that their health benefits were more modest may be related to distance, since they commute fewer miles on average in the UK – six per week compared to 30 for cyclists. They may therefore need to walk longer distances to elicit meaningful benefits. Equally, however, it may be that the lower benefits from walking are related to the fact that it’s a less intense activity.

    What now?

    Our work builds on the evidence from previous studies [American Journal of Epidemiology] in a number of important ways. Our sample of a quarter of a million participants was larger than all previous studies combined, which enabled us to show the associations between cycling/walking to work and health outcomes more clearly than before.

    In particular, the findings resolve previous uncertainties about the association with cancer, and also with heart attacks and related fatalities. We also had enough participants to separately evaluate cycling, walking and mixed-mode commuting for the first time, which helped us confirm that cycling to work is more beneficial than walking.

    In addition, much of the previous research was undertaken in places like China and the Nordic countries where cycling to work is common and the supporting infrastructure is good. We now know that the same benefits apply in a country where active commuting is not part of the established culture.

    It is important to stress that while we did our best to eliminate other potential factors which might influence the findings, it is never possible to do this completely. This means we cannot conclusively say active commuting is the cause of the health outcomes that we measured. Nevertheless, the findings suggest policymakers can make a big difference to public health by encouraging cycling to work in particular. And we should not forget other benefits such as reducing congestion and motor emissions.

    Some countries are well ahead of the UK in encouraging cyclists. In Copenhagen and Amsterdam, for instance, people cycle because it is the easiest way to get around town.

    2
    Dutch courage. S-F

    It was not always this way – both cities pursued clear strategies to improve cycle infrastructure first. Ways to achieve this include more cycle lanes, city bike hire schemes, subsidised bike purchase schemes, secure cycle parking and better facilities for bicycles on public transport.

    For the UK and other countries that have lagged behind, the new findings suggest there is a clear opportunity. If decision makers are bold enough to rise to the challenge, the long-term benefits are potentially transformative.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 9:22 am on April 8, 2017 Permalink | Reply
    Tags: , , , , , , Exoplanet discovery by an amateur astronomer shows the power of citizen science, The Conversation   

    From CSIRO via The Conversation: “Exoplanet discovery by an amateur astronomer shows the power of citizen science” 

    CSIRO bloc

    Commonwealth Scientific and Industrial Research Organisation

    The Conversation

    4.7.17
    Ray Norris

    You don’t need to be a professional astronomer to find new worlds orbiting distant stars. Darwin mechanic and amateur astronomer Andrew Grey this week helped to discover a new exoplanet system with at least four orbiting planets.

    2
    An artist’s impression of some of the thousands of exoplanets discovered by NASA’s Kepler Space Telescope. Credit: NASA/JPL

    But Andrew did have professional help and support.

    The discovery was a highlight moment of this week’s three-evening special ABC Stargazing Live, featuring British physicist Brian Cox, presenter Julia Zemiro and others.

    Viewers were encouraged to join in the search for exoplanets – planets orbiting distant stars – using the Exoplanet Explorers website. After a quick tutorial, they were asked to trawl through data on thousands of stars recently observed with NASA’s Kepler Space Telescope.

    NASA/Kepler Telescope

    Grey checked out more than 1,000 stars on the website before spotting, in the data for one star, the characteristic dips in brightness that signify an orbiting exoplanet.

    2
    As the planet passes in front of the star, it hides part of the star, causing a characteristic dip in brightness. ABC/Zooniverse
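    For readers who want to see the idea in action, here is a minimal Python sketch – not the actual Exoplanet Explorers pipeline, just a toy light curve with an injected transit and a simple threshold test. All numbers are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(42)
    time = np.arange(0.0, 30.0, 0.02)                  # observation times, in days
    flux = 1.0 + rng.normal(0.0, 0.0005, time.size)    # flat star plus photon noise

    # Inject a toy planet: a 0.2% dip every 3.5 days, lasting about 0.1 days.
    period, depth, duration = 3.5, 0.002, 0.1
    flux[(time % period) < duration] -= depth

    # Flag samples more than 3 robust standard deviations below the median.
    baseline = np.median(flux)
    sigma = 1.4826 * np.median(np.abs(flux - baseline))   # robust noise estimate
    dips = flux < baseline - 3.0 * sigma
    print(f"{dips.sum()} of {time.size} samples flagged as possible transits")

    Real searches are harder – candidate dips must repeat at a fixed period and survive many instrumental checks – but this is the essence of what volunteers are looking for by eye.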

    Grey’s name, together with those of his fellow citizen scientist co-discoverers, will appear on a scientific paper reporting the very significant discovery of a star with four planets, all orbiting closer to their star than Mercury is to our Sun.

    Grey told Stargazing Live:

    “That is amazing. Definitely my first scientific publication … just glad that I can contribute. It feels very good.”

    Cox was clearly impressed by the new discovery:

    “In the seven years I’ve been making Stargazing Live this is the most significant scientific discovery we’ve ever made.”

    A breakthrough for citizen science

    So just what does this discovery signify? First, let’s be clear: this is no publicity stunt, nor a bit of fake news dressed up to make a good story.

    This is a real scientific discovery, to be reported in the scientific literature like other discoveries made by astronomers.

    It will help us understand the formation of our own Earth. It’s also a step towards establishing whether we are alone in the universe, or whether there are other planets populated by other civilisations.

    On the other hand, it must be acknowledged that this discovery joins the list of more than 2,300 known exoplanets discovered by Kepler so far. There are thousands more candidate planets to be examined.

    If Grey and his colleagues hadn’t discovered this new planetary system, then somebody else would have eventually discovered it. But that can be said of all discoveries. The fact remains that this particular discovery was made by Grey and his fellow citizen scientists.

    Amateurs and professionals working together

    I think that the greatest significance of this discovery is that it heralds a change in the way we do science.

    As I said earlier, Grey didn’t make this discovery alone. He used data from the Kepler spacecraft, a mission that cost US$600 million.

    Although we can build stunning telescopes that produce vast amounts of valuable data, we can’t yet build an algorithm that approaches the extraordinary abilities of the human brain to examine that data.

    A human brain can detect patterns in the data far more effectively than any machine-learning algorithm yet devised. Because of the large volume of data generated by Kepler and other scientific instruments, we need large teams of human brains – larger than any research lab.

    But those brains don’t need to belong to trained astrophysicists – they just need the remarkable cognitive abilities that every human brain possesses.

    This results in a partnership where big science produces data, and citizen scientists inspect the data to help make discoveries. It means that anyone can be involved in cutting-edge science, accelerating the growth of human knowledge.

    A gathering of brainpower

    This is happening all over science and even the arts, from butterfly hunting to transcribing Shakespeare’s handwriting.

    Last year citizen scientists in the Australian-led Radio Galaxy Zoo project discovered the largest known cluster of galaxies.

    None of these projects would be possible without widespread access to the internet, and readily available tools to build citizen science projects, such as the Zooniverse project.

    Will machines ever make citizen scientists redundant? I have argued before that we need to build algorithms called “machine scientists” to make future discoveries from the vast volumes of data we are generating.

    But these algorithms still need to be trained by humans. The larger our human-generated training set, the better our machine scientists will work.

    So rather than making citizen scientists redundant, the machine scientists multiply the power of citizen scientists, so that a discovery made by a future Andrew Grey may result in hundreds of discoveries by machines trained using his discovery.

    I see the power of citizen scientists continuing to grow. I suspect this is only the start. We can do much more. We can increase the “fun” of doing citizen science by introducing “gaming” elements into citizen science programs, or by taking advantage of new technologies such as augmented reality and immersive virtual reality.

    Perhaps we can tap into other human qualities such as imagination and creativity to achieve goals that still frustrate machines.

    I look forward to the day when a Nobel prize is won by someone in a developing country without access to a traditional university education, but who uses the power of their mind, the wealth of information on the web and the tools of citizen science to transcend the dreams of traditional science.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    CSIRO campus

    CSIRO, the Commonwealth Scientific and Industrial Research Organisation, is Australia’s national science agency and one of the largest and most diverse research agencies in the world.

     
  • richardmitnick 8:17 am on April 6, 2017 Permalink | Reply
    Tags: , , Paradoxes of probability and other statistical strangeness, The Conversation   

    From COSMOS: “Paradoxes of probability and other statistical strangeness” 

    Cosmos Magazine bloc

    COSMOS

    1
    Statistics and probability can sometimes yield mind-bending results. Shutterstock

    Statistics is a useful tool for understanding the patterns in the world around us. But our intuition often lets us down when it comes to interpreting those patterns. In this series we look at some of the common mistakes we make and how to avoid them when thinking about statistics, probability and risk.

    You don’t have to wait long to see a headline proclaiming that some food or behaviour is associated with either an increased or a decreased health risk, or often both. How can it be that seemingly rigorous scientific studies can produce opposite conclusions?

    Nowadays, researchers can access a wealth of software packages that can readily analyse data and output the results of complex statistical tests. While these are powerful resources, they also make it easy for people without a full statistical understanding to misread the subtleties within a dataset and draw wildly incorrect conclusions.

    Here are a few common statistical fallacies and paradoxes and how they can lead to results that are counterintuitive and, in many cases, simply wrong.

    Simpson’s paradox
    What is it?

    This is where trends that appear within different groups disappear when data for those groups are combined. When this happens, the overall trend might even appear to be the opposite of the trends in each group.

    One example of this paradox is where a treatment can be detrimental in all groups of patients, yet can appear beneficial overall once the groups are combined.
    How does it happen?

    This can happen when the sizes of the groups are uneven. A trial with careless (or unscrupulous) selection of the numbers of patients could conclude that a harmful treatment appears beneficial.
    Example

    Consider the following double-blind trial of a proposed medical treatment. A group of 120 patients (split into subgroups of sizes 10, 20, 30 and 60) receive the treatment, and 120 patients (split into subgroups of corresponding sizes 60, 30, 20 and 10) receive no treatment.

    The overall results make it look like the treatment was beneficial to patients, with a higher recovery rate for patients with the treatment than for those without it.

    3
    The Conversation, CC BY-ND

    However, when you drill down into the various groups that made up the cohort in the study, you see that in every group of patients the recovery rate was 50% higher for those who had no treatment.

    4
    The Conversation, CC BY-ND

    But note that the size and age distribution of each group is different between those who took the treatment and those who didn’t. This is what distorts the numbers. In this case, the treatment group is disproportionately stacked with children, whose recovery rates are typically higher, with or without treatment.
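    Because the article’s tables are shown as images, here is a short Python sketch with invented recovery numbers that reproduce the pattern just described: every subgroup does 50% better without the treatment, yet the pooled figures appear to favour it.

    # (patients, recoveries) per subgroup; group sizes follow the trial above.
    treated   = {"A": (10, 1), "B": (20, 4), "C": (30, 12), "D": (60, 36)}
    untreated = {"A": (60, 9), "B": (30, 9), "C": (20, 12), "D": (10, 9)}

    for grp in "ABCD":
        nt, rt = treated[grp]
        nu, ru = untreated[grp]
        print(f"group {grp}: treated {rt/nt:.0%} vs untreated {ru/nu:.0%}")

    pool = lambda d: sum(r for _, r in d.values()) / sum(n for n, _ in d.values())
    print(f"pooled : treated {pool(treated):.0%} vs untreated {pool(untreated):.0%}")

    Each group prints a higher untreated recovery rate (15% vs 10%, and so on), while the pooled line shows treatment apparently ahead – Simpson’s paradox in a few lines of arithmetic.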

    Base rate fallacy
    What is it?

    This fallacy occurs when we disregard important information when making a judgement on how likely something is.

    If, for example, we hear that someone loves music, we might think it’s more likely they’re a professional musician than an accountant. However, there are many more accountants than there are professional musicians. Here we have neglected that the base rate for the number of accountants is far higher than the number of musicians, so we were unduly swayed by the information that the person likes music.
    How does it happen?

    The base rate fallacy occurs when the base rate for one option is substantially higher than for another, and we neglect that difference when judging how likely something is.
    Example

    Consider testing for a rare medical condition, such as one that affects only 4% (1 in 25) of a population.

    Let’s say there is a test for the condition, but it’s not perfect. If someone has the condition, the test will correctly identify them as being ill around 92% of the time. If someone doesn’t have the condition, the test will correctly identify them as being healthy 75% of the time.

    So if we test a group of people, and find that over a quarter of them are diagnosed as being ill, we might expect that most of these people really do have the condition. But we’d be wrong.

    5
    In a typical sample of 300 patients, for every 11 people correctly identified as unwell, a further 72 are incorrectly identified as unwell. The Conversation, CC BY-ND

    According to our numbers above, of the 4% of patients who are ill, almost 92% will be correctly diagnosed as ill (that is, about 3.67% of the overall population). But of the 96% of patients who are not ill, 25% will be incorrectly diagnosed as ill (that’s 24% of the overall population).

    What this means is that of the approximately 27.67% of the population who are diagnosed as ill, only around 3.67% actually are. So of the people who were diagnosed as ill, only around 13% (that is, 3.67%/27.67%) actually are unwell.
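    The same arithmetic takes only a few lines of Python, via Bayes’ theorem:

    prevalence  = 0.04   # 1 in 25 people have the condition
    sensitivity = 0.92   # P(positive test | ill)
    specificity = 0.75   # P(negative test | healthy)

    p_positive = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    p_ill_given_positive = prevalence * sensitivity / p_positive

    print(f"fraction of people testing positive: {p_positive:.2%}")               # ~27.68%
    print(f"chance you are ill if you test positive: {p_ill_given_positive:.0%}")  # ~13%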

    Worryingly, when a famous study asked general practitioners to perform a similar calculation to inform patients of the correct risks associated with mammogram results, just 15% of them did so correctly.

    Will Rogers paradox
    What is it?

    This occurs when moving something from one group to another raises the average of both groups, even though no values actually increase.

    The name comes from the American comedian Will Rogers, who joked that “when the Okies left Oklahoma and moved to California, they raised the average intelligence in both states”.

    Former New Zealand Prime Minister Rob Muldoon provided a local variant on the joke in the 1980s, regarding migration from his nation into Australia.

    How does it happen?

    When a datapoint is reclassified from one group to another, if the point is below the average of the group it is leaving, but above the average of the one it is joining, both groups’ averages will increase.
    Example

    Consider the case of six patients whose life expectancies (in years) have been assessed as being 40, 50, 60, 70, 80 and 90.

    The patients who have life expectancies of 40 and 50 have been diagnosed with a medical condition; the other four have not. This gives an average life expectancy within diagnosed patients of 45 years and within non-diagnosed patients of 75 years.

    If an improved diagnostic tool is developed that detects the condition in the patient with the 60-year life expectancy, then the average within both groups rises by 5 years.

    6
    The Conversation, CC BY-ND
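    The example is easy to verify in a couple of lines of Python:

    diagnosed     = [40, 50]            # life expectancies, in years
    non_diagnosed = [60, 70, 80, 90]

    avg = lambda xs: sum(xs) / len(xs)
    print(avg(diagnosed), avg(non_diagnosed))   # 45.0 75.0

    # The improved test reclassifies the 60-year patient as diagnosed.
    diagnosed.append(non_diagnosed.pop(0))
    print(avg(diagnosed), avg(non_diagnosed))   # 50.0 80.0 -- both averages rose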

    Berkson’s paradox
    What is it?

    Berkson’s paradox can make it look like there’s an association between two independent variables when there isn’t one.
    How does it happen?

    This happens when we have a set with two independent variables, which means they should be entirely unrelated. But if we only look at a subset of the whole population, it can look like there is a negative trend between the two variables.

    This can occur when the subset is not an unbiased sample of the whole population. It has been frequently cited in medical statistics. For example, if patients only present at a clinic with disease A, disease B or both, then even if the two diseases are independent, a negative association between them may be observed.

    Example

    Consider the case of a school that recruits students based on both academic and sporting ability. Assume that these two skills are totally independent of each other. That is, in the whole population, an excellent sportsperson is just as likely to be strong or weak academically as is someone who’s poor at sport.

    If the school admits only students who are excellent academically, excellent at sport or excellent at both, then within this group it would appear that sporting ability is negatively correlated with academic ability.

    To illustrate, assume that every potential student is ranked on both academic and sporting ability from 1 to 10. There are equal proportions of people in each band for each skill. Knowing a person’s band in either skill does not tell you anything about their likely band in the other.

    Assume now that the school only admits students who are at band 9 or 10 in at least one of the skills.

    If we look at the whole population, the average academic rank of the weakest sportsperson and that of the best sportsperson are equal (5.5).

    However, within the set of admitted students, the average academic rank of the elite sportsperson is still that of the whole population (5.5), but the average academic rank of the weakest sportsperson is 9.5, wrongly implying a negative correlation between the two abilities.

    7
    The Conversation, CC BY-ND
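    A quick simulation makes the effect vivid. This Python sketch draws two genuinely independent abilities, applies the school’s admission rule, and watches a negative correlation appear in the admitted subset (the sample size is illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    academic = rng.integers(1, 11, n)   # bands 1-10, independent of...
    sport    = rng.integers(1, 11, n)   # ...sporting ability

    print(np.corrcoef(academic, sport)[0, 1])    # ~0 in the full population

    admitted = (academic >= 9) | (sport >= 9)    # the school's selection rule
    print(np.corrcoef(academic[admitted], sport[admitted])[0, 1])   # clearly negative

    # Average academic rank of admitted students at the sporting extremes:
    print(academic[admitted & (sport == 1)].mean())    # ~9.5
    print(academic[admitted & (sport == 10)].mean())   # ~5.5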

    Multiple comparisons fallacy
    What is it?

    This is where unexpected trends can occur through random chance alone in a data set with a large number of variables.

    How does it happen?

    When looking at many variables and mining for trends, it is easy to overlook how many possible trends you are testing. For example, with 1,000 variables, there are almost half a million (1,000×999/2) potential pairs of variables that might appear correlated by pure chance alone.

    While each pair is extremely unlikely to look dependent, the chances are that from the half million pairs, quite a few will look dependent.
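    To see how easily chance alone produces apparent trends, here is a short Python simulation: 200 completely unrelated random variables, measured 50 times each, still throw up a large maximum correlation and hundreds of “significant-looking” pairs (all sizes are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(size=(50, 200))     # 50 observations of 200 unrelated variables

    r = np.corrcoef(data, rowvar=False)   # 200 x 200 correlation matrix
    upper = r[np.triu_indices(200, k=1)]  # the 19,900 distinct pairs

    print(f"pairs tested: {upper.size}")
    print(f"largest correlation found by chance: {np.abs(upper).max():.2f}")
    print(f"pairs with |r| > 0.3: {(np.abs(upper) > 0.3).sum()}")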
    Example

    The birthday paradox is a classic example of the multiple comparisons fallacy.

    In a group of 23 people (assuming each of their birthdays is an independently chosen day of the year with all days equally likely), it is more likely than not that at least two of the group have the same birthday.

    People often disbelieve this, recalling that it is rare that they meet someone who shares their own birthday. If you just pick two people, the chance they share a birthday is, of course, low (roughly 1 in 365, which is less than 0.3%).

    However, with 23 people there are 253 (23×22/2) pairs of people who might have a common birthday. So by looking across the whole group you are testing whether any one of these 253 pairings, each of which independently has a 0.3% chance of coinciding, does indeed match. With so many possible pairs, a coincidental match becomes statistically very likely.

    For a group of as few as 40 people, it is almost nine times as likely that there is a shared birthday as not.
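    The exact probabilities are straightforward to compute in Python:

    from math import prod

    def p_shared_birthday(k: int) -> float:
        """Probability that at least two of k people share a birthday (365-day year)."""
        return 1 - prod((365 - i) / 365 for i in range(k))

    print(f"23 people: {p_shared_birthday(23):.1%}")   # ~50.7% -- more likely than not
    print(f"40 people: {p_shared_birthday(40):.1%}")   # ~89.1%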

    8
    The probability of no shared birthdays drops as the number of people in a group increases. The Conversation, CC BY-ND

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 8:57 am on February 20, 2017 Permalink | Reply
    Tags: , Man-made earthquakes, The Conversation   

    From The Conversation: “Earthquakes triggered by humans pose growing risk” 

    Conversation
    The Conversation

    January 22, 2017
    No writer credit found

    1
    Devastation in Sichuan province after the 2008 Wenchuan earthquake, thought to be induced by industrial activity at a nearby reservoir. dominiqueb/flickr

    People knew we could induce earthquakes before we knew what they were. As soon as people started to dig minerals out of the ground, rockfalls and tunnel collapses must have become recognized hazards.

    Today, earthquakes caused by humans occur on a much greater scale. Events over the last century have shown mining is just one of many industrial activities that can induce earthquakes large enough to cause significant damage and death. Filling of water reservoirs behind dams, extraction of oil and gas, and geothermal energy production are just a few of the modern industrial activities shown to induce earthquakes.

    As more and more types of industrial activity were recognized to be potentially seismogenic, the Nederlandse Aardolie Maatschappij BV, an oil and gas company based in the Netherlands, commissioned us to conduct a comprehensive global review of all human-induced earthquakes.

    Our work assembled a rich picture from the hundreds of jigsaw pieces scattered throughout the scientific literature of many nations. The sheer breadth of industrial activity we found to be potentially seismogenic came as a surprise to many scientists. As the scale of industry grows, so does the problem of induced earthquakes.

    In addition, we found that, because small earthquakes can trigger larger ones, industrial activity has the potential, on rare occasions, to induce extremely large, damaging events.

    How humans induce earthquakes

    As part of our review we assembled a database of cases that is, to our knowledge, the fullest drawn up to date. On Jan. 28, we will release this database publicly. We hope it will inform citizens about the subject and stimulate scientific research into how to manage this very new challenge to human ingenuity.

    Our survey showed mining-related activity accounts for the largest number of cases in our database.

    Earthquakes caused by humans

    Last year, the Nederlandse Aardolie Maatschappij BV commissioned a comprehensive global review of all human-induced earthquakes. The sheer breadth of industrial activity that is potentially seismogenic came as a surprise to many scientists. These examples are now catalogued at The Induced Earthquakes Database.

    Mining 37.4%
    Water reservoir impoundment 23.3%
    Conventional oil and gas 15%
    Geothermal 7.8%
    Waste fluid injection 5%
    Fracking 3.9%
    Nuclear explosion 3%
    Research experiments 1.8%
    Groundwater extraction 0.7%
    Construction 0.3%
    Carbon capture and storage 0.3%

    Source: Earth-Science Reviews Get the data

    Initially, mining technology was primitive. Mines were small and relatively shallow. Collapse events would have been minor – though this might have been little comfort to anyone caught in one.

    But modern mines exist on a totally different scale. Precious minerals are extracted from mines that may be over two miles deep or extend several miles offshore under the oceans. The total amount of rock removed by mining worldwide now amounts to several tens of billions of tons per year. That’s double what it was 15 years ago – and it’s set to double again over the next 15. Meanwhile, much of the coal that fuels the world’s industry has already been exhausted from shallow layers, and mines must become bigger and deeper to satisfy demand.

    As mines expand, mining-related earthquakes become bigger and more frequent. Damage and fatalities, too, scale up. Hundreds of deaths have occurred in coal and mineral mines over the last few decades as a result of induced earthquakes of up to magnitude 6.1.

    Other activities that might induce earthquakes include the erection of heavy superstructures. The 700,000-tonne Taipei 101 building, raised in Taiwan in the 1990s, was blamed for an increase in the frequency and size of nearby earthquakes.

    Since the early 20th century, it has been clear that filling large water reservoirs can induce potentially dangerous earthquakes. This came into tragic focus in 1967 when, just five years after the 32-mile-long Koyna reservoir in west India was filled, a magnitude 6.3 earthquake struck, killing at least 180 people and damaging the dam.

    Throughout the following decades, ongoing cyclic earthquake activity accompanied rises and falls in the annual reservoir-level cycle. An earthquake larger than magnitude 5 occurs there on average every four years. Our report found that, to date, some 170 reservoirs the world over have reportedly induced earthquake activity.

    Magnitude of human-induced earthquakes

    The magnitudes of the largest earthquakes postulated to be associated with projects of different types vary greatly. This graph shows the number of cases reported for projects of various types vs. maximum earthquake magnitude, for the 577 cases for which data are available.

    4
    *”Other” category includes carbon capture and storage, construction, groundwater extraction, nuclear explosion, research experiments, and unspecified oil, gas and waste water.

    Source: Earth-Science Reviews Get the data [links are above]

    The production of oil and gas was implicated in several destructive earthquakes in the magnitude 6 range in California. This industry is becoming increasingly seismogenic as oil and gas fields become depleted. In such fields, in addition to mass removal by production, fluids are also injected to flush out the last of the hydrocarbons and to dispose of the large quantities of salt water that accompany production in expiring fields.

    A relatively new technology in oil and gas is shale-gas hydraulic fracturing, or fracking, which by its very nature generates small earthquakes as the rock fractures. Occasionally, this can lead to a larger-magnitude earthquake if the injected fluids leak into a fault that is already stressed by geological processes.

    The largest fracking-related earthquake that has so far been reported occurred in Canada, with a magnitude of 4.6. In Oklahoma, multiple processes are underway simultaneously, including oil and gas production, wastewater disposal and fracking. There, earthquakes as large as magnitude 5.7 have rattled skyscrapers that were erected long before such seismicity was expected. If such an earthquake is induced in Europe in the future, it could be felt in the capital cities of several nations.

    Our research shows that production of geothermal steam and water has been associated with earthquakes up to magnitude 6.6 in the Cerro Prieto Field, Mexico. Geothermal energy is not renewable by natural processes on the timescale of a human lifetime, so water must be reinjected underground to ensure a continuous supply. This process appears to be even more seismogenic than production. There are numerous examples of earthquake swarms accompanying water injection into boreholes, such as at The Geysers, California.

    What this means for the future

    Nowadays, earthquakes induced by large industrial projects no longer meet with surprise or even denial. On the contrary, when an event occurs, the tendency may be to look for an industrial project to blame. In 2008, an earthquake in the magnitude 8 range struck Ngawa Prefecture, China, killing about 90,000 people, devastating over 100 towns, and collapsing houses, roads and bridges. Attention quickly turned to the nearby Zipingpu Dam, whose reservoir had been filled just a few months previously, although the link between the earthquake and the reservoir has yet to be proven.

    The minimum amount of stress loading scientists think is needed to induce earthquakes is creeping steadily downward. The great Three Gorges Dam in China, which now impounds 10 cubic miles of water, has already been associated with earthquakes as large as magnitude 4.6 and is under careful surveillance.

    Scientists are now presented with some exciting challenges. Earthquakes can produce a “butterfly effect”: Small changes can have a large impact. Thus, not only can a plethora of human activities load Earth’s crust with stress, but just tiny additions can become the last straw that breaks the camel’s back, precipitating great earthquakes that release the accumulated stress loaded onto geological faults by centuries of geological processes. Whether or when that stress would have been released naturally in an earthquake is a challenging question.

    An earthquake in the magnitude 5 range releases as much energy as the atomic bomb dropped on Hiroshima in 1945. An earthquake in the magnitude 7 range releases as much energy as the largest nuclear weapon ever tested, the Tsar Bomba test conducted by the Soviet Union in 1961. The risk of inducing such earthquakes is extremely small, but the consequences, if one were to happen, are extremely large. This poses a health and safety issue that may be unique in industry in terms of the maximum size of disaster that could, in theory, occur. However, rare and devastating earthquakes are a fact of life on our dynamic planet, regardless of whether or not there is human activity.

    Our work suggests that the only evidence-based way to limit the size of potential earthquakes may be to limit the scale of the projects themselves. In practice, this would mean smaller mines and reservoirs, less extraction of minerals, oil and gas, shallower boreholes and smaller injected volumes. A balance must be struck between the growing need for energy and resources and the level of risk that is acceptable in every individual project.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 10:17 am on February 16, 2017 Permalink | Reply
    Tags: , , , , , , , SCOAP³, The Conversation   

    From The Conversation: “How the insights of the Large Hadron Collider are being made open to everyone” 

    Conversation
    The Conversation

    January 12, 2017 [Just appeared in social media.]
    Virginia Barbour

    CERN CMS Higgs Event

    If you visit the Large Hadron Collider (LHC) exhibition, now at the Queensland Museum, you’ll see the recreation of a moment when the scientist who saw the first results indicating discovery of the Higgs boson laments she can’t yet tell anyone.

    It’s a transitory problem for her, lasting as long as it takes for the result to be thoroughly cross-checked. But it illustrates a key concept in science: it’s not enough to do it; it must be communicated.

    That’s what is behind one of the lesser known initiatives of CERN (European Organization for Nuclear Research): an ambitious plan to make all its research in particle physics available to everyone, with a big global collaboration inspired by the way scientists came together to make discoveries at the LHC.

    This initiative is called SCOAP³, the Sponsoring Consortium for Open Access in Particle Physics Publishing, and is now about to enter its fourth year of operation. It’s a worldwide collaboration of more than 3,000 libraries (including six in Australia), key funding agencies and research centres in 44 countries, together with three intergovernmental organisations.

    It aims to make work previously only available to paying subscribers of academic journals freely and immediately available to everyone. In its first three years it has made more than 13,000 articles available.

    Not only are these articles free for anyone to read but, because they are published under a Creative Commons attribution license (CC BY), they are also available for anyone to use in any way they wish – to illustrate a talk, pass on to a class of school children, or feed to an artificial intelligence program to extract information from. And these usage rights are enshrined forever.

    The concept of sharing research is not new in physics. Open access to research is now a growing worldwide initiative, including in Australasia. CERN, which runs the LHC, was also where the world wide web was invented in 1989 by Tim Berners-Lee, a British computer scientist.

    The main purpose of the web was to enable researchers contributing to CERN from all over the world to share documents, including scientific drafts, no matter what computer systems they were using.

    Before the web, physicists had been sharing paper drafts by post for decades, so they were one of the first groups to really embrace the new online opportunities for sharing early research. Today, the pre-press site arxiv.org has more than a million free article drafts covering physics, mathematics, astronomy and more.

    But, with such a specialised field, do these “open access” papers really matter? The short answer is “yes”: downloads from journals participating in SCOAP³ have doubled.

    With millions of open access articles now being downloaded across all specialities, there is enormous opportunity for new ideas and collaborations to spring from chance readership. This is an important trend: the concept of serendipity enabled by open access was explored in 2015 in an episode of ABC RN’s Future Tense program.

    Greater than the sum of the parts

    There’s also a bigger picture to SCOAP³’s open access model. Not long ago, the research literature was fragmented. Individual papers and the connections between them were only as good as the physical library, with its paper journals, that academics had access to.

    Now we can do searches in much less time than we spend thinking of the search question, and the results we are presented with are crucially dependent on how easily available the findings themselves are. And availability is not just a function of whether an article is free or not but whether it is truly open, i.e. connected and reusable.

    One concept is whether research is “FAIR”, or Findable, Accessible, Interoperable and Reusable. In short, can anyone find, read, use and reuse the work?

    The principle is most advanced for data, but in Australia work is ongoing to apply it to all research outputs. This approach was also proposed at the November 2016 G20 Science, Technology and Innovation Ministers Meeting. Research findings that are not FAIR can, effectively, be invisible. It’s a huge waste of millions of taxpayer dollars to fund research that won’t be seen.

    There is an even bigger picture that research and research publications have to fit into: that of science in society.

    Across the world we see politicians challenging accepted scientific norms. Is the fact that most academic research remains available only to those who can pay to see it contributing to an acceptance of such misinformed views?

    If one role for science is to inform public debate, then restricting access to that science will necessarily hinder any informed public debate. Although no one suggests that most readers of news sites will regularly want to delve into the details of papers in high energy physics, open access papers are 47% more likely to end up being cited in Wikipedia, which is a source that many non-scientists do turn to.

    Even worse, work that is not available openly now may not even be available in perpetuity, something that is being discussed by scientists in the USA.

    So in the same way that CERN itself is an example of the power of international collaboration to ask some of the fundamental scientific questions of our time, SCOAP³ provides a way to ensure that the answers, whatever they are, are available to everyone, forever.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
  • richardmitnick 9:29 am on January 30, 2017 Permalink | Reply
    Tags: , , , , The Conversation   

    From The Conversation: “Giant atoms could help unveil ‘dark matter’ and other cosmic secrets” 

    Conversation
    The Conversation

    January 5, 2017
    Diego A. Quiñones

    1
    Composite image showing the galaxy cluster 1E 0657-56. Chandra X-Ray Observatory/NASA

    The universe is an astonishingly secretive place. Mysterious substances known as dark matter and dark energy account for some 95% of it. Despite huge effort to find out what they are, we simply don’t know.

    We know dark matter exists because of the gravitational pull of galaxy clusters – the matter we can see in a cluster just isn’t enough to hold it together by gravity. So there must be some extra material there, made up of unknown particles that simply aren’t visible to us. Several candidate particles have already been proposed.

    Scientists are trying to work out what these unknown particles are by looking at how they affect the ordinary matter we see around us. But so far it has proven difficult, so we know it interacts only weakly with normal matter at best. Now my colleague Benjamin Varcoe and I have come up with a new way to probe dark matter that may just prove successful: by using atoms that have been stretched to be 4,000 times larger than usual.

    Advantageous atoms

    We have come a long way from the Greeks’ vision of atoms as the indivisible components of all matter. The first evidence-based argument for the existence of atoms was presented in the early 1800s by John Dalton. But it wasn’t until the beginning of the 20th century that JJ Thomson and Ernest Rutherford discovered that atoms consist of electrons and a nucleus. Soon after, Erwin Schrödinger described the atom mathematically using what is today called quantum theory.

    Modern experiments have been able to trap and manipulate individual atoms with outstanding precision. This knowledge has been used to create new technologies, like lasers and atomic clocks, and future computers may use single atoms as their primary components.

    Individual atoms are hard to study and control because they are very sensitive to external perturbations. This sensitivity is usually an inconvenience, but our study suggests that it makes some atoms ideal as probes for the detection of particles that don’t interact strongly with regular matter – such as dark matter.

    Our model is based on the fact that a weakly interacting particle must bounce off the nucleus of the atom it collides with, exchanging a small amount of energy with it – similar to the collision between two pool balls. The energy exchange will produce a sudden displacement of the nucleus that will eventually be felt by the electron. This means the entire energy of the atom changes, which can be analysed to obtain information about the properties of the colliding particle.

    However, the amount of transferred energy is very small, so a special kind of atom is necessary to make the interaction measurable. We worked out that the so-called “Rydberg atom” would do the trick. These are atoms with long distances between the electron and the nucleus, meaning they possess high potential energy. Potential energy is a form of stored energy. For example, a ball on a high shelf has potential energy because this could be converted to kinetic energy if it falls off the shelf.

    In the lab, it is possible to trap atoms and prepare them in a Rydberg state – making them as big as 4,000 times their original size. This is done by illuminating the atoms with a laser with light at a very specific frequency.

    This prepared atom is likely to be much heavier than the dark matter particles. So rather than one pool ball striking another, a more appropriate picture is a marble hitting a bowling ball. It seems strange that big atoms are more perturbed by collisions than small ones – one might expect the opposite, since smaller things are usually more affected in a collision.

    The explanation lies in two features of Rydberg atoms: they are highly unstable because of their elevated energy, so minor perturbations disturb them more; and, because of their large size, they are more likely to encounter passing particles, so they suffer more collisions.
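    The marble-and-bowling-ball picture can be made quantitative with the standard result for a head-on elastic collision: a projectile of mass m transfers a fraction 4mM/(m+M)² of its kinetic energy to a target of mass M. The masses in this Python sketch are purely illustrative – a hypothetical light dark matter particle against a caesium atom, one of the species commonly prepared in Rydberg states:

    def energy_fraction(m: float, M: float) -> float:
        """Fraction of kinetic energy transferred in a head-on elastic collision."""
        return 4 * m * M / (m + M) ** 2

    m_dm = 1e-30     # kg, hypothetical light dark matter particle (~0.6 MeV/c^2)
    M_cs = 2.2e-25   # kg, a caesium-133 atom

    print(f"{energy_fraction(m_dm, M_cs):.1e}")   # ~1.8e-05 of the projectile's energy

    Only a hundred-thousandth or so of the incoming energy is deposited, which is why an atom as distended and fragile as a Rydberg atom is needed to register the kick.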

    Spotting the tiniest of particles

    Current experiments typically look for dark matter particles by trying to detect their scattering off atomic nuclei or electrons on Earth. They do this by looking for light or free electrons in big tanks of liquid noble gases that are generated by energy transfer between the dark matter particle and the atoms of the liquid.

    1
    The Large Underground Xenon experiment installed 4,850 ft underground inside a 70,000-gallon water tank shield. Gigaparsec at English Wikipedia, CC BY-SA

    But, according to the laws of quantum mechanics, there needs to be a certain minimum energy transfer for the light to be produced. An analogy would be a particle colliding with a guitar string: it will produce a note that we can hear, but if the particle is too small the string will not vibrate at all.

    So the problem with these methods is that the dark matter particle has to be big enough if we are to detect it in this way. However, our calculations show that Rydberg atoms will be disturbed in a significant way even by low-mass particles – meaning they can be used to search for dark matter candidates that other experiments miss. One such particle is the axion, a hypothetical particle that is a strong candidate for dark matter.

    Experiments would require the atoms to be treated with extreme care, but they would not need to be carried out in a deep underground facility like other experiments, as Rydberg atoms are expected to be less susceptible to cosmic rays.

    We are working to further improve the sensitivity of the system, aiming to extend the range of particles that it may be able to perceive.

    Beyond dark matter, we are also aiming one day to apply the technique to the detection of gravitational waves, the ripples in the fabric of space predicted by Einstein a century ago. These perturbations of the space-time continuum have recently been discovered, but we believe that by using atoms we may be able to detect gravitational waves at different frequencies to the ones already observed.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Conversation US launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors work with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues. And hopefully allow for a better quality of public discourse and conversation.

     
    • gregoriobaquero 9:54 am on January 30, 2017 Permalink | Reply

      Precisely to the point of my paper. If I am right nothing is going to be found. No new particles. The density of neutrinos (“hot dark matter”) we can measure in our frame of reference does not tell the whole picture, since we have the same local time as the neutrinos passing by. What has not been taken into account is that gravitational time dilation accumulates neutrinos, compared with neutrinos passing far away from the galaxy.

      Also, this phenomenon is similar to how relativity explains electromagnetism. Veritasium has a good video about it.


    • richardmitnick 10:19 am on January 30, 2017 Permalink | Reply

      Thank you so much for coming on to comment. I appreciate it very much.

